Saturday, December 29, 2012

Using GitHub As Your Blog

I just put an entire blog post, including an ad for my book, inside the comments for one of my projects on GitHub. Technically it only happened because I was writing this comment and I was on a roll, but I predict absurdly intricate code comments will become the bold new startup marketing move in 2013.

Wednesday, December 19, 2012

Guest Post Today On The Code Climate Blog

The conversation around DCI and concerns in Rails continues today with blog posts from David Bryant Copeland and Jim Gay, author of Clean Ruby.

Last but not least, the conversation also continues with a guest post I wrote for the Code Climate blog, entitled DCI, Concerns, And Readable Code. Code Climate is a web app which provides the best way to ship better code faster. My blog post examines David Heinemeier Hansson's recent post on concerns, and shows how that post links this ongoing debate to a major theme from my book: that Rails makes fluid API design its main priority, and owes a lot of its success to that.

Although there is some funny sarcasm, because I just can't resist a good punchline, I'm aiming on the whole for a more elevated level of discussion than in the past. Here's an excerpt:

Finding “just the right amount of abstraction” requires more than just defining the most readable possible API. It also requires balancing the most readable possible API with the most comprehensible possible domain model. There’s an analogy to human writing: you can’t just write beautiful sentences and call it a day. If you put beautiful sentences on pages at random, you might have poetry, if you get lucky, but you won’t have a story.

Every Rails developer who wants to write good code can get something valuable from this post.

Saturday, December 15, 2012

Jekyll Custom Output Directory ("destination") Quirk In _config.yml

It appears that if you want to specify a custom output directory with Jekyll's destination: parameter in _config.yml, the destination: line should come first.

In my experience today, this config created both the custom output dir and Jekyll's default output dir _site:

url: foobar.com
destination: foobar/


Whereas this produced only the custom output dir:

destination: foobar/
url: foobar.com


Thursday, December 13, 2012

Requiem For A Temporary Autonomous Zone

One of the big author names to drop in the 1990s, besides the cyberpunks, was Hakim Bey, a philosopher who wrote Pirate Utopias and The Temporary Autonomous Zone. Temporary autonomous zones empowered the kinder, gentler post-modern anarchist to create safe havens of sane behavior outside the restrictions of hegemonic societal control. The archetypal TAZ was a rave, a momentary bubble of peaceful anarchy, but other implementations existed (most obviously Burning Man and Usenet).

Another big author name to drop in the 1990s: the brilliant Benoit Mandelbrot, who remains more worth reading than almost anyone since Shakespeare. You can find Upski's name on that list too, but only in an alternate universe ruled by truth and light.

If it seems maudlin for me to dredge up forgotten underground literary heroes, I have a reason. Anil Dash wrote a wonderful post dredging up a forgotten underground World Wide Web, one whose businesses and communities respected crucial principles of online culture like data interoperability, user privacy, pseudonymity, microformats, and remixing.

It's not a coincidence that this WWW is the one which flourished after the collapse of the first dot-com boom, nor is it a coincidence that this is the WWW which birthed Rails. Nothing made the Web a better, more idealistic, or well-built place than the venture capital exodus of the early 2000s; nothing sent more programmers off to live in their moms' basements, either.

Dash seems defensive to me:

This isn't some standard polemic about "those stupid walled-garden networks are bad!" I know that Facebook and Twitter and Pinterest and LinkedIn and the rest are great sites, and they give their users a lot of value. They're amazing achievements, from a pure software perspective. But they're based on a few assumptions that aren't necessarily correct. The primary fallacy that underpins many of their mistakes is that user flexibility and control necessarily lead to a user experience complexity that hurts growth. And the second, more grave fallacy, is the thinking that exerting extreme control over users is the best way to maximize the profitability and sustainability of their networks.

The first step to disabusing them of this notion is for the people creating the next generation of social applications to learn a little bit of history, to know your shit, whether that's about Twitter's business model or Google's social features or anything else. We have to know what's been tried and failed, what good ideas were simply ahead of their time, and what opportunities have been lost in the current generation of dominant social networks.


I'm sorry to say that my take on this is more fatalistic; in my opinion, many of the functions of the venture capital system actively thwart the production of good software, but perform marvelously if you view them as a bridge for transferring the population of an aristocracy from positions which control the social and economic systems of the past to positions which control the social and economic systems of the future.

In my opinion, another dot-com crash would purge VC influence from the Web, restoring some measure of sanity to it, and allowing it to regain many of the virtues Dash eulogizes; however, that crash would have a pretty severe downside for many people. Anarchists throw incredible parties, but they're less effective at organizing stable social systems.

Wednesday, December 12, 2012

Rails Developers Should Take DCI Seriously

I recently wrote an ebook about Rails. It covers the ways Rails breaks OOP theory, where this creates problems for Rails developers, where it reveals flaws in OOP theory, the core strategies that make Rails so delightful and powerful, and what you can learn from it all to write better code yourself. This involved crystallizing opinions I've formed while hacking Rails apps since late 2005, as well as new research. One thing I learned from writing my ebook: a lot of the strongest OO thinking on how to use Rails converges on the DCI pattern.

The idealistic perspective on DCI is very much worth considering. DCI stands for Data, Context, and Interaction. Jim Gay's writing a book on the topic called Clean Ruby, and it really does teach an excellent new way to think about OO software. I recommend it very highly.

But there's also a cynical perspective on DCI.

You would expect to hear the cynical perspective on DCI from DCI's critics, but what you get is too weak to qualify. David Heinemeier Hansson, creator of Rails, mocked unnamed object-oriented purists on the 37Signals blog:

Inheritance is always bad, only composition can be our true savior! Only standalone command objects can give us the reusability this application needs!

Turning patterns into articles of faith takes them out of the realm of rational discussion. Code either adheres to the holy patterns or it doesn’t. If it does not, the programmer must first repent, say his 25 rose-delegations, and then swear never to let an opportunity for a policy object extraction pass by him again!


37Signals dev Jeremy Kemper joined in with a snarky tweet:



This argument is simultaneously indisputable and weak. It's a truism that you have to exercise judgement when deciding to use or not use any toolset or mental model for coding. But it's irrelevant because there aren't a whole lot of single-minded OOP zealots in the world of Rails developers. Rails takes significant liberties with OOP, so you can't really build a career on Rails development unless you're flexible about that.

So why did Hansson and Kemper even bother to post this? I was egotistical enough to imagine this served as a response to my book, and/or Clean Ruby. It's almost the argument my book makes; in some of the places where Rails violates traditional OOP rules, the smart thing to do is throw out those rules. (That's some of those places; not all.) But I soon learned the real story. It all came about as a result of some kind of kerfuffle on Twitter between Hansson and the Ruby TDD evangelist Corey Haines.

Rails developers sure love to get their kerfuff on:





For a longtime Rails developer, it looks pretty weird. Where Hansson once employed brilliant, acerbic wit to attack over-engineered failures like J2EE, back in 2005, in this case he's cobbling together a lackluster and irrelevant argument so he can put down a guy who makes Rails easier to understand and use.

The Rails world features a lot of drama, and I've been foolish enough to participate in it myself in the past. The noise factor doesn't make it any easier to hold serious conversations about the best way to use the flawed yet fantastic toolkit which Rails offers developers. (Hint: avoid STI.) It's not so much a situation where the emperor has no clothes as a situation where the emperor will make fun of you for wearing pants.

Camel corduroy pants
http://www.flickr.com/photos/verypurpleperson/5956471254/

But however snarky these very talented and accomplished coders at 37Signals were, they didn't attack DCI head-on. You won't find a lot of substance in the 37Signals post, just nonspecific mockery, as if plausible deniability mattered more in architectural discussions than clarity.

So remember the question of the cynical perspective on DCI? You'd expect to find it here, but ironically, the 37Signals blog post fails to consider the cynical perspective on DCI. It doesn't even mention DCI. If you didn't know the context, you wouldn't be able to make the connection without Kemper's tweet. And this is the downside with Rails drama; it's tiresome, yet if you don't track it, you don't know what the blog posts are really about.

So we have an interesting blind spot. The idealists are not considering the cynical perspective, because they never do, but the critics are not considering it either, because they're too busy hand-waving the entire question away.

So, be warned. Here comes the cynical perspective on DCI. I'm not endorsing it at all, I'm just bringing it to the table: DCI is a bunch of jargon we're forced to resort to because Rails, in a paradoxical twist on the way it creates beautiful code DSLs, has created a mangled nightmare of a DSL out of the actual English vocabulary Rails developers use when we speak to each other.

Specifically, we have obliterated the distance between ActiveRecord, the Ruby gem, and Active Record, the design pattern. Here's the pattern, which Martin Fowler identified, and which Hansson named ActiveRecord (the gem) after:

An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data.

Interesting question: is belongs_to domain logic? If you use ActiveRecord "models" only as data objects, and wrap those in classes which you treat as The Real Models™, are you implementing Active Record?
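
To make the wrapper-class version of that question concrete, here's a minimal sketch; the class names are made up, and it assumes an ordinary Rails app with users and orders tables:

class User < ActiveRecord::Base
  # Data access only: associations, no domain logic.
  has_many :orders
end

class Customer
  # "The Real Model": domain logic lives here, wrapped around the data object.
  def initialize(user)
    @user = user
  end

  def lifetime_value
    @user.orders.sum(:total) # assumes the orders table has a total column
  end
end

In that setup, User is pure data and Customer holds the domain logic, which is exactly why it's unclear whether anything here still implements Active Record, the pattern.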

Prior to the discussion around DCI, many Rails developers had begun exploring and praising the peace-of-mind benefits you get when you use ActiveRecord as a data object factory and compose your domain logic in wrapper classes, which live in /app/models or /lib. DCI reaches the same goal of avoiding monolithic Rails apps (or monorails, as they're known) by a different route: you take your domain logic out of the ActiveRecord classes when writing the code, store it in modules instead, and mix those modules back in to ActiveRecord objects as needed (and only as needed). So when it comes to files on a filesystem, you're not using ActiveRecord to implement Active Record, but when it comes to the action of the system and the objects in memory at runtime, you are.

Remember the pattern's definition?

An object that... adds domain logic [to] data.

So you're going to add domain logic to data. When? Most object-oriented thinking suffers from the legacy of primitive, clunky languages which could not distinguish objects from the classes they were instances of. Thus many ideas which claim to be object-oriented are really only class-oriented, and the naive interpretation of "adding domain logic to data" assumes you could only ever do that at the point in time when a class is defined. But Ruby, with its near-infinite flexibility, can add domain logic to data at any time, and Ruby DCI implementations make use of that.
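
Here's a rough sketch of what that looks like in practice. The role and context names are invented, and real DCI implementations vary in the details, but the shape is: a bare data object, a module full of domain logic, and a context that marries the two at runtime.

# The data object: an ActiveRecord model with no domain logic of its own.
class User < ActiveRecord::Base
  has_many :orders
end

# A role: domain logic stored in a module, not in the model file.
module Purchaser
  def purchase(item_name)
    orders.create(item_name: item_name) # assumes an orders table with an item_name column
  end
end

# A context: mixes the role into the data object only for as long as it's needed.
class Checkout
  def initialize(user, item_name)
    @user, @item_name = user, item_name
  end

  def call
    @user.extend(Purchaser)
    @user.purchase(@item_name)
  end
end

# Checkout.new(some_user, "Clean Ruby").call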

Even though it has a great pedigree, DCI is an emerging trend among Rails developers. It's not the official solution. Rails officially handles bloated models with a clunkier but not useless concept known as concerns, which live in /app/concerns. DCI is basically just a smarter, better-structured replacement for Rails's haphazard and off-the-cuff concerns idea — kind of like the relationship RSpec has to "pure" Rails testing. The major difference between Rails concerns and DCI modules is that Rails concerns fail to differentiate indirection from abstraction, while DCI modules make that semantic distinction clear.
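
For contrast, here's roughly what the concerns version of the same idea looks like (again, with invented names). The module still gets the logic out of the model file, but it's mixed in when the class is defined, for every User, in every context:

require 'active_support/concern'

# app/concerns/purchasing.rb
module Purchasing
  extend ActiveSupport::Concern

  included do
    has_many :orders
  end

  def purchase(item_name)
    orders.create(item_name: item_name)
  end
end

class User < ActiveRecord::Base
  include Purchasing # mixed in at class-definition time, everywhere, always
end

Either way the code moves to a different file; the difference is when, and for how long, it attaches to the object.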

Zed Shaw wrote the post which I linked to just now, about differentiating indirection and abstraction. I hate to even link to a Zed Shaw post, because he is in my personal opinion a maniac, and it's a verifiable matter of provable fact that he blogged a description of me, Giles Bowkett, engaging in sexual activities with a dog, which was libelous, inaccurate, and, in my personal opinion, also rather disturbing. This is related to my personal opinion regarding his sanity, but I think there's enough of this type of discussion in the Ruby world as is, so rather than explain my conclusions about his mental state — which are probably obvious anyway — let me just say that whatever strange, unwelcome, and surprisingly detailed opinions Shaw might have about my penis, when it comes to the difference between indirection and abstraction, he actually makes sense for a change.

Wag the Dog
http://www.flickr.com/photos/dwinton/98861418/

It's very important to prioritize abstraction over indirection. It's the difference between re-organizing your desk by creating a system to finish all your tasks — that would be abstraction, and DCI — vs. "re-organizing your desk" by taking everything on the desk, throwing it in a box, and then hiding the box in another room. That would be indirection, and Rails 3 concerns. I would add the caveat that indirection sometimes helps you discover the abstractions you need, but overall, it's a good point.

To recap, DCI uses ActiveRecord to implement Active Record — sort of, in a sense. It avoids creating the gazillion-line User models we all know and loathe, because it puts domain logic in modules instead of on the actual Active Record (or domain-logic-enabled data object), and only mixes that domain logic in to the Active Record when the Active Record operates in a context where it needs such logic. The domain logic lives on the object, but not in the ActiveRecord class file, and only lives on the object when actually needed. Other strategies for de-bloating gigantor User models involve isolating ActiveRecord objects within wrapper classes, and considering those wrapper classes to represent the real domain model — in effect, refusing to use Active Record, the design pattern, while happily choosing ActiveRecord, the library itself, over alternative Ruby ORMs like Sequel or DataMapper.

This means ActiveRecord must logically not be an implementation of Active Record. How can it? If ActiveRecord actually implemented Active Record, it would be impossible to use ActiveRecord without using Active Record. But you can. Not only that, it might be a very good idea.

Instead of claiming ActiveRecord implements Active Record, it would be more accurate to say that ActiveRecord provides a data object which you can use to implement Active Record, if you want.

It makes Hansson's blog post pretty ironic:

Turning patterns into articles of faith takes them out of the realm of rational discussion.

He's describing the mistake that he himself made when he named ActiveRecord after Active Record. Naming gems after design patterns takes those design patterns out of the scope of architectural discussion.

But I'm not just posting this to dick around and win points in an Internet argument. Say you need to discuss this with your teammates on some project, for serious reasons. How do you have that conversation? "We need to use Active Record, but not ActiveRecord." You don't have that conversation, because you don't want to sound like Abbott and Costello. "ActiveRecord?" "No, Active Record!" "Who's on first?" Next somebody starts throwing cream pies and the dancing bear steals your bicycle.

Bears on bikes
http://www.flickr.com/photos/shaun_and_jacki/748539928/

Instead, even though The Rails Way (written in 2007) briefly discusses breaking models into modules, and even though James Golick advocated the wrapper classes approach in 2010, and even though Rails 3 provided concerns, the conversation didn't really gel (in my opinion) until people started talking about DCI. That's because DCI provides an interesting way to think about code which makes it easy to solve the problem. Before this, you could hardly even talk about the issue without sounding silly.

So here's the cynical perspective on the cynical perspective: DCI must be bullshit, a crutch, if people are only digging up DCI because Rails uses an inaccurate vocabulary to describe itself. But the idealistic perspective on the cynical perspective is to say that digging up DCI means digging up buried treasure.

Treasure Chest
http://www.flickr.com/photos/nox_noctis_silentium/3830417445/

The guy who created DCI also created MVC, which he developed while working with the original Smalltalk team at Xerox PARC. Since Ruby is mostly Smalltalk on a command line, he's almost one of the people who created Ruby. Maybe not a grandparent, but at least a great-uncle.

So "buried treasure" is probably the correct interpretation. Exhibit A: the book Clean Ruby. You should read it.

You should totally read my book, too. The content is just like this, except with greater distance from Rails drama, and a greater insistence on accurate thinking and clarity. I also tell you an interesting story about the unusual life, and sexual problems, of a talking rhinoceros. It's kind of like the infamous Mr. Shaw, but without the terrible social skills, or perhaps _why the lucky stiff meets Hunter S. Thompson.



Here's what people are saying about it on Twitter:

That's right, folks, master poet T.S. Eliot himself commented on my book from beyond the grave. And that's just a sampling; there are many more I didn't post.

Even the creator of Rails has an opinion! Check out what Hansson himself had to say about my book on the 37Signals blog:



If that's not an endorsement, I don't know what is.



(And finally, if you're curious, but not ready to buy, you can download a free excerpt.)

Friday, December 7, 2012

GitHub vs Skyrim

In my opinion, GitHub does the best original thinking on the Web about how to run a tech company, and Clay Shirky is right to call them one of the most important companies in the world. Also in my opinion, Ryan Tomayko wrote one of the best "how GitHub works" posts ever:

What we're learning at GitHub is that opting in to open source project constraints often results in better natural survivability characteristics for many types of business, product development, and operations activities.

It shouldn't be a surprise that a company which specializes in tools and workflows for open source development, and hires based on open source participation, finds the tools and workflows for open source development useful in its day-to-day operations. However, in Tomayko's post, he explains that GitHub, like Valve, successfully escapes the Dilbert-hell resting state which many people assume all startup activity converges to, that GitHub does so using GitHub, and that he believes other tech companies can do the same thing.

Tomayko's post includes a 43-minute presentation from a conference, and it's worth watching, although a little overlong in my opinion. In the presentation, he highlights a traditional corporate org chart:



He then compares it to a chart of open source activity in the Perl community which was generated by software which measured and correlated interaction using the GitHub API:



He says GitHub is more like the second chart, and I believe him. However, I think any healthy company is more like the second chart; the first org chart represents a dreamworld cooked up by overpaid aristocrats, pitifully focused on their hierarchy, whereas the second is founded on measurement and observation. (You don't need my political point of view to look at the first org chart and notice that it is an absurdly simplistic model for any human social system.)

Every company with more than one person working there will feature some tension between the rules people set out and the way people actually behave. Healthy companies have very little of this tension; dysfunctional ones have very much. GitHub throws away the traditional social fiction and mostly focuses on the reality. I say "mostly" because Tomayko explains that GitHub does present one new social fiction: in their recruiting, they claim they have no managers, but what that really means is that everyone is their own manager, and by signing on, you accept the responsibility of managing yourself.

I definitely recommend reading Tomayko's post and watching his presentation. If this was the type of dipshit mating-honk you sometimes find on Hacker News, I would go on to further tell you that GitHub's way of organizing projects is The Way and/or The Future. But I'm going to stop short of that, and simply call it one way and one future (albeit maybe the best way and the best future), because of an extremely powerful counterexample: Skyrim (and World of Warcraft).



Modern fantasy role-playing video games feature incredible gamified to-do lists. Their effectiveness is far beyond any to-do lists which exist in the real world. If I ever figure out how to make my own real-life to-do lists feature the same compelling blend of enticement, reward, and repetition, I'll become so productive that people will suspect me of being an entire army of clones who all happen to share the same name.

It's incredibly easy to sit down to play one of these games and not stand up again until six or eight hours later. The whole time, you're telling yourself that you're just going to go and do this one other task. People have spent entire years of their lives in these games, accomplishing far more in the fantasy worlds than they do in reality (and sometimes literally starving to death in reality as a consequence). It sounds crazy to say it, but for a very large number of people, having a computer tell you what to do next must feel really good.

I'm not the only one who wonders what it would be like to gamify my to-do lists. And because the group raids aspect of Warcraft requires a great deal of planning and coordination, fans of the game -- including Joi Ito, the entrepreneur, investor, and director of MIT's Media Lab -- claim that it's not only entertaining, it's great training for project management, including project management in technology companies:

I am in awe of Persimmon who is our raid leader. She works in a hospital in real life. She is the stabilizing force during the raids, supporting the class leaders, nudging the conversation and keeping the raid moving as fast as possible without moving too fast. I find that she reminds me of many successful open source project leaders or Jimmy Wales of Wikipedia, except that what she has to do happens much faster and in real-time. Without her fully customized user-interface and scripts she would never be able to manage what she does...

The structure and the organization required to complete missions or quests in WoW adds a great deal of focus and complexity to the community compared to a chat room and the communications and management begins to feel much more like collaboration in a work environment. I think that the ever-evolving user interface and communication tools that we are developing might impact the future of management in the real world. My feeling is that what we are doing in WoW represents in many ways the future of real time collaborative teams and leadership in an increasingly ad hoc, always-on, diversity intense and real-time environment.


Although Ito compares running a raid to running an open source project, it's clear that his model implies managerial responsibility. But if you read Tomayko's post and watch his presentation, you'll witness a strong argument against the use of any managerial staff in technology projects:

Avoid synchronization / lock points when designing process. This is DVCS writ large. We don't have a development manager that grants commit bit to repositories before you can do work, or a release manager that approves deploys, or a product manager that approves work on experimental product ideas. Work toward a goal should never be blocked on approval. Push approval/rejection to the review stage or automate it, but surface work early to get feedback.

Another major contrast between Ito's post (made in 2006) and Tomayko's is that Ito praises always-on communication, and treats its usefulness and primacy as a foregone conclusion, while Tomayko recommends that it be strictly optional.

The reason why?

This is [distributed version control systems] writ large.

Distributed version control systems decouple collaboration from coordination. This is the same benefit which Wikipedia provides to archivists and Twitter supplies to activists. It explains why Clay Shirky likes GitHub, too, because it's the same dynamic Shirky explores in his terrific book Here Comes Everybody. He finds the act of decoupling collaboration from coordination -- and removing authority figures, by removing synchronization -- at the heart of a huge range of online and social phenomena, and identifies it as the aspect of the Internet most likely to transform society permanently (just as the printing press eventually did).

It's a brilliant idea. It's an idea whose time has come. And it obviously works for GitHub. But I'm not sure how to reconcile Shirky's very compelling arguments or GitHub's wonderful success with the sheer, dimwitted enjoyment I get out of being told repeatedly to go over there and kill another orc. Again.

One thing which makes it tricky is that when you participate in open source projects on GitHub the site, you're doing the same essential process that working at GitHub entails, and all the working parts of that process are exposed. You can track issues, commits, and pull requests, and you can see how everything fits together. But in games like Warcraft and Skyrim, although the task lists are impressively addictive, whatever makes them that way is hidden below absurdly primitive user experiences. Nobody will ever hold up the Warcraft quest log as an example of beautiful UI.

A lot of companies are working on ideas like gamifying the workplace. I think a lot of them will fail, and that they will look stupid in the process, because most of these companies don't seem to think very hard about what "gamifying" means, and they don't seem to think about what "the workplace" means at all. It's possible that GitHub and Valve represent the first wave of what will become a normal, predominantly office-free way of living and working for the same people who would have been called "office workers" throughout the 20th century.

If that's the case, then the whole concept of "gamifying the workplace" might be irredeemably fucked, because "workplace" might not even be a meaningful term in a few decades. It certainly won't refer primarily to physical locations; for many people today, it already doesn't. I think a lot of these companies are putting futuristic lipstick on a Steam Age pig. Adding badges and meaningless points to a process which feels antique is no way to create the future.

If there is any future to ideas like "gamifying the workplace," it's in some merger of elements from GitHub and Skyrim, but I'm still not sure which elements, or in which proportion.

Saturday, December 1, 2012

Underpromoting Information Products For Fun And Profit

On the night before Thanksgiving, I released an ebook on Rails, covering how Rails breaks with traditional OOP theory, where I think that creates problems for Rails developers, and where I think that reveals flaws in orthodox OOP thinking.

In the past I've aggressively promoted my information products, with cheesy titles like "Secrets Of Superstar Programmer Productivity," hype-generating moves like preview videos, countdowns to product launch dates, and trolltastic rants, plus artificial scarcity tactics like only offering my products for sale during a brief window (e.g. one weekend).

With this product, I wrote Peter Cooper a brief email, telling him I'd written an ebook -- but not what it was about, or even what it was called -- and did nothing else to promote the book except offer free copies to people who I had quoted in the book, and retweet people who praised the book. Peter included a link to the book in his newsletter Ruby Weekly, but since I hadn't actually told him anything about it, or given him a review copy (something I've since rectified), the blurb in the newsletter basically just made fun of me and said "there's a book, I don't know what it is, but it's got something to do with Rails, and Giles wrote it."

This minimal and even somewhat embarrassing approach to marketing still sold around 100 books -- I think the exact number is 94 -- while I was enjoying a Thanksgiving vacation in New Mexico at my parents' Earthship. I'm pretty sure I spent more time exploring the art galleries of Santa Fe's Canyon Road and creating a pair of simple paintings in acrylic on small 6"x9" canvasses (which I gave to my parents) than I did promoting the book.

I will of course market my book some more in the near future, probably in a much more serious and energetic way, but I want to recommend underpromoting your information products for the feedback factor. Word-of-mouth marketing is the best kind; when people tell their friends to buy your stuff, you know you've got something worthwhile. It's kind of like the Google Ads market research phase which so many marketers recommend, but with income.

When you sell with hype, your sales tell you how good your hype is. When you sell via word of mouth, your sales tell you how good your product is. This is of course a massive oversimplification, but as long as you remember to take it with a grain of salt, it's true enough that you can feel a little extra proud of your product when it sells without hype.

I also want to say that if you're building on ideas which you got from other people's work, giving them free copies of your product is a great way to not only get tweets and mentions from them, but also to find out if what you said actually makes any sense in the first place. I think the latter benefit is more worthwhile. Any time you create a book or a video or whatever, you're participating in an ongoing conversation, and the more you contribute to that conversation, the better off you are, both in terms of verifying that what you want to sell is worth paying for, and in terms of karma.

(Finding out if your product is worth paying for is harder than it sounds, because a good book has to enlighten the ignorant, without boring the well-informed.)

Anyway, expect hype in the future, because it works, but for now, just check my Twitter favorites if you want to see some hype of the classic, Web 2.0 flavor.

Sunday, November 25, 2012

How I Wrote My eBook

In a "How I Work" interview on LifeHacker.com, Tim Ferriss recommends the OS X app Scrivener:

Scrivener, the word processor I've used for the last two books. Unlike Word, it doesn't crash every five minutes, and I can look at multiple docs at once in the same window. [It's] minimalist and great.

In my opinion, Scrivener's certainly better than Word, but what isn't? I haven't been naïve enough about computers to go anywhere near Microsoft Office for at least a decade. I can't call Scrivener a word processor, either, and I especially can't agree with "minimalist and great." I recommend buying Scrivener anyway, though.

I'd describe Scrivener as writing software, which includes a word processor, an outliner, and an innovative UI mode which models a whole category of time-honored writing workflows based around index cards and corkboards. I believe it's actually designed for screenwriters, but modified to be useful for all writers generally, although I could be wrong about that.

That's why I can't agree with "minimalist" or "word processor." I believe "great" would be true if "minimalist" was true. As it is, I experience all kinds of minor UI bugs and excessively assertive auto-formatting whenever I use Scrivener, and consequently I can only call it good.

However, I really like the workflow which Scrivener models -- I think that core part of it really is great -- so I created a simple copy of it with Ruby and Markdown. I threw all the notes in one place and threw all the actual finished writing in files which looked like this:

00_foo.md
01_bar.md


All those files lived in their own directory. I churned out a simple HTML file from that:


I think this kicks Scrivener's ass in the "minimalist" department, but I can't call this method "great" either. Once the writing was done, I copy/pasted the HTML output into Pages and then did a lot of formatting by hand. It took a while.
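
For the curious, here's a minimal sketch of that kind of build script. It's not my exact code, and it assumes the kramdown gem and a chapters/ directory full of those numbered Markdown files:

# build.rb
require 'kramdown'

html = Dir.glob('chapters/*.md').sort.map do |path|
  Kramdown::Document.new(File.read(path)).to_html
end.join("\n")

File.write('book.html', "<html><body>\n#{html}\n</body></html>")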

But minimalist writing tools rock. I want tools that get everything but content out of my way. Style is great, but I believe my products live or die based on how well-written they are. So I believe I'm much more likely to add a few layers of sophistication and CSS to this workflow than I am to return to Scrivener.

If you're really serious about writing, though, I recommend buying Scrivener anyway. I've written many screenplays, and the one I wrote with Scrivener was easily the best. The implementation is flawed, but the ideas are great. I see Scrivener as a very good set of training wheels, and I plan to continue developing this minimalist approach of mine into a fully-fledged mountain bike. I'm hoping that the only adjustment I'll have to make for screenplays is to use Fountain instead of Markdown.

"The implementation is flawed, but the ideas are great" is also one of the key things I say about Rails in my new book. I'm planning to blog about that soon.

Tuesday, November 20, 2012

Travelling And Eating Right: An Unusual Solution

I eat a very restricted range of foods: basically just vegetables, beans, and fruit. I say "basically" because no starchy vegetables are allowed. Because of this, travel is usually a pain in the ass for me. Often the best an airport restaurant can do to satisfy my rules is an anemic salad for $14, and I have to pick the bits of bacon and cheese out by hand before I can eat it.

I am not a patient person when it comes to food. Typically when I encountered this problem, I caved in and ordered fries, burgers, whatever I could get, and just figured I'd get back to eating right when I got home. But getting into an old habit of eating a familiar kind of food makes it hard to switch back to eating right, so in practice, every time I went to a conference or saw my family on a holiday I would be breaking my nutritional rules not only during the trip, but also for weeks or even months afterwards.

It was easy to convince my family to make allowances for my health, of course, and a few times I tried renting a house with a kitchen when I went to a tech conference, instead of eating hotel food. But the airplanes and airports always killed whatever progress these other efforts brought me.

However, I recently figured out a solution, and I got to test it today. I was in airports and airplanes from 8am to 9pm, owing to some travel snafus. Since the airports and airplanes had almost nothing to offer me which I could eat, I ate almost nothing.

It worked out fine. I ate a few bananas, a few grapes, and a few pieces of pineapple and melon. I also made a small breakfast at home of chickpeas, kale, and mushrooms.

Before I began eating the way I now eat, I could not last more than a few hours without eating. I got shaky, cranky, and queasy (coincidentally the names of some of Snow White's lesser-known dwarven friends). None of that happened today. In fact, most of the time I forgot all about feeling hungry.

I wasn't surprised by this result. In 2010 I did a little experiment where I did deliberate fasts, not eating anything except water for a day or two at a stretch. I did this because I had read that my style of eating promotes remarkably stable blood sugar, and that many of the symptoms people identify with hunger -- shakiness, crankiness, and queasiness, for example -- are not actually due to hunger at all, but to fluctuations in the levels of blood sugar, naturally-occurring toxic chemicals found in meat, and/or man-made toxic chemicals found in processed foods. My nutritional rules ban meat and processed foods.

In other words, if you eat the way I eat, you don't have to eat every day, and it's not a big deal. I did my short series of experimental fasts because after I read about this, I was very curious to discover first-hand if this was true. It was, and is.

This simple tactic of just fasting my way through the airplanes and airports thing means I can go to tech conferences again. I stopped going to tech conferences because I burned out on it -- I delivered at least 15 presentations at user groups and conferences in 2008, and exhausted myself -- but the reason I never started back up again was this dietary problem.

In 2011 I went off my nutritional regimen and gained a staggering amount of weight. I've gotten back on track with it, and lost around 20 or 30 pounds since my worst moment in 2011, but the point is that my health was more important to me, and tech confs were not worth the risk.

So, I'm pretty stoked about that, and may start going to conferences again.

Monday, November 19, 2012

BritRuby Kerfuffle: Silly Even By Ruby Standards

BritRuby (mild) critic Josh Susser: I don't think adding diversity at the end works. You have to start with it as one of your goals. Who wants to be the token female?

BritRuby organizer Sean Handley: Adding a token minority speaker is offensive to that speaker, it says "You're here because you tick a box - not because you're skilled."

Note how the "critic" who allegedly destroyed the conference expressed the exact same idea as one of the people who put it together. They were saying the same thing, for one small, tiny moment, which in real life might have led to some kind of common ground, but there was a whole lot of freaking the fuck out, and there was no searching for common ground.

Rubyists need to get better at having reasonable conversations.

Avdi Grimm wrote a blog post about this. I couldn't bring myself to read it. Instead, I counted the words using copy/paste and vim: 2,995.

That's a lot of words.

(I was able to read this post by Elisabeth Hendrickson.)

I sat this thing out, uninstalled my OS X and iOS Twitter clients, and even though I occasionally logged into the web client (making sure to log back out again, because anything which makes Twitter harder to use is good for my productivity), I basically wrote an entire ebook on Ruby on Rails and its deviations from classical OO theory (both its misguided deviations and its inspired ones) in the ensuing free time.

Now this is something of an exaggeration, because I had the idea for the book on Wednesday, and I already had an outline when I started writing it Thursday night, and there were several shots of espresso involved, but it's not much of an exaggeration to say that I wrote an entire ebook with the energy I saved by not participating in a discussion about how some mild criticism completely destroyed a 500-person conference.

OK, actually, that is a big exaggeration, but still. What I'm saying here is I got a lot done without paying attention to the drama, and I don't think this tempest in a teapot means anything.

Rubyists need to get better at choosing their battles.

Wednesday, November 14, 2012

Hollywood Delivers Perfectly Accurate Representation Of Startup Life

Like the parents of most people whose work stems from the Internet, my parents wonder what it is I actually do for a living. Wonder no more:



This helpful documentary contains no exaggerations whatsoever.

Sunday, November 11, 2012

Inevitable Convergence Of Porn And Reality TV

Playboy TV is casting for a brand new comedy game show called "The Man" We are currently looking for attractive males and females ages 21-30 who are comfortable with on camera nudity and possible sexual interactions. Men will be competing to prove to women that they have more "game" than the other contestants. Pay for women is $500-$700 for the day. There is the possibility of females being used on more than one episode. Pay for males ranges anywhere from $300-$600 depending on which round they are eliminated.

Compare and contrast: PG Porn, porn-style short films without the sex; a reality show about porn stars raising children; what appears to be a "Loveline meets the Internet!" show called Love In The Time Of Robots; and an actual person having sex with an actual robot (NSFW).


The above was posted in July 2012. The show appears to be live.

I'm actually very curious what effect the widespread availability of Internet porn will have on film in general. In the 1920s, people were making porn; by the 1950s, censorship had completely crushed porn. When it came back in the 1960s and especially the 1970s, it was part of an overall sea change in cultural attitudes around sex, influenced to no small degree by the sudden availability of chemical birth control.

At that time, almost nothing except nudity and actual sex differentiated porn films from "real" films. Last Tango In Paris could be categorized as either one with equal validity. Today, porn actors are not "real" actors. Porn films are filmed differently, distributed differently, and only a few filmmakers (either "real" or otherwise) have taken it upon themselves to regard porn as cinema.

One notable exception comes in Mike Judge's Extract, where a paranoid husband allows a friend to persuade him that the best way to find out if his wife wants to cheat on him is by dangling temptation in front of her:





By incorporating a visual style reminiscent of porn, Judge implicitly acknowledges here that pornographic film is film. This fairly obvious fact is very conspicuous by its absence in the field of film criticism. I strongly suspect that failing to regard pornographic film as a type of film constitutes some type of ghettoization of sex. It's especially strange when you contrast it with how French film treats sex.

The exploitation aspects of Playboy's The Man series are obvious, but what's interesting is that its pornographic aspects make it nominally less respectable in American culture than its "legitimate" cousin in "real" reality TV, Jersey Shore, which is well-known for an episode where a girl gets punched in the face. Neither show is awesome in my opinion, but it's strange to me which one is labelled more respectable (albeit by a very thin margin).

It's possible that this hierarchy of respectability, where violence is acceptable but sex is not, operates like a semiotic poison. I also find it very interesting that the two periods most associated with increased availability of porn in the United States -- the 1920s and the late 60s/early 70s -- also saw substantial increases in female political power. I really doubt it's any kind of coincidence; consider, for example, that Islamic and Christian fundamentalists, who both seek to restrain female sexual expression in their respective cultures, are also united in their antagonism not only to female political power, but also to men who act feminine. Robert Anton Wilson wrote an excellent book about this (and originally wrote it for Playboy, in fact).

However, the issues are so far from decided that you could build an entire academic career on arguments for either side, so I'm not going to tackle that here. But insofar as my blog has any theme at all, Internet entertainment which may in some way reshape or alter society has got to qualify, so I think this is an interesting trend. Porn disappeared completely in the 1950s and is now everywhere. It will almost certainly either exert some transformative effect on society or reveal some transformative dynamic already in operation.

Saturday, November 10, 2012

Daddy, Somebody Hacked Teddy Ruxpin!

When I was a small child, my family went to DisneyWorld. For some insane reason, at DisneyWorld, they set up plastic trees instead of real trees, and even beyond that, the plastic trees had plastic birds on them. So I saw a bird in a tree, and I got excited to be so close to a bird without it flying away, but then when it continued not to move, I thought it was dead.

But then I noticed the tree was plastic, and the bird was plastic too, and I started wondering what was wrong with people. Frankly, I'm still wondering.

ToyTalk, a "family entertainment" startup that makes a "smart," internet-connected, artificially-intelligent teddy bear, has secured a total of $16 million in institutional funding. The company was founded by former Pixar CTO Oren Jacob, and ToyTalk CTO Martin Reddy previously worked at the organization that built virtual personal assistant Siri.



The sheer opportunity for clever children and especially teenagers to create inappropriate hacks here is just extraordinary. You could wire it up to a dildo, you could make it talk in a creepy voice and tell small children tales of murder, and that's without even really getting creative. I love Pixar, and I'm sure in some way this technology hits some Diamond Age apex of cyberpunk inevitability, but the only way I would ever buy this thing for its intended purpose is if my kids had severe mental disabilities and nobody else in the world had ever seen a computer before.

I have never seen anyone fail so hard at anticipating negative consequences and/or so badly overestimate their ability to prevent kids from being kids. (It's kind of odd to see an unprecedented level of fail which remains too ambiguous to entirely classify.)

This startup idea has genius to it, for sure, but in my opinion, it represents such an awful and utterly unrealistic choice about differentiating users and customers that Facebook looks like 37Signals by comparison.

Don't Repeat Yourself: Fundamentals Edition

The text editor is the operating system.

The browser is the operating system.

The operating system is the operating system.

Thursday, November 8, 2012

My Los Angeles JS Talk On Automated Refactoring

I spoke recently at js.la about an automated refactoring system called Wheatley (which lives on GitHub). Here's the talk:

Wheatley from JSLA on Vimeo.

Saturday, November 3, 2012

Improve Your Writing With A Checklist

Here's an easy way to write better blog posts.

First write out your idea, without concern for quality. Just express the thought at hand.

Next go through the text, eliminating the passive voice anywhere you find it.

Do it again for adverbs, and a third time for run-on sentences.

Making these changes always improves writing, no matter what the topic, style, or context. For bonus points, do a fourth review pass, looking for ways to eliminate repetition.

Finally, copy-paste your text into a word processor to get automated spelling and grammar checks for free.

Yes, I'm aware this blog post contains an adverb. These are rules of thumb.

Wednesday, October 31, 2012

Hey Loopmasters: I'd Buy MIDI Loops Too

I often get loops from Loopmasters Artist Series -- in fact, I've probably spent more than a thousand dollars on that series -- but there's one thing I hate about loop-based production: loop libraries don't ship with MIDI representations.

If you find an amazing bassline (both Nick Thayer products are full of them) but you want to hear it on a different instrument, there's not much you can do.

On a good day, I can figure out the notes on a guitar or a synth, and even on a bad day, I can probably run it through Coda note-by-note, but it'd be a lot more fun if sampling libraries just came with MIDI loops.

Update: looks like the industry-standard solution to this problem is Celemony Melodyne, and Ableton 9 will feature something very similar as well.

Tuesday, October 30, 2012

Atwood: Learn The First Thing About Open Source

Somebody who apparently enjoys Internet drama, run-on sentences, and the passive voice recently said:

Standards processes are not to be missed. They are grand spectacles that unfold in real-time. They are fraught with personalities, big egos, and grand ideological dialogs. It's the West Wing, but like the parts that are between the snappy dialog.

All this is of course in reference to Jeff Atwood. I owe Jeff Atwood a lot. I used to aspire to being a real name in the world of tech blogging, until Jeff Atwood conclusively demonstrated that this is not necessarily something to be proud of. The enormous success of his banal and pointless nattering inspired me to find a more interesting goal. He's kind of like Kafka meets the Cookie Monster: he doesn't appear to have ever encountered a method of boring people to death which he didn't relish to the point of near abandon.

Several years ago, I compared Atwood to Jerry Springer:

Watching Jeff Atwood pick a fight with Paul Graham was marvelous entertainment. Not only did we get to see a baboon flinging its doodoo at a lion, we got to see how silly and befuddled the lion had gotten in the winter of its old age. Paul Graham first defended himself, then wrote a post called How To Disagree - which Atwood pretty accurately described as "an EULA for disagreeing with Paul Graham" - and finally had Atwood over for a Y Combinator dinner where they presumably hashed out their differences.

Atwood launched a similar attack on DHH as well. I can only assume he was angling for a dinner with DHH but may be willing to settle for lunch. Or in fact a photo opportunity. However, DHH never responded at all.


This appears to be something Atwood does a lot. He may have even met his business partner this way, and the last time I dissed him, he sent me these "let's be friends" emails, which sounds great in theory but for some reason actually creeped me out a little. So, just to be clear, Jeff, if you're reading this, I have no interest in starting a company with you, and I'm not going to invite you to dinner.

Atwood's latest stalk-target: John Gruber. I'm a fan; in fact, it's agony to me that the "canonical" book on Steve Jobs didn't come from the guy who writes the canonical blog, because if you want to understand Jobs, you go to the blog, not the book. I've always hoped Gruber would one day write his own book on Apple. But instead, I may now have to witness a Gruber/Atwood collaboration. Atwood trolled Gruber, seeking to rope him into some standards committee bullshit around Markdown, instead of doing the respectable thing and just being awesome.

As a fan of Gruber's, I'm ashamed to say that Gruber failed to reach DHH-level disdain, and responded to Atwood's dull trolling -- as indeed I am doing now -- but I can happily report that Gruber understood the purpose of the conversation:

@codinghorror When you tell me to jump, should I ask “How high?”

@codinghorror Next step is for you to offer a $5m donation if I release my college records, right?

I'd love to call the blog post at the root of Atwood's latest shenanigans "a tale told by an idiot, full of sound and fury, signifying nothing." But that's a better description for the movies of Michael Bay, who may actually be a pretty smart guy. Atwood specializes in tales told by an idiot which are nonetheless both sound- and fury-free. I hope he skips straight to the signifying nothing for the sake of expediency, but I suspect he does it because he lacks imagination.

Atwood trolls as a form of recruiting and introduction. It is the modus operandi of a bully. That probably makes Atwood himself a bully of some kind -- perhaps a verbal bully, but one whose words make for incredibly dull weapons -- but he's so fundamentally unintimidating that it's probably more reasonable to call him an aspiring bureaucrat.

Atwood blogged about Markdown a few days ago, but also back in 2009:

The biggest problem with Markdown: John Gruber.

...the fact that there has been no improvement whatsoever to the specification or reference implementation for five years is kind of a problem.

There are some fairly severe bugs in that now-ancient 2004 Markdown 1.0.1 Perl implementation. Bugs that John has already fixed in eight 1.0.2 betas that have somehow never seen the light of day. Sure, if you know the right Google incantations you can dig up the unreleased 1.0.2b8 archive, surreptitiously posted May 2007, and start prying out the bugfixes by hand. That's what I've had to do to fix bugs in our open sourced C# Markdown implementation, which was naturally based on that fateful (and technically only) 1.0.1 release.

I'd also expect a reference implementation to come with some basic test suites or sample input/output files so I can tell if I've implemented it correctly. No such luck; the official archives from Gruber's site include the naked Perl file along with a readme and license. The word "test" does not appear in either. I had to do a ton more searching to finally dig up MDTest 1.1. I can't quite tell where the tests came from, but they seem to be maintained by Michel Fortin, the author of the primary PHP Markdown implementation.

But John Gruber created Markdown. He came up with the concept and the initial implementation. He is, in every sense of the word, the parent of Markdown. It's his baby.

As Markdown's "parent", John has a few key responsibilities in shepherding his baby to maturity. Namely, to lead. To set direction. Beyond that initial 2004 push, he's done precious little of either.


This is patently ridiculous. Creating an open source project in 2004 does not set you up with obligations to do anything in 2012 whatsoever. The entitled whining here is so absurd you have to read the whole thing to really get it, but I'll just summarize for you here:

"John Gruber created something. I want that thing to be different. Therefore, John Gruber hates babies."

I wish I was kidding, by the way. He literally included a picture of a baby in the blog post, and the title refers to "responsible open source parenting." He totally implies that Gruber hates babies because he didn't do work that Atwood is clearly capable of doing himself.

GitHub responded to imperfections in Markdown by creating GitHub-flavored Markdown. Some people call that being awesome; some people call it not sucking; everybody who works in open source knows that it's the only socially acceptable response. Harassing people to demand that they do things for you is not socially acceptable in open source. What is socially acceptable is creating solutions for your problems and then sharing them.

Also, as far as I'm concerned, GitHub-flavored Markdown is now the canonical implementation. That means Atwood's whole effort is barking up the wrong tree anyway. But there's a much more fundamental problem here.

Open source is about giving. It's not about obligations. It's not about "you have to create the fixes I want, or you're a bad person." Jeff Atwood does not understand the most fundamental thing about open source.

Less bullying, please. More giving.

Thursday, October 25, 2012

Some Feminist Movies

There's a lot of internet noise about sexism in the tech world.

You might wonder, if you're a dude, what the fuck am I even supposed to do about all this shit, except not be a dick?

One option is to volunteer with an organization like RailsBridge.

Another option is to watch these movies, and think about them.

The Scream series of horror films
There's a staggering amount of feminist film theory surrounding horror films, especially slasher films, with their Final Girl trope. This is a controversial field, but the Scream series is stuffed to the gills with strong women, and its films pass the Bechdel test with flying colors.

Ruby Sparks
A brilliant deconstruction of the Manic Pixie Dream Girl trope. Eternal Sunshine Of The Spotless Mind is popular for similar reasons, but where Eternal Sunshine scores a few snark points by dissing the Manic Pixie Dream Girl trope as cliché, Ruby Sparks completely demolishes the concept, exposing its fundamentally destructive, sexist foundations. It's like the difference between a spitball and a sledgehammer.

Somersault
An indie film from Australia which tells a very relevant coming-of-age story. Lowest popcorn entertainment factor of all the movies listed here, but illuminating.

Wednesday, October 24, 2012

New Tablet Reviews, Summarized After Painfully Extensive Research

Every Review Of The iPad Mini

It's an iPad, but smaller. Five to fifteen hundred additional words.

Every Review Of The Microsoft Surface

Microsoft has made a mostly useless device, built on faulty assumptions, which is nonetheless very interesting to play with for a certain type of curious geek (or would be, if the iPad didn't exist). In other words, they have finally caught up with the Newton. Five to fifteen hundred additional words.

Admittedly, the research was more painful than extensive.

Saturday, October 20, 2012

Damn Inflation

Damn inflation is a problem faced by programmers, and probably many other types of people. I don't mean damn inflation in contrast to regular inflation. I mean inflation in the value of an individual damn.

When you are very new to code, you have to give a damn just to indent it. But once you adopt good coding habits, and acquire skills in coding tools which will indent your code for free, indenting your code is not something you have to give a damn in order to do. You just do it, either way.

Steve Jobs achieved an amazing damn value in his lifetime. By Apple's standards, nobody at Microsoft gave a damn about Unix, typography, CSS, HTML, color schemes, usability, or numerous other things. Microsoft is and was cash-rich, but damn-poor.

Steve Jobs was lucky to be running Apple, because with so much passion invested in so many aspects of technology, it would have been otherwise impossible for him to find a computer worth a damn. This is not because computers today are not amazing; it's because Mr. Jobs had very valuable damns.

This is the same reason it's hard for web developers to find a Twitter client which is worth a damn. If you're a web developer with a background in fine arts and graphic design, it's even worse. And if you're not just a web developer with a visual background, but also one with skills in online marketing, who's studied the research around distraction and productivity, the only way you can measure the effort invested in most Twitter clients is with the picodamn, a very modest unit of measure which normally only sees use when somebody wants to measure how much opinions are worth on Hacker News.

Damn inflation is why you have to be careful what you invest your energy in.

Thursday, October 18, 2012

Mock Bad Code (Because Fractals)

Everybody encounters bad code sometimes, and wonders what to do.

The answer is simple.

You should mock it, in the sense of making fun of it, because it's often funny.

You should also mock it, in the sense of testing its implementation details, because you will need to rewrite it, and you will need to verify that it works exactly the way you think it does. Because it probably doesn't.

The first thing people do to figure out code is read it. In the case of bad code, that's a mistake.

Bad code never does exactly what it says it does. Bad code is hard to read, or wildly misguided, or both.

You only have three ways to find out what code does: read it, test it, or test it in production. Testing code in production is obviously a bad idea. But when code is bad, reading it is also a bad idea.

The worse code gets, the more likely it becomes that you're better off testing the code than reading it. Bad code uses variable names which are incomprehensible; really bad code uses variable names which are inaccurate; truly awful code uses indecipherable variable names to describe an inaccurate model of the business logic.

When it comes time to test bad code, you have to operate on the assumption that whoever wrote it didn't really understand what they were writing. I have seen better programmers than me implementing call/cc while calling it event bubbling, or totally missing the fact that they implemented a Factory pattern when they thought they were cleverly hacking an inheritance tree. I have also seen much worse programmers than me referring to "length" as "lenght" every single time in a code base, not just in comments and variable names, but in class names as well. A Factory pattern does not resemble OOP inheritance, and "lenght" is not the correct spelling of any word in any language I'm aware of.

One easy method of burning out, as a programmer, is reading bad code when you could be testing it. It pollutes your brain and gives you extra work just figuring out the difference between the bad code's terminology and the actual business logic. It is especially frustrating because the work does not pay off. Filling your brain with the bad code's terminology rarely helps you understand anything, and always constitutes unnecessary mental overhead.

It is a wasteful expenditure. You'll be happier if you start with tests or specs written in extremely simple language. Use these to define exactly what the system does.
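
Concretely -- and every name in this sketch is made up, it's just the shape of the thing -- that can be as small as a handful of RSpec examples describing observable behavior in words a human would actually use:

require "rspec/autorun"

# Stand-in for the real legacy code; in practice you'd require the actual
# file instead of defining this here. Every name below is invented.
module LegacyPricing
  def self.total(item_count)
    item_count >= 10 ? item_count * 90 : item_count * 100
  end
end

RSpec.describe "legacy pricing code (whatever it actually does)" do
  it "charges 100 per item on small orders" do
    expect(LegacyPricing.total(1)).to eq(100)
  end

  it "quietly knocks 10% off at ten items or more" do
    # The spec pins down what the code does today, in plain language --
    # not in the bad code's own vocabulary.
    expect(LegacyPricing.total(10)).to eq(900)
  end
end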

Once you have good code which defines the system, it starts making sense to read that code. And once you've read a few solid descriptions of what the code actually does, take aim at the naming and see how it clears things up.

Where you take the specs after this point is a matter of style, but the way you get there is not just with specifications, but also with specifics. It is always better to find out what the system really does, and this goes double if you inherit the code from anybody else.

Code written by other people who understood the logic they were attempting to describe is a rare joy. But bad code written by other people, when it fails to accurately model the business domain, can then only be understood if you have (or write) some kind of glossary which tells you that when the bad code says (for instance) "OOP inheritance," it means "a Factory pattern."

Save yourself the stress.

By the way, that "lenght" thing is a true story. On this project, I persuaded the head of engineering (who was, in my personal opinion, an insane dipshit) to move automated testing from the lowest priority to the highest priority. I wrote integration tests, and I could literally run the test suite ten times and get different results every time. Eventually, as I cleaned the code up, patterns emerged in the test suite's unpredictable counts of which tests passed and which tests failed. It was literally like hacking a Feigenbaum sequence generator and modulating the variable which stands for robustness.

Bit of a tangent, I did that in high school on my graphing calculator.

A Feigenbaum sequence begins as an iterating equation, e.g.:



I don't actually remember what that means, but I copied it from either this book or this one and figured out the code to run it on my graphing calculator.
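
For what it's worth, the iterating equation was presumably the logistic map; in its normalized form, where both the seed x and the parameter i stay between 0 and 1, it looks like

x_{n+1} = 4 i x_n (1 - x_n)

Roughly speaking, i below about 0.75 converges, i between about 0.75 and 0.9 oscillates, and i above that mostly goes chaotic, which lines up with the pictures below.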

This is a graph of the Feigenbaum sequences for values of x and i between 0 and 1.



This is what I tried to draw on my graphing calculator, but its hardware had very little RAM, and its implementation of BASIC did not support lambdas, so I had to settle for drawing individual sequences, while modulating by hand the values of x and i.

An individual sequence could look like this:



If it looks like this, repeating the iterating function with those particular values for x and i zeroes in on one consistent return value.

Or this:



If it looks like this, repeating the iterating function with those particular values for x and i produces an oscillation through multiple return values.

Or this:



If it looks like this, repeating the iterating function with those particular values for x and i produces a very noisy and unpredictable range of values which appears random, but which is more accurately described as chaotic: the mathematical term for situations where apparently random data results from deterministic input.
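
If you want to watch those three regimes without digging up a graphing calculator, here's a quick Ruby sketch of the same idea, assuming the normalized logistic map guessed at above (the parameter values are ballpark, not gospel):

# Iterate x -> 4 * i * x * (1 - x) and keep the tail of the orbit, so you
# can watch it converge, oscillate, or go chaotic.
def orbit(i, x = 0.5, warmup = 200, keep = 8)
  warmup.times { x = 4.0 * i * x * (1.0 - x) }
  keep.times.map { x = 4.0 * i * x * (1.0 - x) }
end

{ "converges"  => 0.70,
  "oscillates" => 0.80,
  "chaotic"    => 0.95 }.each do |label, i|
  puts "i = #{i} (#{label}): " + orbit(i).map { |v| v.round(4) }.join(", ")
end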

When I started, the tests I had for the legacy code I brought under control would, as I said, give me different results every time I ran them. If I had graphed the passing or failure of any given test x against the number of times I ran it i, I would have gotten a graph that looked like this:



Once I got the code under control, a predictable rhythm started to emerge in the mapping of any given test's success or failure x against the number of times I ran it i:



As I refactored the code, I got closer and closer to a situation where the spec's passing or failure x, mapped against the number of times I ran it i, would converge on a single value -- either passing or failing every time -- like this:



You might think it impractical to write a Feigenbaum sequence generator in BASIC, but a) I had a lot of free time in my classes, because I never listened to my teachers, and b) it turned out to be a useful model for fixing extremely unpredictable software.

That's right, motherfuckers. When you're this good, you don't have to make sense.

Entrepreneurs Are Not Necessarily Aristocrats

Republican economic dogma in a nutshell: "In the land of pioneers and self-reliance, the only way we can encourage entrepreneurship is by making sure the ultra-rich can make wildly speculative investments."

Exhibit A

Exhibit B

Exhibit C

Tuesday, October 16, 2012

Stop Breaking The Back Button


Undoubtedly overkill, but the time for sacrificing basic usability to personal cleverness was the 1990s, when the Web was new and its user experience fundamentals were unknown enough to justify experimenting. Breaking the back button two decades later is just shameful, especially now that we have the HTML5 History API.

Sunday, October 14, 2012

A Downside To Crowdsourced "Journalism"






Yes, I'm talking about antirez making a fool of himself and all the resulting noise on Twitter.

Saturday, October 13, 2012

A Refactoring Opportunity Within Rails 3


Rails 3 contains a textbook example of the need for a Replace Method With Method Object refactoring.

Consider this question:

I'm wondering what is the difference between these two methods: ActionView::Helpers::UrlHelper.url_for and ActionController::UrlWriter.url_for?

In addition to two versions of the same method, similar but not identical, the documentation for the ActionView::Helpers method link_to states that link_to accepts the same options which the ActionView::Helpers version of url_for accepts.

(Can you believe newbies find this confusing? What a bunch of morons.)

Anyway, the difference between these two methods with the same name is that the ActionView::Helpers version of url_for accepts a subset of the options which the ActionController::UrlWriter version accepts. link_to also accepts that same unnamed subset.

If only there were a mechanism for capturing this pattern of highly similar methods, where one method's possible parameters are a subset of the other method's possible parameters. I can't imagine how such a mechanism might operate, or what it might be called, were it to exist.

Luckily we can discover it by applying the refactoring I mentioned earlier:

Turn the method into its own object so that all the local variables become fields on that object. You can then decompose the method into other methods on the same object.

In other words, whenever the code requires a set of options in more than one place, you can make the code more concise by capturing that set of options in an object. You could, for example, name the object Url or (if you have a fondness for Legend Of Zelda games) Link.
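
Sketched very roughly -- none of these option names or methods come from actual Rails code, they're just the shape such an object could take:

# A rough sketch only; the option lists are invented, not Rails internals.
class Url
  # The subset of options the view-level url_for (and link_to) understand.
  VIEW_OPTIONS = [:controller, :action, :id, :anchor, :only_path]

  def initialize(options = {})
    @options = options
  end

  # Everything, for the controller-level URL writer.
  def full_options
    @options
  end

  # Just the shared subset, for the view helpers.
  def view_options
    @options.select { |key, _| VIEW_OPTIONS.include?(key) }
  end
end

# With this in place, "link_to accepts the same options as url_for" stops
# being a sentence in the documentation and becomes a fact in the code:
#
#   url = Url.new(controller: "posts", action: "show", id: 42, host: "example.com")
#   controller-side url_for would take url.full_options,
#   view-side url_for and link_to would take url.view_options.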

Apologies to the entire open source community, because I'm raising this on my blog rather than participating in the open source development process around Rails, especially since it's entirely possible this is already fixed in Rails 4. However, I've often gotten the impression that, for me personally, specific key members of that process were unpleasant to interact with, and I'm not interested in participating in a process which requires soliciting their approval. I do however hope that my insight here is useful to someone, somewhere.

Trademarks used here are the property of their respective owners, and appear without permission, but with the full, total, and obvious protection of fair use doctrine in trademark law. The web comic containing these trademarks operates as social commentary on open source culture, represents my opinions and subjective impressions only, operates partly as overstatement for the sake of contrast, should be interpreted with a certain degree of irony, and makes absolutely no claims of factual or historical accuracy (or indeed inaccuracy) whatsoever.

The Dark Side Of Trolling

Anyone who enjoys stirring up trouble on the Internet should give this a read:

When it comes to mods, the political model of Reddit is not so much a vast digital democracy, as it's often framed by fans and users, as online feudalism. Moderators like Violentacrez are given absolute control over their turf in exchange for keeping the kingdom of Reddit strong. Moderators become more or less powerful in direct relation to the number and popularity of the subreddits they moderate, so they try to take over other subreddits to boost their profile in the community. Inevitably, Reddit's administrators develop relationships with the most influential moderators. Like feuding medieval lords vying for the king's favor, moderators form alliances or wage epic flame wars over power struggles.

This is how Violentacrez, Reddit's creepiest user, also became its most powerful.


My rule for trolling is that it has to deliver insights or entertainment equal to or greater than the attention it grabs. Of course, it's impossible to calculate that with actual precision, and there's another important rule: don't be evil. I think there are a lot of trolls who've never considered this one.

Thursday, October 11, 2012

Why Does Twitter Suck? Believe It Or Not, It's The Fault Of Republicans

Twitter's deteriorating a great deal these days, both in terms of its UI/UX and in terms of the misguided choices it makes in defining its business model.

"I think of the company as a technology company that is in the media business," [Twitter's CEO] told a room full of editors and reporters. "Our business is an advertising business, we don't sell technology."

This is driven by the economics of venture capital; a modest success, in terms of starting a software business, is an abject failure in VC terms. Thus Twitter never even condescended to compete with Yammer -- a profitable, customer-charging, but otherwise highly similar business -- opting instead to aim higher, in financial terms, and aspire to becoming Crappy Television 2.0. An enterprise service business which sold for $1.2 billion is just not worth it from a VC point of view.

And why does social media revolve around startups defined under VC terms?

Because Republican policies over the past few decades have overwhelmingly favored the superwealthy, specifically with capital gains tax law, which makes it inevitable that most people with very large amounts of money will invest it in risk-seeking, speculative ways.

This is also why it's incredibly easy to find work if you build tech startups, but very difficult otherwise: Republican economic policies have inflated the profits of risk-seeking, speculative investment so much that it's now a very over-represented form of investment.

In a different economic environment, Twitter -- which began as a very cheap project -- might have made different strategic decisions. If the playing field were not tilted so steeply in favor of capital gains, a more sober economic environment would favor smaller investments, and products which delivered more immediate economic impact.

Twitter sucks because of the Republicans.

Tuesday, October 9, 2012

When Pixar Makes A Horror Movie, The World Will Be A Better Place

I want to see Pixar make a horror movie, but it will be a long time before that ever happens. Here in America, we assume any animated film is for children. This horrible belief not only blocks me from my Pixar horror movie, it also marginalizes anime.

I hate this so much I want to destroy it. It is an insult to artists everywhere: "Your drawings have no power to terrify. Your drawings have no power to touch hearts or provoke thoughts. Your drawings can only amuse children."

I believe there is a link between this horrible cultural norm and another, equally horrible American cultural norm: anti-intellectualism.

Compare two images of a Baltimore oriole:





One of these images presents a literal recording of an oriole; the other presents an idea of an oriole.

One of these images could appear in a serious American film intended for adults. One of them could not.

Our cultural norm, that animated films are for children, sends a message: "A literal recording can be a serious thing, but an idea can only ever be trivial and harmless."

This is, ironically, a very harmful idea.

You Fuckers Are Adorable












Monday, October 8, 2012

I Think This Would Be Better

Exhibit A



Exhibit B



(About 2.5 minutes in.)

Exhibit C


Stylization And Abbreviation In Asian Cinema (And Elsewhere)

Kung fu movies use editing and cinematography which are stylized and abbreviated, rather than literal. This is especially evident in fight scenes.

Anime uses character designs which are stylized and abbreviated, rather than literal.

I don't see that a lot in Western film (by which I mean all Western film, but in practice, mostly French, American, and British film). I see it a lot in Western comics, video games, and user interface design, but not in film.

Thursday, October 4, 2012

How Do I Find Out If rake db:drop Failed?

bundle exec rake db:drop 2>&1 >/dev/null | grep 'skipping.'

The order of the redirects is the trick: the pipe grabs stdout first, 2>&1 then points stderr at that same pipe, and >/dev/null throws the regular stdout away, so grep only ever sees what rake prints to stderr -- which is where the skip message shows up.

Monday, October 1, 2012

LSD Effect In Experimental Video

I threw this together as an experiment, and I'm happy how it turned out. Thanks of course are due the actress in this, Kristin McCoy.

Thinking Out Loud: Clients For Twitter And For GitHub Notifications

Twitter's too useful not to use at all, but I can't use it as intended, either. I get a lot of unsolicited communication on Twitter. I also get a lot of unwanted communication on Twitter. The first major problem of Twitter usability is differentiating unsolicited communication from unwanted communication.

A few years ago I came up with a plan for fixing this: a Twitter summarizer which, in addition to screening out tweets from people I don't follow, would also recognize highly similar tweets and summarize them, making each one optionally visible but hidden by default.

For instance, if 100 people you follow all tweet a link to the same blog post, all you really need to know is the link and how many people retweeted it. You might also want to know some of the specific individuals in that crowd, but you absolutely don't need to see that same link 100 separate times. This feature would be even more useful in eliminating the annoyance inherent to seeing the same joke or witticism retweeted or rephrased countless times.
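
As a sketch, with a made-up Tweet struct standing in for whatever the API actually hands you:

Tweet = Struct.new(:author, :text, :link)

# Collapse tweets that share a link into one line: the link, how many
# people posted it, and a few of their names.
def summarize(tweets)
  tweets.group_by(&:link).map do |link, group|
    authors = group.map(&:author).uniq
    "#{link} (x#{authors.size}: #{authors.first(3).join(', ')})"
  end
end

tweets = [
  Tweet.new("alice", "great post", "http://example.com/post"),
  Tweet.new("bob",   "must read",  "http://example.com/post"),
  Tweet.new("carol", "lol",        "http://example.com/cat.gif"),
]

puts summarize(tweets)
# http://example.com/post (x2: alice, bob)
# http://example.com/cat.gif (x1: carol)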

That feature is definitely worth building. But banning tweets from people you don't follow -- that's a very blunt instrument. A superior alternative might be segregating people you don't follow in a kind of quarantine, or subjecting them to negativity screening -- i.e., filter those tweets through sentiment analysis, and only show tweets from strangers when those tweets meet or exceed minimum levels of politeness or positivity.
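
Here's the blunt version of that as a sketch; the scoring method is a toy stand-in for real sentiment analysis, and it reuses the Tweet struct from the sketch above:

FOLLOWING = ["alice", "bob"]

# Toy positivity score: count friendly words, subtract hostile ones.
# A real client would swap in actual sentiment analysis here.
def positivity(text)
  text.scan(/thanks|great|love|nice/i).size -
    text.scan(/hate|idiot|garbage|worst/i).size
end

# Friends always get through; strangers have to clear a politeness bar.
def visible?(tweet, threshold = 1)
  return true if FOLLOWING.include?(tweet.author)
  positivity(tweet.text) >= threshold
end

visible?(Tweet.new("dave", "great talk, thanks!", nil))   # => true
visible?(Tweet.new("erin", "you are the worst", nil))     # => false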

When your friends are mad at you, you need to know about it. When some random Internet dickbag has a grudge against you, the absolute most information you might need is a temperature-like ambient Internet hate meter.



If the hate gets truly severe, you might want to take a peek to find out what people are so worked up about, but nine times out of ten it's not worth any attention at all.

Basically, most people who design Twitter clients work on the assumption that they're building windows onto a stream of data you want to watch. I think the best way to design a Twitter client is to build a robot receptionist.

Much of the above applies to GitHub notifications, but there are special considerations for GitHub. First of all, if I'm cc:ed in a GitHub notification, that notification does not currently receive any special highlighting within GitHub's UI. I have many many times missed GitHub notifications intended for me personally due to the overwhelming volume of other GitHub notifications in the same project.

Second, GitHub allows you to turn off notifications by project, but this is a coarse-grained solution. I'm guessing plenty of people out there need to know every single thing about one branch of a project and nothing at all, ever, about another branch of a project. You have access to project, branch, message text, filenames, dates, and languages; if I'm only interested in JavaScript notifications for a given project, or if I'm only interested in notifications on X branch but not Y branch, or I want to see messages which mention me by name in a special prioritized box, this all should be trivial information to obtain, and trivial UI to implement.

Imagine for the sake of argument a GitHub notifications API, with all this data contained in JSON objects. You hit the API, you pull your notifications, and then your lovely little robot butler reads through all these notifications, searching for indicators that you are likely to give a shit.
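
In that imagined world the robot butler is maybe fifteen lines of Ruby. Every field name below, and the notifications.json file, is an assumption about what such an API would hand you, not a description of GitHub's actual one:

require "json"

# Decide how much you're likely to give a shit about one notification.
# The fields (:mentions, :branch, :language) are guesses, not real API fields.
def priority(note, me, branch = nil, language = nil)
  return :urgent if note[:mentions].to_a.include?(me)
  return :ignore if branch && note[:branch] != branch
  return :ignore if language && note[:language] != language
  :normal
end

# notifications.json stands in for whatever the API would return.
notifications = JSON.parse(File.read("notifications.json"), symbolize_names: true)

buckets = notifications.group_by do |note|
  priority(note, "yourusername", "master", "JavaScript")
end

buckets.each do |bucket, notes|
  puts "#{bucket}: #{notes.size} notifications"
end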

Perhaps it's difficult to accept this, but the appropriate response to nearly all Internet communication is "fuck off and stop bothering me." This is an unpleasant thing to say in real life, which is why you should instead have software doing it for you.

In every form of Internet communication, the number of messages you give a shit about is always much smaller than the total number of messages you receive. Software design for messaging clients virtually never acknowledges this fundamental and consistent reality.

The cult of inbox zero is a ship of fools. It is the information-age equivalent of a slave religion, where you glorify the most obedient slave to an insane master. You should not get a high five and a merit badge every time you get to a state where you can calmly and intelligently choose what to do next; being able to calmly and intelligently choose what to do next should be your default state.

People really need to design messaging systems around the obvious reality that give-a-shit is a precious and rare treasure. For some insane reason, this is not what we do; most software is designed with the utterly bizarre assumption that all incoming communication receives a standard, uniform, and equal subdivision of give-a-shit. This is why your email inbox looks like Hacker News, instead of Hacker Newspaper.



The newspaper model of information design uses typography not just to maximize legibility but also, and more importantly, to communicate hierarchy.



It's very, very easy to infer from this newspaper design the relative priority of the stories it presents. Twitter, GitHub notifications, and email should all look like this.



Update: the option to route different organizations' notifications to different email addresses in the new GitHub notifications system is definitely awesome.