The Apple Car 

When Apple made a phone, it turned out it wasn’t really competing in the handset business; it was competing for the next dominant personal computing platform. The more I think about an Apple car, the more I think that it might be the basis of their future “computing environment”: a space that is completely aware of and responsive to its occupant(s). In that sense it might be more of a long-term competitor to the Amazon Echo (and whatever Android variant Google is pitching at the same space) than to Tesla’s cars.

From this perspective, maybe the thing that’s kept the Apple TV on hold for so long is that they were trying to go down this road, but they kept failing to pull it off (to their standards) in the living room. Perhaps they learned a few things along the way.

Just a thought.


Three points for clarification:

  • I’m actually not claiming that an “Apple Personal Space” is either the focus of any initial product or any concrete long-term plan. The iPhone almost ran the iPod operating system, and it doesn’t seem like Steve envisioned the success of the App Store (and thus iOS as a platform). I’m pointing out that Tesla focuses on cars, Uber focuses on transportation, and Google focuses on technology, while Apple focuses on experiences. If they trap a user in a physical bubble, it’s in the company’s DNA to turn that bubble into the world’s most carefully studied and controlled experience.

  • Obviously the iPhone upended the incumbent handset industry, so Nokia certainly saw Apple as a competitor. But I doubt Apple ever viewed Nokia that way, because they never saw Nokia as competing in the personal computing business. The competitors were Microsoft and Google: Apple was after the growth of a new industry; the sales in an existing one were collateral damage.

  • If Apple’s first offering is disruptive, it will most likely be due to innovations they bring to the auto manufacturing process (innovations which would be relatively invisible to consumers). Any truly compelling mass-market “computing environment” would evolve iteratively over years.

On Interviews and Carpenters 

The old “What if they hired carpenters the way they hire programmers?” joke/commentary didn’t sit right with me the first time I read it, and after stumbling across it again I now see why. Among other things:

It’s a complaint without a solution.

I’ve never met anyone in the software industry who is happy with the hiring process, and that includes everyone who’s designed the process. Nobody seems to have a solution to separating the potential stars from the mehs, and anyone who claims they do either doesn’t have enough perspective to understand the difficulty of the problem (young interviewers who have been trained in one particular hiring style seem to be blessed with the arrogance of blind faith), or they’ve perfected the art of hiring the mediocre (a sufficiently rigorous process can probably rule out almost all the disastrous hires, but will likely also lose a few stars…and it’s finding the stars that is the problem).

Pouting that interviews suck without suggesting any improvements is just childish, and doubly so if you’re complaining not about the bizarre “puzzle question” or “culture fit” interviews, but about being questioned on knowledge and experience. Technical interviews can be annoying and they can be done badly, but I’d still much rather work in an industry that does tech interviews than one forced to rely solely on CV reviews and personality-driven poking at “soft skills”.

Engineers aren’t carpenters.

There’s always a terrific sleight of hand going on when software developers try to draw analogies to other fields. Blue-collar credentials and being treated like a unique, creative, and highly-paid professional just aren’t compatible. “Programmers” are the architects and structural engineers who design the buildings; they get programming languages and frameworks and IDEs to hammer the nails. I have no doubt that the industry is full of coders banging out one CRUD app after another, but their work bears a lot more relation to architects customizing a house design to a particular site (or, a better analogy, 19th-century railroad engineers applying the standard truss designs to design bridge after bridge) than it does to contractors framing house after house based on the designs they’re handed. The exceptions—coders who really want nothing more than to follow some formula and take no responsibility for the result—are exactly who interviewers are trying to weed out.

It’s disrespectful to carpenters.

Of course, there are carpenters who are creative craftsmen of the first order. Those aren’t the guys you’re going to bend over backwards to hire to frame your walls. The whole story seems to be built on the premise that the only skill a carpenter has is the ability to drive a nail straight, making any notion of an “interview” farcical. (Returning to the first point, I suppose the implication is that driving a nail is the fizzbuzz of carpentry.)

The interviewee is worse than the interviewer.

Let’s just cover the first few questions:

So, you’re a carpenter, are you? How long have you been doing it?

If the only way you can describe your work is “I’m a programmer. For ten years now.” then I just don’t believe you. Yes, it would be friendlier if the interviewer led a bit with “What kind of work have you been doing?” or “Tell me about some of your favorite projects,” but you’ve got to meet a weak interviewer in the middle. The main premise of this complaint about programming interviews is that a programmer is a programmer is a programmer, and the details don’t matter, and that’s straight-up bullshit. Have you worked on high-performance systems? Distributed applications? User interfaces? Large-scale software? There’s a hell of a difference between a framer, a cabinet-maker, and a furniture-maker. As an interviewer I’m open to the idea that someone good at any one of these probably has great potential for any of the others, but if you’ve got nothing more to say about your career than that you’ve done general things in a general sort of way, you can’t exactly blame me for taking my own direction on what details I’m going to dig into.

First of all, we’re working in a subdivision building a lot of brown houses. Have you built a lot of brown houses before?

I don’t see a lot of brown paint in the world. Some, but not much. There is, however, a lot of brown stain, and brown shingling, and brown brick. And all those kinds of brown would seem to be of major interest to a carpenter: if something is being stained instead of painted then I’d think that would affect the choice of wood. Maybe even how it’s joined. I don’t know; I’m not a carpenter.

Questions like this are exactly how a good interviewer separates a blinkered newbie from an expert with perspective. If you’re building a software library that will be called by a UI, then responsiveness matters. If you’re writing an order processing system open to the public, then you need to consider denial-of-service issues. If the overall software system will be distributed, then the architecture needs to take rollout into consideration. Shrugging off context is only a professional qualification for field-goal kickers.

What about walnut? Have you worked much with walnut?

In this hypothetical, we’re talking about a job building houses. Houses are most commonly built using platform framing of stud walls made from spruce, pine, or fir. Softwoods. Relatively cheap. Walnut is an expensive hardwood. I don’t think it’s used much (if at all) for stud wall construction, but it is occasionally used for post-and-beam construction, which involves either metal brackets or traditional cut joinery, and for nonstructural finishings. Any real carpenter would know the differences between varieties of wood, between the two major types of wood construction, and between the different roles wood can play in a project. And he’d definitely know which projects he’d worked on that involved which.

If a programmer walked into an interview and gave answers this evasive about how many projects he’d done in Java, he’d be an obvious no-hire. Not having certain experience is one thing; not even knowing what experience you have is another matter entirely. We can argue about the extent to which an employer should balance hiring for existing skills and hiring for potential to learn, but you can’t claim the latter unless you can point to prior success at learning new skills.

The punchline is not a joke.

The punchline is that the interviewer hires a car salesman who’d sold brown cars with walnut interiors. I’m with the interviewer on this one. Our hypothetical carpenter was effectively arguing that even if he’d only ever hammered together pine stud walls he could easily learn to do finish carpentry with walnut for a client very particular about his browns. If learning this stuff is so easy, then I’d rather hire someone who understands what the goal of finish carpentry is. And ideally someone who showed some interest in the project and the skills required to do it, not just the job.

The whole anecdote smacks of entitlement.

Given all of the above, the true subtext of this “joke” is that calling yourself a programmer entitles you to a job. But the really galling part is that the “calling yourself a programmer” bit needn’t even require relevant programming skills or experience. It’s effectively a declaration that “programmers” are a different class of people in possession of some unquantifiable gift, and it’s beneath them to justify their value.

It’s “I’m smart; pay me” brattiness.

2015 Miss Universe National Costumes 

The costumes may change, but my 2011 commentary remains remarkably relevant. No need for a full play-by-play; we can skip straight to the awards.

Best national theme winner: Germany

A wall!

Miss Germany national costume

Runner-up: Canada

Slightly overplayed the hockey theme by turning her vagina into the goal…

Miss Canada national costume

Loser: France

Throwing on a beret does not a national theme make.

Miss France national costume

Best non-national theme winner: Venezuela

Did not expect anyone to be able to pull off a “tree” theme this well.

Miss Venezuela national costume

Loser: Gabon

This is what I expected a tree theme to look like.

Miss Gabon national costume

Dishonorable mention: Tanzania

Rope? That’s your theme?

Miss Tanzania national costume

Disney princess audition winner: Spain

Miss Spain national costume

Runner-up: Ethiopia

Miss Ethiopia national costume

Disney with scissors winner: Lithuania

Miss Lithuania national costume

“Is that today?” winner, who put her outfit together from what the other girls could spare: Kosovo

Miss Kosovo national costume

Runner-up: Colombia

Miss Colombia national costume

Average-looking lady in supermarket “winner”: New Zealand

Miss New Zealand national costume

Runner-up: Nigeria

Miss Nigeria national costume

Honorable mention: Slovenia

Lost the award with those shredded arms.

Miss Slovenia national costume

Hat-based hobbling winner: Peru

Miss Peru national costume

“I don’t have the confidence to pull this off” winner: Jamaica

Miss Jamaica national costume

Runner-up: Nicaragua

Miss Nicaragua national costume

Of course, there are a few categories that certain countries continue to dominate:

Pedophile winner: still Croatia

Miss Croatia national costume

In the running: St. Lucia

Disqualified for actually turning me on. And for hedging her bets by qualifying for the hat-hobbling category.

Miss St. Lucia national costume

Loser: Mauritius

Miss Mauritius national costume

“Doesn’t realise she’s being graded on this” winner: still Curacao

Miss Curacao national costume

Unfair advantage winner: still Greece

Miss Greece national costume

“Pretty sure your national costume is actually a woolen jumper” winner: still Ireland

Miss Ireland national costume

“National character goes back no farther than the mid-oughts” winner: still Serbia

Could we be witnessing the start of a generation-long leadup to contention in the hat-hobbling category? Serbia plays the long game…

Miss Serbia national costume

No national character winner: still Belgium

Any country on earth could have gone with this. And lack of arms doesn’t count as character.

Miss Belgium national costume

I’m forced to admit that again this year, there are a few outfits I actually don’t mind:

Actually kind of nice winner: Haiti

Overall hotness trumps the cheesy leaves.

Miss Haiti national costume

Runner-up: India

More nudity going on here than you notice at first glance.

Miss India national costume

Honorable mention: Kazakhstan

Beautiful fashion-wise. A little more skin next year and you’ve got a chance, Kazakhstan.

Miss Kazakhstan national costume

Finally, we have a couple of new awards for 2015:

“They have black people?” winner: Switzerland

Miss Switzerland national costume

Runner-up: Singapore

Miss Singapore national costume

“What’s with the rabbit?” winner: Hungary

Miss Hungary national costume

Fallacy of the Single Cause 

There’s a specific form of logical fallacy or cognitive bias that I’ve never seen explicitly listed in collections of such fallacies or biases. It is related to the “Fallacy of False Cause” and to the “Illusion of Control” bias. I call it the fallacy of causation, or the fallacy of the single cause.

I don’t think we’re wired very well to reason about outcomes that result from many different inputs. My experience is that most people have a natural intuition that every event can be traced back to a prior event that caused it. This is even seriously proffered as a self-evident axiom of our reality: the “there is no effect without a cause; there is no creation without a creator” trope is a standard justification for creationist stonewalling.1

What is notable is that we are biased to think in terms of a single prior event as a cause. When there is a scandal or disaster we immediately try to find a villain. Inevitably the media seizes upon a single person, or a cohesive group all of whom are described as conspiring together to cause the event. Blame seldom (if ever) falls on multiple completely independent villains: the finger should point in one direction and one direction only.

In addition to this, we seem to naturally want to mark people as either responsible or not responsible for some outcome, with little space for gradations of responsibility. This is used just as often for exoneration as for vilification: disasters involving bureaucracies are often chalked up to “systemic problems”, with every actor claiming that because they weren’t completely responsible for the disaster they can’t take the blame.

It should go without saying that such an intuitive model is fundamentally wrong—every event has many causes, and responsibility for an outcome is shared by many people whose choices led to that outcome—but that doesn’t make it any less appealing. Religion, our legal system, and Freudian analysis all seem to be built upon the assumption of single causes. In many cases I’m sure the assumption of single causes is a reasonable simplification, but such simplifications become less tenable for outcomes dependent on complex interactions between multiple actors. As society has become more complex, outcomes only seem more dependent upon more complex interactions between more actors.

Job Creators

The meme that has brought this to mind lately is use of the euphemism “job creators” in place of “rich people”. The idea seems to be that someone making a million dollars a year (the modern definition of “millionaire”) is likely to hire a maid, a nanny, a personal assistant, etc.;2 a middle-class family making under a hundred thousand dollars a year is unlikely to have any full-time employees. Giving a millionaire an extra few hundred thousand a year might mean they hire a new chauffeur, and that certainly feels like “creating a job”. Giving a few hundred families an extra thousand a year would likely mean only that they have a few more meals a year out at a restaurant; the few minutes of work each such meal creates for the waiter, busboy, and cook don’t have quite the same resonance. A single rich person gets to claim the title of job creator all on their own; middle-class families earn the title as a group, and thus nobody claims it at all.

(Note that one need not take a stance on trickle-down economics to note the asymmetry of the “job creator” label. All I claim is that extra middle-class income clearly creates some jobs; whether the effect is larger or smaller than the same total sum distributed to wealthier families is a question for economists…although I can’t pretend I don’t have my own expectations on the matter.)

  1. My main problem with creationism is not religious, but intellectual. Saying that complex intelligent creatures were created by another complex intelligent creature is no more interesting than explaining that people give birth to other people. Creationism is thus not an intellectual theory; it’s an excuse to stop thinking about the issue of origin. 

  2. I’m explicitly addressing personal income here. While the profits of small businesses can be taxed at the same rates as individual income, the term “job creators” is being applied to individuals, not businesses. (Given that employee salaries are not taxed as profit, any connection between tax on profits and hiring by businesses is much less direct.) 

Why the iPad Succeeds 

MG Siegler recently opined that the reason Android is having some success against the iPhone but little against the iPad is because of support from mobile carriers. John Gruber linked to Siegler’s piece, adding:

My hypothesis has long been that Android has very little traction in and of itself. What has traction is the traditional pattern where customers go to their existing carrier’s retail store to buy a new phone, listen to the recommendations of the sales staff, and buy one of the recommended phones… There is no such traction for the idea of going into your phone carrier store and buying a computer. That’s why carrier-subsidized netbooks didn’t take off, and that’s why carrier-subsidized Android tablets haven’t either.

I don’t disagree with the idea that the sales dynamics for phones and tablets are different, but I’d come at it from a different angle. The reason the iPad is so dominant in the tablet space is precisely because of the biggest criticism it got before launch. Nobody needs one.

Everyone these days needs a mobile phone. In fact, most people need one that can also do email and Facebook as well. This is true of people who don’t have much interest in phones, or technology in general. It’s even true of people who actively dislike their phones. Having a smart phone is the price of living in modern society.

Nobody needs a tablet. You’re not excluded from modern society if you don’t have one. If you actively dislike using a tablet, then you won’t use one.

When I was interviewing at Apple, there was one thing that one of the senior engineers in the iOS group said to me that I’ll never forget. We were talking about how management and engineering work together, and he was telling me that sometimes it goes wrong:

…so we were working on what became MobileMe, and management came up with a set of features, and everyone knew that they could be implemented, and we did our best to implement them. But when we gave what we built to users, they weren’t delighted. It was a problem. Management wasn’t happy, and engineering wasn’t proud of the product, and everyone was trying to figure out what went wrong…

I don’t think that phrase—“[users] weren’t delighted”—would have come out of the mouth of an engineer at any other company. It wasn’t meant as a euphemism; his entire point was that the product was good but not great, and that the company couldn’t figure out how a network synchronization system could delight users, no matter how well it was implemented. When I mentioned his phrasing later this engineer claimed he hadn’t realized he’d said those words. He didn’t even see why they might sound odd. In his world, success was measured by user delight. Everything else was a side note.

The point is that more than any other technology product I’ve ever seen, the success of the iPad stems from user delight. Other products from PCs to laptops to digital cameras to mobile phones have solid practical justifications behind them. I have no doubt that we’ll get there with tablets—someday a tablet will be as indispensable as a laptop—but that’s not the situation today.

I’ve only played with display models of Android tablets in stores for a few minutes at a time, and I haven’t tried the newest batch running the latest software version, but the big difference between Android and iOS for me is that Android has never made me grin. Even if an Android tablet were as good for web browsing and email and reading as an iPad, I still don’t think it would crack today’s market. If you want somebody to slap down $500 for a gadget they know they don’t really need, you have to make them grin.


I think this model also offers a useful way to look at Amazon’s Kindle. I don’t think the hardware or software have put many grins on many faces, but the e-reader market has had practical selling points from the start. Amazon isn’t going after user delight; they’re just trying to suck less.

Science and Neutrinos 

Is there any better demonstration of scientific culture, and the ways it differs from other fields, than the faster-than-light neutrino flap? Consider:

  1. Some scientists notice a pattern in their data that looks a little odd.

  2. They are unable to explain the pattern using existing laws of physics, and come up with a new theory to model the pattern. The new theory contradicts some of the foundations of modern physics.

  3. This new theory is shared with the scientific community.

  4. Other scientists are extremely skeptical of the new result, but don’t dismiss it out of hand.

  5. The community considers how the experiment could be replicated, what weaknesses and sources of error exist, and how those issues could be addressed with further experiments.

  6. The follow-ups demonstrate that the original data was probably flawed, and identify the likely cause.

  7. Experimentalists learn a little bit more about avoiding similar sources of error in the future.

This is precisely the process that happens every day in scientific research. The ideological commitments, grandstanding, and rhetoric that are the hallmarks of both political debate and the humanities are the exception in the sciences, not the norm.

The Privacy Advantage 

Just a couple of quick thoughts after reading Sven Schmidt’s post about how to use S/MIME on iOS, and then getting and installing a free certificate for my own personal email:

  1. My biggest motivator for using encrypted email isn’t privacy (or paranoia); it’s politics. The TSA and other agencies consider any attempt to retain privacy strong evidence of terrorist activities. By using encryption—particularly for completely mundane correspondence—you strengthen the culture of privacy among honest citizens. (I’m not suggesting that the case for privacy is entirely clear-cut and won’t dwell on the complexities of the arguments here. Suffice it to say that I believe protection from unreasonable search and seizure is one of the things the US Constitution did get right.)

  2. If Apple really does consider itself in competition with Google, then privacy tools are weapons Google can’t defend against. Gmail loses all value to Google if all its messages are encrypted such that only the sender and receiver can read them. Apple Mail loses nothing if encrypted mail becomes the norm…in fact pervasive use of certificates, keychains, and encryption all increase people’s reliance on personal computing devices at the expense of stateless web interfaces.
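The reason an intermediary like Gmail is locked out entirely is a basic property of public-key cryptography: the two endpoints can establish a shared secret without ever sending anything that would let an observer reconstruct it. S/MIME itself builds on certificates and signed/encrypted MIME parts, so the following is only a toy illustration of that underlying property—a minimal Diffie-Hellman sketch with illustrative parameters, not how S/MIME actually exchanges keys:

```python
import secrets

# Toy Diffie-Hellman exchange. Everything an eavesdropper sees (p, g, A, B)
# is public; only Alice and Bob can compute the shared key, because each
# combines the other's public value with a secret that never leaves their machine.
p = 2**127 - 1  # a prime modulus (illustrative; real systems use vetted groups)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 2  # Alice's private exponent
b = secrets.randbelow(p - 2) + 2  # Bob's private exponent

A = pow(g, a, p)  # Alice's public value, sent in the clear
B = pow(g, b, p)  # Bob's public value, sent in the clear

shared_alice = pow(B, a, p)  # Alice: Bob's public value ^ her secret
shared_bob = pow(A, b, p)    # Bob: Alice's public value ^ his secret

assert shared_alice == shared_bob  # both endpoints hold the same key
```

A mail provider sitting in the middle sees only `A` and `B`; recovering the shared key from those requires solving the discrete logarithm problem, which is exactly why end-to-end encrypted mail carries no advertising value.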

Translating Climate Op-Ed 

A few days ago, the Wall Street Journal published an op-ed written by a collection of scientists claiming distortion of science in support of climate alarmism. I don’t necessarily agree with everything they wrote, but their central point seemed quite sensible: whether or not “drastic actions on global warming are needed” is not something on which all scientists agree. I’d go farther and say that it’s clearly not even a scientific question; it involves a great deal of politics (i.e. how do we balance different values as a society) and economics (how will various types of “drastic actions” affect our ability to address these different values). But the main thrust of the letter is that a climate change orthodoxy is being imposed upon the scientific community to support this political stance. The letter provides a few examples.

The Journal has since published a response, which I will attempt to translate for ease of comprehension:

Do you consult your dentist about your heart condition? In science, as in any area, reputations are based on knowledge and expertise in a field and on published, peer-reviewed work. If you need surgery, you want a highly experienced expert in the field who has done a large number of the proposed operations.

Science is based on appeal to authority.

But all those signature drives that claim to establish a “consensus” on climate change by collecting names from anyone with a higher degree? Those still count.

You published “No Need to Panic About Global Warming” (op-ed, Jan. 27) on climate change by the climate-science equivalent of dentists practicing cardiology.

Science is also based on ad-hominem attacks.

While accomplished in their own fields, most of these authors have no expertise in climate science.

The only people allowed to comment on “climate science” are those who publish papers supporting drastic action on global warming. Neither physicists nor meteorologists may comment. Nor may statisticians, geologists, or chemists.

The few authors who have such expertise are known to have extreme views that are out of step with nearly every other climate expert.

Nor may anyone who fails to bow to the climate change orthodoxy. There is an overwhelming consensus among those who agree with that consensus.

This happens in nearly every field of science. For example, there is a retrovirus expert who does not accept that HIV causes AIDS. And it is instructive to recall that a few scientists continued to state that smoking did not cause cancer, long after that was settled science.

There have been cases when the prevailing interpretation of the data has turned out to be right, and those who questioned it were wrong.

Climate experts know that the long-term warming trend has not abated in the past decade.

The warming trend has not become less intense or widespread in any way whatsoever.

In fact, it was the warmest decade on record.

We don’t know what a warming trend is.

Observations show unequivocally that our planet is getting hotter. And computer models have recently shown…

We created an imaginary world where the following is true:

…that during periods when there is a smaller increase of surface temperatures…

The warming trend has become less intense in some areas. Or less widespread but just as intense. Or less intense and less widespread.

…warming is occurring elsewhere in the climate system, typically in the deep ocean. Such periods are a relatively common climate phenomenon, are consistent with our physical understanding of how the climate system works, and certainly do not invalidate our understanding of human-induced warming or the models used to simulate that warming.

The orthodoxy is that global warming is happening even when you can’t see it in surface temperatures. Pointing out that you can’t see warming in surface temperatures will be interpreted as a denial of this orthodoxy.

Thus, climate experts also know what one of us, Kevin Trenberth, actually meant by the out-of-context, misrepresented quote used in the op-ed. Mr. Trenberth was lamenting the inadequacy of observing systems to fully monitor warming trends in the deep ocean and other aspects of the short-term variations that always occur, together with the long-term human-induced warming trend.

There is data we don’t have, and we wish we had it. But we don’t need it because we know exactly what it is. It’s “data” in the same sense that a computer program is the planet Earth.

The National Academy of Sciences of the U.S. (set up by President Abraham Lincoln to advise on scientific issues), as well as major national academies of science around the world and every other authoritative body of scientists active in climate research have stated that the science is clear:

The role of scientific bodies is to craft simple nuance-free statements on behalf of their members. Please ignore that the op-ed to which we are responding opened with Nobel Prize-winning physicist Ivar Giaever rejecting such a statement (and resigning from the relevant society in protest).

The world is heating up and humans are primarily responsible.

We are equally certain of global temperature trends and their precise causes.

Impacts are already apparent and will increase. Reducing future impacts will require significant reductions in emissions of heat-trapping gases.

“Apparent”, “increase”, and “significant” may be interpreted by politicians as required.

Research shows that more than 97% of scientists actively publishing in the field agree that climate change is real and human caused.

Research shows that 97% of papers published by climate-change research journals do not go out of their way to claim that climate-change research is unimportant.

It would be an act of recklessness for any political leader to disregard the weight of evidence and ignore the enormous risks that climate change clearly poses.

Anyone who disagrees with our values or economic priorities is reckless.

In addition, there is very clear evidence that investing in the transition to a low-carbon economy will not only allow the world to avoid the worst risks of climate change, but could also drive decades of economic growth.

Climate scientists are well qualified to make macroeconomic predictions.

2012 Predictions 

My predictions for 2011 were my worst yet. Like last year, I don’t have many unique insights into the next twelve months. Unlike last year, however, I’m not going to try quite so hard to pretend I do. Here are six modest predictions for 2012:

1. Apple releases iPad 3

I feel almost silly making this prediction since all the rumor sites are already treating these details as a given, but I’ll repeat them anyway, partly because I very often don’t believe many of the things spouted by rumor sites: the iPad 3 will be released around March or April, and the major new feature will be a retina display (i.e. double the resolution of the current crop of iPads). I don’t have any real prediction on whether LTE support will be available, but I’d bet slightly against it.

2. Price of gold way down

This is my major financial prediction for 2012, and for once I’m actually backing it with nontrivial amounts of my own money—I’m shorting gold. I currently work in a building with a lot of people with strong backgrounds in finance. They all tell me I’m wrong and give a number of compelling reasons for the price of gold to keep rising. And yet every such argument both assumes the efficient markets hypothesis and utterly negates it. What’s more, all the worries I hear about inflation—even from those who make their living in finance—seem to completely misunderstand what the Federal Reserve does (and tries to do). Gold can’t continue to outperform everything forever. My personal bet isn’t restricted to 2012 (I’m willing to wait two or three years for the price to collapse), but I think chances are good that the slide will begin this year.

3–5. Microsoft and RIM get new CEOs.

There are three items here, so note that I’m predicting that neither Jim Balsillie nor Mike Lazaridis will be CEO at RIM by the end of the year. I’m also taking another stab at Ballmer stepping down, which I’ve been predicting for a while. I have difficulty understanding how management teams so demonstrably unable to navigate the current technology landscape have survived so long. So for the record, this is a prediction based on lack of understanding. Always a solid foundation…

6. Obama wins

Meh; what do I know about politics. But it seems to me that a rich corporate candidate is quite vulnerable to Obama’s populist style. I don’t think “government is always job-killing” is going to fly in this election cycle, and without that the Republicans are in trouble.

2011 Prediction Results 

This site is clearly in need of some serious updating (in terms of both content and layout—I only recently discovered the atrocious rendering under IE). Hopefully I’ll manage to get around to that at some point soon. For now, however, I’m just back for my oh-so-brief annual self-shaming: I’m reviewing the results of my predictions for 2011.

1. Patriots win Super Bowl (difficulty 0.6)

Not so much. Moving on.

2. Celtics or Heat win NBA Championship (difficulty 0.5)

Also not so much…although I was much happier seeing the Mavs win than I would have been getting the prediction right.

I think the lesson from the above two is that I just shouldn’t make predictions about major sports outcomes. It’s a mature enough market already, I have nothing in particular to add to the conversation, and there’s no fun or glory in getting it right, anyway. Live and learn.

3. Weakened filibuster and secret holds in US Senate (difficulty 0.4)

I haven’t been following politics as closely in 2011 as in some prior years, but I don’t think this happened either. In fact it appears that the Senate is even using a new procedural trick to prevent recess appointments. 2011 predictions not going at all well.

4. Immigration reform passed (difficulty 0.8)

Listen, if you’re going to get your predictions wrong, you may as well make outlandish predictions.

5. Charges brought against Assange in US; he avoids extradition (difficulty 0.6)

Well he avoided extradition, so that’s something. The US seems to be happy just letting the WikiLeaks thing blow over. As I wrote even before making the prediction, it’s entirely possible that WikiLeaks made it easier for governments to hide information in the future, not harder.

6. iPad remains most popular tablet (difficulty 0.3)

Low difficulty, but I nailed it. Want commentary? Plenty of others provide far more cogent analysis of the iOS and mobile marketplace than I ever could.

7. New iPad released (difficulty 0.3)

I listed a low difficulty, but I did nail all the details: released around April with a camera but no retina display. My accuracy on computing-related predictions is marginally better than my foresight regarding sports or politics.

8. Blackberry loses spot as most popular mobile OS (difficulty 0.7)

Oh what a difference a year makes. RIM went from over 35% of the market in 2010 to under 17% this past November. In the same time iOS grew slightly from the mid- to the high-20s, and Android appeared to gobble up most of what RIM, Microsoft, and Symbian dropped: that platform went from 23 to 46 percent of the market.

Clearly tech industry predictions are my sweet spot. Oh, wait…

9. Ballmer no longer CEO of Microsoft (difficulty 0.7)

I was wrong. But, seriously? Are the shareholders of Microsoft just as complacent as the company’s management? I admit Metro looks good, but MS has lost every shred of leadership and credibility at this point. Sigh.

10. Another good year for stocks (difficulty 1.0)

I predicted 13500 for the DOW, 3150 for the NASDAQ, and 1500 for the S&P 500. Actual opens on the first of the year: 12400 (8.15% below prediction), 2660 (15.56% below prediction), and 1280 (14.67% below prediction). My “partial credit” formula scores this as a hit at difficulty 0.62, which seems a tad high to me. It wasn’t a particularly good year for stocks (but nor was it an atrocious one).
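The percentage misses above are easy to verify. Here’s a quick sketch using the predicted and actual figures quoted in the text (the “partial credit” formula itself isn’t spelled out here, so only the raw misses are computed):

```python
# Predicted index levels for the start of 2012 vs. actual opening values.
predictions = {"DOW": 13500, "NASDAQ": 3150, "S&P 500": 1500}
actuals = {"DOW": 12400, "NASDAQ": 2660, "S&P 500": 1280}

# Percent by which each actual open fell short of the prediction.
misses = {
    index: round((predicted - actuals[index]) / predicted * 100, 2)
    for index, predicted in predictions.items()
}

for index, miss in misses.items():
    print(f"{index}: {miss}% below prediction")
# → DOW: 8.15% below prediction
#   NASDAQ: 15.56% below prediction
#   S&P 500: 14.67% below prediction
```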

Final tally

Definitely hit three, definitely missed five, and two were middling. In truth, I should have seen this coming; I felt pressure to publish 2011 predictions, and so I forced myself into a lot of guesses I didn’t feel very confident about. Under 50% again. Live and learn.

Miss Universe National Costumes 

Let the objectification begin! (/continue!)

Miss Albania, Xhesika Berberi

Miss Albania national costume

I hadn’t realized Prince of Persia was set in Albania.

Miss Angola, Leila Lopes

Miss Angola national costume

A blue Christmas tree adorned with plastic dolphins. Interesting choice.

Miss Argentina, Natalie Rodriquez

Miss Argentina national costume

You’re not going to convince me this dress wasn’t ruined by a producer backstage who said “whoa; way too much cleavage. Let’s just stuff a few feathers down there…”

Miss Aruba, Gillain Berry

Miss Aruba national costume

“I don’t actually have the confidence to pull this off. Whatever.”

Miss Australia, Scherri-lee Biggs

Miss Australia national costume

Trying to come up with snarky comment that in no way mentions camels or their feet. Failing.

Miss Bahamas, Anastagia Pierre

Miss Bahamas national costume

I do have the confidence to pull off Miss Aruba’s outfit, but I was on a budget.

Miss Belgium, Justine De Jonckheere

Miss Belgium national costume

This contest unfairly discriminates against countries with no character.

Miss Bolivia, Olivia Pinheiro

Miss Bolivia national costume

Is she trying to further emphasize those creepy eye things on either side of her head by squinting? Questionable strategy.

Miss Botswana, Larona Motlatsi Kgabo

Miss Botswana national costume

I don’t know; it’s a shovel or something. Let’s just say this costume doesn’t conjure the carefree self-indulgence of some of the others.

Miss Brazil, Priscila Machado

Miss Brazil national costume

I’m not even going to pretend those aren’t stripper boots.

Miss British Virgin Islands, Sheroma Hodge

Miss British Virgin Islands national costume

Least functional hat ever.

Miss Canada, Chelsae Durocher

Miss Canada national costume

I like the headdress, but the gown looks suspiciously like it was made from 80s-era Star Wars footie pajamas.

Miss Cayman Islands, Cristin Alexander

Miss Cayman Islands national costume

The outfit does a good job of delaying the realization that this contestant is about as attractive as the average woman at an upscale suburban supermarket.

Miss Chile, Vanessa Ceruti

Miss Chile national costume

Oh, mad props to Vanessa, who is definitely my favorite so far. She heard “National Costume” and decided “Halloween” is close enough. “Sexy trapped Chilean miner” is a solid costume choice.

Miss China, Luo Zilin

Miss China national costume

“I’ve got a rockin’ bod under here. Really. Trust me.”

Miss Colombia, Catalina Robayo

Miss Colombia national costume

I’m starting to suspect that each contestant was forced to use exactly the same amount of fabric, so if they wanted to be naked they needed to find something else to do with the material.

Miss Costa Rica, Johanna Solano

Miss Costa Rica national costume

Some contestants go for carefree, others go for “I will chain you to an altar, slice you open, and eat your heart.” Yes this thought turns me on. A lot.

Miss Croatia, Natalija Prica

Miss Croatia national costume

Miss Croatia is apparently hoping that most of the judges are pedophiles.

Miss Curacao, Eva Van Putten

Miss Curacao national costume

Does she know she’s being graded on this?

Miss Cyprus, Andriani Karantoni

Miss Cyprus national costume

There are a few Disney auditions thrown in every year. B-.

Miss Czech Republic, Jitka Novackova

Miss Czech Republic national costume

“If you make me angry, I turn green and it fits!”

Miss Denmark, Sandra Amer

Miss Denmark national costume

Cleavage is make-or-break in a Disney audition. A-.

Miss Dominican Republic, Dalia Fernandez

Miss Dominican Republic national costume

It’s like she’s a mermaid, except dressed in a stupid outfit!

Miss Ecuador, Claudia Schiess

Miss Ecuador national costume

I didn’t realize that “plastic-man arms” was a real fetish, but I think I have it.

Miss Egypt, Sara El Khouly

Miss Egypt national costume

Interesting cross between Cleopatra and Beethoven.

Miss El Salvador, Mayra Aldana

Miss El Salvador national costume

“The next person to offer to help me find my sheep gets punched in the balls.”

Miss Estonia, Madli Vilsar

Miss Estonia national costume

“National costumes? Fuck that. I’m hot. Buy me something pretty.”

Miss Finland, Pia Pakarinen

Miss Finland national costume

“I’m hot too, but I’ll at least make a gesture. They’re kind of like fins, right? As in Finland?”

Miss France, Laury Thilleman

Miss France national costume

Is she really bribing the judges with cupcakes?

Miss Georgia, Eka Gurtskaia

Miss Georgia national costume

Once you convince yourself there’s an army of midgets under there waiting to swarm the stage, you can’t stop thinking about it. Let’s move on.

Miss Germany, Valeria Bystritskaia

Miss Germany national costume

She got a tip from Miss Croatia about the pedophile judges.

Miss Ghana, Erica Nego

Miss Ghana national costume

Another victim of the fabric quota system.

Miss Great Britain, Chloe-Beth Morgan

Miss Great Britain national costume

Listen, the fabric thing was a joke. If you want to wear a miniskirt, then just wear it. This is getting ridiculous.

Miss Greece, Iliana Papageorgiou

Miss Greece national costume

“Austerity measures meant I got nothing but a sheet. But we all know I won this round anyway. I respect Miss Curacao for not even trying to beat me.”

Miss Guam, Shayna Jo Afaisen

Miss Guam national costume

Late in the design stage, it became clear that the “Guam” message had been somewhat diluted by the commitment to mermaid authenticity. Solution? A sign.

Miss Guatemala, Alejandra Barillas

Miss Guatemala national costume

“Is the pirate craze still going on, or am I five years late?”

Miss Guyana, Kara Lord

Miss Guyana national costume

“Yeah, sticks coming out of my neck and a coiled snake on my head. Why are you looking at me like that?”

Miss Haiti, Anedie Azael

Miss Haiti national costume

“My mother made it for me. The dress has handles, see?”

Miss Honduras, Keilyn Gomez

Miss Honduras national costume

The fabric quota has been lifted! Thank god!

Miss Hungary, Betta Lipcsei

Miss Hungary national costume

Purple vampire bunny outfit. Classic.

Miss India, Vasuki Sunkavalli

Apparently Miss India’s costume didn’t make it through customs to Brazil. Let’s just assume that she was going to wear a leather bikini and stripper boots, give her an A, and move on.

Miss Indonesia, Nadine Alexandra

Miss Indonesia national costume

I actually really like this. Got to start rationing the snark.

Miss Ireland, Aoife Hannon

Miss Ireland national costume

I was sure the national costume of Ireland included a thick woolen sweater…

Miss Israel, Kim Edry

Miss Israel national costume

“Our national costume is a set of army fatigues, so I just decided to do a second eveningwear round.”

Miss Italy, Elisa Torrini

Miss Italy national costume

“This way I can spill tomato sauce on myself and nobody will know.”

Miss Jamaica, Shakira Martin

Miss Jamaica national costume

“It looks a lot better if you’re high. But doesn’t everything?”

Miss Japan, Maria Kamiyama

Miss Japan national costume

“I’m a geisha, but I’ll still cut you! [giggle]”

Miss Kazakhstan, Valeriya Aleinikova

Miss Kazakhstan national costume

This is just close enough to a nun’s habit that I refuse to find it sexy. And I find everything sexy.

Miss Korea, Sora Chong

Miss Korea national costume

You know how on diorama day at school there was always that one kid who showed up not realizing the project was due, and he had to throw something together from whatever all the other students could spare from their dioramas? Plan ahead next time, Sora.

Miss Kosovo, Aferdita Dreshaj

Miss Kosovo national costume

This must be a trick of the light, because I know no Miss Universe contestant would wear shorts. This is a sad day for pageantry.

Miss Lebanon, Yara El Khoury-Mikhael

Miss Lebanon national costume

Someone said something stupid and got sent to the corner…

Miss Malaysia, Deborah Henry

Miss Malaysia national costume

Hey Deborah, could you get that book from the top shelf? Just reach way up there. Yeah, just like that…

Miss Mauritius, Laetitia Darche

Miss Mauritius national costume

She knows nobody knows shit about Mauritius. We’ll take her word that that’s their national costume. Moving on.

Miss Mexico, Karin Ontiveros

Miss Mexico national costume

What’s sexier than a giant skull? Nothing, that’s what.

Miss Montenegro, Nikolina Loncar

Miss Montenegro national costume

“Well if Miss Greece is going to win, then this should get me second, right? No? Oh.”

Miss Netherlands, Kelly Weekers

Miss Netherlands national costume

A lot of girls would actually wear that crown. I respect Kelly for knowing her place as a servant and sticking with the toy boat as headgear.

Miss New Zealand, Priyani Puketapu

Miss New Zealand national costume

“Blankets for sale! Ten dollars each! Blankets for sale!”

Miss Nicaragua, Adriana Dorn

Miss Nicaragua national costume

We’re blurring the line between hat-wearing and hobbling at this point.

Miss Nigeria, Sophie Gemal

Miss Nigeria national costume

Is the Nigerian national costume really “Twizzlers”?

Miss Panama, Sheldry Saez

Miss Panama national costume

The gray feathers look too much like Doctor Octopus’s adamantium arms for me to offer any fashion commentary. Let’s just say that if Spider-man wants to get from the Atlantic to the Pacific he should take the long way around.

Miss Paraguay, Alba Riquelme

Miss Paraguay national costume

Fail on cleavage. Fail on shoes. Fail on sexy shoulders. Alba is not winning this contest; that’s for damn sure.

Miss Peru, Natalie Vertiz

Miss Peru national costume

Excellent combination of skin, cheap souvenir art, and weaponry.

Miss Philippines, Shamcey Supsup

Miss Philippines national costume

It’s like a poorly-dressed princess decided to frolic in a pile of autumn leaves.

Miss Poland, Rozalia Mancewicz

Miss Poland national costume

Another Disney contestant. Hard to judge the cleavage from this angle, but I’ll be generous and offer a solid B.

Miss Portugal, Laura Goncalves

Miss Portugal national costume

“We all know Miss Greece is going to win, so I just threw on something colorful from my closet.”

Miss Puerto Rico, Viviana Ortiz

Miss Puerto Rico national costume

An awesome body and terrible taste. You’re looking at Rob’s target dating demographic.

Miss Romania, Larisa Popa

Miss Romania national costume

Is she trying to dress like both a vampire and his bloody virgin victim?

Miss Russia, Natalia Gantimurova

Miss Russia national costume

I like to imagine that she’s naked under that hat.

Miss Serbia, Anja Saranovic

Miss Serbia national costume

Serbia gained independence in 2006, so witness the exotic fashion stylings of the mid-oughts! Good to see that in their five years they’ve managed to come up with a couple of logos and a flag, though. Five or six centuries and this is going to be a really good look.

Miss Singapore, Valerie Shu Xian Lim

Miss Singapore national costume

Anyone else get the feeling that we caught her halfway through a magic trick, and she’s about to step aside to reveal that the assistant who just stepped behind her is now gone?

Miss Slovak Republic, Dagmar Kolesarova

Miss Slovak Republic national costume

She got everything at Primark for under £20. She’s totally ready for the “national costumes” bop. (Apologies to all the non-Oxonians who have no idea what I just said.)

Miss Slovenia, Ema Jagodic

Miss Slovenia national costume

“I’m hot. That’s all that matters.”

Miss South Africa, Bokang Montjane

Miss South Africa national costume

“You may be hot, but I know what I’m doing. Judges: vote for me and I will rock your worlds.”

Miss Spain, Paula Guillo

Miss Spain national costume

“I’m too good for pageants, and I refuse to be gawked at. How the hell did I get here?”

Miss Sri Lanka, Stephanie Siriwardhana

Miss Sri Lanka national costume

In mourning apparently.

Miss St. Lucia, Joy-Ann Biscette

Miss St. Lucia national costume

“I was expecting the wings to be bigger. Whatever.”

Miss Sweden, Ronnia Fornstedt

Miss Sweden national costume

“The backstage producer made me put on underwear. What a bitch.”

Miss Switzerland, Kerstin Cook

Miss Switzerland national costume

She was planning on entering the Disney category, but after seeing Miss Sweden she grabbed a pair of scissors and spiced it up a little.

Miss Tanzania, Nelly Kamwelu

Miss Tanzania national costume

Who says national costumes have to be about the past? This will totally be the national costume of Tanzania in the year 2782, when swords and sorcery rule the earth.

Miss Thailand, Chanyasorn Sakornchan

Miss Thailand national costume

The costume is all well and good, but let me just interject here with a more general statement on fashion: pantyhose are awful. Just a terrible, terrible invention. Stockings? Yes. But pantyhose are an affront to all that is right and good about women’s fashion. Thank you for your attention.

Miss Trinidad & Tobago, Gabrielle Walcott

Miss Trinidad & Tobago national costume

Gabrielle is the clear winner of this year’s “mostly naked with a feather or fabric background” contest. Thanks for playing, rest of South America and the Caribbean.

Miss Turkey, Melisa Asli Pamuk

Miss Turkey national costume

Another Disney princess with a pair of scissors.

Miss Turks & Caicos, Easher Parker

Miss Turks & Caicos national costume

That country is made up. No wonder she got her outfit out of a dumpster.

Miss U.S. Virgin Islands, Alexandrya Evans

Miss U.S. Virgin Islands national costume

And this year’s loser of the “mostly naked with a feather or fabric background” contest. Failure from the knees up.

Miss Ukraine, Olesia Stefanko

Miss Ukraine national costume

If I want a girl with turnips hanging from her belt, I’ll give you a call, Olesia. But don’t wait by the phone.

Miss Uruguay, Fernanda Semino

Miss Uruguay national costume

Did we catch her adjusting her codpiece?

Miss USA, Alyssa Campanella

Miss USA national costume

Sexy Napoleon! U-S-A! U-S-A!

Miss Venezuela, Vanessa Goncalves

Miss Venezuela national costume

The dragon theme is compelling but not overdone; the hat is indulgent but not ridiculous or distracting; the body is smoking. Winner of the “random mythical creature” costume contest…unless Miss Greece’s goddess outfit counts.

Miss Vietnam, Hoang My Vu

Miss Vietnam national costume

I’d like to ring her gong, if you know what I mean. No; seriously—I used to be a percussionist and I enjoy ringing gongs. But after that some sex with Miss My Vu would be nice, too.

Cameron vs. Freedom Of Expression 

Three questions for David Cameron:

  1. Would the absence of social media have prevented these riots?

  2. Would copycat riots in cities beyond London (and in different corners of London) have occurred without television and newspaper coverage?

  3. Does organization via social media instead of traditional word of mouth make it easier or harder to track down and prosecute “organizers”?

And one for Nick Clegg:

  1. Do the Lib Dems intend to remain part of a government that claims the right to prevent citizens from communicating with each other?

Software Patents 

There’s been plenty of digital ink spilled over the patent system in the wake of Google’s shameless hypocrisy on the matter. The conversation seems to move pretty fluidly between discussion of patents in general and discussion of software patents. My understanding is that the portfolio Google is complaining about involved more than just software patents, but I may be wrong; the point is that arguments about software patents are at least partially orthogonal to the Google situation.

Like the vast majority of software engineers I know, I’d much prefer that software were not patentable. But my main complaint with those defending the status quo is over arguments like this (quoted here):

Nobody has ever told me exactly why the patent system is fundamentally broken.

Whatever the underlying point, this is a textbook example of framing the discussion so that only major, catastrophic problems can justify overturning a product of history. In my experience, “fundamental” is an extremely ambiguous term, particularly where brokenness is concerned. By some definitions, any problem that can be hidden, at least temporarily, doesn’t count as “fundamental”. Arguing that something is broken can degenerate into a game of whack-a-mole: you point out a negative consequence of some issue you consider fundamental, a way to hide that particular manifestation is devised, another similar negative consequence of the same issue is discovered, another patch is devised, and so on. Finding bugs is almost always harder than fixing them, so there’s a serious asymmetry of effort here. If the rules are that we play until someone is exhausted, the defender will inevitably win. After all, the defender is happy to invest time in a system they want and expect to continue into the future; if the attacker wins the argument and the “fundamentally broken” system is scrapped, they get no benefit from all the investment they’ve made in understanding it. The same dynamic arises in everything from terrible software architecture to ad-hoc reasoning frameworks.

But as for software patents, the focus of the argument shouldn’t be on whether a system for software patents could be fixed; it should be on whether there is any value to software patents at all. The patent system was created as a way to foster innovation, at the price of a well-understood evil: monopoly power. There’s never been any debate that patents have negative consequences. Thus it’s worth asking what the positives are.

I can’t speak to drug patents, or manufacturing patents, or wireless technology patents. Maybe in those fields a big chunk of innovation is performed for the purpose of getting patents. But I’d argue that there is more innovation in software than in every other field put together—new algorithms, human interfaces, programming languages, and engineering methods are being experimented with constantly. And 99.9% of all this innovation never gets patented in any way, shape or form. I don’t mean that 99.9% isn’t patentable; I’m saying that 99.9% of software that probably could be patented under the current system—precisely the types of innovation the patent system was created to reward—isn’t patented. Thus the absolute upper limit on what we’d lose without software patents is 0.1% of innovation.

But I’ll go farther than that. I have never encountered a single piece of software that was created primarily for the purpose of claiming patent rights. The vast vast majority of the 0.1% of software that does get patented—and quite possibly all of it—would still have been created if there were no patent rights to software. Companies file patents on software they happen to have built for other reasons; they don’t build software to file patents.

Software patents are “fundamentally broken” because they do harm, and they provide absolutely no benefit whatsoever. As has been pointed out so often, the monopoly rights granted by patents actually stifle innovation, and in software there is no increase in innovation to even slightly counterbalance this. The contribution of software patents is net negative to innovation.

When Google writes a letter to Congress demanding legislation to prohibit the patent office from issuing any software patents in the future, and invites other large tech companies to sign it, then they’ll have claimed the high ground on the patent issue. Until then, they’re just whining because they lost the last round in a game they’re perfectly happy to play.

Update 2011-08-16

At risk of sullying my opinion with facts, I’ve stumbled across a working paper detailing a few actual numbers for software patent filings.

Crisis as Cover For Reform 

Most of the coverage of the current debt limit debates that I’ve seen has suffered from what I consider the single greatest problem with mainstream news reporting. It focuses so much on the here and now that larger story lines are overlooked.

Again and again I’ve heard the situation called “crazy” and the politicians completely incompetent. There’s no question that they’re playing a very dangerous game, but I think some of the underlying reasons are quite sane. While both parties agree that the debt limit should be raised (the Tea Party crowd are a lunatic fringe not worth considering to be rational actors here), there is also general consensus that entitlement reform (e.g. Social Security) has been put off for far too long. Arguments that either of these is a partisan issue don’t really hold up; every rational actor in Washington, and every economist, agrees.

The trouble is that entitlement reform is normally impossible for politicians. Anybody who votes for cutting Social Security benefits is vulnerable to outsiders who claim they would have opposed such cuts.

The current “crisis” looks like it’s shaping up to be the perfect (potential) solution. If the only possible way to avoid catastrophic default is to set up a deal guaranteeing major spending cuts, but deferring the details of such cuts until later, and making the default source of cuts entitlements, then everyone can credibly claim that they never supported cutting entitlements at all. When later negotiations to find other cuts fail and entitlement spending goes down, both parties can blame “partisan gridlock” for what happened.

While Americans tend to loathe Congress (approval ratings are currently around 20%, with 70% disapproval), they simultaneously like their own congressional representatives (who usually have quite good approval ratings). A “blame the system” excuse is thus a perfectly reasonable re-election strategy, and the most rational approach to passing unpopular but necessary legislation.

Obviously this is wildly inefficient. Obviously it’s a way to end up with cuts that haven’t been thought through (in fact that’s an important part of the strategy). But it’s not crazy and it’s not incompetent. It balances short-term political incentives and long-term national priorities.

Note that I’m not directly suggesting that the rancor is pure theater to cover for Machiavellian back-room dealing. I think the rancor is real, and much of the confusion is real, and the danger of catastrophic economic consequences is real. But there’s a reason (beyond suicidal stupidity) that fiscal conservatives started wielding the debt limit as a weapon, and there’s a reason politicians will (hopefully) be able to converge on some very unpopular cuts in a last-minute cloud of calamitous panic. There is a way out of this everyone can live with.

Finally, I should point out that my argument above doesn’t mean the process really arrives at something resembling a “moderate consensus”. There’s been a huge amount of collateral damage in the attempt to make any progress at all on the unpopular entitlement consensus, and it’s been inflicted primarily by those on the right of the debate. It’s entirely possible that the attempt at progress fails, the collateral damage takes place anyway, and the nation really does hit the debt limit. So there will almost certainly be huge amounts of blame to spread around.

I’m just arguing that there’s substantially less pure unadulterated crazy than most of the headlines would suggest.

My Home Office 

I moved into a small unfurnished apartment in Brooklyn a few months ago, so I had a relatively blank slate to start from in setting up a work area. I’m in the company office most days, but try to work from home one day a week, in addition to the hours I spend working at home on nights and weekends.

One major factor in my setup is that I’m in a studio apartment. While I think I have plenty of space for one person, there isn’t room for lots of semi-redundant furniture. Ideally, I wanted one good surface I could use for computer work, for pencil-and-paper scribbling, and for dinner. (My semi-annual moves have also converted me somewhat to the “minimalist” philosophy. I’d rather have one good table and one good reading chair than a whole stack of mediocre furniture.)

I tend to spend long hours working at the computer, and I can’t entirely dismiss the recent concerns about sitting for long periods. I’ve also been through one (thankfully minor) bout of RSI when I was younger, and the universal piece of advice I got from everyone—doctors, computer users, and musicians—was that variety of posture is the best way to avoid recurrences. I thus decided that I wanted to try a work setup that allowed me to stand for at least part of the day.

My basic setup looks like this:

Work table

The table is about 41 inches (104 cm) high, which is about right for a standing desk for most people. I’m fairly tall, so a keyboard right on the table surface is perfectly usable when standing, though still a few inches below what I’d need for the recommended 90-degree elbow angle. I wouldn’t want a higher table, but raising the keyboard with a few books is trivial.


The table itself is an IKEA Utby frame with a Numerär countertop. I particularly recommend the Utby frame: unlike standard IKEA table legs, this is good tubular steel fully cross-braced at both the top and the bottom with good quality hardware; it provides a very solid platform with no wobble at all. What is more, the crossbar at the bottom of the Utby frame is a sturdy and convenient footrest when standing or working from a high chair.


Of course the most striking thing about my setup is probably the arm I’m using to mount my monitor:

Monitor arm

The articulated part of the arm is an Ergotron LX LCD Arm. However, that arm comes with a very short mounting post (the pipe sticking straight up from the desk), which leaves the monitor much too low for working while standing. Ergotron claims you can replace the provided post with your own length of pipe, but their post has a very hard-to-find diameter, and it screws into the desk clamp using a different kind of threading than any pipe I could find in any home improvement store. I even showed the bits to a machinist in the neighborhood, and he said it would be next to impossible to find an off-the-shelf replacement, and that any custom solution (welding on an extension, or machining a custom thread onto a piece of cut pipe) would be prohibitively expensive. After a few emails to Ergotron, they finally agreed to send me the longer post from their dual stacking arm for nothing more than the price of postage. I appreciate their customer service, but it’s a huge shame that this longer post isn’t normally available to the general public, even as a special-order part.

The articulated arm provides movement in all directions, and a 14-inch (35 cm) vertical height range. Mounted at the top of the longer post, this lets me position my monitor either right down at desk level or at full standing height:

Low monitor

High monitor

It also lets me swing the monitor right out of the way, leaving the table clear for other uses:

Monitor arm

The one catch to mounting the monitor post to the desk is that you need a relatively big lip to the table top. This is yet another advantage of the IKEA frame+countertop setup over standard tables: I could simply mount the countertop asymmetrically to the frame, leaving a big lip on the left-hand side and a smaller one on the right.

The monitor itself is a 24-inch Viewsonic with integrated speakers. It was relatively cheap (under $200) and meets my needs, but it’s definitely not a high-quality piece of hardware: the integrated speakers are weak and sound tinny, and I consider the text all over the front ugly and distracting. What’s more, there’s no integrated webcam or microphone, so this is definitely not a videoconferencing setup (I need to pull the laptop out for Skype and Facetime calls). If Apple ever comes out with a desktop retina display, I might consider paying the premium to replace this piece of commodity kit.


I wanted to keep my work surface as clear as possible, so I came up with a way to mount my laptop under the table. I bought a large undershelf basket from The Container Store, sawed off the shelf mounting arms, and used the little plastic loops The Container Store hands out for free at the counter to screw the basket to the underside of my table:

Laptop basket

A basket an inch shallower would be a bit better, but I’ve never encountered any problems with my knees knocking against it, so the current setup is fine. Sliding the laptop into position and attaching power, monitor, and sound cords is hassle-free. Obviously I could do something about cleaning up the messy cables, but I don’t have to look at them when I’m working, so they don’t bother me.

Standing and sitting

As you can see in the background of the first photo, I also have a couple of “bar height” IKEA chairs (the Henriksdal). I did sit on these to work for a few weeks, but hours and hours of shifting around on one eventually worked the joints a bit loose, leading to some worrying play in the legs. After tightening the hardware back up they seem perfectly sturdy again (and are great so far as guest and dinner chairs), but I’m glad to have moved on to something else for working.

Standing right on my wooden floor left me a little sore, so I bought a cheap anti-fatigue mat:

anti-fatigue mat

It was the lowest-end option I could find, yet it works great—a bit of texture feels nice in bare feet, it’s soft enough to eliminate the soreness, and it’s held up perfectly to abuse under my work chair, so I never need to bother moving it.

My chair itself was my primary indulgence, and the purchase I was most worried about, because I didn’t even get to see it in person before ordering (and waiting over a month for delivery). It’s a HumanScale Freedom Saddle Seat:

HumanScale saddle seat

I’ve only had it for a few weeks now, but so far I’m very happy with it. It was expensive, but it definitely feels like the highest-quality piece of furniture I’ve ever used, and that includes the Aeron chairs I’ve had at various offices.

It’s clearly a stool, not a typical chair, and it’s quite big—its shape is a triangle over two feet (60 cm) to a side, with the corners cut off. The size was the biggest surprise to me, and is a huge advantage. In addition to perching on a side or sitting on the chair like a saddle, it’s even big enough for me to sit on cross-legged, which is how I end up spending much of the day. The versatility fits my goals perfectly: I change postures constantly throughout the day, switching as soon as I start to feel at all uncomfortable, but I never need to interrupt my work to do so.

In terms of design, it’s also worth noting that the chair is perfectly symmetrical. As you can see, there’s a lever for height adjustment under one corner. In fact, there is such a lever under each of the three corners. There’s no “correct” orientation of the seat and it rotates freely, so there’s nothing to think about when hopping on. I hadn’t even realized that rotating your chair the right way round before sitting in it presented any cognitive load at all until I didn’t have to worry about it.

I got the “high cylinder” model, which allows seat heights from 33 inches (84 cm) at the top, higher than the Henriksdal chairs and perfect for working, all the way down to 22 inches (56 cm), the height of a “normal” chair. It’s an astonishing range. The foot ring height is also fully adjustable, and the foot ring is easily strong enough to take my full weight with no give at all.

I also got “gliders” instead of caster wheels, and this is a crucial feature: a high stool with wheels would be unusable for perching. In fact, working on gliders at home and casters at work has made me think that I might choose gliders for my next office chair; any chair on wheels (which are always free to pivot, regardless of whether you’re rolling or not) provides a much less stable platform. Again, I hadn’t realized the annoyances that come with standard office chairs until they were taken away.


Of course, every geek needs a whiteboard. Luckily, I have a wall facing my little work area but no other part of the apartment, so I was able to mount a large whiteboard (and an additional pinboard strip) without it intruding on any other spaces. (It does overlook this space when it’s a “dining area”, though, which is a bit of a shame.) Here’s a view from the opposite side of the table from the other photos:


Re: Praise 

David Pogue doesn’t think much of the Samsung Chromebook, but does like the attempt:

For now, though, you should praise Google for its noble experiment.

John Gruber counters:

Would everyone have praised Apple for its “noble experiment” if the $500 iPad had been too big and heavy, felt like it was worth only $180, and was “a 3.3-pound paperweight” when offline? Fuck that. This is the big leagues. There is no credit for trying.

That’s not just glib; it also ignores the fact that Apple did create a device that was too big, too heavy, and too expensive, with crucial make-or-break features that just didn’t work. It was called the Newton. And I suspect even Gruber would concede that Apple deserves at least some praise for that effort, which probably nudged the industry further along on a number of fronts. A dismal failure as a product, but an interesting and educational failure.

Taking Gruber’s side, however, times are different now. The Newton wasn’t trying to replace anything; it was an entirely new category. The Chromebook is going head to head with both the iPad and the laptop, both robust and popular products. It’s one thing to release a product and have customers realize that it isn’t good enough for them to use much. It’s quite another to ask them to choose it over an alternative and leave them crippled as a result.

By this logic, the pre-iPad tablets could still merit praise even if they did suck. And if the iPad had sucked, it also could have been worthy of praise—I suspect that Gruber himself might have given Apple the same kind of kudos that Pogue offers Samsung/Google. In a post-iPad world, however, there’s no A for effort.

The expensive and flaky horseless carriages of the early 1900s do merit praise for blazing what turned out to be a crucial trail; those that came after the Model T don’t.

2011 Predictions 

My predictions for 2010 didn’t turn out so well, but I’ve delayed long enough. Time to come up with some predictions for 2011. As usual, I’m assigning difficulties between 0 and 1 to each prediction.


1. Patriots win Super Bowl (difficulty 0.6)

One of the things that makes the NFL so compelling is that even an overwhelming favorite seldom has a better than 70 to 80 percent chance of winning any given game (see, for example, last week’s Seahawks victory over the heavily-favored Saints). As a result, even if one team is clearly better than all others, the chances of winning two playoff games and the Super Bowl are slim. My difficulty is roughly in line with the 8-5 odds given by professional sports books.

2. Celtics or Heat win NBA Championship (difficulty 0.5)

I wanted to just predict a Celtics championship, but they only have a realistic chance if they’re healthy for the playoffs. Again, my difficulty is vaguely in line with the current betting odds.

Two quite boring predictions, I know, but these are the only sports I’ve been following lately.


3. Weakened filibuster and secret holds in US Senate (difficulty 0.4)

The public is getting a bit tired of the partisan gridlock (even those who like the results), and there’s currently a political situation that makes changing the Senate rules feasible: the minority party in the Senate controls the House, so there’s no danger of legislation being passed against party policy, and they are also threatening to become the Senate majority in the near future, so weakening the power of the minority is appealing. I expect the Democrats to pass rules changes weakening the filibuster somewhat (e.g. requiring 40 votes to extend debate instead of 60 votes to close it) and limiting secret holds by individual senators.

4. Immigration reform passed (difficulty 0.8)

Despite the inflammatory rhetoric surrounding immigration, I think it’s one of the few issues where the two parties agree that reform is needed, and I (perhaps naively) believe that Congress does want to demonstrate some significant accomplishments during this session, before the 2012 election wars. The current system of allocating a tiny quota of green cards by lottery is absurd on a number of levels; a points-based system more like Bush’s old proposal seems like it could pass.

5. Charges brought against Assange in US; he avoids extradition (difficulty 0.6)

Building an espionage case against Julian Assange and WikiLeaks in the US seems straightforward. I seriously doubt that countries are in the habit of granting extradition for such cases, however. Whatever happens with the sexual assault charges, I expect that Assange can avoid his US legal troubles by simply staying out of the US.


6. iPad remains most popular tablet (difficulty 0.3)

2011 will finally see some competition for the iPad. That competition will appeal only to those with religious objection to Apple, however. (The iPad’s price, in particular, leaves very little room for competitors.)

7. New iPad released (difficulty 0.3)

Released around April; camera(s) included; otherwise very similar to the current model. I’d love to see a retina display on an iPad, and I think the iPhone’s new glass technology might make it to the iPad (i.e. the display bonded directly to the back of the glass, rather than sitting under it as in current laptops), but I don’t expect the resolution to change in this iteration.

8. Blackberry loses spot as most popular mobile OS (difficulty 0.7)

With the iPhone coming to Verizon there’s a very good chance that the iPhone will retake a lead over Android, but both platforms have far more momentum than the Blackberry. RIM is fighting to keep hold of its current (quite loyal) user base; Apple and the Android crowd are going after new smartphone customers.

9. Ballmer no longer CEO of Microsoft (difficulty 0.7)

It’s been a long time coming. There is no question at this point that Microsoft has lost its leadership position in technology. Worse, even among normal consumers its reputation is mediocre at best. The company is flailing in the consumer space, and its success is becoming more and more confined to the enterprise space. You don’t have to call this failure—when IBM made a similar shift to enterprise consulting its profits grew—but shareholders aren’t going to put up with a CEO who has squandered such a dominant position in so many markets.


10. Another good year for stocks (difficulty 1.0)

Europe may be facing some major institutional problems (more bailouts and defaults are inevitable, although none should be disastrous enough to allow anyone out of the Euro), but the US is looking pretty strong—even the employment numbers should finally start perking up in 2011. I expect another 15% rise across the board: Dow to 13500, NASDAQ to 3150, S&P 500 to 1500.

As usual, I’ll take partial credit for this prediction: a hit at difficulty 1.0 if I’m within 1%, 0.7 if I’m within 10%, 0.5 if I’m within 20%, etc. (Difficulty is given by e^(0.03963(1 - percenterror)).)
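The partial-credit scheme is easy to sketch in code (a hypothetical helper of my own; only the constant is taken verbatim from the parenthetical above):

```python
import math

def difficulty(percent_error):
    """Partial credit for a market prediction that missed its targets by
    percent_error percent: e^(0.03963 * (1 - percent_error))."""
    return math.exp(0.03963 * (1 - percent_error))

# The constant lines up with the stated anchor points:
#   within 1%  -> 1.0 (a full hit)
#   within 10% -> 0.7
#   within 20% -> ~0.47 (roughly the 0.5 quoted)
for err in (1, 10, 20):
    print(err, round(difficulty(err), 3))
```

Note that the formula climbs above 1.0 for errors under 1%, so presumably the credit is capped at the prediction’s stated difficulty.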

2010 Prediction Results 

Another year is dead and gone, so I suppose it’s time to review my predictions for 2010:

1. LeBron signed by Nets (difficulty 0.7)

Wrong; LeBron James went to the Miami Heat, and only Chicago seemed at all competitive. On the bright side, I was right that he’d leave Cleveland.

2. Semenya and Pistorius ruled ineligible (difficulty 0.4)

Another swing and a miss. After a series of semi-secret tests and dealings, Semenya has retained all her medals and been cleared to compete. It’s possible that she’s agreed to undergo some form of treatment (as I suggested she might to regain eligibility), but I can hardly claim victory based on guesses about what’s going on behind the scenes.

As for Pistorius, there was a devastating report from two of the scientists who helped to overturn his initial ban: they claim that the science shows his prosthetics give him as much as a 10 second advantage over normal runners in a 400-meter race. Despite this, I don’t believe any ban has been handed down—I haven’t been following the story closely, but I don’t think Pistorius has tried to compete against traditional runners this season, so the issue hasn’t arisen. Still, my prediction missed the mark. 0 for 2 so far this year…

3. Obama ends “don’t ask don’t tell” (difficulty 0.7)

Just got this one in under the wire: the DADT policy has been repealed. Technically the military now has discretion to set its own policy and DADT is its legacy position (presumably to be phased out quite quickly), but the government policy is now over.

4. Android becomes most popular OS; iPhone remains most profitable (difficulty 0.7)

This was quite a vague prediction, and I came really close to getting it sort of right, but actually got it really, really wrong.

The “almost right” part was that Android would catch the iPhone: comScore’s numbers to 31 October showed a dead heat between Android and the iPhone, and if trends have continued then Android has already passed the iPhone in US market share. Even if a newer report could verify this, I never specified US market share only, so I’m only “almost” right here.

The “really wrong” bit is that I completely ignored RIM’s BlackBerry phones, which still have higher market share in the US than either Android or the iPhone. Whoops.

5. AT&T loses iPhone exclusivity in US (difficulty 0.6)

This is a tough one; Apple has inked the contracts with Verizon (amazingly enough, this “secret” information is so widely known that even I have inside sources to confirm it), but CDMA iPhones didn’t ship in 2010. Can we parse the words and say that AT&T’s “loss” occurred in 2010?

6. Microsoft buys Pre (difficulty 0.9)

I knew it was an outlandish guess at the time. My theory that Microsoft’s internal projects were doomed was heavily supported by the Kin fiasco; Windows Phone 7, however, has received quite positive reviews, so I no longer think technology is the main problem. What’s more, the Pre has lost what little momentum it had when I made the prediction. We’ll see how/whether that platform translates to tablets.

In short, it was HP who snapped up Palm’s Pre platform, and even if it were up for sale today I don’t think it would be a good match for Microsoft.

7. Apple Tablet released (difficulty 0.5)

Considering how little we knew about the iPad at the time—and the number of different theories—you’ve got to give me credit for the specificity of my predictions here. I even hit the price range within $100 either side.

8. Microsoft fades even more… (difficulty 0.3)

Keep in mind that the efficient markets hypothesis suggests this should have had a difficulty of 0.5. Yet it seems rather obvious in hindsight.

Since last year Google’s share price has declined from $620 to $604 (-2.5%). Apple went from $215 to an astonishing $325 (+51%). Microsoft declined from $30 to $28 (-6.5%). I got this one right.

9. iTunes gets live events (difficulty 0.7)

I was wrong about this, and I think the reason I was wrong is clearer as the vision for the iOS platform has become clearer. While I still think that live events through iTunes would be a great feature, Apple seems comfortable with third parties creating their own streaming apps for DRM-laden media. I loathe that this means Flash on the desktop, but iPhone and iPad apps for watching live sports can be done decently, and iOS is more important to Apple right now than the desktop. As a result, there’s no great urgency for Apple to step in, and lots of worry from media companies about handing too much control to Apple’s media store empire.

With the Apple TV starting to emerge as more than a hobby for Apple, however, I must wonder at what point the media companies start thinking that handing over some control and doing content delivery via iTunes would be worth avoiding the need to maintain their own apps (and distribution infrastructure) for four different platforms (Windows, Mac, iOS, and Apple TV).

10. Price of gold declines (difficulty 0.5)

My Microsoft prediction demonstrated my financial acumen; this prediction, not so much. Gold was around $1100/oz when I made the prediction. It’s now around $1420/oz: a 30% rise.

My conclusion is that I just have no idea how to value gold. All the major currencies are doing weird things I don’t understand right now, so I guess gold could keep going up. But it’s been going way up for a long time, so maybe it should come down. Whatever.

11. Stock markets perform well (difficulty 1.0)

This one requires some arithmetic. I predicted rises of about 20% across the board: DOW at 12500, NASDAQ at 2700, S&P 500 at 1350. In fact, the DOW is at 11670 (6.7% below my prediction), the NASDAQ is at 2691 (just 0.3% below my prediction), and the S&P is at 1271 (5.9% below my prediction), for an average error of 4.3%. I call that pretty good, and according to my formula it counts as a hit at a difficulty of 0.877. If I had written some bullshit explanation of my guess and avoided such round numbers for the targets then I’d look like more of an expert than all the investment advisors. Which, for all my ignorance, I probably am.
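The arithmetic can be checked in a few lines (a quick sketch with variable names of my own, reusing the e^(0.03963(1 - percenterror)) partial-credit formula from the 2011 predictions):

```python
import math

predicted = {"DOW": 12500, "NASDAQ": 2700, "S&P 500": 1350}
actual = {"DOW": 11670, "NASDAQ": 2691, "S&P 500": 1271}

# Percent shortfall of each index relative to its predicted target
errors = [100 * (predicted[k] - actual[k]) / predicted[k] for k in predicted]
avg_error = sum(errors) / len(errors)        # ~4.3%

# Partial credit per the formula
score = math.exp(0.03963 * (1 - avg_error))  # ~0.88
```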

Final tally

I missed six (1, 2, 4, 6, 9, and 10), nailed four (3, 7, 8, and 11), and was kind of close on one (5). This is significantly below the 50% mark, but I focused on outlandish predictions for 2010, so I don’t feel that terrible about it. Live and learn.

Christmas and Carbon 

I’d like to congratulate Oxford’s environmentalists for another outstanding effort at Christmastime carbon reduction. The between-term travel of Oxford’s huge student body causes an absolute explosion of emissions—a single return flight from Oxford to the US, for example, represents roughly 20% of an average person’s annual emissions—and so the focus that this period receives is well-deserved.

Admittedly, making Oxford students feel more comfortable staying in town over the holidays isn’t nearly as difficult as forcing them to stop eating meat or switching the university to renewable energy. All it takes is a few well-circulated messages to organize get-togethers for the reduced population still in town, to help make them feel less isolated in a deserted university. Links with the various clubs representing overseas students are also easy to build. And of course students forgoing expensive airfares save quite a bit of money, some of which can be put toward one or two really memorable Oxford experiences. Who wouldn’t be a bit tempted by a formal Christmas day feast alongside fellow students in one of Oxford’s grandest halls?

Most importantly, this is one of the rare opportunities when active participation by environmentalists willing to make a small personal sacrifice in service of the cause actually makes a difference—sustaining a community instead of condescending to one. Staying in town to organize (and socialize) instead of heading home for the holidays is a far cry from refusing to take hot showers or tumble dry clothes.

Such organization, planned and promoted well in advance, certainly doesn’t stop every overseas student—or even a large proportion of students—from traveling home for the holidays. A small population, however, is happy to stay in town if there is a community to support them, and the emissions savings from even that small population are enormous when compared with the savings from other environmental initiatives. Kudos to an efficient, practical, and well-considered approach to carbon reduction.



Oh, wait. There is no such effort at Oxford; the environmentalists all hopped on their flights home to eat tofurky with the parents.

Never mind.

Religion and Social Issues 

(Preface: I use the term “social issues” here to refer generally to issues of concern to society, including everything from civil rights to economics to natural disasters. Such topics are sometimes categorized as either current affairs or political issues; I feel that both those terms carry baggage—reaction-driven in-the-moment decisions and electoral tactics, among other things—that is best introduced independently from the underlying issues.)

As social issues have become an increasing focus of public attention—in some sense a new entertainment industry—there has been a trend towards “religious perspectives” on these issues. Religious advocates are quoted in newspapers, appear on talk shows, and even hold their own public meetings to discuss social issues. I’d like to take the opportunity to offer some nuanced and carefully considered advice to such advocates:

Shut the fuck up.

This isn’t a defensive attempt to silence those with whom I disagree. In fact, I agree with the doctrine of humanism which modern advocates publicly claim forms the core of their religions, and this doctrine is highly relevant to many social issues. What is more, I’m perfectly comfortable with advocates arguing ignorant anti-scientific rhetoric when they are given the opportunity; my expectation remains that the more explicit and well-understood such positions are the more they will fall out of favor.

I’m advising religious advocates to shut the fuck up for a different reason: I find their marketing offensive. It’s one thing for the Pope to offer unhelpful or counter-productive advice on dealing with earthquakes; it’s quite another for him to embrace an earthquake as an opportunity to pitch his product. I totally get that he thinks his product makes the world a better place, and that it makes people happy, and that every single person would be better off if they bought it. But I suspect Steve Jobs feels the same way about his products, too.

The most egregious cases of poor taste in marketing occur when it is the religion itself which has precipitated the problem under discussion. When discussing the abuse of children by priests, it only makes sense to invite someone from the Catholic church to participate. Appropriate participation, however, is limited to answering the concerns that are raised. Instead Catholic advocates consistently choose to argue that these incidents should not be “reasons to turn away from the Church”, digressing to long rants about all the good the church does. For all of Tony Hayward’s tone-deaf PR after the 2010 BP disaster in the Gulf, he didn’t try to turn every interview into a plug for big, gas-guzzling cars.

Suppose a chainsaw is recalled because the chain occasionally comes loose and amputates a user’s limbs. This is not the time for the company spokesman to argue that amputation is no reason to turn away from the company’s chainsaws. It damn well is a reason, and we all know it. So shut the fuck up and tell us how you’re going to stop it from happening again.

Even if the chainsaw is perfectly safe—the best chainsaw ever made, even—I’m not okay with chainsaw advocates in the news proclaiming “AIDS is terrible. We hate AIDS. Buy chainsaws.” If you can’t contribute to a discussion of AIDS without trying to sell chainsaws, then just shut the fuck up.

Religion has sunk so low that we don’t even bother to register disgust at its marketing tactics any more. That says a lot about an industry that claims the moral high ground.

WikiLeaks, Transparency, and Privacy 

The news has been dominated this week by “cablegate”. In short, 250,000 classified reports from US diplomats to the US State Department were leaked to a group called WikiLeaks, and WikiLeaks is publicizing the entire set. Although the policy revelations contained in the reports released so far have merely helped to confirm long-assumed truths of international diplomacy, the extremely candid assessments of foreign officials given by diplomats are quite embarrassing to all concerned. Many politicians and government officials in the US consider the release espionage (or even terrorism) and are demanding legal action.

WikiLeaks’ Motivations

The motivations of Julian Assange, the editor-in-chief and spokesperson of WikiLeaks, are described in a pair of essays he published in 2006. Assange begins with the premise that open, transparent government is good and that any form of secrecy is bad. The point of releasing an organization’s secret internal communications, however, is not that the release itself increases the organization’s openness—quite the opposite. Assange argues that releasing any secret information that becomes available forces the organization to become more closed, taking even more care than before to protect its secrets. This extra care reduces the efficiency of the organization, weakening it and making it more vulnerable to its “opponents”. The assumption seems to be that these opponents will be more open, and thus “better”.

If these essays are accurate portrayals of Assange’s (and WikiLeaks’) motivations, then the direct goal motivating the release of US government secrets is not democratic reform of the US government. The goal is instead the weakening of the US government’s ability to do its job such that all opponents of the current regime—other nation-states, advocates of secession and internal revolution, and presumably also mundane electoral processes—are more likely to topple it. In this model, it doesn’t matter whether the contents of the diplomatic messages are inflammatory or not; it is sufficient merely to induce fear within the US government that future (possibly much more inflammatory) information will be disclosed.

I consider it quite likely that Assange is simply wrong about the effects of such leaks. The key is to recognize that the only “opponent” of the current regime with any realistic chance of reforming/displacing it is internal democratic reform in favor of transparency, and I argue that these leaks have hugely weakened such a movement. The information released has repeatedly brought to public attention the benefits of occasional secrets. It is obvious, for example, that frank and detailed profiles of foreign leaders are useful, but equally obvious that these will frequently be unflattering and thus best kept private. The cables also reveal that Yemen was willing to cooperate with the US in attacking terrorist cells in its territory, but was unable to conduct such attacks itself and felt that allowing US attacks would make the Yemeni government look weak; it agreed to allow US attacks on the condition that the Yemeni government could claim responsibility—a compromise that few Americans, at least, would fault, but one dependent on the ability to keep secrets. Further, the cables demonstrate consistent good-faith efforts to consider all sides of nuanced cultural, political, and moral issues in ways that are simply not possible in public partisan political discourse. I expect the vast majority of Americans following the story in much detail would become less, not more, supportive of a fully-transparent US government.

But perhaps more important is the number of Americans who really do follow such stories in any detail. It is disingenuous to claim that WikiLeaks is merely “releasing” information—they are actively driving publicity and press coverage of the information in a carefully-crafted media strategy. Despite the fact that the documents being released were classified, and despite the “government secrets!” hype that’s been drummed up, the documents released so far contain few if any genuine revelations, only embarrassing paper trails for information that’s always been available to journalists willing to cite “unnamed sources”; most of the interest in cablegate is actually interest in WikiLeaks itself, not the content of diplomatic cables. Assange’s philosophy that even leaks with no direct impact increase the fear of future leaks may in fact be exactly backwards: voluminous leaks with no impact could result in a public less interested in the content of future leaks, and the government’s discomfort may be slightly eased by the expectation that each future release from WikiLeaks is likely to receive less attention.

Other justifications for leaking of government secrets

A common political justification for WikiLeaks’ actions is that democratic governments must be held to account and that transparency is necessary for this to happen. My reading of Assange’s essays is that he does not see this as WikiLeaks’ primary role—he does not see such leaks as a sufficiently powerful tool to provide full transparency—but it is frequently given as a defense of leaking government secrets in general.

Frankly, I’m still having trouble following the logic. Democratic governments are held to account by their democratic processes, which decide (among other things) what level of transparency to provide. More transparency potentially offers more accurate democratic decision-making, while less transparency could potentially offer increased governmental efficiency in some areas. The argument that democracy is unworkable without voter omniscience is one against democracy, not in favor of slightly more transparency. I strongly suspect that this argument is really just the currently-relevant line of defense in the “transparency is good for everyone but me” game: democratic governments are fair game, as are tyrannies since they could still be subject to democratic overthrow, and maybe big rich corporations that people can choose not to support (unless it’s a business I work for or do business with), and then maybe nonprofits (ditto), and then maybe powerful educational or medical organizations (except that I almost certainly have done business with some of them, and they hold a lot of my own personal data…), but releasing information owned by individuals is clearly off the table. I tend to think that no simple answer exists as to what level of transparency is appropriate for which organization, and post-hoc justifications are worse than useless.

Among the general public (or at least the technorati) an extremely popular reaction is that “information wants to be free”, that the technology for universal release of any and all information is already pervasive, and that there is little point in debating or judging cablegate, since such releases of information are inevitable anyway. I consider such an outlook to completely misunderstand the new landscape of information availability, which is not entirely black and white. This topic merits more space than I can devote here, but the most obvious refutation is that this really is a story about WikiLeaks. The full set of diplomatic cables could have been leaked via dozens of web sites and BitTorrent servers around the world; that’s not what happened. It is the journalistic structure of WikiLeaks that is giving this release of information its impact and it’s not at all clear that such a structure is inevitable.

Culpability of WikiLeaks and its associates

I’m fascinated by the knee-jerk defenses of WikiLeaks and its associates; apparently the media industry’s assault on fair use of copyrighted materials and puritanical censorship of media has made defense of redistribution purely instinctive. Worse, the concept of free speech is being distorted beyond all recognition: Amazon is being accused of violating WikiLeaks’ rights by refusing to host its web site.

On the latter point, Amazon is an independent company with the right to choose what it hosts, and it has made clear that WikiLeaks was removed for violating its terms of service (which require that customers have full rights to the content they serve). One could make a free-speech argument in the case where a monopoly (whether an individual company or a coalition of companies) is able to block all access—if ISPs chose not to carry the information, for example—but this is absolutely not the situation for web hosting services.

As John Gruber asks, if you do not think that WikiLeaks has the right to redistribute this information, then are the Guardian, the New York Times, and all other news outlets similarly culpable? The line here seems fairly straightforward to me: WikiLeaks is actively choosing to make secret information public, which is a blatant violation of US law. If the New York Times obtained classified documents, then I don’t believe they would necessarily have the right to publish them, either.

Once WikiLeaks has made the information irrevocably public, however, news outlets are free (or, in practice, obliged) to comment on this newly-public information. The New York Times is in no way a co-conspirator with WikiLeaks in the crime of making these secrets public; they merely have the same free-speech rights as anyone else to comment on public information. The Guardian is in a marginally dicier position than the New York Times because they did actually enter an agreement with WikiLeaks to obtain prior access to the documents, but it’s difficult to imagine there’s anything they could have done to prevent the public release, so they can’t be considered a conspirator in the crime, either.


WikiLeaks: conspired to leak classified documents, which is a violation of US law

Julian Assange: rather paranoid, sophomoric in his view of conspiratorial organization, and interested in hurting the US and other large organizations

US State Department: has been doing the awkward things we’ve always expected (keeping an eye on visiting diplomats, for example), but there’s no evidence for the very unsavory things that some of us wonder about (assassinating political enemies, etc.); lots of evidence that it’s doing the nuanced and thoughtful analysis we’ve always hoped occurred in spite of simplistic public political rhetoric about US international policy

Amazon, The New York Times, The Guardian: still just doing their jobs

UK Higher Education and US Health Care 

It’s hardly a new phenomenon, but the public “debate” over health care reform in the US focused primarily on opposition that took the following form:

  1. Misrepresent the new proposal.
  2. Present existing problems as newly-introduced problems.
  3. Provide no alternatives; suggest that the choice is between the current proposal and some abstract principled ideal (instead of the status quo).
  4. Ignore (or misrepresent) approaches that have been robustly implemented elsewhere.

I followed that debate in the US media, but I was living in the UK at the time and this rather unproductive rhetorical pattern—particularly tactic 4—was frequently cited as “typically American”. There seemed to be an assumption that British politics were less susceptible to such insular ignorance.

It took less than a year, but we’ve already been provided with an example of the exact same tactics being used in an attempt to block policy reform in Britain, this time with respect to funding for universities.

As a quick primer for non-Brits, university students currently pay only a fraction of the true university tuition cost—the balance is funded by the government. Further, students are entitled to government-provided loans for living expenses while they are enrolled at university. The new proposal is for students’ tuition fees (which would be capped at £9000 per year) to be paid by the government but recorded as a loan to the student. Every year after graduating the student would be obliged to repay a part of this loan dependent upon their income (anyone making under £21,000 need repay nothing in that year); after 30 years any outstanding debt would be forgiven. Under both current and proposed plans, government costs are paid out of the general budget (i.e. general tax revenue).
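The repayment mechanics lend themselves to a quick sketch. The text above specifies only the £21,000 threshold and the 30-year forgiveness; the 9% marginal rate is the figure from the actual proposal, and the function and constant names are mine:

```python
# Sketch of the proposed income-contingent repayment described above.
# Assumption: repayment is 9% of income over the threshold (the rate is
# not stated in the text; 9% is the figure from the actual proposal).
THRESHOLD = 21_000      # no repayment on income below this
RATE = 0.09             # marginal repayment rate above the threshold
FORGIVENESS_YEARS = 30  # any balance remaining after this is written off

def annual_repayment(income):
    """Repay a fixed fraction of income above the threshold, to the penny."""
    return round(max(0.0, (income - THRESHOLD) * RATE), 2)

print(annual_repayment(18_000))  # 0.0 — below the threshold, repay nothing
print(annual_repayment(30_000))  # 810.0 — 9% of the £9,000 above it
```

The key property is that repayment scales with post-graduation income rather than with the debt itself, which is why the scheme behaves more like a graduate tax than a conventional loan.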

As with the health care debate, this proposed policy shift highlights a number of interesting issues. What drives the real cost of a university education, and how can this be controlled? What motivates people to pursue university degrees, and what discourages them? Most importantly, what is the “value” of a university degree, whether economic, social, or otherwise? Is that value delivered primarily to the student, or is there an external benefit to society of having more graduates? How do these values differ between universities, courses, and students?

I have read a fair amount about the new funding proposal, and I have not found a single discussion in the mainstream media about any of these issues. Instead, we have a parallel of the health care debate:

  1. Plenty of implication that students will now need £9000 cash in hand to go to university. In fact, the new proposal reduces the money a student needs when they start university; the only increase is the amount they must repay afterwards.

  2. Arguments that low-income students will be discouraged from attending university because they don’t want to get into debt. Low-income students are already piling up debt with loans for living expenses during university, and that’s not to even mention the years of real income students forsake by studying instead of working. These are not new problems, and it’s not at all clear whether the new proposal will make them worse or better.

  3. Student protests are demanding that government retain “free education” for all students—despite the fact that education has never been “free” either for society as a whole (someone is paying for it) or for students themselves. There is an attempt to make this proposal a referendum on the notion of social mobility, which everyone on every side of the debate supports anyway.

  4. British awareness and understanding of the US university system (which, despite its many, many problems, remains the envy of the world) is far worse than American understanding of British health care. I would not personally support adopting the American model wholesale, but a rational assessment of need-blind admissions at private universities would be refreshing.

As is probably evident, I tend to support the new proposal on the grounds that it’s inherently unfair to force those who don’t attend university to pay for those who do and later go on to make enough money that they can afford to pay for it (retroactively). But beyond the policy itself, the rhetoric surrounding it has underscored two things that I discovered during my decade in the UK. First, the smug condescension heaped upon US politics and culture by the Brits is much less well-deserved than I thought before I left the US: the US really is that bad, but the Brits are seldom better. Second, the UK remains fixated on notions of “class” in ways that the US simply isn’t. This latter point merits far more attention than I can provide here, but it’s worth considering how Americans would react to the assertion that Harvard students deserve special dispensation because the country is so much better off for having them.

Advancement and Cultural Relativism 

There’s been an amusing back-and-forth over a relatively recent book, Disrobing the Aboriginal Industry: The Deception Behind Indigenous Cultural Preservation, which questions whether “cultural preservation” is doing more harm than good for aboriginal populations.

As is often the case in these things, both sides succeed in demonstrating their opponents’ points. One of the book’s co-authors is caught dismissing the significance of the pre-Columbian city of Cahokia on the Mississippi a bit too glibly, and the Cahokia fans are a touch too eager to spin the existence of colored plaster into tales of a bustling metropolis.

But it’s a response from Christopher Powell, a Canadian sociology professor, that demonstrates why “native studies” seem so ridiculous to those outside the field. Powell effectively dismisses any notion of progress or advancement, arguing that colonization has wiped out every definition of advancement that doesn’t rank European civilizations first, so there is no remaining objective measure of progress. Given the negation of any definitional foundation, Powell is able to draw equivalences between aboriginal Tasmania and the modern global economy:

The Aboriginal peoples of Tasmania cultivated their ecosystem’s resources sustainably for 12,000 years, while industrial food production produced a situation of global food insecurity in under 200. Which society is the more advanced?

I rather like the notion that aboriginal Tasmanians enjoyed food security, while occasional pockets of hunger resulting from political obstacles to modern global food distribution are proof that we don’t.

Powell’s question, however, is the bigger concern here. Most academics attribute at least some of their motivation to the belief that improved understanding of your topic provides a net benefit to society. If you really don’t think that the transition from stone-age tribalism to digital-age secular liberalism represents any kind of quantitative advancement, then I honestly wonder whether you value knowledge and understanding at all.

Powell goes on to consult dictionaries, finding that synonyms for “uncivilized” are often pejorative:

When we use words like ‘barbaric’ and ‘savage’, these negative connotations come bundled up with supposedly value-neutral connotations. We can ignore this complexity, but it doesn’t go away. While nineteenth-century anthropologists like Lewis Henry Morgan had some genuine sympathy for the Indigenous peoples that they studied, they still took it for granted that European peoples were superior, not only technologically but culturally and morally as well.

Putting aside Sam Harris’s contention that empirical evaluation of different moral systems is possible, even within a single moral framework cultural advancement is definitely quantifiable. An action or belief can be unanimously morally condemned by a culture, despite the fact that its prevalence in that culture is not zero. Murder and rape have been condemned in every culture ever studied (although the definitions of both vary), and yet they have always remained problems; individuals constantly fail to live up to their moral ideals. A culture which succeeds in closing the gap between its professed ideals and its actual behavior has “advanced” by its own standards.

There is simply no question that European peoples have made progress over the past 12,000 years at reducing murder and rape, and I’ve never seen any argument from a cultural relativist that such reduction is a direct result of compensatory cultural degeneration. One may rail against capitalism and social mobility and all the other cargo carried along by modern secular liberalism, but questioning the relative worth of values which differ between cultures does not negate the progress that has been made on values that are shared. Even within modern societies, cultural advancement is both easily defined and readily apparent: since the categorization of racism and sexism as moral evils, the prevalence of both has steadily declined; one can see the same trend for homophobia. Even if you think that a culture accepting of gays is inferior to one stigmatizing them, there is no doubt that a culture trying to accept gays but failing is worse than one trying and succeeding.

Apologists for current aboriginal policies condemn books like Disrobing the Aboriginal Industry for advocating Eurocentric solutions to social ills like substance abuse, poverty, and violence. This seems like a disastrous conflation of aboriginal people’s right to choose their own goals and values (e.g. is preserving a particular culture and lifestyle worth much higher rates of poverty and violence than their colonial neighbors? is the pretense of sovereignty worth gross inefficiencies in service provision?) with the contention that traditional aboriginal practices are just as effective in achieving particular outcomes as Eurocentric approaches.

Modern government and social policy are as crucial and versatile a technological tool as modern medicine, and disinformation about efficacy should be treated with just as much contempt.

David Paterson is a dick. 

New York governor David Paterson has been trying to get the Park51 project to change their plans to build a mosque in lower Manhattan. Like a true politician, his reasons involve no principle other than the avoidance of an unpalatable political debate. He is reportedly “trying to bring people together on the issue”.

In the wake of the stabbing of a Muslim cab driver, Paterson has suggested that the current “debate” over the mosque could be among the causes. His official statement:

In the wake of the alleged hate crime against a New York City taxi driver, I must take this opportunity to remind New Yorkers that we cannot and will not allow bias and ignorance to infect our communities and deny our hard working, innocent residents the respect they deserve.

The potential for this kind of violence is one of the reasons why I have called publicly for a respectful and unifying conversation about the Park51 project. I continue to offer my assistance for an open dialogue that I believe will help to bring New Yorkers together.

Additionally, I’d like to thank the New York Police Department and first responders for their quick response to the scene and speedy apprehension of the suspect.

I have two things to say about this:

  1. The conclusion that the stabbing is a consequence of the mosque plan is similar in many ways to the argument that 9/11 was a consequence of US policy in the Middle East. There’s probably a connection in both cases, but you’ve got to be careful about blaming people for promoting integration because of the resentment it may foster among parochial psychopaths.

  2. Calling for “unifying conversation” is misunderstanding the fundamental point of liberal societies: it’s not about unanimity. The American ideal is not a society in which everyone agrees; it’s a society which is able to function in spite of disagreement. That Paterson doesn’t understand this—that he seems to believe there is a solution to which nobody will object—demonstrates a very worrying grasp of political theory.

iPad-Type Tablets 

An email that just went round to Oxford’s entire computing laboratory:

Dear All

We have been in conversation with a commercial organisation regarding the launch of a new product. The are about to release an iPad type tablet, and want Android based applications especially in the medical and life science fields which are novel.

If you have or know of anyone who has such applications.... we would assist with any licensing work for you…can you let us know? I can put you in contact with [XXX].

With best regards


I’m not entirely sure what to make of this, but it’s certainly a contrast to the way people decide to make apps for the iPad.

On the (In)Significance of P ≠ NP 

As has been linked on various tech sites, a possible proof of one of the most important outstanding problems in theoretical computer science is currently under peer review. In light of this, it has quickly become the fashion for everyone to pretend that this has significance for the larger tech community.

It’s certainly an “important” result from an academic point of view and some very novel and inspiring work from Vinay Deolalikar (whether holes are found in the proof or not), but I have trouble seeing why anyone not doing research into complexity theory would care about the result.

I’m sure there will be lots of attempts at accessible descriptions of the P versus NP problem written in the next couple of days, with varying levels of success. Instead of dwelling on the particular problem, I’d like to try to explain the significance of this result by extending an analogy I proffered on Twitter.

Suppose a researcher announced a proof that a theoretically perfect gasoline engine would create greater power per unit volume than a theoretically perfect steam engine, including boiler. What consequences would that have?

First you’ve got to dig into what “theoretically perfect” means: in this case I’m imagining an abstraction where the materials to build the engines have no weight or volume, and combustion/expansion operate at 100% efficiency. It’s already a bit obvious that the researcher’s result isn’t necessarily telling us much about which of the two engine types is “better”; if gasoline engines need to be built out of iron but steam engines work fine with aluminum or plastic then steam may outperform gasoline in terms of power to weight ratios.

(The analogy to complexity is that although NP-complete problems are theoretically “harder” than those in P, the details of solving specific problems in each class can make all the difference. There are plenty of NP-complete problems that can be solved quickly for realistic problem sizes, and plenty of problems in P that are hopelessly intractable in practice. My own research focuses on algorithms to solve a certain type of 2NExpTime problem—i.e. much, much harder than NP—while scaling not much worse than linearly on the types of input data that occur in practice.)

Next, you have to acknowledge the bounds of what the researcher studied: gasoline and steam engines only. Again, if it’s power-to-weight ratio we’re talking about, then things like the weight of fuel matter. If we found some amazing technology for generating heat—and thus steam—using little to no fuel (cold fusion, anyone?), then steam engines would clearly beat gasoline engines until we got into the realm of engines far far bigger than the amount of fuel they need to carry, like drag racers.

(In the few practical situations where P versus NP matters, NP-complete problems are usually just one component among many. In cryptography, for example, the “hard problems” used to protect information are seldom the weak link in the security chain. The fact that a key cannot be directly cracked in polynomial time is neither a necessary nor a sufficient condition to ensure that a cryptographic system is secure.)

But most importantly, you only have to take one small step back to realize that this theoretical result about engine types is telling us (or, as described above, merely hinting at) something we already know: gasoline engines tend to be more powerful than steam engines. The industry figured this out decades ago. If the inverse had been found—that steam engines are theoretically more efficient than gasoline—then the result could have provoked a huge amount of new work on steam engine design. But in fact all we gained was further confidence in our existing assumptions, so work on gasoline engines will continue just as it has done.

(If it had turned out that any problem in NP could be transformed into a problem in P, then there would have been a huge scramble to come up with practical transformations for the major NP-complete problems. But the claimed result is that no such transformations are possible, so there’s no point looking for them. And the truth is, nobody really had been looking: everybody has long assumed that the two classes were unequal. I’ve read quite a few papers containing sentences that include “…and if we assume that P ≠ NP, we can conclude that…”; there’s an acknowledgement in the literature that we’re depending on P ≠ NP, which is not known for sure, but no attempt to explore what the consequences would be if P = NP. The significance of a P ≠ NP proof is that it could remove these caveats from a litany of academic results.)

On the tech community reaction

I’m intrigued by the type of reaction the P ≠ NP proof attempt is getting in the tech community. For the most part the tech sites with a more journalistic approach (strong editorial control; content from staff writers) have stayed away from the story, perhaps waiting for the results of peer review. The more democratic sites that rely on user submissions, such as Digg, Hacker News, and Slashdot, however, are making a big deal of it. Despite the lack of any practical impact, geeks the world over are pretending that this matters to them; suddenly random programmers who took a course or two in computing are reacting as though they were complexity researchers. I don’t recall all the accountants who took a few college classes in mathematics jumping on the Poincaré conjecture bandwagon…

A Simple Rule for Competition Design 

There is a scale of commitment for any competition. At the low end, it’s not terribly relevant who wins a friendly contest—sometimes the participants barely notice who wins a game like charades or Pictionary, in which scoring is mostly an afterthought. In many other casual games people do attach some importance to winning, but it’s usually kept in some perspective: you want to beat your buddies at poker, but it’s not worth anyone losing their house and life savings over.

There is a point, however, at which winning becomes a matter of life and death, and it is passed far more often than is widely acknowledged. Most serious fitness training, for example, requires a conscious choice to ignore discomfort and keep going even when your body tells you to stop. I’d argue that among elite athletes the natural desire to stop can be almost entirely tuned out: it is active analysis (and professional coaching advice) underpinning the decision about what signals to heed. In competition, athletes regularly ask more of their bodies than they’ve ever asked before, and they are prepared to dismiss at least some of the warnings their bodies feed back. The lesson every athlete must internalize is that even one’s own body can underestimate its true limits.

The flip side is that sometimes third-party analysis of your body’s limits is wrong, and the instinct to stop is correct. In most cases the failure mode isn’t terribly catastrophic. A runner who sets too fast a pace simply won’t be able to hold his speed to the finish line: conscious choices to run quickly are trumped by limited physiology. In other cases injuries can occur: while sometimes a weightlifter’s muscles may quietly fail to deliver enough force, it’s also possible that the tissue will tear. This is accepted as a necessary risk of pushing the limits of your own strength.

The problem is that once you get above a fairly rudimentary level of commitment, competitors are going to push all the way to failure. So here’s the rule:

  • When designing a competition, ensure that the failures that result from competitors pushing themselves past their limits are both well-known and acceptable.

There’s a reason that the game of “chicken”, where two cars drive straight at each other and the first to swerve away and avoid a deadly crash loses, is not taken seriously as a sport: anyone who did take it seriously would simply never swerve away, and any contest between two such elite competitors would result in two deaths.

The World Sauna Championships, which are clearly nothing but a game of chicken involving hyperthermia instead of cars, have just killed their first elite competitor. If the failure mode of such a competition were that the loser lost consciousness and could be revived without permanent injury, then that would be one thing. In reality, however, these people are being cooked alive.

I suppose we should be grateful that the failure modes for competitive eating are just choking (which can be addressed with medical personnel on hand) or vomiting (which rather ruins any possible appeal of such contests for me).

Lying about solar 

Some media and the usual cohort of environmentalists have once again decided to disengage their brains and embrace some bullshit to bolster their narrative, as is their wont. The latest is a story that solar power is cheaper than nuclear, based on a report that some stories are calling “a new study by two researchers at Duke University”. Despite that sciencey description, this is not a peer-reviewed paper and it’s not from independent researchers. It’s a position paper published by NCWarn, an environmentalist organization whose primary goal is the elimination of nuclear power.

The paper is at heart nothing but rhetoric arguing in general terms that nuclear is bad and solar is good. The only hard numbers to support the contention that solar is cheaper are relegated to an appendix, where it is revealed that completely different methods were used to calculate the costs for solar and nuclear. They took the costs of nuclear from a single hand-picked study (and uprated them, assuming that nuclear would get more expensive as time went on), but they came up with their own formula for capital cost per kilowatt for solar.

Their formula incorporates the project cost, an “amortization factor” which is effectively how much of this project cost is attributable to any given year (they estimated about 7.8% based on a 25-year lifetime), the generating capacity of the project, and the capacity that could actually be used—solar only generates 18% of what it could if it were high noon on a clear day every second of the year. Then they included federal and state subsidies to knock the price down by another 65%. Their sample calculation shows an $18,000 system generating 3 kW at 85% efficiency and 18% utilization, for a real cost of 35 cents per kilowatt-hour, or 16 cents per kilowatt-hour after subsidies.
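Their sample calculation is easy to reproduce. A minimal sketch of the formula as I read it (variable names are mine, not NCWarn’s):

```python
# The paper's levelized-cost formula, as described above: annualized
# capital cost divided by the energy actually delivered in a year.
HOURS_PER_YEAR = 8760

def cost_per_kwh(project_cost, capacity_kw, efficiency, utilization,
                 amortization=0.078):
    annual_cost = project_cost * amortization              # $ per year
    avg_output_kw = capacity_kw * efficiency * utilization
    annual_kwh = avg_output_kw * HOURS_PER_YEAR
    return annual_cost / annual_kwh                        # $ per kWh

# Their sample system: $18,000 for 3 kW at 85% efficiency, 18% utilization.
print(round(cost_per_kwh(18_000, 3, 0.85, 0.18), 3))  # 0.349 — the 35 cents
```

Note that the 7.8% amortization factor is roughly a standard capital recovery factor for a 25-year lifetime with interest; a straight 25-year write-off with no financing cost would be 4%.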

As I said, they don’t apply this formula to nuclear plants; they do, however, include an appendix of recent nuclear plants, their capacities, their original estimated cost, and the upward cost revisions, in an attempt to show how the cost of these plants is always underestimated. Of course, this means we can plug these numbers into their formula and compare solar and nuclear apples-to-apples. With no subsidies (so comparable to the 35 cents per kilowatt-hour number for solar), here’s what we get for their list, using only the higher revised costs (after overruns exceeding 300% in some cases):

  • Florida Power & Light Turkey Point Reactor: 7.2 cents per kilowatt-hour
  • Progress Energy Shearon Harris 2 & 3: 3.7 cents per kilowatt-hour
  • Progress Energy Levy: 9.1 cents per kilowatt-hour
  • CPS South Texas Project: 6.0 cents per kilowatt-hour
  • S. Carolina Elec. & Gas V.C. Summer: 5.0 cents per kilowatt-hour
  • Duke Energy William Lee: 4.5 cents per kilowatt-hour
  • PPL Bell Bend: 8.3 cents per kilowatt-hour
  • TVA Bellefonte: 7.1 cents per kilowatt-hour
  • Atomic Energy of Canada, Ltd. Darlington (cancelled): 9.6 cents per kilowatt-hour
  • Constellation Energy Calvert Cliffs: 5.3 cents per kilowatt-hour

Even using this list of the most egregious nuclear plant cost overruns, the authors’ own formula concludes that completely unsubsidized nuclear power is half the price of heavily subsidized solar. If a nuclear plant approaches even a quarter the cost per kilowatt-hour of a solar project, it is cancelled as uneconomic. And there’s a reason for this: residential US electricity prices average around 10–12 cents per kilowatt-hour. Even the subsidized 16-cent price given for solar is way more than people are currently paying for power.

I should say for the record that I consider their formula to be absurdly naive, and that I’m a big fan of solar technology for a lot of reasons (distributed infrastructure chief among them). But I’m extremely concerned about the willingness of environmentalists to embrace lies and brand them as science. There’s a reason I’ve learned to question any numbers reported by environmentalists: in most cases where I’ve checked, they’ve turned out to be fraudulent.

Film Criticism: On the Irrelevance of Armond White 

SlashFilm recently conducted an interview with Armond White, the notoriously contrarian critic who panned Toy Story 3 (destructive consumerist themes) but loved Transformers 2. Reaction from David Chen of SlashFilm is here.

I think White’s view of Roger Ebert gives the most insight into his approach to film criticism:

I do think it is fair to say that Roger Ebert destroyed film criticism. Because of the wide and far reach of television, he became an example of what a film critic does for too many people. And what he did simply was not criticism. It was simply blather. And it was a kind of purposefully dishonest enthusiasm for product, not real criticism at all…I think he does NOT have the training. I think he simply had the position. I think he does NOT have the training. I’VE got the training. And frankly, I don’t care how that sounds, but the fact is, I’ve got the training. I’m a pedigreed film critic. I’ve studied it. I know it. And I know many other people who’ve studied it as well, studied it seriously. Ebert just simply happened to have the job. And he’s had the job for a long time. He does not have the foundation. He simply got the job. And if you’ve ever seen any of his shows, and ever watched his shows on at least a two-week basis, then you surely saw how he would review, let’s say, eight movies a week and every week liked probably six of them. And that is just simply inherently dishonest. That’s what’s called being a shill. And it’s a tragic thing that that became the example of what a film critic does for too many people. Often he wasn’t practicing criticism at all. Often he would point out gaffes or mistakes in continuity. That’s not criticism. That’s really a pea-brained kind of fan gibberish.

As is often the case, the real disagreement here doesn’t seem to be with reality but with a definition of terms. I’d guess that Armond White views film criticism as some combination of the following:

  1. A tool for the education of film students, with various aspects of a film broken down to see how certain effects are achieved.

  2. An opportunity to place films within an historical or societal context, commenting on how they reflect or predict trends within film-making or culture at large. (Or, if the films don’t work, how they fail to do so.)

  3. A literary venue in which a particular film is just a backdrop for an essay on whatever topic the “critic” chooses; the content of the film is largely irrelevant.

I think all of these goals are valuable. I take great exception to writing of type 3 when it is presented as type 2—when an essay pretends to be presenting factual analysis—but there’s nothing wrong with the form when it’s presented honestly. Restaurant reviews and sports columns, in particular, are terrific playgrounds for skilled essayists, perhaps because nobody takes the text terribly seriously.

But White seems to miss the distinction between his academic “film criticism” and popular film criticism—a.k.a. movie reviews. Such reviews have a very different goal:

  • Predict whether or not viewers will enjoy the movie.

That’s what Ebert, and most people with column-inches in newspapers and on web sites, do. And the truth is, it’s in many ways a tougher job than the kind of academic analysis that White chooses to undertake. It requires understanding the tastes of dozens of different overlapping audience types, as well as the ways films are marketed and to whom they will appeal. It requires putting aside the special perspective that viewing hundreds of films a year for decades gives you, and seeing what an average audience would see. This includes the “gaffes and mistakes” that would distract a typical viewer from their enjoyment, but not technical aspects that fail to draw viewers’ attention. White may claim “I’ve got the training. I’m a pedigreed film critic. I’ve studied it. I know it.”, but as a movie reviewer he’s completely clueless.

In short, Armond White is about as relevant to my appreciation of film as a linguist is to my appreciation of novels.

Cropping PDFs for the iPad 

The iPad version of iBooks got support for PDFs in a recent update. Overall support is quite good, however the iPad screen size is roughly half the size of an A4 or 8.5 by 11 page, so text can be very small. In many cases work formatted for the printed page includes huge margins, whether to accommodate binding or reformatting for different paper sizes, or just to facilitate holding a page without obscuring text. None of this is necessary on the iPad, so it’s often very useful to crop PDF pages in order to devote more of the screen to content.

I hope that Apple someday adds cropping as a core feature of iBooks, but for now the in-app zooming only works on individual pages: when you turn the page you are zoomed back out. To read a PDF without margins, you need to crop the PDF file itself.

I use the Skim application for the Mac to crop pages. The simplest approach is just to use the “Select Content” feature from the “Tools” menu, and then “Crop” from the same menu. This will eliminate the margins, but it won’t necessarily produce a PDF with the correct dimensions for the iPad screen: if your pages do not have exactly the right proportions, each page will be centered between gray borders.

To eliminate the borders, you need to crop to page sizes in the proportion 1:1.30729 (the iPad screen size, minus twenty pixels for the top status bar, is 1004×768). Further, you may want to consider the on-screen controls, which produce a 44-pixel header, a 44-pixel footer, and a page number which rises 81 pixels above the bottom of the screen. If you want your PDF to be readable even when the controls are visible, keep your content out of the top 4.38% and bottom 8.07% of the page.

In all cases, you need to tell Skim to export the file as a “PDF with embedded notes” in order for the application to re-encode the PDF and save your crop.

As an example, I wanted to format today’s McDonald v. Chicago Supreme Court decision for the iPad. While the default page size is 612×792, Skim’s “Select Content” feature produces a selection 320×536. This includes running headers, which I don’t mind being obscured by on-screen controls, but the content in this selection runs right to the bottom, which I’d prefer were still visible. I thus extend the crop box to a height of 583 (536∕0.9193). The resulting selection is narrower than iPad proportions, so I extend it 446 wide (583∕1.30729) by dragging out an extra 63 pixels on each side. The result is perfectly-sized pages with still-generous margins and text 36% larger than the original document. The converted PDF is available here.
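
For anyone who wants to script this, the crop arithmetic above can be sketched in a few lines of Python (a rough sketch using the metrics from earlier: a 1004×768 usable screen after the status bar, and the 81-pixel page-number control at the bottom; the `crop_box` helper is my own invention, not part of Skim):

```python
# Expand a Skim "Select Content" selection to iPad screen proportions,
# keeping the bottom 81/1004 of the page clear so the on-screen page
# number doesn't obscure any content when the controls are visible.
ASPECT = 1004 / 768            # portrait height : width, ~1.30729
BOTTOM_CONTROLS = 81 / 1004    # ~8.07% of the screen height

def crop_box(content_w, content_h, keep_bottom_visible=True):
    """Return a (width, height) crop size in PDF points."""
    h = content_h / (1 - BOTTOM_CONTROLS) if keep_bottom_visible else content_h
    w = max(content_w, h / ASPECT)   # widen narrow selections...
    h = max(h, w * ASPECT)           # ...or heighten wide ones
    return round(w), round(h)

print(crop_box(320, 536))  # the McDonald v. Chicago selection → (446, 583)
```

This reproduces the 446×583 crop computed by hand above; the extra width still gets split evenly to each side when you drag out the selection in Skim.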


I’ve also converted the Bilski v. Kappos decision, which may have more relevance to the tech crowd (although it doesn’t seem to clarify much about patent law).

Extended Medals 

For small competitions designed primarily to choose a single winner (e.g. elimination tournaments) it makes sense to recognize only the top few places; traditionally the top three places are awarded gold, silver, and bronze medals. In larger competitions in which a large number of competitors are reliably ranked, however (e.g. contests where each individual strives to achieve the highest score, independent of other competitors), it can make sense to recognize more than the top three: “top ten” lists are not uncommon. Unfortunately, there is no agreed standard for medals beyond the top three spots. For the good of humanity, I now proclaim medal types for the top ten finishers in any contest:

  1. Gold
  2. Silver
  3. Bronze
  4. Iron
  5. Stone
  6. Glass
  7. Wood
  8. Leather
  9. Cloth
  10. Paper

You’re welcome.

Arizona outlaws accents 

Well, for teachers, at least.

The faculty of the Department of Linguistics at the University of Arizona have released a statement condemning the policy, based on eight separate points supported by linguistics research. Points 6 and 7 cut to the heart of the political issue:

6) There are many different ‘accents’ within English that can affect intelligibility, but the policy targets foreign accents and not dialects of English.

7) Communicating to students that foreign accented speech is ‘bad’ or ‘harmful’ is counterproductive to learning, and affirms pre‐existing patterns of linguistic bias and harmful ‘linguistic profiling’.

But I suppose it’s a sign of my own priorities that my first and strongest reaction to Arizona’s accent ban is the one the linguists address last:

8) There is no such thing as ‘unaccented’ speech, and so policies aimed at eliminating accented speech from the classroom are paradoxical.

When we say that someone has an ‘accent’, what we are really saying is that they speak in a way that sounds ‘different’ from a particular standard, or from our own pronunciation. Speakers are fully capable of drawing inferences about any person’s place of origin, age, ethnicity, gender and socioeconomic status based on the way we talk – and this is certainly true for speakers of American English. Since all human linguistic production is characterized by particular patterns of sound that allow others to draw these conclusions, it is axiomatic that all of us speak ‘with an accent’. The standard for instruction ought to be speaker intelligibility, not speaker identity – and intelligibility is distinct from ‘accentedness’.

iPad 3G data in the UK, part one 

There are four different networks providing plans for the iPad 3G in the UK, and all of them offer either pay-as-you-go or rolling contracts. What’s more, they all offer SIM packs for free, and at least some of them give you some initial credit with your free SIM, so it’s worth giving them all a try.

When I went looking for data plans a month ago (a few weeks before the UK release) Orange seemed to be the only company taking pre-orders for SIMs, so I requested one. They offer a 5p/MB à la carte service with £10 of free credit, a £2 plan lasting one day or 200MB (whichever comes first), and a £7.50 plan lasting a week or 1GB (in addition to their £15/3GB and £25/10GB monthly plans), so there didn’t seem to be any harm in trying. I went through the online order (which required bank details and such for future charges) and waited.

The SIM arrived the week after the iPad release (which would have been more than a little disappointing if 3G had been my only way to get online). It came with a pack saying that I needed to phone a number to activate it. I called and they confirmed the SIM number but had lost all of the information I put into the web site, so we needed to go through it again. Ten minutes (and £1.50: my mobile company was charging for the call) later the SIM was activated.

Then I tried to figure out how to choose my plan and check my balance. I went to the Orange web site recommended by the pamphlet that came with the iPad SIM pack, and it asked me to set up an Orange username and password. I did, and then the system said that it would send me a text I needed to confirm. The iPad can’t receive texts. The only way to set up a username is to call yet another number targeted at people who can’t read text messages. I called the number, and a pleasant recorded voice told me that they were experiencing exceptionally high call volumes and that unless my call was urgent I should probably try another time.

Orange’s iPad pamphlet also includes a recommendation to download their app from the App Store. It turns out it’s an iPhone app and not an iPad app, but I got it anyway. It demands your username and password, but includes a button to say that you don’t have a username. If you hit that button, you are told to visit the Orange web site using your desktop computer. Sigh.

Verdict: even if Orange had the best prices (which they don’t), they have absolutely no idea how to manage a data-only plan. One down, three to go.

Environmental Non-Recommendations 

Concern over global warming has led to a steady stream of advice on how you can cut your “carbon footprint”. Eating vegetables instead of an average serving of chicken or pork, for example, is claimed to save on the order of half a kilogram of carbon dioxide equivalent emissions. The environmentalists I have come into contact with in Oxford go so far as to recommend reducing energy consumption by taking cold showers: heating the water for a 25-gallon shower with an inefficient heater can require up to five kilowatt-hours, for 2.5 to 5 kg of CO2-equivalent. I’ve seen lengthy lists of recommendations, most with no numbers for actual savings attached, and most including items that either save nothing at all or actually increase consumption—“drive instead of fly!” is a particularly egregious example of confusing inconvenience with energy savings.

On all these lists, I’ve never seen anything about conventional versus microwave ovens, so I looked up some numbers. Electric ovens and ranges usually draw around two kilowatts each at peak, and about half this over the course of lengthy use (the elements aren’t being driven at maximum the whole time). Microwaves are usually rated at around 700–900 watts, but let’s generously assume they’re grossly inefficient and draw around 1500 watts. If you have the option of heating something up in the microwave for five minutes or bringing an electric oven up to temperature and then warming it in that for twenty minutes, the microwave saves you 80% of the energy used—about 250 to 500 grams of CO2 equivalent. Warming a meal with a microwave instead of an oven reduces emissions by almost as much as cutting meat out of that meal entirely.
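
The arithmetic behind these figures is easy to check (a back-of-envelope sketch; the ten-minute preheat and the 0.5–0.9 kg CO2e/kWh grid intensities are my own assumptions, chosen to match the ranges quoted above):

```python
# Oven: ~10 min preheat near the 2 kW peak, then ~20 min averaging 1 kW.
oven_kwh = 2.0 * (10 / 60) + 1.0 * (20 / 60)
# Microwave: 5 min at a pessimistic 1.5 kW draw.
micro_kwh = 1.5 * (5 / 60)

saved_kwh = oven_kwh - micro_kwh
print(f"saved {saved_kwh:.2f} kWh ({saved_kwh / oven_kwh:.0%} of the oven's energy)")
for intensity in (0.5, 0.9):  # kg CO2e per kWh of electricity
    print(f"  ~{saved_kwh * intensity * 1000:.0f} g CO2e at {intensity} kg/kWh")
```

With these assumptions the microwave uses about a fifth of the oven’s energy, and the savings land in the 250–500 g CO2e range claimed above.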

Environmentalist silence on this issue is not difficult to understand when you realize that popular environmentalism is as much about a dislike for technology as it is about protecting the environment. Of course, it’s also about self-righteous condescension, and vegetarians tend to like to cook…

Bach on Windows Mobile 

Robbie Bach, retiring president of Microsoft’s Entertainment & Devices Division, on Windows Mobile’s continued loss of market share to the iPhone and Android:

It’s one of those funny things where it depends on what metric you look at. If you just were to look at just market share, you’d say, hey, we still have some challenges. When you look at what I see in the products going forward, the engagement we’re getting from (phone makers), the engagement we’re getting from operators, I have real optimism and think the business is in a very good space.

I don’t think this can be dismissed as transparent spin. Microsoft is built on Windows and Office, which are sold mainly to computer manufacturers and big businesses, and the Entertainment & Devices Division makes all of its profit from the XBox, which depends on getting exclusives from a handful of big game publishers in order to drive sales. The Microsoft culture is focused on locking in support from other big companies; end users just aren’t relevant.

Microsoft inhabits a universe where grand economic structures are the key to success. In that world, the products themselves only ever need to be “good enough”.

How not to do climate change PR 

I went to a talk last week by Stephan Lewandowsky entitled “Climate Change: Consensus or Dogma, Hoax or Religion?” I can only assume that someone other than Lewandowsky wrote the title, because his view was fairly simple. Not dogma; not hoax; not religion. Just consensus.

Lewandowsky is a psychologist, not a climate scientist, and so his contribution is meant to be an understanding of why people believe what they do. Unfortunately, the talk I attended offered absolutely no insight into this topic. Instead, it was seized as an opportunity for climate-change evangelism, and as a chance for ad-hominem attacks on others.

Lewandowsky spent his time attacking “climate-change deniers”. He began by saying that there is a difference between skepticism and denial, but that everyone who calls themself a “climate-change skeptic” is actually a denier. Initially he claimed they fell into three different categories: tinfoil-hat conspiracy theorists on the internet, publicity-driven politicians, and those who actually publish in peer-reviewed journals. His punch-line, however, was that in fact all three groups are exactly the same, and that tinfoil-hat crazies get published in scientific journals from time to time.

One of the specific issues he addressed was the location of ground-based temperature measurements. Warming skeptics have repeatedly criticized such measurements as inaccurate because of the “urban heat island effect”: if you set up a thermometer in the woods but then chop down the trees and put up buildings and parking lots (and air conditioners and cars), the thermometer will give higher readings just because of its surroundings and not because of any underlying change to the weather. Lewandowsky described this criticism as “a bunch of people posting pictures of thermometers on the internet”, and claimed that there was lots of research showing that the whole idea was nonsense.

I don’t know very much about the heat island effect; I probably fall into the Google-Ph.D. category Lewandowsky derided. But it strikes me as a reasonable hypothesis, and a bit of googling suggested that it is an effect that is typically incorporated into statistical analysis of data from such thermometers. I had a very brief conversation with someone doing atmospheric physics here at Oxford, and while he admitted that he was not an expert on the topic he said he’s seen some papers suggesting that the heat island effect was smaller than anticipated, and that the bigger problem with ground measurements was that they were almost all on land, so you need to do a lot of interpolation to get a global picture of temperature. Ground-based measurements are useful, but satellite temperature measurements can be better when they’re available, and the warming trends are the same in the satellite data.

Lewandowsky also offered some very specific attacks on a few people in particular. He described Christopher Monckton, for example, as someone who “claims he won the Falklands war”, and offered up a silly photo of him (even taking the trouble to make fun of his hat). The only “skeptical” peer-reviewed paper he cited included an author without a PhD whose web site describes him as (among other things) “a travel photographer”. During the discussion after the talk, Lewandowsky dismissed Bjorn Lomborg as “dishonest” and all respected academics who questioned climate change as “old” and “sad”.

There was actually a member of the audience who seemed to be a climate scientist, and who at one point mentioned that the probability models for climate change included a very small but non-zero chance that global temperature would decrease over the coming century. Lewandowsky told him he was simply wrong. The audience member protested that the chances were much, much better that temperature would go up, but that all the models he worked with did include a potential decrease with very low probability; Lewandowsky told him that it simply wasn’t possible. This is a psychologist telling someone who works with climate models what those models say.

So I asked the last question of the Q&A: the case for climate change isn’t one single thread of reasoning, it’s a mountain of evidence. In this mountain, has there ever been even a single study or data set that’s turned out to be wrong? Have the skeptics ever been right about any of the details or made any significant contributions?

Lewandowsky’s answer: no. Every single piece of climate science points in the same direction.

I’ll concede that Christopher Monckton seems like an obnoxious guy, and that a “travel photographer” probably doesn’t have as deep an understanding of climate modeling as an academic, and even that Lomborg and climate skeptics have been wrong about some of their contentions. I’ll even consider the possibility that the audience member—who otherwise completely supported the premise of anthropogenic global warming—was wrong about current climate models. But it was Lewandowsky who had the chance last week to speak at length, make his case, and respond to questions. And the verdict is clear: he is the one refusing to consider evidence contrary to his position. He is the one ignoring scientific debate in favor of political grandstanding, attacking people instead of evaluating ideas. He is the one who views climate change as a religious crusade. A propensity for defensive dogma is not confined to the skeptic/denialist camp.

I also asked Lewandowsky whether he thought the stridency of some climate change rhetoric might be feeding into the denialist conspiracy theories. He said he just couldn’t understand why people would develop such delusions of persecution. Even if Lewandowsky didn’t undermine my respect for climate science, he did undermine my respect for academic psychologists.

Constitution Blindness 

I think “democracy” is the worst-understood concept in politics. Or, rather, while there may be more confused notions—capitalism comes to mind—people usually have some awareness of the issues’ complexities and are at least a bit wary of invoking their names as absolutes. Statements like “mandatory health care is anti-capitalist” provoke some minimum amount of reflection on the meanings of the words used; reaction to statements like “judicial review is undemocratic” tends to skip right past such parsing and on to consequentialist and historical argument.

The only widely-agreed definition is that in order for something to be democratic it must derive legitimacy from individual decision-making by all members of a group. It’s crucial to note that this description addresses the adjective and not the noun. It’s very common to use “democracy” as a shorthand for “a democratic political/governmental system”, but I think this is the start of much of the confusion: systems are not monolithic. In particular, decisions over how a political system is organized (its constitutional structure) and decisions made in the course of its functioning (e.g. its legal code) need not be arrived at under the same framework. The structure of a democratic process can be mandated undemocratically—a king can decree that peasants vote on which crown he should wear.

The real trouble is that people conflate the process of democratic decision-making with larger sociological or epistemic notions. Making a decision according to some codified set of rules doesn’t mean that people “agree”. It doesn’t mean the decision is moral or practical or that its suppositions are true. And despite the underlying goals of democratic organization, it doesn’t even guarantee that all members of the group will be satisfied with or respect the decision. A process is just a process, and wrapping decision-making up in constitutional structure based on enlightenment ideals doesn’t change that.

Any time a decision is made that will affect many people, whether by a king or a representative body or by a referendum of all members, and whether agreed by 51% or 67% or 90%, the various impacts on different subgroups need to be considered by the people making that decision. One thousand indifferent “yea”s may carry more constitutional power than a single well-considered and forceful “nay”; that doesn’t give them moral authority.

Democracy doesn’t absolve voters of responsibility any more than capitalism obviates altruism or the legal system obviates ethics. Setting aside differences in service of a greater purpose is not an emergent property of democracies; it is a prerequisite that must be met by each individual.

Serving on an Ethics Panel 

The health-care industry and various research communities (among others) make heavy use of “ethics panels” these days. Such panels are usually mandated to take a broad view of how specific actions will impact welfare: to what extent is it permissible to mislead someone in the course of a research experiment? would the knowledge gained from a particular experiment justify killing a dozen mice? when are patients competent to make decisions about refusing treatment?

It seems clear to me that, like judges, members of an ethics panel should be obliged to recuse themselves from any decision which impacts them personally. You can’t serve on a panel that decides whether or not someone else is allowed to donate a kidney to save the life of a member of your family, for example.

Now put that on hold for a moment and consider this story, which delightfully combines two villains of late: Arizona and the Catholic Church. Doctors agreed that because of a rare medical condition, a woman’s 11-week pregnancy threatened to kill her. They felt it was necessary to terminate the pregnancy. The decision was referred to an ethics panel which included, among others, a Catholic nun:

Sister Margaret McBride, who had been vice president of mission integration at the hospital, was on call as a member of the hospital’s ethics committee when the surgery took place, hospital officials said. She was part of a group of people, including the patient and doctors, who decided upon the course of action.

The panel agreed on the abortion.

Result? The nun was excommunicated from the Catholic Church. This was officially confirmed by the local bishop, but the excommunication was automatic the moment she let the panel make its decision.

I don’t know much about Catholicism, but I seem to recall that being excommunicated has pretty serious repercussions on your long-term welfare.

If making decisions as part of an ethics panel has a direct effect on your own welfare, then clearly you should recuse yourself from that panel; you can’t be expected to make objective judgements about the welfare of others because your own welfare is at stake. True believers in any of a great number of religions should thus never be allowed to serve on such panels.

I suppose the same logic applies to judges, as well. If a judge’s own religious welfare is tied to the decisions they make, then either the court is explicitly a religious one (i.e. religious beliefs trump all other concerns), or they are always at risk of a conflict of interest (i.e. some other concern could trump religious beliefs, forcing the judge to choose between their own welfare and their duty to the court). Thank goodness Elena Kagan is Jewish…

On Betting 

It’s not given to human beings to have such talent that they can just know everything about everything all the time. But it is given to human beings who work hard at it—who look and sift the world for a mispriced bet—that they can occasionally find one.

And the wise ones bet heavily when the world offers them that opportunity. They bet big when they have the odds. And the rest of the time, they don’t. It’s just that simple.

-Charles Munger

Manufacturing Software 

Obviously the software economy is very different from that of other businesses. The most widely-acknowledged difference is in the split between overhead and marginal costs: while prices for many physical goods have historically been dictated by the costs of production, producing one extra copy of a software product is effectively free.

The “zero marginal cost” property of software is often misinterpreted, however, as suggesting that there is no software analog to manufacturing. And I think this misunderstanding leads people to miss one of the other unique properties of software.

Whether you’re producing hairdryers or hand-carved statues, the general workflow is the same: first come up with a design, then get a bunch of people and/or machines to perform the construction, sell the finished product, and repeat the last two steps for each customer. In some businesses the design stage alone is incredibly expensive—automobiles, for example. In others, there’s only one manufacturing run—e.g. constructing an office building.

The important point is that in traditional businesses there’s usually a pretty clear separation between design and manufacturing. Once the product has been sold, the “intellectual property” produced during the design is out for all the world to see. In some cases IP law may hinder others from copying your design for a while, but in general anybody else will be able to go out and manufacture a knock-off for something comparable to your own cost of construction.

In software, however, there isn’t a clear line between “design” and “manufacturing”: software is effectively a design that’s been completely rigorously specified; all the detail work and quality assurance that is typically a part of the manufacturing process is performed centrally. Once a software product has been released, competitors can take a look at the list of features (and possibly even poke around to see how bits of the implementation work), but translating this “design” into something that can be produced at near-zero marginal cost still requires a huge investment.

Other businesses already have some of these properties. Setting up a manufacturing process for a new product can be very expensive and may only be practical for large volumes. But such costs still exist at the margins: producing twice as many products requires similar setup costs. The cost of “designing” the production process itself is a rounding error in traditional businesses.

The major impact from this difference between software and other products stems from the fact that manufacturing can be continuous for software: once you’ve finished manufacturing one product you can build on what you’ve already got to produce the next version: the manufacturing investment is (mostly) cumulative. This makes early leads in software much more valuable than leads in other fields, even where market share is irrelevant. Other companies may be able to invest and eventually catch up with Apple’s iPhone/iPad software (Palm’s WebOS and Google’s Android are both getting there), but if everyone keeps running at the same speed Apple will remain well out in front until the weight of their manufacturing legacy drags them down. Waiting until a market is mature and then leapfrogging the competition is impractical in markets where manufacturing doesn’t happen at the margins. Sony, take note.

Another Mohammed Picture 

In order to try to stifle criticism of their faith, some Muslims have threatened and sued and murdered. Leaders from Islamic countries (who can hardly be called fringe figures) even managed to pass a UN resolution banning any criticism of religion (Islam in particular), noting:

…everyone has the right to hold opinions without interference and the right to freedom of expression, and that the exercise of these rights carries with it special duties and responsibilities and may therefore be subject to limitations as are provided for by law and are necessary for respect of the rights or reputations of others, protection of national security or of public order, public health or morals and respect for religions and beliefs;

So the UN has recast freedom of thought as freedom from debate (“the right to hold opinions without interference”), and decreed that “respect for religions” trumps freedom of expression.

Well now the Islamic fundamentalists have gone too far: because of threats, Comedy Central won’t distribute episode 201 of South Park online. Not cool, dudes.

I wanted to draw my own picture of Mohammed and post it, but I’m bad at drawing, and these religious types really thought ahead by picking a prophet with a beard and turban, which are really hard to draw. So instead here’s someone else’s drawing (I don’t know whose):

Mohammed cartoon

I encourage everyone with a web page to publish their own picture of Mohammed.

Of course, my regular source for religious imagery is the Jesus and Mo web comic. This one is my favorite.

If you’d like to comment on my view of Islam as an oppressive, sexist, and violent religion, feel free to email me.

Why do American students choose non-mathematical careers? 

It’s widely accepted that in India, China, and other countries the best and brightest students choose courses and careers in engineering and the mathematical sciences, while in the US a much greater fraction of elite students instead choose subjects in the humanities, such as history and literature, and (largely) non-mathematical sciences, such as anthropology and psychology.

There are a number of possible explanations for this. I’d always assumed that the biggest factor had something to do with income distribution and how easy it was to be “middle class”, in the sense of meeting all major material needs with disposable income to spare: if only a small proportion of the population meets that threshold then the best and brightest are willing to invest to acquire specialized skills to separate themselves from the rest of the population. If the majority of the population can be middle class anyway, then such specialization is less necessary. (I’d argue that mathematical and engineering qualifications greatly increase the likelihood of achieving a good dependable salary, but that other factors are more likely to affect the chances of becoming extremely wealthy.) As a consequence of this, cultures with smaller and more elite middle classes attach more prestige to engineering and mathematical qualifications, which is a further incentive for students to choose those fields. I’d guess that there are amplifying factors to social prestige, as well: if one field attracts the best students and becomes more competitive, then it becomes even more prestigious, which makes the field even more attractive to the best students.

The above is an “externalist” explanation, based on the consequences of different field/career choices. As appealing as such explanations are from a classical economic perspective, I’m not sure they accurately describe how real people choose professions. While long-term consequences are factors, most decisions are made on a more short-term basis: people choose to do what is easiest or most enjoyable right now. An “internalist” account for why students might prefer mathematical sciences for their own sake is thus relevant.

The argument I’ve heard before is that different pedagogy could be a factor: other countries make mathematics more enjoyable and satisfying than the US does. This jibes with my own experience of a dysfunctional and inept US education system, but other countries’ systems appear even worse, so this explanation doesn’t hold up.

A fascinating post from a student educated in India proposes another pedagogical explanation: it’s not that the US is worse than other countries at making the sciences interesting and satisfying, it’s that the US is better than other countries at making all the other subjects appealing. History, for example, is taught in other countries as nothing but a huge collection of facts; the US instead focuses much more on narrative and analysis. According to this student:

I could never conceive of what a historian did because history seemed to me to be a body of well-defined facts without any idea of what methods a historian uses. I had no idea that there was even a science called sociology (beyond school, my reading consisted of voraciously reading pulp novels. Since access to good books in India is limited, I didn’t come into contact with it in my outside reading either). But on the other hand, I was well-aware of the “methods” of mathematics and the mathematical sciences. It was easy to imagine what scientists, mathematicians or engineers do: they solve problems! It wasn’t so easy to imagine what historians or sociologists did. (I probably thought they had to master a lot of facts in order to be historians – and who wanted to do that??!)

This theory suggests that the US focus on improving the education system has actually succeeded for elite students in some fields, but that there have been unforeseen consequences. Successfully engaging students in the humanities is a laudable goal on its own, but if we fail to engage them equally in the mathematical sciences then we’ll continue to see students migrate away from those fields.

Richard Feynman on education in Brazil 

From Surely You’re Joking, Mr. Feynman!:

In regard to education in Brazil, I had a very interesting experience. I was teaching a group of students who would ultimately become teachers, since at that time there were not many opportunities in Brazil for a highly trained person in science. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on.

The university was located in various office buildings throughout the city, and the course I taught met in a building which overlooked the bay.

I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.

Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.

We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.

They hadn’t any idea.

I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”

Nobody said anything.

Then I said, “Have you ever heard of Brewster’s Angle?”

“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”

“And which way is the light polarized when it’s reflected?”

“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!

I said, “Well?”

Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.

I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”

“Ooh, it’s polarized!” they said.

After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!

Later I attended a lecture at the engineering school. The lecture went like this, translated into English: “Two bodies… are considered equivalent… if equal torques… will produce… equal acceleration. Two bodies, are considered equivalent, if equal torques, will produce equal acceleration.” The students were all sitting there taking dictation, and when the professor repeated the sentence, they checked it to make sure they wrote it down all right. Then they wrote down the next sentence, and on and on. I was the only one who knew the professor was talking about objects with the same moment of inertia, and it was hard to figure out.

I didn’t see how they were going to learn anything from that. Here he was talking about moments of inertia, but there was no discussion about how hard it is to push a door open when you put heavy weights on the outside, compared to when you put them near the hinge – nothing!

After the lecture, I talked to a student: “You take all those notes – what do you do with them?”

“Oh, we study them,” he says. “We’ll have an exam.”

“What will the exam be like?”

“Very easy. I can tell you now one of the questions.” He looks at his notebook and says, ” ‘When are two bodies equivalent?’ And the answer is, ‘Two bodies are considered equivalent if equal torques will produce equal acceleration.’ ” So, you see, they could pass the examinations, and “learn” all this stuff, and not know anything at all, except what they had memorized.

Then I went to an entrance exam for students coming into the engineering school. It was an oral exam, and I was allowed to listen to it. One of the students was absolutely super: He answered everything nifty! The examiners asked him what diamagnetism was, and he answered it perfectly. Then they asked, “When light comes at an angle through a sheet of material with a certain thickness, and a certain index N, what happens to the light?”

“It comes out parallel to itself, sir – displaced.”

“And how much is it displaced?”

“I don’t know, sir, but I can figure it out.” So he figured it out. He was very good. But I had, by this time, my suspicions.

After the exam I went up to this bright young man, and explained to him that I was from the United States, and that I wanted to ask him some questions that would not affect the result of his examination in any way. The first question I ask is, “Can you give me some example of a diamagnetic substance?”


Then I asked, “If this book was made of glass, and I was looking at something on the table through it, what would happen to the image if I tilted the glass?”

“It would be deflected, sir, by twice the angle that you’ve turned the book.”

I said, “You haven’t got it mixed up with a mirror, have you?”

“No, sir!”

He had just told me in the examination that the light would be displaced, parallel to itself, and therefore the image would move over to one side, but would not be turned by any angle. He had even figured out how much it would be displaced, but he didn’t realize that a piece of glass is a material with an index, and that his calculation had applied to my question.

I taught a course at the engineering school on mathematical methods in physics, in which I tried to show how to solve problems by trial and error. It’s something that people don’t usually learn, so I began with some simple examples of arithmetic to illustrate the method. I was surprised that only about eight out of the eighty or so students turned in the first assignment. So I gave a strong lecture about having to actually try it, not just sit back and watch me do it.

After the lecture some students came up to me in a little delegation, and told me that I didn’t understand the backgrounds that they have, that they can study without doing the problems, that they have already learned arithmetic, and that this stuff was beneath them.

So I kept going with the class, and no matter how complicated or obviously advanced the work was becoming, they were never handing a damn thing in. Of course I realized what it was: They couldn’t do it!

One other thing I could never get them to do was to ask questions. Finally, a student explained it to me: “If I ask you a question during the lecture, afterwards everybody will be telling me, ‘What are you wasting our time for in the class? We’re trying to learn something. And you’re stopping him by asking a question’.”

It was a kind of one-upmanship, where nobody knows what’s going on, and they’d put the other one down as if they did know. They all fake that they know, and if one student admits for a moment that something is confusing by asking a question, the others take a high-handed attitude, acting as if it’s not confusing at all, telling him that he’s wasting their time.

I explained how useful it was to work together, to discuss the questions, to talk it over, but they wouldn’t do that either, because they would be losing face if they had to ask someone else. It was pitiful! All the work they did, intelligent people, but they got themselves into this funny state of mind, this strange kind of self-propagating “education” which is meaningless, utterly meaningless!

At the end of the academic year, the students asked me to give a talk about my experiences of teaching in Brazil. At the talk there would be not only students, but professors and government officials, so I made them promise that I could say whatever I wanted. They said, “Sure. Of course. It’s a free country.”

So I came in, carrying the elementary physics textbook that they used in the first year of college. They thought this book was especially good because it had different kinds of typeface – bold black for the most important things to remember, lighter for less important things, and so on.

Right away somebody said, “You’re not going to say anything bad about the textbook, are you? The man who wrote it is here, and everybody thinks it’s a good textbook.”

“You promised I could say whatever I wanted.”

The lecture hall was full. I started out by defining science as an understanding of the behavior of nature. Then I asked, “What is a good reason for teaching science? Of course, no country can consider itself civilized unless… yak, yak, yak.” They were all sitting there nodding, because I know that’s the way they think.

Then I say, “That, of course, is absurd, because why should we feel we have to keep up with another country? We have to do it for a good reason, a sensible reason; not just because other countries do.” Then I talked about the utility of science, and its contribution to the improvement of the human condition, and all that – I really teased them a little bit.

Then I say, “The main purpose of my talk is to demonstrate to you that no science is being taught in Brazil!”

I can see them stir, thinking, “What? No science? This is absolutely crazy! We have all these classes.”

So I tell them that one of the first things to strike me when I came to Brazil was to see elementary school kids in bookstores, buying physics books. There are so many kids learning physics in Brazil, beginning much earlier than kids do in the United States, that it’s amazing you don’t find many physicists in Brazil – why is that? So many kids are working so hard, and nothing comes of it.

Then I gave the analogy of a Greek scholar who loves the Greek language, who knows that in his own country there aren’t many children studying Greek. But he comes to another country, where he is delighted to find everybody studying Greek – even the smaller kids in the elementary schools. He goes to the examination of a student who is coming to get his degree in Greek, and asks him, “What were Socrates’ ideas on the relationship between Truth and Beauty?” – and the student can’t answer. Then he asks the student, “What did Socrates say to Plato in the Third Symposium?” the student lights up and goes, “Brrrrrrrrr-up” – he tells you everything, word for word, that Socrates said, in beautiful Greek.

But what Socrates was talking about in the Third Symposium was the relationship between Truth and Beauty!

What this Greek scholar discovers is, the students in another country learn Greek by first learning to pronounce the letters, then the words, and then sentences and paragraphs. They can recite, word for word, what Socrates said, without realizing that those Greek words actually mean something. To the student they are all artificial sounds. Nobody has ever translated them into words the students can understand.

I said, “That’s how it looks to me, when I see you teaching the kids ‘science’ here in Brazil.” (Big blast, right?)

Then I held up the elementary physics textbook they were using. “There are no experimental results mentioned anywhere in this book, except in one place where there is a ball, rolling down an inclined plane, in which it says how far the ball got after one second, two seconds, three seconds, and so on. The numbers have ‘errors’ in them – that is, if you look at them, you think you’re looking at experimental results, because the numbers are a little above, or a little below, the theoretical values. The book even talks about having to correct the experimental errors – very fine. The trouble is, when you calculate the value of the acceleration constant from these values, you get the right answer. But a ball rolling down an inclined plane, if it is actually done, has an inertia to get it to turn, and will, if you do the experiment, produce five-sevenths of the right answer, because of the extra energy needed to go into the rotation of the ball. Therefore this single example of experimental ‘results’ is obtained from a fake experiment. Nobody had rolled such a ball, or they would never have gotten those results!

“I have discovered something else,” I continued. “By flipping the pages at random, and putting my finger in and reading the sentences on that page, I can show you what’s the matter – how it’s not science, but memorizing, in every circumstance. Therefore I am brave enough to flip through the pages now, in front of this audience, to put my finger in, to read, and to show you.”

So I did it. Brrrrrrrup – I stuck my finger in, and I started to read: “Triboluminescence. Triboluminescence is the light emitted when crystals are crushed…”

I said, “And there, have you got science? No! You have only told what a word means in terms of other words. You haven’t told anything about nature – what crystals produce light when you crush them, why they produce light. Did you see any student go home and try it? He can’t.

“But if, instead, you were to write, ‘When you take a lump of sugar and crush it with a pair of pliers in the dark, you can see a bluish flash. Some other crystals do that too. Nobody knows why. The phenomenon is called “triboluminescence.” ’ Then someone will go home and try it. Then there’s an experience of nature.” I used that example to show them, but it didn’t make any difference where I would have put my finger in the book; it was like that everywhere.

Finally, I said that I couldn’t see how anyone could be educated by this self-propagating system in which people pass exams, and teach others to pass exams, but nobody knows anything. “However,” I said, “I must be wrong. There were two students in my class who did very well, and one of the physicists I know was educated entirely in Brazil. Thus, it must be possible for some people to work their way through the system, bad as it is.”

Well, after I gave the talk, the head of the science education department got up and said, “Mr. Feynman has told us some things that are very hard for us to hear, but it appears to be that he really loves science, and is sincere in his criticism. Therefore, I think we should listen to him. I came here knowing we have some sickness in our system of education; what I have learned is that we have a cancer!” – and he sat down.

That gave other people the freedom to speak out, and there was a big excitement. Everybody was getting up and making suggestions. The students got some committee together to mimeograph the lectures in advance, and they got other committees organized to do this and that.

Then something happened which was totally unexpected for me. One of the students got up and said, “I’m one of the two students whom Mr. Feynman referred to at the end of his talk. I was not educated in Brazil; I was educated in Germany, and I’ve just come to Brazil this year.”

The other student who had done well in class had a similar thing to say. And the professor I had mentioned got up and said, “I was educated here in Brazil during the war, when, fortunately, all of the professors had left the university, so I learned everything by reading alone. Therefore I was not really educated under the Brazilian system.”

I didn’t expect that. I knew the system was bad, but 100 percent – it was terrible!

Since I had gone to Brazil under a program sponsored by the United States Government, I was asked by the State Department to write a report about my experiences in Brazil, so I wrote out the essentials of the speech I had just given. I found out later through the grapevine that the reaction of somebody in the State Department was, “That shows you how dangerous it is to send somebody to Brazil who is so naive. Foolish fellow; he can only cause trouble. He didn’t understand the problems.” Quite the contrary! I think this person in the State Department was naive to think that because he saw a university with a list of courses and descriptions, that’s what it was.
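As an aside, Feynman’s numbers check out. For a uniform solid ball of mass m and radius r rolling without slipping down an incline, the rotational inertia reduces the acceleration, which is where the “five-sevenths” comes from; and the polaroid fact the students recited is that Brewster’s angle for a medium of index n satisfies tan θ_B = n:

```latex
% Ball rolling without slipping down an incline of angle \theta.
% A uniform solid sphere has moment of inertia I = \tfrac{2}{5} m r^2:
\[
a \;=\; \frac{g \sin\theta}{1 + I/(m r^2)}
  \;=\; \frac{g \sin\theta}{1 + \tfrac{2}{5}}
  \;=\; \frac{5}{7}\, g \sin\theta
\]
% i.e. exactly five-sevenths of the g\sin\theta a frictionless block would show.
% Brewster's angle, at which reflected light is completely polarized:
\[
\tan\theta_B \;=\; n
\]
```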

Faith underlying science 

Proponents of faith as a virtue frequently argue that even science is based on blind faith—faith in causality, or faith that the universe obeys laws accessible to human intelligence, or faith in some specific underlying principles. In his book An Enquiry Concerning Human Understanding, 18th-century philosopher David Hume argues that science is based on “the principle of the uniformity of nature”—that patterns observed in the past will continue to be observed in the future—and that this principle cannot itself be logically derived. A recent post by a friend of mine summarizes some of Hume’s discussion.

All of these arguments about faith at the heart of science seem to misunderstand what science really is in practice. Science is not about discovering irrefutable philosophical truths, nor is it merely a sequence of logical or mathematical operations strung together at length. Logic and mathematics are just a few of the tools used in the course of scientific inquiry, some of which stem from our inborn intuitions about the world, but most of which were devised because we discovered that they were useful. Arithmetic with small numbers and very simple logic are intuitive; the formal generalizations of both of these (of which there are many) are entirely artificial. Scientists are constantly on guard for cases where their tools are steering them wrong. In the 20th century physicists discovered that our intuitive Euclidean geometry was a poor model for the large-scale structure of our physical world and that our intuitive view of mechanics was only relevant to objects in a very limited range of sizes. Sociologists and economists have been forced to construct incredibly elaborate models to replace simple arithmetic in the complex systems they study, where one and one don’t quite make two.

Science is, by my definition, empirical: it is an attempt to explain and predict observable phenomena. You don’t need faith that the universe is explicable to give it a try, and your predictions don’t need to be perfect for them to be useful. Even in physics, the most fundamental of sciences, we are quite sure that our best models of how the universe operates are (slightly) wrong. Physicists keep working to improve the models, and they seem to be making more and more accurate predictions, but the notion of a perfect and complete model that can be expressed in the language of mathematics is nothing more than an appealing goal. Such a model may not exist.

Good science does not even take causality for granted. It is possible to derive a great many scientific hypotheses which model the stock market (or the winning lottery numbers) as a function of the weather in Timbuktu; many of these models will even correspond with all available data. Further evidence will most likely show these models to offer little or no real predictive power, however, for a simple reason: there is little or no causality between the phenomena in reality. Scientists study causality just as they study every other phenomenon—skeptically, with an acceptance of causal models only when they appear to offer real predictive power.
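To make the Timbuktu point concrete, here is a toy illustration (mine, not part of the original argument): with enough free parameters a model can “correspond with all available data” while having no predictive power. The sketch below fits an exact interpolating polynomial through a handful of random points standing in for “weather” and “stock prices”; all names are illustrative.

```python
import random

# With n free parameters we can fit a degree-(n-1) polynomial through
# n arbitrary data points -- a perfect in-sample "match" between two
# phenomena that are, by construction, unrelated.

random.seed(0)
xs = list(range(6))                   # six days of "weather in Timbuktu"
ys = [random.random() for _ in xs]    # six unrelated "stock prices"

def lagrange(x, xs, ys):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The model reproduces every observation exactly...
assert all(abs(lagrange(x, xs, ys) - y) < 1e-9 for x, y in zip(xs, ys))
# ...but its "prediction" for day 7 is meaningless noise.
print(lagrange(7, xs, ys))
```

Only further evidence — new data the model wasn’t fitted to — exposes the absence of any causal link.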

While Hume makes a convincing case that the uniformity of nature cannot be proven logically, this in no way undermines its value as a scientific theory. Huge quantities of evidence show that certain types of observed patterns tend to repeat themselves. While there is no guarantee that this will continue in the future, it does provide the basis for a wide range of valuable scientific models. Hume disproves the absolute guaranteed truth of the proposition, but not its utility.

While Hume didn’t have our modern understanding of science or notions of statistical certainty available to him, he did offer a solution to his own problem. He argued that inductive reasoning based on the uniformity of nature was a capacity we had simply been granted by “Nature” as a way of allowing us to cope with the world around us. In other words, such reasoning proves useful so we make use of it even if we have no guarantee that it’s true in a philosophical sense.

Once you figure out what they’re talking about, some of these philosopher guys aren’t entirely useless.

Birthright Citizenship 

George Will, columnist for the Washington Post, is an intelligent man. His latest column raises an interesting point of constitutional law. It also demonstrates perhaps the fundamental failure of newspapers in furthering real debate on political topics: a tendency to focus on the minutiae of issues instead of providing perspective.

The 14th Amendment to the US Constitution guarantees citizenship only to “all persons born or naturalized in the United States, and subject to the jurisdiction thereof”. Will argues that illegal immigrants are not fully “subject to the jurisdiction” of the US, and thus their children have no constitutional right to citizenship. As a point of law, there is a reasoned argument to be made on both sides.

The larger debate in which Will does not directly engage concerns what the implications of denying citizenship to immigrants would be. Even if the US had the right to do it, restricting birthright citizenship to children of current citizens would be a formula for a population of poor Hispanic migrants ruled by a separate class of full citizens. This latter group would be relatively rich, white, old, and (even in absolute terms) shrinking—eventually becoming a minority. It’s a formula for class warfare.

Ignoring possible consequences and addressing narrow matters of law is not itself dishonest. Misrepresenting the historical context of law is. Will writes of the 14th amendment:

The authors and ratifiers could not have intended birthright citizenship for illegal immigrants because in 1868 there were and never had been any illegal immigrants because no law ever had restricted immigration.

Will goes on for six more paragraphs elaborating historical interpretations of this law as applied to immigrants. As far as I know, everything he says is factually true. Yet Will’s account omits one fairly large detail—a detail that could not possibly have escaped his notice and could not possibly be considered irrelevant. The 14th Amendment was written as a direct rebuttal to the Dred Scott decision which denied citizenship to the children of slaves.

I’m sure that my summary of the situation is at best a gross simplification; there may in fact be a solid argument that there were also other motivations for the citizenship clause of the 14th Amendment. Maybe I could be convinced that in the full historical context Will’s interpretation is reasonable. But:

Focusing on minor details and omitting any mention of major points that would be raised by anyone arguing the other side of an issue is not honest. It’s not even good debate. It’s an attempt to prey on the ignorance of a captive audience. It’s an attempt to get opponents to waste their limited access to the public’s attention on a trivial rebuttal, no better than trolling on the internet. Worst, it’s an attempt to burnish intellectual credentials by jumping to the esoteric instead of tracing a path from the simple and well-known, which fundamentally undermines the role of expertise in guiding public debate.

I’m disappointed in you, George.


I hadn’t realized this when I wrote the above, but if you really did restrict birthright citizenship to children of current citizens then of course I wouldn’t be a US citizen myself. The fact that I hadn’t noticed this just demonstrates how much I take my identity as an American for granted.

So apparently I’ve got some self-interest on this issue; take that for what you will.

Daylight Saving Time 

I hate Daylight Saving Time. It’s a huge pain to administer (I lose track of which clocks change themselves automatically and which need to be reset by hand), it’s handled inconsistently from region to region in the world (making managing international scheduling and communication more difficult), and it disrupts everyone’s sleep patterns.

Despite the diversity of confusing explanations for why so many places use DST, the underlying justification is actually quite simple: DST is based on the theory that the start of the working day should be correlated with the time the sun rises.

When the days are at their shortest in winter, things are arranged in most places so that people wake up near sunrise in order to get to work for normal business hours. In London the latest sunrise is 8:06 (the latest sunrise actually falls almost two weeks after the shortest day, because “solar noon” shifts back and forth over a half-hour range throughout the year), which is largely compatible with a working day that starts at 9. As the days get longer in summer, however, sunrise gets far earlier; with no DST the sun would rise at 3:43 on the longest day of the year. The theory is that these extra hours of daylight before the 9 o’clock start of the work day are wasted, since people normally just sleep in to maintain their daily routine before work. Daylight hours in the evening, however, have value: the vast majority of people don’t go to bed on any day until after even the latest sunset of the year (which is 21:21 in London even with DST), so every extra hour of daylight at the end of the day is one less hour that artificial light is required. The DST system partially addresses this problem by shifting sunrise forward when the days get longer: there is a natural variation of almost four and a half hours at London’s latitude, but DST brings this down to three and a half.

If you buy this (fairly reasonable) story that people want to wake up at roughly the same time with respect to sunrise, and maintain the same morning routine throughout the year, then DST is a very crude tool: in summer Londoners are still “wasting” two or three hours of daylight before they get up in the morning. An ideal system would start the work day roughly 90 minutes after sunrise every day. Setting your clock so that the sun rises at 7:30 every morning would mean that on the year’s longest day the sun would not set until eight minutes past “midnight”, which would eliminate almost all need for artificial lighting on that day.
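A minimal sketch of such a sunrise-anchored clock, assuming sunrise times are supplied from an almanac or astronomical library (all names here are illustrative, not a real implementation):

```python
from datetime import datetime, timedelta

# Shift the civil clock each day so that the sun always rises at
# 07:30 "clock time", regardless of season.
TARGET_SUNRISE = timedelta(hours=7, minutes=30)

def clock_offset(sunrise: datetime) -> timedelta:
    """Offset to add to standard time so that sunrise reads as 07:30."""
    midnight = sunrise.replace(hour=0, minute=0, second=0, microsecond=0)
    return TARGET_SUNRISE - (sunrise - midnight)

def solar_clock(standard_time: datetime, sunrise: datetime) -> datetime:
    """Convert a standard (no-DST) time to the sunrise-anchored clock."""
    return standard_time + clock_offset(sunrise)

# On London's longest day, no-DST sunrise is about 03:43, so the clock
# runs 3h47m ahead; the 20:21 standard-time sunset then reads as just
# past "midnight".
sunrise = datetime(2010, 6, 21, 3, 43)
sunset = datetime(2010, 6, 21, 20, 21)
print(solar_clock(sunset, sunrise).strftime("%H:%M"))  # 00:08
```

In midwinter the offset would shrink toward zero (or go negative), so the clock would drift smoothly through the year instead of jumping by an hour twice.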

I’m currently working on such a system. Stay tuned…

Media and Packaging 

Jason Kottke makes an interesting observation about recent reviews on Amazon: people are giving the minimum possible ratings to books and movies that they actually quite like. In the case of the Lord of the Rings films on Blu-ray, reviewers are complaining that only the theatrical versions are available, not the extended cut versions die-hard fans want to see. For the recently-released book The Big Short, Kindle owners are up in arms that only the hardcover has been released, not the e-book.

This is an important phenomenon, but I think Kottke misdiagnoses it:

I’ve noticed an increasing tendency by reviewers on Amazon (and Apple’s iTunes and App Stores) to review things based on the packaging or format of the media with little regard shown to the actual content/plot.

But of course the only two examples he cites don’t demonstrate this—they aren’t reviews of the packaging at all. They are attempts to use user-generated review systems as discussion forums to air grievances. Nobody is talking about the cover art for the Blu-ray discs or the typography in the hardcover book. If you insist that the ratings offered are really reviewers’ opinions of the products being reviewed, then if anything this is just an example of the Osborne effect: buyers know that a much more desirable version is just around the corner, so what’s available now looks terrible by comparison.

Worse, Kottke draws completely the wrong conclusion from the situation:

Packaging is important. We judge books by their covers and even by how much they weigh (heavy books make poor subway/bus reading).


In the end, people don’t buy content or plots, they buy physical or digital pieces of media for use on specific devices and within certain contexts. That citizen reviewers have keyed into this more quickly than traditional media reviewers is not a surprise.

The Lord of the Rings reviews focus entirely on content—buyers want the extended versions of the films.

While demand for electronic versions of books does support the contention that people care about format, I take it as evidence that content is much more important than “packaging”. Kindle readers just want a book that’s convenient, even if that means sacrificing beautiful typography or the pleasing form of a hardcover.

“Traditional media reviewers”, whose write-ups at Amazon usually appear under “editorial reviews”, are thus on completely the right track to focus on content. Reviews of differences between hardcover, paperback, and electronic versions of books are as irrelevant as reviews of the difference between VHS and Betamax versions of films were: customers already know what format fits them. The point of reviews is to help them choose between content.

Is the iPad high-end? 

There’s a piece from James Surowiecki in the New Yorker about the collapse of the “middle” of markets: companies do well shooting for high-end, high-margin offerings, or for low-end budget products, but not the stuff in the middle.

It’s an interesting model for thinking about business, but Surowiecki provides no evidence that it’s true. The examples of companies caught in the middle are Sony, Dell, and General Motors. The first of these has always occupied the high end of the market; recent disappointments from Sony are about their failure to meet their own standards, not the market they’re targeting. I’ve always viewed Dell, however, as a low-end manufacturer: they’re as downmarket as institutional buyers are willing to go. General Motors remains the only real example, squeezed by the Germans from the top and the Koreans from the bottom. But blaming the “mushy middle” for GM’s woes would suggest similar problems for Toyota, VW, Ford, and Honda. It’s a rough time for the whole auto industry, but it seems clear that GM is a special case and has likely been hurt more by mismanagement than a general economic trend away from mid-priced cars.

My real gripe with Surowiecki’s piece, however, is the characterization of Apple’s iPad as high-end, meaning “significantly more expensive than its competitors”; I just have no idea what competitors are being considered. If the iPad is competing with laptops, then the $500 iPad stacks up pretty well against the current top sellers at Amazon, which go for $530, $680, and $715…followed by Apple’s $2000 and $1000 laptops. Just restricting competitors to the “ultraportable” category of laptops? All the top sellers are between $550 and $750. There are certainly a few laptops available for less than $500, but it’s absurd to suggest that the iPad is anywhere near the “high end” of the laptop or netbook markets. If the iPad is a laptop competitor then it’s Apple’s entry into the middle (or bottom) of that market.

Is the iPad competition for portable touch computers? Without contract subsidies, Google’s Nexus One goes for $530. The Palm Pre is $550. You can argue that the $630 3G model of the iPad is a closer match than the WiFi model…but even the cheapest iPad has twice the storage of the Pre and four times that of the Nexus One…but the iPad doesn’t do voice calling… The more you dig into the details the sillier the comparison becomes: phones aren’t real competitors to the iPad.

The only competitors I can think of—products that a consumer might choose to buy instead of an iPad—that actually make the iPad look like a high-end choice are eBook readers: the Sony Reader (with 420 megs of storage) is $200, and the Nook and base model Kindle (both with two gigs of storage) are each $260; in comparison to these devices there is no doubt that the iPad is feature-packed, high-end, and expensive. If these are in the same market as the iPad, however, it’s worth noting that the iPad isn’t alone in the top price bracket. Amazon’s Kindle DX, which has an accelerometer and a screen closer in size to the iPad’s, is priced at $490. I’m going to go out on a limb and suggest that for most users the extra $10 for an iPad would be worth it.

My view is that the iPad is a new category of device for a market that doesn’t really exist yet—at least not sufficiently to have segmented into low-, middle-, and high-end. If you consider the full range of computing devices, however, I think it is possible to identify three important bands. These bands aren’t just about price, but also accessibility, the need for user training/proficiency, and support costs.

At the top, the intellectual elite in rich countries use personal computers (with the help of technicians who jump in to sort out the inevitable problems). At the bottom, the poor with little formal education (and no local “genius bars” for support) use mobile phones. The iPad is aimed squarely at the “mushy middle” between these two. I hope it’s good enough to succeed.

About Rob Shearer 

This is the personal site of Rob Shearer, a computing researcher and software engineer.


I am currently a researcher in the Information Systems Group at the Oxford University Computing Laboratory. My main research interest is efficient and scalable reasoning systems for very expressive description logics, including the logic which forms the theoretical foundation of the World Wide Web Consortium’s Web Ontology Language (OWL). In addition to my theoretical work, I am a part of the team developing HermiT, a reasoner for OWL ontologies. More information on my current research is available at my university site.

Before coming to Oxford I did similar research at the University of Manchester as a member of Manchester’s Information Management Group and that group’s Description Logic clique. While there, I managed the “Semantic Web Language Extensions” work package (number 2.5) of the EU KnowledgeWeb project, and in most cases served as Manchester’s representative within other work packages.

Prior to that, I developed the Cerebra reasoner while leading technology development at a small start-up company (called Network Inference and located in London when I joined, and called Cerebra, Inc. and located in Carlsbad, California when I left). The company was subsequently acquired by WebMethods, which in turn became a part of SoftwareAG; I have no idea what the status of the Cerebra reasoner is these days. During my time there I had some involvement with the W3C RDF Data Access Working Group, which went on to produce SPARQL.

I also spent a few years with Transversal in Cambridge, England, working on adapting natural language processing technologies to the problems of web content management.


In addition to my research appointment, I am also a DPhil student in computing at Oxford University and hope to submit my thesis in 2010.

I am a member of Linacre College, and have been president of Linacre’s Common Room since 2009.

For reasons even I have trouble fully understanding, I remain a member of the Oxford University Athletics Club, and have represented the club in a number of competitions for which they couldn’t find anybody else willing to show up, in events ranging from hurdles to pole vault to hammer throw, although discus is the only event in which I’m really competent. I was the squad leader for Oxford’s throwers through the 2008–2009 season.

In my younger days I attended Brown University, where I founded and served as president for Technology House, the university’s first and only technology-focused residential society.

I grew up in Rhode Island but have spent most of my time in the UK since early 2001.

If you want to buy me a gift, I like motorcycles.


Feel free to contact me on Twitter or through email; try not to sound too spammy so that the filters let you through. I can also be reached through the university and via HermiT’s corporate site.


Jeremy Beer offers a thought-provoking essay on how meritocracy is killing Middle America:

…fly-over country, by and large, has been hemorrhaging intellectual capital for decades. The most talented young men and women, the most able, the most intelligent and creative, have been leaving to go off to college — or have been lured off to college — only to return in ever-diminishing numbers.

Much of Beer’s rhetoric is based on a denial, made explicit only at the very end of the article, “that we ‘need’ to maximize our economic value”. I’d argue that “maximizing” may not be necessary, but that economic progress is in fact one of our moral obligations to society, both present and future. The need for community and social cohesion does not completely obviate the tangible benefits of prosperity, which can support attempts to foster community. If remote communities are to close the growing gap with the world’s creative hubs, I expect that they will do so on the back of technologies like telecommuting and digital distribution—both direct products of economic and technological innovation.

The point that rings most true in the essay, however, is the contention that support for meritocracy can be motivated as much by self-interest as by abstract moral ideals:

…modernity, whose distinctive political philosophies have stressed equality, has led to greater inequality than ever, precisely because it has equalized opportunity — that is, because it has unleashed talent either to sink or swim — more than had ever previously been done. To put it yet another way, modernity has created many more opportunities for the expression of inequality than ever. And it has made inherent inequality more important than ever in determining social and economic distinctions.

This sheds some light on the cries of elitism that seem to grow only louder. Whether or not elites run things better isn’t the point; it’s whether offering all the benefits of power to the fortunate few is inherently fair.

The real irony is that in the US, conservative anti-elitism is tightly coupled with laissez-faire capitalism: the fortunate few shouldn’t be allowed to run things, but there is no limit to how disproportionate their share of the benefits should be. The opposite liberal stance is in fact far more coherent: the most competent should be in charge, but oversight should dampen the inequalities that result.

Tablet Apocalypse 

If it weren’t published by InfoWorld, I’d think that this were a brilliant bit of satire:

I hate disruptive technologies. They’re antithetical to all that’s sane and stable in enterprise IT…


It’s an old story, one that traces its roots to the earliest days of the personal computing revolution. Back then, the more nefarious users would sneak their shiny new PCs into the workplace, prompting a near riot as colleagues and other departments clamored for equal consideration. Suddenly, guerrilla PC cells were popping up all over the place, forcing IT to waste literally millions of man-hours whipping these poorly thought-out devices into a semimanageable state.

The best way to understand what the industry means by “enterprise computing” is to take note that “enterprise desktop” columns make regular mention of “idiot users”. But I’ve mentioned this before.

(via The Macalope)

Number Pads 

Two points.

First: modern computers are used with a keyboard and a mouse/touchpad/trackball, and they are used predominantly by right-handed people. Right-handed people normally operate the pointing device with their right hand. The numeric keypad present on many keyboards is designed to be operated with one hand. If keyboard manufacturers put the keypad on the left instead of the right, then more people would be able to enter numbers with one hand and work the mouse with the other.

Second: all standard QWERTY keyboards have little raised bumps on the ‘f’ and ‘j’ keys. They let touch typists find the “home” position on the keyboard without looking down from the screen. On keyboards with number pads, there’s a similarly raised bump on the ‘5’, for the same reason.

On keyboards without number pads, typists have to use the numbers on the top row. Two rows of keys is simply too far to reach, so instead typists shift their hand(s) up to that row when typing lengthy numbers. If keyboard manufacturers included raised bumps on a couple of the numbers (e.g. the ‘4’ and ‘7’), then touch typists could more easily make these shifts by feel instead of needing to look down at the keyboard.

You’re welcome, keyboard industry.

Get to work.

Poker Hands 

I’ve had a hard time finding a concise explanation of the rankings of poker hands that’s suitable for use as a reference by new players. In particular, the rules for comparing between hands of the same type (e.g. if two players have flushes, which one wins) are generally described in a very ad-hoc manner.

So I wrote them up myself, in Markdown (aka text) format. I converted this to HTML and added some simple styling to create a poker hand reference that fits on a single sheet of paper, and combined it with a quick-reference listing of hand types to make a PDF reference card—print it four to a page on A4, double-sided, and you can cut it down to four A6 reference cards. All three of these are licensed under the Creative Commons by-nc-sa license.
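The comparison rule underlying most of those same-type tie-breaks can in fact be stated uniformly: group the cards by rank, order the groups by size and then by rank, and compare the results lexicographically. A minimal Python sketch of that idea (my own illustration, not part of the reference card; the helper names are made up):

```python
from collections import Counter

def hand_key(ranks):
    """Sort key for comparing two hands of the same type: group the
    cards by rank, then order groups by (size, rank) descending, so
    e.g. the pair in a two-pair hand outranks its kickers."""
    counts = Counter(ranks)
    return sorted(((n, r) for r, n in counts.items()), reverse=True)

def compare_same_type(a, b):
    """1 if hand a wins, -1 if b wins, 0 for a split pot.
    Ranks run 2..14 with 14 = ace."""
    ka, kb = hand_key(a), hand_key(b)
    return (ka > kb) - (ka < kb)  # lists compare lexicographically

# King-high flush beats queen-high flush:
print(compare_same_type([13, 10, 7, 5, 2], [12, 11, 9, 6, 3]))  # 1
# Pair of nines, ace kicker beats pair of nines, king kicker:
print(compare_same_type([9, 9, 14, 7, 3], [9, 9, 13, 10, 4]))   # 1
```

The same key handles high cards, pairs, two pair, trips, full houses, and flushes; straights need the usual ace-low special case, which this sketch omits.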

The New York Times Will Charge For Access 

From an article in New York Magazine:

New York Times Chairman Arthur Sulzberger Jr. appears close to announcing that the paper will begin charging for access to its website, according to people familiar with internal deliberations.

Not mentioned in the article: Apple’s new tablet or the distribution deal Apple supposedly negotiated with the New York Times.

Newspapers are committed to making people pay for the “whole package”—a single price for the whole paper, not a per-article price. If anything resembling a traditional news organization is to survive (and that’s a big if) then that kind of packaging is necessary. The expensive investigative pieces need to be subsidized by easy-to-write fluff stories. We may care more about the important news items, but we spend more time reading the silly stuff. I’ve spent more time reading about Leno and Conan than I have about Haiti over the past week, and I hate that this means more ad revenue has gone to celebrity journalism departments than to reporters on the ground in the crisis.


I’ve written about netbooks before, but a recent post from Jeff Atwood drove me absolutely crazy. Atwood is working to take Cringely’s place as the only consistently-wrong blog I subscribe to. First he quotes Dave Winer’s definition of a netbook:

  1. Small size.
  2. Low price.
  3. Battery life of 4+ hours. Battery can be replaced by user.
  4. Rugged.
  5. Built-in wifi, 3 USB ports, SD card reader.
  6. Runs my software.
  7. Runs any software I want; no platform vendor to decide what’s appropriate.
  8. Competition. Users have choice and can switch vendors at any time.

This is a bizarrely specific list—only two USB ports and it’s not a netbook? It’s also absurdly general: are “small” and “low price” just relative to laptops?

More importantly, what on earth does “any software I want” mean? Does that mean compiled code? Does it require an API for the same UI elements used by the system itself? How extensive, stable, and well-supported does that API have to be? Most importantly, how does that API interact with “competition”? What I’m getting at is that a device with a robust Javascript environment could well qualify better for points 6–8 than a linux device with a custom GUI.

But the craziest thing is that there’s a glaring omission from the list. Both Winer and Atwood seem to take it as a given that a netbook has a hardware keyboard and a PC-like screen. In fact, they both seem to take it as a given that a netbook has all the capabilities of a desktop computer. This is where I think their perspectives go off the rails, but I’m getting ahead of myself.

Netbooks are no small thing for Atwood:

Netbooks are the endpoint of four decades of computing – the final, ubiquitous manifestation of “A PC on every desk and in every home”. But netbooks are more than just PCs. If the internet is the ultimate force of democratization in the world, then netbooks are the instrument by which that democracy will be achieved.


To dismiss netbooks as “like laptops, but lamer” is to completely miss the importance of this pivotal moment in computing – when pervasive internet and the mass production of inexpensive portable computers finally intersected. I’m talking about unlimited access to the complete sum of human knowledge, and free, unfettered communication with anyone on earth. For everyone.

Ignoring the inane overstatement, it’s clear that Atwood is conflating two very different things. He previously described a PC that met a set of eight criteria; now he’s just talking about ubiquitous web access. I agree that such access is a big deal, but there are lots of ways to achieve it without a netbook. There’s little question in my mind that when most people on earth have web access, they will usually be getting it from devices that are not PCs.

Atwood then focuses on yet another item that didn’t appear on his list of netbook traits: a lack of ongoing subscription costs or network charges:

It’s true that smartphones are slowly becoming little PCs, but they will never be free PCs. They will forever be locked behind an imposing series of gatekeepers and toll roads and walled gardens. Anyone with a $199 netbook and access to the internet can make free Skype videophone calls to anywhere on Earth, for as long as they want. Meanwhile, sending a single text message on a smartphone costs 4 times as much as transmitting data to the Hubble space telescope.

I don’t care how “smart” your smartphone is, it will never escape those corporate shackles. Smartphones are simply not free enough to deliver the type of democratic transformation that netbooks – mobile PCs cheap enough and fast enough and good enough for everyone to afford – absolutely will.

Rich techies live in a strange bubble where high-speed wifi is pervasive and free, but cellular service is spotty and expensive. Such a bubble usually extends to home and the office; if these were the only places where network connectivity mattered there wouldn’t be much call for mobile computing in the first place.

Atwood’s argument also ignores non-netbook devices with wifi, such as the iPod Touch.

Maybe those early [netbooks] were [cheap and crappy], but having purchased a new netbook for $439 shipped, it is difficult for me to imagine the average user ever paying more than $500 for a laptop.

Spoken like a man who does his “real” work on a desktop PC. Those who make their living entirely with a laptop will certainly be willing to pay for a high-quality machine.

But if we accept the premise that the “average user” already has a desktop PC, I’ll agree with Jeff that laptop sales among this market will drop. I just don’t think they’ll be replaced with PCs shoehorned into uncomfortable form factors; I think they’ll be replaced by “internet communicators” (as Steve Jobs calls them).

The average user wants a mobile device with web access. It’s incredibly myopic to interpret that desire as a need for a full-blown PC.

2010 Predictions 

I did it last year, with fair results, so it’s time for another try. These are my predictions for 2010.

Like last year, I’ve rated each prediction with a “difficulty” between 0 and 1 based on how likely I think I’ll be right; this is an attempt to distinguish between no-brainers (difficulties at 0.2 or below) and outlandish guesses (difficulty 0.8 and up). If you’re looking for betting odds, I think the scale is probably logarithmic.


1. LeBron signed by Nets (difficulty 0.7)

LeBron James’ two priorities are to become a global icon and to win NBA championships; Cleveland isn’t helping with either. The Nets have the perfect market for him, the cap space to sign him, and an ownership group with personal connections to James.

2. Semenya and Pistorius ruled ineligible (difficulty 0.4)

I’m shocked that this hasn’t already happened, and it’s an absurdity of the legal process that either Caster Semenya or Oscar Pistorius were ever allowed to compete internationally. Athletics South Africa is a complete embarrassment.

My only caveat is that Semenya could regain eligibility through medical treatment.


3. Obama ends “don’t ask don’t tell” (difficulty 0.7)

It’s been a rocky first year for Obama, but he does seem to be working through his list of campaign promises: Guantanamo is shutting down (very slowly); he pushed through a stimulus-based economic plan (although perhaps not a very good one); he’s doing everything he knows how to reform health care (but doesn’t seem capable of the major overhaul he’d like).

He’s pushed back the issue of gays in the military, but I think he’ll face up to it this year. A typical Obama disappointment would be continuing to exclude openly gay soldiers from combat duty, but any elimination of don’t ask don’t tell would be an improvement.

Mobile Technology

4. Android becomes most popular OS; iPhone remains most profitable (difficulty 0.7)

Android just isn’t good enough to compete with the iPhone on the high end, but it’s easily good enough to displace Windows Mobile on smartphones at the low end of the market. And all phones are turning into smartphones.

5. AT&T loses iPhone exclusivity in US (difficulty 0.6)

It seems clear that Apple hasn’t worked anything out with Verizon for a CDMA version of the iPhone, which suggests they’ll stick with AT&T. But the iPhone is Apple’s most important product right now. AT&T is destroying the ownership experience in big cities in the US, and it’s getting worse. Steve Jobs isn’t going to let this continue: he cares about control, the ownership experience, and the image of the company more than he cares about squeezing a few extra dollars out of a subsidy contract.

6. Microsoft buys Pre (difficulty 0.9)

Windows Mobile is a dead end, and everyone but Microsoft knows it. The Palm Pre is a promising long-term platform, but it needs time to mature and time for the hardware to catch up—time Palm might not have the cash to live through. Microsoft’s best bet to stay relevant in the mobile computing space is to simply buy the Pre.

This would, however, require an admission from Microsoft that they’ve completely failed to innovate on their own, and that their rhetoric for the past few years has been bullshit. Such a major change of direction is much easier under an unassailable dictator like Bill Gates or Steve Jobs than under someone like Ballmer, who is currently in a very dicey political position.

It’s a long-shot prediction, but I’ll say Microsoft does it.

7. Apple Tablet released (difficulty 0.5)

I don’t have any predictions that would mandate a very high difficulty rating. My wish will come true, and Apple will ship a computer with a (roughly) A5 multitouch screen and no physical keyboard. It will ship before the third quarter (probably around April); it will be a lot more like an iPhone than like a Mac; it will do web, email, movies and ebooks; price will likely be something between $400 and $1000.

And I’ll most likely buy one as soon as they’re available.

Other technology

8. Microsoft fades even more… (difficulty 0.3)

Same prediction as last year: Google’s and Apple’s stock prices will outperform Microsoft’s.

9. iTunes gets live events (difficulty 0.7)

Apple has the best content-protected video-delivery service available. I predict that this year they extend their architecture to allow for live streaming video—e.g. sports and other live television events.


10. Price of gold declines (difficulty 0.5)

Over 1100 USD/oz? Ridiculous; the historical price of gold is closer to $400/oz. I have no clear idea of how quickly prices will come back down, so I’ll just go for a moderate-difficulty prediction that gold prices are under 1100 by this time next year. My guess is something around 800.

If I had any idea how to “sell gold” via eTrade I would.

11. Stock markets perform well (difficulty 1.0)

Everything up 20%: DOW at 12500, NASDAQ at 2700, S&P 500 at 1350. I could say I know this based on careful analysis of the markets, but instead I’ll claim magical powers over time and space. Just as meaningful.

Like last year, I’ll take partial credit for this prediction: a hit at difficulty 1.0 if I’m within 1%, 0.7 if I’m within 10%, 0.5 if I’m within 20%, etc. (Difficulty is given by e^(0.03963(1 - percenterror)).)
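That scoring formula is easy to sanity-check; a minimal sketch (my own, just plugging in the constant quoted above):

```python
import math

def difficulty(percent_error):
    """Credit for a numeric prediction: 1.0 at 1% error, falling off
    exponentially per the e^(0.03963(1 - percenterror)) formula."""
    return math.exp(0.03963 * (1 - percent_error))

# The anchor points quoted in the text:
print(round(difficulty(1), 2))   # 1.0
print(round(difficulty(10), 2))  # 0.7
print(round(difficulty(20), 2))  # 0.47
```

The constant is fit to the 10% anchor; the 20% case comes out at 0.47 rather than exactly 0.5, consistent with the rougher “0.5 … etc.” phrasing above.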

Lehrer’s Rules 

In his last broadcast of NewsHour, Jim Lehrer presented a list of “guidelines of practice” for good journalism:

Do nothing I cannot defend. Cover, write and present every story with the care I would want if the story were about me.

I’m with him so far, but none of this is specific to journalism, or to professional endeavors in general.

Assume there is at least one other side or version to every story.

That’s bullshit. Facts are facts. If a man is shot and killed and reporters can verify the situation, reserving judgement on whether or not the guy’s dead is nonsense.

Which leads to the realization that this guideline is really an insidious endorsement of biased journalism. There definitely is another side to the story “man is shot and killed by evil and selfish gunman”, and another side to “obnoxious man forces stranger to shoot and kill him”. Neither of those are news stories.

If you’re writing a story and you think that there is a contradictory version with a reasonable claim to truth, then you’re probably not writing news.

Assume the viewer is as smart and as caring and as good a person as I am.

As smart? Even if I thought that worked for Jim Lehrer, journalists who only targeted viewers as smart as they were would be shooting over the heads of the majority of the population. Journalism is the art of explaining the facts as simply and as clearly as possible, so that viewers don’t need investigative and analytical expertise to understand what’s going on. This isn’t an excuse to make the facts simpler than they really are, but simply assuming that viewers are as smart as journalists is ignoring a major responsibility of the profession.

As caring? This seems like a recipe for wasting everyone’s time, and also implicitly endorses the idea that appealing to emotion is one of the main functions of journalism. Emotion is certainly one of the reasons people get interested in certain stories, but Lehrer himself claims that he’s not interested in the “entertainment” side of journalism.

As good? The only interpretation I can find for that is that Lehrer thinks viewers share his particular moral outlook, and that his particular view of morality is superior to any other outlook.

I don’t think we can take any of this literally. Beyond Lehrer’s rare conceit about his being “smart, caring, and good”, I think this guideline really amounts to “respect the viewer”. I can agree with that.

Assume the same about all people on whom I report.

Some people aren’t as smart as you. And maybe not as passionate or “caring”. And a few have fundamentally different senses of morality. When these differences affect the rest of the population (because a nitwit runs for vice president, or a community of religious fundamentalists endorses “honor killings”), it becomes news. Even if that news contradicts your assumptions about people.

Assume personal lives are a private matter, until a legitimate turn in the story absolutely mandates otherwise. Carefully separate opinion and analysis from straight news stories, and clearly label everything. Do not use anonymous sources or blind quotes, except on rare and monumental occasions. No one should ever be allowed to attack another anonymously.

I agree with the general spirit of all of these.

And, finally, I am not in the entertainment business.

I understand the sentiment, but I consider this a very narrow view of “entertainment”. The reality is that viewers have a choice of how they spend their time, and the satisfaction of staying up on current events must be weighed against the immediate pleasures of lighter entertainment. Confusing news with light entertainment is a mistake, but simply taking the value of current-events media as a given ignores the greatest issue faced by journalists today: what are you offering viewers?

Characterizing Google 

Amid an otherwise excellent rebuttal of complaints about Google “stealing value” from content creators, Aaron Swartz includes this:

…the closest people to moguls behind the recent shifts in media distribution are two computer science grad students: Larry and Sergey.

Even giving Swartz the benefit of the doubt that his assertion is rhetoric not to be taken at all literally, this is perhaps the most idiotic and naïve characterization of Google I’ve ever seen. Two guys were grad students at one point in their lives, and ended up founding what went on to become a huge company that dominates the industry. That company has for almost a decade been run by Eric Schmidt, former CTO at Sun and CEO at Novell. I’d describe Schmidt as a seasoned industry veteran, but then he also has a PhD so I guess he counts as a grad student, too.

Suggesting that Google is run by two grad students is like saying that Microsoft is run by a Harvard undergrad: wrong on its face; wrong after basic correction; irrelevant to Microsoft’s current behavior even if completely rewritten for accuracy.

2009 Prediction Results 

Before moving on to my predictions for 2010, I’ll take a look at how my predictions for 2009 worked out:

1: Usain Bolt sets 100m record at 9.62 (difficulty 0.7)

I was right that Bolt would break his own 100m record of 9.69, but he was faster than 9.60, which was the low end of my fallback (difficulty 0.5) range. I don’t feel too bad about missing this one.

2: Russia tones it down (difficulty 0.4)

I admitted at the time that this was subjective, but that “I don’t expect Russian military action to appear in many New York Times front-page headlines.” I’ll take credit.

It seems trivial in hindsight, but then most good predictions do.

3: Blagojevich cleared (difficulty 0.7)

The original prediction was that while Blago’s misbehavior was enough to ruin a political career (he was, in fact, impeached), it probably wasn’t enough to get a criminal conviction.

The trial is still going on, so I guess the original prediction missed. I haven’t been following the trial and don’t know much about either the evidence in the case or corruption law in general, but I’ll stick with it and guess that he will eventually be cleared.

4: Netflix starts to slip (difficulty 0.6)

I regretted this prediction almost as soon as I posted it; Netflix has a very good online-rental offering, so using them as “a proxy for best-of-breed physical media rental services” in comparison to iTunes-style digital distribution was a terrible, terrible choice.

I predicted a loss for Netflix by Q3; they continue to be consistently profitable, to the tune of $30 million per quarter.

Beyond my lousy choice of Netflix as a proxy for physical media, I think I was also expecting a much bigger push from AppleTV, which didn’t happen. I’m still a believer in the death of physical media, but this prediction was wrong on a great many levels.

5: New iPhone model from Apple (difficulty 0.3)

Nailed it in every detail: the only major changes were faster processors and more RAM.

6: Microsoft continues to fade away (difficulty 0.3)

Another low-difficulty prediction. Microsoft closed on 2008-12-30 at 19.34, and now they’re trading at just under 30 (up 55%). Google moved from 303 to 590 (up 95%) and Apple went from 86 to 195 (up 126%). I made (a little bit of) money on this one; stock predictions aren’t supposed to be this easy.

7: Yahoo broken up and sold off (difficulty 0.4)

I missed this one. Instead of being bought by Microsoft, Yahoo just contracted to use Microsoft’s search service. I don’t entirely understand Yahoo’s strategy, but then I haven’t been paying much attention to them this year.

8: GM and Chrysler file for bankruptcy (difficulty 0.7)

Chrysler filed for Chapter 11 on 30 April, and GM followed suit on 1 June. My optimism that Obama wouldn’t bail them out wasn’t justified, but the predictions were solid.

9: Stock markets tread water (difficulty 1.0)

I predicted DJIA, Nasdaq, and S&P at 8600, 1700, and 880, respectively, which represented only a slight gain from a year ago. Right now they’re at 10471, 2190, and 1106, so they’ve done much better than I expected, to the tune of about 25%. According to my formula, this counts as a success with a difficulty just under 0.4—not completely trivial, but not much to brag about.

10: My own research output (difficulty 0.4)

Miserable failure on this; my productivity plummeted this year, and I won’t be submitting my thesis until next year. Sigh.

Final tally

So I hit four easy ones (2, 5, 6, and 9), missed two easy ones (7 and 10), hit one tougher one (8) but completely screwed up another (4), and missed two tough predictions but not very badly (1 and 3). Given the mix of difficulties, I don’t feel too bad hovering around 50% overall.

More on the CrunchPad 

I haven’t really been following Michael Arrington’s attempt to mass-produce a tablet computer for web browsing. I read the initial announcement and dismissed the project as a pipe dream from yet another blogger who thinks having a lot of readers means you know what you’re talking about. This week Arrington announced that the project had failed.

Arrington’s general dickishness is well-known, but I really don’t understand the thinking on this one. The original point was to “open source the specs so anyone can create them”. The plan was to “hopefully build a few prototypes… If everything works well, we’d then open source the design and software and let anyone build one that wants to.” He now claims that they have prototypes and are ready to get going on mass production, and he’s declared the project dead because another company has decided to build them without him. What’s more, Arrington actually promises lawsuits against the manufacturers:

We will almost certainly be filing multiple lawsuits against Fusion Garage, and possibly Chandra and his shareholders as individuals, shortly.

It seems like Arrington has painted himself into a corner here.

If Arrington does file lawsuits, he’s admitting that he was never serious about the “open source” thing to begin with, and that he was drumming up support for the project under false pretenses. What’s more, he’s throwing his full weight behind restrictive intellectual property law. In other words, “Fuck you, freetards. You shouldn’t ever have trusted me in the first place.”

If Arrington doesn’t file lawsuits, he’s admitting:

  1. that his legal threats are all noise, and that he shouldn’t be taken seriously in the future, and
  2. that he’s such a lousy businessman he didn’t know who actually owned the intellectual property (if any even existed) of the project.

My expectation is that he’ll realize he can’t win in court (which he won’t admit), and that the CrunchPad prototypes were so lousy that there won’t be any money to be won there anyway (which he also won’t admit). The entire debacle is nothing but a huge embarrassment for Arrington, so he’ll quietly drop it and hope all his readers forget it quickly.

Translating Arrington 

Let’s translate Michael Arrington’s announcement that the CrunchPad is dead:

It was so close I could taste it. Two weeks ago we were ready to publicly launch the CrunchPad.

I’m using “launch” here in a bold new way, as I will now make clear.

The device was stable enough for a demo.

The device was not stable enough for release.

It went hours without crashing.

It never went more than a few hours without crashing.

We could even let people play with the device themselves

Even people inside the company couldn’t use it without making it crash.

the user interface was intuitive enough that people “got it” without any instructions.

It was still way harder to use than an iPhone, despite having several orders of magnitude fewer features.

And the look of pure joy on the handful of outsiders who had used it made the nearly 1.5 year effort completely worth it.

People are really polite to you if you tell them that what they’re holding is worth 18 months of your life.

Our plan was to debut the CrunchPad on stage at the Real-Time Crunchup event on November 20 […] We’d put 1,000 of the devices on pre-sale and take orders immediately. Larger scale production would begin early in 2010.

We hadn’t even come up with a real launch date, but were perfectly willing to take the money of 1,000 optimistic suckers.

And then the entire project self destructed over nothing more than greed, jealousy and miscommunication.

The project failed due to gross mismanagement.

[…] Bizarrely, we were being notified that we were no longer involved with the project. Our project. Chandra said that based on pressure from his shareholders he had decided to move forward and sell the device directly through Fusion Garage, without our involvement.

I had no idea that people don’t hand out a cut of their profits without good reason.

This is the equivalent of Foxconn, who build the iPhone, notifying Apple a couple of days before launch that they’d be moving ahead and selling the iPhone directly without any involvement from Apple.

I’m pretty sure everybody at Apple spends their days blogging and googling for their own name, like I do.

[…] We jointly own the CrunchPad product intellectual property, and we solely own the CrunchPad trademark.

So it’s legally impossible for them to simply build and sell the device without our agreement.

It’d take an incredibly incompetent businessman to make the legal arrangements so vague that his pet project could be taken away from him. I don’t feel incompetent, so the project must still be mine.

[…] Renegotiations are always fine. But holding a gun to our head two days before launching and insulting us isn’t the way to do that.

Negotiations are usually very fair and polite when one person has a loaded gun and the other doesn’t.

We’ve spent the last week and a half trying unsuccessfully to communicate with them. Our calls and emails go unanswered, so we can’t even figure out exactly what’s happened.

It’s almost as though they’re telling us to fuck off. Man, that’s some hard-core “renegotiation”.

[…] We will almost certainly be filing multiple lawsuits against Fusion Garage, and possibly Chandra and his shareholders as individuals, shortly. The legal system will work it all out over time.

Most good technology products require more legal input than technical input, anyway.

Mostly though I’m just sad. I never envisioned the CrunchPad as a huge business.

I’m going to distance myself from the project now.

And what’s really sad about all this is the incredible support we were getting from companies and people around the world to launch this device. A major multi-billion dollar retail partner has been patiently working with us for months, giving advice on manufacturing partners and offering to sell the CrunchPad at a zero margin to help us succeed in the early days. They were also willing to pay for the devices on order instead of 30 days after delivery, a crucial cash flow benefit that would allow us to ramp up volume without putting ourselves out of business. They were even willing to fly the devices from China on their own planes to eliminate our shipping costs. Intel, which would supply the Atom CPUs to power the device, has assisted us repeatedly with engineering and partner advice, and gave us pricing that was ridiculously generous given our projected first year sales volumes. Other partners were eager to promote and sell the device for little or no benefit on their end other than “supporting the project.”

I’m really well-connected, and my business plan rested on my ability to get lots of stuff for free.

We even had sponsors lined up to help us sell the device near our $300ish cost.

The goal was to sell the CrunchPad for $200. With big subsidies from manufacturers, no shipping costs, no retail markup, software built with volunteer labor, and no profit whatsoever, we weren’t even close.

And money wasn’t a problem, either. We had blue chip angel and venture capitalist investors in Silicon Valley waiting to invest in the company since late Spring.

I didn’t think hiring someone who knows how to manage a tech project was worth the money.

We were simply holding them off until we launched, to eliminate some of the risk.

I’m using the word “risk” in a bold new way.


If you don’t feel that you are possibly on the edge of humiliating yourself, or losing control of the whole thing, then probably what you are doing isn’t very vital.

-John Irving

(via Merlin Mann)

Using the Oxford VPN in Mac OS X 

Oxford University offers a “Virtual Private Network” service to its students, faculty, and staff. The most common reason people need to use this service is to get access to the wider internet using the Oxford Wireless LAN service. In fact, in almost all places where the OWL wireless network is available, the eduroam network is also available. Eduroam is a UK-wide network available at most major universities in the country and it does not require use of any special VPN, so I highly recommend that anyone new to Oxford take the trouble to configure their computers for eduroam instead of OWL. (The lesson, I’m afraid, is that IT services are better when they are not designed by the IT staff at Oxford.)

While the VPN is not necessary for wireless access, it may be required to access some Oxford-only services. The university encourages users to install a preconfigured version of the proprietary Cisco VPN client software onto their machines in order to do so. This is easy to do and does work, but I don’t really like it because:

  1. It requires installing software distributed by the university. This is a (minor) security concern, because if an attacker compromises the university’s system they can inject malware into the software distribution. I prefer to install only software that I retrieve independently, directly from well-known primary sources.
  2. It requires non-standard system extensions which might cause reliability or performance problems. (I.e. it’s not just an application and can change the way the system functions even when it’s not running.)
  3. The Mac version, at least, is lousy software. The interface is ugly and it interacts with other programs in an obnoxious way (making itself the frontmost application without user intervention). These aren’t major problems, but they suggest little understanding of the Mac platform and shoddy engineering throughout the software. The VPN client asks to save user passwords, but does not appear to use the standard “Keychain” system, for example—it’s thus likely the password is stored on disk either in the clear (which would be a major security violation) or using some obfuscation which is trivial to undo (also poor security policy).

Under Mac OS X 10.6 (Snow Leopard), and possibly earlier versions, there is no need to install Cisco’s proprietary client: the operating system includes built-in support for Cisco VPNs. To set up the Oxford VPN, you’ll need to first collect the following pieces of information:

  1. Your remote access username. This should be the same as your single sign-on username, which begins with a few letters to identify your college or department and ends with a few digits. (At least, that’s what mine looks like.)
  2. Your remote access password. Most people set this to be the same as their single sign-on password when they initially register, but when you change your single sign-on password your remote access password does not change. (Another sign of what happens when you leave systems to Oxford’s IT staff.)
  3. The Oxford VPN “Shared Secret”. This is the same for everyone at Oxford, but it’s not supposed to be publicly available so I will not post it here. You can get it from the OUCS configuration file here, or you can get in touch with me directly.

Once you’ve gathered that information, select “System Preferences…” from the Apple menu and click on “Network” in the “Internet & Wireless” section. You should see something like this:

Network Preferences Panel

If the padlock icon in the lower left corner is locked, then click on it and provide your password to unlock the preference panel.

Click on the “+” button just above the padlock to add a new type of network interface. Choose “VPN” as the interface type, and then select “Cisco IPSec” in the VPN Type menu which appears. You can name the network whatever you want; I call it “Oxford VPN”.

Creating a new VPN

Click on the “Create” button, and the new interface will appear in the list at the left of the window. Select the new VPN and fill in the VPN server address for “Server Address”, your remote access username for “Account Name”, and your remote access password for “Password”. The easiest way to manage the VPN connection is using a menubar widget, so I tick the “Show VPN status in menu bar” box:

Configuring the Oxford VPN

Finally, click on the “Authentication Settings…” button. The “Group Name” for the VPN is “oxford”; enter the “Shared Secret” as defined above.

Oxford VPN authentication settings

Click “OK” on the Machine Authentication panel, and then “Apply” at the bottom right of the Network preference window. Click on “Connect” (or choose “Connect Oxford VPN” from the new VPN menu in the menubar) to test the configuration.

Now, whenever you are on a network anywhere in the world you can simply connect to the Oxford VPN to get full access to university resources, as though you were plugged into an ethernet cable in your department or college.

GPL in Practice 

From an excellent case study on the practical impact of GPL licensing:

[An] Objective-C implementation is in two parts: the compiler and the runtime library. NeXT was only required to release its compiler code, not its runtime library. […] Rather than admit that it had a worthless bit of code, the GNU Project wrote its own Objective-C runtime—and this is where things really started getting interesting. The GNU runtime wasn’t quite a drop-in replacement for the NeXT runtime. […] Because the [non-GNU] branch only had to support a single runtime library, there was no clear abstraction between the runtime-specific and runtime-agnostic bits, and so the GNU branch quickly fell behind.

So forcing a company to release code it didn’t think anyone would find useful and had no intention of supporting ended up wasting the time and energy of a lot of open-source developers who devoted themselves to reconciling incompatible branches instead of actually improving the compiler. But it gets better:

Apple wanted to integrate the compiler more closely with its IDE. […] Unfortunately for Apple, the GPL required either making XCode open source or rewriting GCC. Apple opted for the latter choice.

In fact, Apple just decided to switch from GCC to a different open-source compiler: LLVM. There may be other technical reasons for the switch, but the fact that GCC is licensed under the GPL while LLVM is released under a less restrictive BSD-style license must have been a major factor.

My take is that the GNU political movement is undermining the most important and successful project under their control.

I’ll take this opportunity to beg all developers to stop using the GNU readline library as soon as possible and move to an alternative such as editline. GNU’s refusal to allow LGPL licensing for something that is so obviously a peripheral library feature just demonstrates that the organisation is far more interested in forcing their ideology on others than in the practicalities of creating good software.

Norman Borlaug Interview 

Agronomist Norman Borlaug died yesterday. His work is largely responsible for increases in the world food supply that have saved the lives of billions of people. Literally: billions. Even a short interview with him gives a huge amount of insight into the technical side of a field few geeks know anything about:

In 1960, the production of the 17 most important food, feed, and fiber crops—virtually all of the important crops grown in the U.S. at that time and still grown today—was 252 million tons. By 1990, it had more than doubled, to 596 million tons, and was produced on 25 million fewer acres than were cultivated in 1960. If we had tried to produce the harvest of 1990 with the technology of 1960, we would have had to have increased the cultivated area by another 177 million hectares, about 460 million more acres of land of the same quality—which we didn’t have, and so it would have been much more. We would have moved into marginal grazing areas and plowed up things that wouldn’t be productive in the long run. We would have had to move into rolling mountainous country and chop down our forests… This applies to forestry, too. I’m pleased to see that some of the forestry companies are very modern and using good management, good breeding systems. Weyerhauser is Exhibit A. They are producing more wood products per unit of area than the old unmanaged forests. Producing trees this way means millions of acres can be left to natural forests.

He also has some harsh words for certain types of environmentalists and proponents of organic farming. Indulging concerns that have no technical merit is a vice of rich, well-fed people who don’t understand the realities of farming:

In zero tillage, you leave the straw, the rice, the wheat if it’s at high elevation, or most of the corn stock, remove only what’s needed for animal feed, and plant directly [without plowing], because this will cut down erosion. Central African farmers don’t have any animal power, because sleeping sickness kills all the animals—cattle, the horses, the burros and the mules. So draft animals don’t exist, and farming is all by hand and the hand tools are hoes and machetes. Such hand tools are not very effective against the aggressive tropical grasses that typically invade farm fields. Some of those grasses have sharp spines on them, and they’re not very edible. They invade the cornfields, and it gets so bad that farmers must abandon the fields for a while, move on, and clear some more forest. That’s the way it’s been going on for centuries, slash-and-burn farming. But with this kind of weed killer, Roundup, you can clear the fields of these invasive grasses and plant directly if you have the herbicide-tolerance gene in the crop plants.

I.e. there’s no evidence pesticides and genetically-engineered crops are hurting people. The lack of pesticides and genetically-engineered crops definitely leads to famine and environmental damage. It makes sense to consider both sides of the equation.

(via Hacker News)

Fantasy Football Scoring 

I really enjoyed fantasy (American) football for the years I played, but it always upset me when players earned fantasy points for things that didn’t help their team, or were penalized for actions that did. The most common examples are scoring drives with the clock running down: a short pass to a receiver who is immediately tackled in the middle of the field is as bad as a sack in many such situations, but both quarterback and receiver get points for it. Similarly, an attempt at a 50-yard “hail mary” pass that results in an interception as time expires just shouldn’t be scored like other interceptions: it’s no worse for the quarterback and no better for the defense than an incompletion. Changing scoring to take account of such “game situation” parameters would be complicated, however, so I understand why few (if any) leagues make exceptions for such circumstances.

There is, however, one well-defined statistic which is very closely matched with a player’s contribution to his team, but is completely ignored by most fantasy leagues: first downs. A player who fights for the one extra yard needed for a new set of downs contributes a lot more than the tenth of a point (or less) given for a single yard of field position. Quarterbacks who consistently throw passes just short of first-down yardage and must rely on running backs for first downs deserve fewer points than QBs who can do it themselves, and there should be some incentive to include reliable short-yardage backs on a fantasy team.

The simple fix is to give an offensive player an extra point if he gets a first down. This system could make consistent running backs and star “possession” receivers competitive with the “speed” receivers who are disproportionately rewarded for a small number of big plays (which can result in huge field position gains and touchdowns, but which are the direct result of extra plays that arise from steady progress down the field).

Offering a bonus point for a first down amounts to a much-simplified version of an approach taken by Football Outsiders in developing the Defense-adjusted Value Over Average (DVOA) statistic:

DVOA measures not just yardage, but yardage towards a first down: five yards on third-and-4 are worth more than five yards on first-and-10 and much more than five yards on third-and-12.

Most current player statistics don’t include “number of first downs”, but it’s trivial to piece this together from play-by-play information, so it would be straightforward for Yahoo or any other fantasy football system to incorporate it as an option for their leagues.
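To illustrate how little work the league software would need to do, here’s a minimal sketch in Python of the proposed scoring. The play records and their field names (`yards`, `first_down`) are hypothetical stand-ins for whatever the play-by-play feed actually provides:

```python
def fantasy_points(plays, yard_value=0.1, first_down_bonus=1.0):
    """Score a player's plays with the proposed first-down bonus.

    Each play is a dict with hypothetical fields: 'yards' gained on the
    play, and 'first_down', a flag recoverable from play-by-play data
    (did the play reach the line to gain?).
    """
    points = 0.0
    for play in plays:
        points += play["yards"] * yard_value   # standard per-yard scoring
        if play["first_down"]:
            points += first_down_bonus         # the proposed bonus point
    return points
```

Under this scheme a possession receiver’s 8-yard catch that moves the chains is worth 1.8 points rather than 0.8, while a 3-yard gain short of the sticks stays at 0.3—exactly the incentive gradient argued for above.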

Length and Old Media 

Raymond Chen describes an important old-media skill, writing to length:

If you’re like me and have a fixed-position column, things are even less flexible. My column must be one page long, exactly. If I go over, there’s no continued on page to overflow into; if I fall short, there’s no advertising to pick up the slack. When I get proofs back from the editors, they will often have remarks like “This article is ten lines short. Please fix.” (They don’t often tell me my article is long, because editors are very good at cutting on their own!)

Compare this with the practice at online publications:

I’m told that when the editors tell a writer that an article is short and ask, “Can you add another ten lines?” the response they get back is sometimes a simple, “No, I don’t think there’s anything more to add.”

I can’t see this as anything other than a giant win for new media, and a symptom of one of the major problems with traditional journalism. Online news outlets stretch and shrink (and emerge and die) to match the significance of new developments they cover. The daily paper has about as many pages when there’s major global news as when there isn’t, and the cable news channels have just as many minutes of airtime to fill.

My favorite news sources are the ones that go quiet(er) and leave me alone if nothing important is happening, and whose writers don’t try to waste my attention on fluff.

Media Consumption Pyramid 

According to Wired the average American consumes about 9 hours of media a day. The most amazing part to me is how little is devoted to television:

Media Consumption Pyramid

(via Cool Infographics)

Oscar Voting 

There’s a nice post today from the Wall Street Journal Numbers Guy about the new voting system for the Oscars. Since they increased the number of Best Picture nominees from five to ten, they’ve scrapped the “vote for a single film; the film with the most votes wins” system:

The new system for best-picture voting is just like the procedure for selecting nominees. It’s called single transferable vote and is similar to an instant-runoff election. Voters will rank the 10 nominees from 1 to 10. PricewaterhouseCoopers staffers who oversee the voting for the Academy will place the ballots into 10 piles, each one for ballots with one of the films ranked at the top. If one pile has 50% of the ballots, it wins. If not, the ballots in the smallest pile are added to the pile of the second-place film listed, and the procedure continues until one film has 50% of the votes.

I like ranked ballots because they are simple: voters need to write down a lot of information, but it’s information they can readily understand and communicate. The primary alternative for voting among more than two choices is the use of approval ballots, on which voters just pick a subset of the nominees they wouldn’t mind winning (without ranking them); the standard single-choice ballot is just a special case of approval ballots where you can only approve of a single nominee. Approval ballots encode less information but paradoxically make the voting process a lot more complicated. How do you decide what your threshold is for approval? If you think several nominees are acceptable but that one is much much better than the others, should you only approve of the one you think is far superior? The answers to these questions depend on not just your opinion of the nominees, but also your expectations about other voters’ behavior.

The instant-runoff method of tabulating ranked ballots, of course, is also vulnerable to certain types of strategic voting, whereby voters are best served by submitting a ballot whose ranking does not accurately reflect the voter’s preference. Beyond the fact that such opportunities are quite rare in practice, I don’t think that’s such a big deal in the long term. It’s the interface with voters that needs to be understood and adopted by all members of the Academy; that’s where the inertia in the system resides. Instant-runoff is the easiest tabulation system to explain, so it makes everyone more comfortable with ranked ballots during the changeover. Once voters are accustomed to such ballots, changing the way the ballots are tabulated to choose a winner is straightforward and inexpensive. A Condorcet tabulation method would eliminate the rare cases where misrepresenting your preferences on your ballot would be of benefit.
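For comparison, a Condorcet tabulation of the very same ranked ballots is also only a few lines—a sketch, again assuming full rankings, that returns the nominee who wins every head-to-head matchup, or None when the famous Condorcet cycle occurs:

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Return the nominee preferred to every other in pairwise matchups,
    or None if no such nominee exists (a Condorcet cycle).
    Ballots are full rankings, best first."""
    nominees = set(ballots[0])
    pairwise_wins = {n: 0 for n in nominees}
    for a, b in combinations(nominees, 2):
        # Count the voters who rank a above b.
        a_over_b = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
        if a_over_b * 2 > len(ballots):
            pairwise_wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            pairwise_wins[b] += 1
    for nominee, wins in pairwise_wins.items():
        if wins == len(nominees) - 1:   # beat every other nominee
            return nominee
    return None
```

The point about inertia holds: the ballots voters fill out are identical, and only this tabulation function changes.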

Our IT Overlords 

Last week, Farhad Manjoo, the technology columnist for Slate, published a piece highlighting the control many large organizations, including the US State Department, exert over their employees’ computers. He argued that such restrictive policies were often misguided. His reasons, in a nutshell:

The restrictions infantilize workers—they foster resentment, reduce morale, lock people into inefficient routines, and, worst of all, they kill our incentives to work productively.

My only real quibble with Manjoo’s article is that “modern” companies have already recognized this; it’s only the crusty behemoths whose IT departments have decades of legacy policy that continue to treat users like infants. Of course, such behemoths still represent a huge fraction of the economy and the work force, so his point stands. Some people don’t realize that IT has moved on since the 80s.

What I did find interesting, however, was a direct response from John C. Welch which he emailed to Manjoo and also posted publicly on his web site. Mr. Welch has been in IT “for around 20 years, from $bigCorp to higher ed, to small companies”, and from what I can tell he intends to speak for the industry. Let’s see how he rebuts accusations of infantilizing workers:

…New Media Douchebag…

This person was an idiot.

…you don’t care about that boring stuff like security or infrastructure, do you? You just want the new shiny, and if you get told ‘no’, then obviously, it’s just because IT are mean poopyheads.

FIGHT THE POWER!!! Yes, yes, I like Public Enemy too, but as it turns out, they aren’t a reliable network administration methodology. Who knew?

IT however, doesn’t live in the land of magical ponies and unicorns, where the internet is run by happy fairies and it’s all free.

Every time some nimrod PHB gets his panties in a bunch because someone was on MySpace, it’s our fault for allowing it, and word comes down from on high, block that site. What, you think we like having to run the idiotic reports about web usage and maintain the block filters?

…why would you bother to do any research? You’re leading a revolution!

As well, the implication that somehow, you know more about the advantages and use of things like Gmail than every IT person everywhere? Can you even sit within arms reach of a computer, with your head all bloated like that?

…rather than learn, you instead run the New Media Douchebag playbook, and assume that only you, Farhad Manjoo, is able to really understand what’s going on.

…a dead wombat knows more about IT than you do.

I’m sure what you’re doing is completely different. I bet you could even explain how in under a decade…once I’m in a coma…and deaf.

…we do what we’re ordered to, and take the blame from idiots without a clue…like you.

You don’t think it hasn’t occurred to us that if you just teach someone about what not to do they won’t do it? Do you think that kind of shit ever works? Maybe for 1% of people…

I’m pretty sure that no amount of reality will change you from your mighty crusade against the ebul that is IT.

Oh, you didn’t think you were the first one to write this kind of tripe, did you?

See? Not only are you wrong, but you’re not even original.

Welch’s attitude towards his users, his bosses, and any critics of the IT industry is clear.

He does, of course, also include some unsubstantiated claims in his rant. It’s mostly just transparent hand-waving and misdirection, but I’ll make an attempt at a few of them:

  • When employees need to beg the CEO for help getting around IT policy that makes it hard to work, the IT department has failed at its job. Disastrously.
  • An IT worker who considers the cost of disallowing any and all software to be zero needs a better understanding of “cost”.
  • We all know you don’t need intimate familiarity with every last detail of a network, “down to OS versions on routers and servers”, in order to install software. If you think that’s necessary, you don’t understand modern IT. And your security policy sucks.
  • We also have some idea of what it takes to keep computers and networks running. Most of us do that at home these days. Vague claims that “[it’s] not easy. No matter what your Google Ph.D. in computer science tells you”, are not good enough any more.
  • If management is making IT decisions, then they are running your IT department. Blaming bad policy on internal politics isn’t really a defense.
  • We understand how email works. Forwarding a message to GMail doesn’t force you to delete the original. Really. Even if you throw acronyms like EAS and BES at us.
  • If anybody wants to intentionally leak confidential data they can. Spending your time blocking IM and Twitter won’t change that.
  • We install applications on our personal machines all the time, yet we do not “spend hours a day dealing with application/OS/network interaction”.

When big-company IT is no longer dominated by pathetic and incompetent despots then maybe the profession will regain a little bit of respect.


Mr. Welch responds. And has apparently read (one line of) my bio. And believes that shell scripts figure prominently in computing research.


New Yorker Cartoon, 2009-09-07

Health-Care Taxonomy 

An excellent article (which is worth reading in full) from Foreign Policy magazine makes an incidental clarification of some of the terminology used in the health care debate. It’s obviously nonsense that Obama’s plan is a “single-payer” system, but I hadn’t considered the fact that even if it were, it still wouldn’t be a socialized system. Here’s the terminology:

Socialized health-care system

A government entity directly employs doctors and owns the hospitals. This allows top-down management of health care, with all attendant advantages and disadvantages, but of course any such system can be internally organized with varying levels of competition and independent management. The existence of such a system in no way rules out the possibility of competing or supplementary systems: education in the US is socialized, but there are many private schools outside that system (particularly at the university level). Britain’s NHS and the US Veterans Affairs Department are socialized health-care systems, and both co-exist with supplementary private systems.

Single-payer system

Doctors and hospitals are private and operate independently, often as for-profit institutions, but there is a single government entity that pays for most care. This hugely simplifies the billing process, and is the most straightforward way of supporting a “right to basic health care” for the entire population. Private insurance and payment plans can co-exist with the system to support treatments not covered by the single payer. Canada, France, and Germany have single-payer systems. Medicare is a single-payer system.

Private health-care system

Doctors and hospitals are private and operate independently, and people rely entirely on an open marketplace for medical insurance, where any government plans compete directly with private plans. All countries that operate in this way impose some kind of regulation on insurance, and many enforce a “right to basic health care” by requiring that every citizen purchase an insurance plan that meets a set of minimal standards, and by subsidizing the cost of such plans. Switzerland, the Netherlands, and Obama’s plan all follow this model (including regulation, mandatory insurance, and subsidies).

The US

The current US system is socialized for veterans, single-payer for the elderly and those with certain disabilities (Medicare) and the poor (Medicaid), and private for most other segments of the population. The private portion currently includes (piecemeal) regulation but does not mandate insurance coverage. Instead of subsidies to help the poor buy into the private system, they are pushed into the Medicaid single-payer system.

Star Wars Design Flaws 

Writer John Scalzi has posted a collection of “design flaws” in the Star Wars universe. I understand that this is just harmless fun, but his criticism seems typical of the mindless “what a bunch of idiots!” complaints the ignorant like to level against experts with specialized knowledge and extensive experience. In this case, fictional engineering experts. Okay, so I’m taking this too seriously. Anyway, my rebuttal:


Sure, he’s cute, but the flaws in his design are obvious the first time he approaches anything but the shallowest of stairs.

Apparently Scalzi isn’t aware that (due to a retcon in the prequels) R2 has jets. Oh, wait, he is aware of that:

Also: He has jets, a periscope, a taser and oil canisters to make enforcer droids fall about in slapsticky fashion – and no voice synthesizer. Imagine that design conversation: “Yes, we can afford slapstick oil and tasers, but we’ll never get a 30-cent voice chip past accounting. That’s just madness.”

Everybody in the Star Wars universe can understand R2. The only people who can’t are the viewers of the movie. I always thought making that work so well was an indication of just how well the (first two) Star Wars movies had been scripted.


Can’t fully extend his arms; has a bunch of exposed wiring in his abs; walks and runs as if he has the droid equivalent of arthritis. And you say, well, he was put together by an eight-year-old. Yes, but a trip to the nearest Radio Shack would fix that.

We have no idea what purpose 3PO was built for. If his only job is to stand around and serve as translator or advisor, one presumes that physical mobility was nothing but an afterthought of design.

Also, I’m still waiting to hear the rationale for making a protocol droid a shrieking coward, aside from George Lucas rummaging through a box of offensive stereotypes (which he’d later return to while building Jar-Jar Binks) and picking out the “mincing gay man” module.

If a system were designed to warn people about subtle breaches of etiquette, then it wouldn’t be terribly surprising for life-threatening breaches to cause it to respond with loud and insistent panic.


Yes, I know, I want one too. But I tell you what: I want one with a hand guard. Otherwise every lightsaber battle would consist of sabers clashing and then their owners sliding as quickly as possible down the shaft to lop off their opponent’s fingers.

Except that, you know, that doesn’t happen. Appearances indicate that two lightsabers can’t slide along each other at all—there’s a lot of friction (or other such resistive force) between the two. Adding a crossguard would address a problem that doesn’t exist, and would undermine the sleek shape of a retracted lightsaber, which is really the coolest part.


A tactical nightmare: They’re incredibly loud,

Because noisy weapons would never catch on.

especially for firing what are essentially light beams.

Lightning is quite loud, too, and that noise is just caused by suddenly-superheated air.

The fire ordnance is so slow it can be dodged,

There’s no evidence of this. You can dodge where a blaster is pointed just as you can dodge where a gun is pointed. (The Jedi reflect-the-blast technique is supposedly based on their magical powers of precognition, but I really just chalk it up to suspension of disbelief and Lucas’s focus on cool Jedi tricks after the first two films.)

and it comes out as a streak of light that reveals your position to your enemies.

Regular rifles emit flashes of light that give away your position as well. Snipers use special weapons with flash suppressors to eliminate this problem; there’s no reason to believe the Star Wars universe doesn’t offer similarly specialized weapons.

Let’s not even go near the idea of light beams being slow enough to dodge; that’s just something you have let go of, or risk insanity.

They’re called blasters, not lasers. The fact that something is really bright doesn’t mean it’s light—in fact, the light you see from incredibly powerful lasers is not the laser light.

Maybe blasters are spitting out bursts of plasma. I don’t know. But I think if you offered the US military a lightweight weapon that never runs out of ammunition they’d take it in a heartbeat.

Landspeeders and other flying vehicles

Here’s the thing: In the Star Wars universe, there are no seatbelts. And maybe if you’re flying your hoity-toity vehicle on Coruscant, you have, like, a force field that keeps you flying out of your seat. But Luke’s X-34 speeder on Tatooine? The Yugo of speeders, man. One hard stop, and out you go.

I’m having trouble even seeing the “design flaw” here. The fact that Jedi and farmers don’t tend to wear their seatbelts?

Stormtrooper Uniforms

They stand out like a sore thumb in every environment but snow, the helmets restrict view (“I can’t see a thing in this helmet!” – Luke Skywalker),

Luke is the only one who ever had this problem, for the simple reason that the helmet didn’t fit him. (“Aren’t you a little short for a stormtrooper?” – Princess Leia.)

and the armor is penetrable by single shots from blasters. Add it all up and you have to wonder why stormtroopers don’t just walk around naked, save for blinders and flip-flops.

As armor, the stormtrooper uniform is terrible. But you do have to dress them in something.

Death Star

An unshielded exhaust port leading directly to the central reactor? Really?

It was not a straight blaster shot to blow up the reactor; you needed to get a “proton torpedo” into the port, at which point the torpedo changed direction and self-navigated down to the reactor. It’s a flaw, as was readily admitted, but a flaw that was only discovered through lengthy analysis, and that required the construction of specialized weaponry to exploit.

And when you rebuild it, your solution to this problem is four paths into the central core so large that you can literally fly a spaceship through them? Brilliant.

At this point we can only conclude that John Scalzi never watched the film to which he’s referring. The second Death Star was only half-built; huge portions of the spherical structure were entirely missing. That was part of the plan: make it look vulnerable to bait the rebels into an attack. Defense of the space station was left to a deflector shield projected from a nearby moon. The shield was 100% effective until it was blown up. The script for this film was atrocious, but none of the above elements constitute design flaws.


A monstrous yet immobile creature who lives in an exposed pit in the middle of a lifeless desert, waiting for large animals to apparently feel suicidal and trek out to throw themselves in? Yeah, not so much. Not every Sarlaac can count on an intergalactic mob boss to feed it tidbits.

This assumes a lot of things about Sarlaac that we don’t know. Not every duck can count on tourists to feed it bread. Somehow wild ducks manage.

That Asteroid Worm Thing in Empire Strikes Back

So, large space worm lives in asteroid, disguises itself as a cave and waits for unwary spaceships to fly by so it can eat them? Makes the Sarlaac look like a marvel of natural selection, it does.

As above. There’s no indication that the worm, or the Sarlaac, actually has any interest in eating people (or spaceships); for all we know they feed on rock, or their “live under the surface in a cave” behavior is just part of a lengthy hibernation.

Reframing the previous rebuttal: not every fountain can count on tourists to feed it coins.


Oh, man, don’t get me started. Except to say this: If in fact a high concentration of midi-chlorians is the difference between being a common schmoe and being a dude who can Force Choke his enemies, the black market in midi-chlorian injections must be amazing.

If having the right genes is the difference between being a common schmoe and being Usain Bolt, the black market in Bolt DNA injections must be amazing.

The Moral

If you know something about science and you watch sci-fi, you may think “that’s impossible”. If you’re a scientist at heart, you think “what would it mean if that were possible?”

If you know something about engineering, you may think “that couldn’t work”. If you’re an engineer at heart you think “how does that work?”, or even “what would you need to do to make that work?”

Being a scientist or engineer is more about grappling with questions than knowing a collection of answers.

Health Update 

This will be my most self-absorbed post yet. If you’re not fascinated by people telling you about their current medical conditions, you might want to skip this one.

Swine Flu Statistics

In the UK, they’re trying to prevent the spread of swine flu by telling everyone with flu symptoms to stay away from the doctor’s office. Instead they give out a number for the national flu center, which diagnoses you over the phone. Based on my fever, cough, headache, and diarrhea I was considered a swine flu risk. I have no problem with such a cautious approach, but don’t take any “number of swine flu cases” statistics seriously—they’re labeling every case of flu as swine flu these days.


The antiviral drug oseltamivir, marketed as Tamiflu, is nasty stuff. The only high-quality study I’ve found indicates only slight benefit from taking it (around 1 day less of illness), and in my case it caused vomiting. Repeatedly. If I had read the study first I might not have taken it at all, but once I started my five-day course I was obliged to finish it.


I have seldom felt more miserable than during the couple of days I was suffering both severe diarrhea and enough nausea and vomiting that I had trouble keeping any liquids down. A couple of days of this just means discomfort; much longer and good medical care would put you on an IV; in regions without well-equipped facilities the combination is deadly.

I estimate that I saved around $10 on food for each day I was ill. I donated that savings to a project that provides clean water to communities that don’t have it. Beyond my latest little reminder, I consider charities like this much better use of money than carbon offsets or care for stray dogs in rich countries.


The most interesting part of this experience for me was that after a few days without any significant caloric intake, I developed a bitter taste in my mouth. The taste was even more concentrated on my lips, and my skin also tasted bitter. Showering, rinsing my mouth, and brushing my teeth couldn’t get rid of this.

I now suspect that this was a result of a “ketonic metabolism”: my body was burning its own tissue for energy, which results in acetone as a waste product. The acetone gets into your sweat and saliva, causing bad breath and a bitter taste.

I’ve been undernourished for a few days many times in the past (i.e. consuming at least 1000 kCal/day less than I was burning), either because I was too lazy to get food while working on a project or because I didn’t make the effort to replace all the calories I lost to exercise. I’m certain I’ve had a ketonic metabolism many times before, but I’ve never noticed such an acute taste. I assume that the difference this time is that I’ve spent most of the last year trying to gain weight, which has meant both eating constantly and eliminating the long runs I used to do—i.e. avoiding any chance for my body to break down its own tissue. I guess my body isn’t as efficient as when it was burning fat for energy on a more regular basis.


I am in no place to complain about my health. I have no serious chronic conditions, no disabilities, and have never had any serious accidents or health problems. My immune system, however, has been poor since the day I was born, so I probably spend a month of every year incapacitated by “minor” colds and flus. I spend another month of every year suffering from allergies that make me too much of a mess to interact with others face-to-face. 2009 has been a particularly bad year—I’ve spent most of the time since April with one malady or another.

I think this may be a major reason why fitness has been such a big part of my adult life. If you’re going to be weak and enervated so much of the time, you’ve got to make up for it on the days you’re at 100%.

Fixing Automatic Door Slammers 

Everyone is familiar with the spring boxes attached to doors, which usually look something like this:

Briton 2003 door closer

These are formally called “door closers”, and such devices are the cheapest way of conforming to the strict fire codes that apply to just about any building other than a single-family home.

In practice, a lot of closers stop you from closing the door quickly, meaning you have to leave it to the spring to close, but then they slam the door over the last six inches. If I were to carve a list of commandments to those around me, a prohibition on door slamming would be included.

It’s easy to adjust most door closers. They’re all fairly similar. To adjust the Briton 2003 pictured above, you’ll need:

  • A smallish Phillips-head screwdriver
  • A sturdy flat-head screwdriver

First you need to take the cover off the door closer. On this model it’s attached by one small screw on either side.

Screw attaching cover to Briton 2003 door closer

After removing the two screws, the cover should come off easily.

Briton 2003 door closer without cover

There are adjustment controls on the end of the cylinder.

Finding Briton 2003 adjustment controls

The controls look like this:

Briton 2003 adjustment controls

Use a flat screwdriver to turn the center screw left to make the door close faster; turn the center screw right to close the door more slowly.

Around the center screw is a metal ring with two off-center notches. This ring controls whether the door is slammed over the last six inches (which is intended to overcome any resistance from the door’s latch) or closed smoothly the whole way. Rotate the ring 180 degrees to change the setting. There should not be much resistance: you should be able to turn it with a flat screwdriver placed in one of the notches. It will probably only turn in one direction.

Even if your door has a latch, I recommend setting the door to close smoothly the whole way. You can then set the closing speed just high enough to overcome the latch without slamming.

US Health Care 

I could go on at length about what the health care “debate” says about US politics and culture, but I’ll restrict myself to five points:

  1. The health-care bill is publicly available. The language is fairly accessible. It’s long, but given the double-spacing and 50-character line lengths it’s the kind of thing you could easily read in a day. Claims about the bill are easy to check. Most of the claims being made by opponents of the bill are flat-out lies.

  2. The bill leaves the basic US health-care infrastructure entirely in place. It does two things: it sets up an insurance plan provided by the government comparable to, and competitive with, the low-end plans from private insurers, and it says that everybody needs to sign up to some kind of insurance (by punishing those who don’t with a tax penalty). A debate over the merits of a single-payer system, or salary- versus procedure-based pay for doctors, or socialism in general, has no relevance to the proposed bill.

  3. The only real facts that can be debated are the economic consequences of a new competitor in the insurance market: if the government plan covers a lot and is heavily subsidized then private insurers may have trouble competing; if it covers little and receives little subsidy then nobody would choose it over private insurance. Debate on this topic has been completely incoherent: if you’re worried about the new plan hurting private insurers, then the fact that it might not cover certain treatments is surely a good thing.

  4. If you quit your job to start a one-man business (e.g. software developer; web designer; artist; consultant), you could cut most of your costs to almost nothing: move somewhere with cheap rent, switch to a small used car, etc. The cost that will break you in the year(s) before you’ve built back up to a basic salary is health insurance. The US system is a huge discouragement to entrepreneurship, innovation, and the kind of “flexible employment” that the world so desperately needs.

  5. My view is that basic health care is a right in the same sense that basic childhood education is a right. Putting your kids through school isn’t dependent upon the education plan offered by your employer; it’s an unexpected consequence of antiquated tax policy that health care is.

Windows Pricing 

Jeff Atwood has been considering the profitability of low-priced software, citing the iPhone App Store and discounted games prices as examples. He goes on to opine about Windows pricing:

Say the Windows 7 upgrade price was a more rational $49, or $69. I’m sure the thought of that drives the Redmond consumer surplus capturing marketing weasels apoplectic. But the Valve data – and my own gut intuition – leads me to believe that they’d actually make more money if they priced their software at the “why not?” level.

This may be good advice, but it fails to address the uniqueness of Windows, both in the present climate and historically. Microsoft has always sold most of the copies of Windows to computer manufacturers and large organizations—two groups unlikely to make impulse purchases.

Further, upgrading to Windows 7 will be “a tedious, painful process” for the vast majority of users. You can buy a game or an iPhone app for a couple of bucks, and if you don’t like it that’s not a big deal. If you try Windows 7 and don’t like it you’ve just cost yourself several hours (at least) of work.

As anyone familiar with Linux will tell you, where operating systems are concerned people have a lot of answers to “why not?” unrelated to price.

UK Surveillance 

Something to add to the list of reasons to leave the UK:

The UK Government’s Children’s Secretary Ed Balls has announced a controversial new CCTV monitoring scheme, in which thousands of problem families are to be monitored 24 hours a day, 7 days a week.

Surveillance in their own homes, to make sure the kids are in bed at a reasonable hour and off to school on time.

Meanwhile, privacy campaigners spend their time fighting ID cards. Way to prioritize, guys.

(via MetaFilter)

Tablet Rumors 

A few obvious thoughts on the latest tablet “report”:

  • Inviting an analyst to gawk at your secret project is like telling your brother-in-law about your marital infidelity. Apple’s not that stupid.
  • At Apple, products “awaiting Steve Jobs’ final blessing” are not necessarily going to be released. Ever.
  • The reason Apple does so well with these consumer devices is that they realize it’s all about the software. The software is the part that can’t be leaked by the supply chain; Apple’s real secrets are kept in-house.

The Hot Waitress Index 

And now some praise for economic theory, although in this case it’s for theory presented (and possibly developed) by a writer and not a full-time economist. Hugo Lindgren’s brilliant Hot Waitress Index:

The indicator I prefer is the Hot Waitress Index: The hotter the waitresses, the weaker the economy. In flush times, there is a robust market for hotness. Selling everything from condos to premium vodka is enhanced by proximity to pretty young people (of both sexes) who get paid for providing this service. That leaves more-punishing work, like waiting tables, to those with less striking genetic gifts. But not anymore.

(via Jason Kottke)

A Strange Definition of "Externality" 

I just came across a surprising paper from economists Aaron S. Edlin and Pinar Karaca-Mandic, published in the Journal of Political Economy in 2006, which claims to address the “accident externality” due to driving. Here’s the abstract:

We estimate auto accident externalities (more specifically insurance externalities) using panel data on state-average insurance premiums and loss costs. Externalities appear to be substantial in traffic-dense states: in California, for example, we find that the increase in traffic density from a typical additional driver increases total statewide insurance costs of other drivers by $1,725–$3,239 per year, depending on the model. High–traffic density states have large economically and statistically significant externalities in all specifications we check. In contrast, the accident externality per driver in low-traffic states appears quite small. On balance, accident externalities are so large that a correcting Pigouvian tax could raise $66 billion annually in California alone, more than all existing California state taxes during our study period, and over $220 billion per year nationally.

So if I move to California and start driving, not only do I have to pay for my own insurance, but everybody else’s insurance premiums will go up by a fraction of a penny; in total everyone else will pay an extra couple of thousand dollars. Edlin and Karaca-Mandic call this an externality.

I don’t get it. This extra cost is only an externality if you can pick somebody out as the “extra driver”, but you can’t do that. Each driver who pays insurance is covering the cost of all the other drivers’ “externalities”, just as they are covering the cost of his.

Let’s rejuggle the books to eliminate the externalities: assume there are 20 million drivers in California and the accident externality I impose on each of them by driving there is one hundredth of a cent (for a total of $2000). I’ll take this entire cost on myself; my premium goes up $2000. But now I’m not going to cover other drivers’ accident externalities. My initial premium included one hundredth of a cent of cost due to each of those 20 million drivers, so we can eliminate that; my premium goes back down $2000 to its original level.
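That rejuggling is simple enough to check with a few lines of arithmetic. A minimal sketch using the post’s illustrative numbers (20 million drivers, one hundredth of a cent each — not data from the paper):

```python
# Checking the book-rejuggling above. The driver count and the
# one-hundredth-of-a-cent figure are the post's illustrative numbers.
drivers = 20_000_000
cost_per_other_driver = 0.0001  # dollars: one hundredth of a cent

# Total "accident externality" one extra driver imposes on everyone else:
imposed = drivers * cost_per_other_driver  # roughly $2000

# Rejuggle: I take that whole cost onto my own premium, but stop paying
# for the identical cost each of the other drivers imposes on me.
premium_change = imposed - drivers * cost_per_other_driver
print(premium_change)  # 0.0 -- the "externality" nets out to nothing
```

Each driver’s premium can be rejuggled the same way, which is the point: the costs are symmetric, so nobody is left holding an uncompensated bill.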

Entirely through economic sleight of hand, Edlin and Karaca-Mandic make a case for an extra $66 billion tax on California motorists. They not only avoid mentioning that the costs they are trying to account for are already taken up by insurance premiums (they actually calculate the costs by analyzing how premiums vary with traffic density), but also suggest that the tax be imposed on those premiums, at a rate of 200–400 percent! If the money were actually being spent to address the externality (the increased risk of accidents), then insurance premiums would fall accordingly and the state would take over the bulk of the insurance industry, with responsibility for covering all accidents caused by other cars on the road. In practice, however, the authors propose only that the state pocket the money as tax revenue. Insurance premiums would still have to take traffic density into account, so lower density could lead to lower premiums, but this would be nothing like enough to cover the tax increase and the cost of driving would be much much higher. (These higher costs are the only reason to assume any reduction in traffic density.)

If you want to discourage driving to decrease pollution, to prevent traffic deaths, or to alter the character of a community, I can understand that. Justifying taxes with economic obfuscation, however, is not cool.

(Brought to my attention by Andrew Gelman.)

Android Falls Further Behind 

In a previous post I argued that it was Palm’s WebOS that had emerged as a legitimate threat to Apple’s iPhone; Google’s Android platform is left in third place to compete with the Blackberry OS, Windows Mobile, and Symbian—all of which now look very much like previous-generation technology. I also said this about Android:

Frankly, I’m not sold on the business sense of this project: Android’s architecture doesn’t seem nearly web-centric enough to be of direct benefit to Google.

The main programming interface for Android is based on Java. I’ll spare you my rant about Java as my least-favorite language, but I will say this: relatively speaking, Java is not a modern programming language. The trend over the last decade has been towards languages which place less importance on static typing (e.g. Python, Ruby, and Javascript), or which rely on message-passing (e.g. Objective-C) instead of function calls.

In the past week, Google released the Native Development Kit for Android, which allows Android development in C and C++. I’m a big fan of C++, but this just seems to be another giant step the wrong way along the programming-language timeline.

I’ve always thought that the main difference between Android and the iPhone is that Android is meant to run on lots of different kinds of hardware: some with physical keyboards, some without; some with GPS, some without; etc. This is an advantage in that it allows manufacturers to create Android phones for market segments that the iPhone isn’t addressing; Android could thus build up a huge user base. The size of the user base then makes the Android platform compelling for developers, who create and sell applications for all those phones.

The disadvantage of such heterogeneous hardware is the difficulty of creating applications which work well in all situations. An interface designed for a physical keyboard is nearly unusable on a device with only a touchscreen, and an interface designed only for a touchscreen is wasteful and inefficient on a device with a keyboard. What’s more, applications then need to be separated into categories for the classes of devices they run on: some apps may only work if GPS is available, for example. The prospect of a single unified app store (and thus a single market larger than the iPhone market) is remote.

A native development kit further undermines the advantages and enhances the disadvantages of different hardware configurations. Native apps actually need to be recompiled for every different type of processor (and possibly each different memory architecture) used in devices. Worse, in the cases where native development is needed to accomplish things that the Java SDK cannot, the app might need to be completely reimplemented for each different device.

I understand the desire for a native SDK: native apps can be much faster and more efficient (in terms of both power and memory) than Java apps. But that’s a backwards-looking philosophy. A non-native app which runs too slowly on today’s phone will run perfectly fine on tomorrow’s. A native app will run fine on today’s phone but won’t run at all on tomorrow’s. Apps like that won’t encourage a customer to buy a new Android phone to replace their old one, and this “just make it work today; we’ll worry about tomorrow when it comes” approach seems much more appropriate for a company in desperate need of immediate success (e.g. Palm) than one with the cash to gain ground slowly over time (e.g. Google).

There is, however, a mobile operating system which can be programmed using a modern, dynamic language. In fact, its SDK is based on perhaps the most widely-used languages in the world: HTML and Javascript, just like all the web applications the huge community of Google-centric developers are already building. This architecture actively encourages a blurring of the line between stand-alone app and web application, thus exploiting the mobile market to enhance the importance of the web. What is more, all of Google’s research on optimized Javascript runtimes is directly applicable to making these mobile applications run faster, without any change to their code. As I mentioned on Twitter, I find it very odd that this operating system is made by Palm, and not by Google.

Parsing Digg 

A reader just let me know that my repackaged Digg feed recently stopped working.

It turns out the HTML on the Digg web site is now so bad that even Beautiful Soup can’t parse it.

I’ve written about making Beautiful Soup even more tolerant before. Shortly after I posted that information, Leonard Richardson explained why Beautiful Soup 3.1 was failing on malformed pages. Rather than hack around with the parsing infrastructure as I had done before, I’ve just taken his advice and downgraded to Beautiful Soup version 3.0.7a.
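I won’t reproduce the feed code here, but for anyone curious what “tolerant” parsing looks like in practice, here’s a sketch using only Python’s standard-library html.parser — to be clear, this is neither Beautiful Soup’s parser nor the actual code behind the Digg feed. It pulls links out of markup that would choke a strict parser:

```python
# Not the code behind the Digg feed -- just a stdlib illustration of a
# parser that keeps going on malformed HTML instead of raising an error.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes, ignoring any surrounding broken markup."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Deliberately broken HTML: unclosed <p> and <b>, mismatched nesting.
broken = '<div><a href="http://example.com/story">Story</a><p><b>oops</div>'
collector = LinkCollector()
collector.feed(broken)
print(collector.links)  # ['http://example.com/story']
```

The event-driven design is what makes this robust: the parser never needs to build a consistent tree, so broken nesting costs it nothing.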

The digg-direct feed should now be working again. Apologies for the outage.

NBA International League Pass 

Not many NBA playoff games make it onto television in the UK; usually it’s only the finals. This year I’ve been watching over the internet using the NBA’s International League Pass service.

The technology the service uses is lousy. You can only watch the games using the RayV player browser plugin, which (for the Mac at least) is incredibly flaky. It requires admin access to my machine, it crashes my browser regularly, live streams need to be manually refreshed periodically, and the plugin seems to need re-installation every time I restart despite automatically launching a daemon at login. And in a business decision worthy of a company whose home page looks like this, they release a new version of the player every few days, hassling all their users to upgrade every time they watch a video. Despite all this, internet video is a simple enough problem that even RayV hasn’t managed to make the games completely unwatchable.

For international customers (only), the NBA offers two packages: you get access to every playoff game in “standard definition” for $24.95, or every game in “high definition” for $34.95. I only subscribed to the standard definition package this year, and the quality is comparable to YouTube: easily good enough to follow the action and read the players’ body language, but you’re not going to pick up on facial expressions in the long shots.

The price seemed pretty reasonable; much more and I wouldn’t have paid for it this year. But affordable pricing for sports packages is rare. I wanted to crunch the numbers to see whether the price makes it competitive with TV ad-supported broadcasts.

The online video is actually just a copy of the US television feed, and includes all of the US ads, so in theory the NBA could claim some of the network’s ad revenue, or insert its own ads during the breaks. But let’s assume an ad-free feed; how much would you have to charge users to make up for the ads they’re not seeing?

I have very little data at my disposal, but the few sources I’ve found quote rates around 5 US dollars per thousand viewers or 4.62 British pounds per thousand viewers for a 30-second spot. The first figure suggests that the value of each (potential) viewer of an ad is roughly half a cent. Assuming 48 minutes of advertising in a three-hour basketball game, that puts a price of 24 cents on each viewer. If the TV networks could keep the same ratings, but charge every viewer a quarter, then they could drop advertising entirely and maintain the same revenues. Their profits would go up, however, since they spend money getting all those ads sold.

The NBA playoffs include fifteen series, each with up to seven games, for a maximum of 105 playoff games per season; this season there will be between 84 and 87 playoff games. A viewer who watches every single game on television increases the networks’ total advertising revenue by about $21 this year, and by about $26 in a year when every series goes seven games.

In the simplest model, the TV networks should be perfectly happy to give up a viewer to watching on the internet (and possibly missing all their ads) if they are paid $25. More importantly, if the NBA could get $25 from everybody who watched the playoffs they wouldn’t need to sell television rights to any network at all. They’d have to get somebody to produce the game video, but the revenue they’re generating would cover that, just as TV ad revenue covers it now.

Those are the numbers I came up with. Now some thoughts:

  • Televised sports might actually lose money at current advertising rates. I’ve heard that many networks consider sports to be a loss leader that doesn’t generate profits, but allows them to raise the stature of the network. I.e. getting viewers used to watching sports on ABC makes them more likely to watch other shows on ABC in the future, and it is those other shows that make money.

  • I wonder whether 48 minutes of ads per game is accurate. Among other things, there’s a lot of promotion for TV shows and movies that the networks try to slip in for a few seconds here and there.

  • I wonder how many games the average viewer actually watches. More than half of the games are in the first round, and many are going on at the same time, so it’s clearly not possible to watch everything. I couldn’t watch more than about one game a day, or two different series in each round, which comes out to 49 games maximum. This year I probably watched at least some part of about 24 games (for an estimated ad revenue of $6), but I liked that I had the option to watch any others for no extra cost.

  • There is far more profit to be made from the same basic revenue if the NBA sells directly to viewers instead of selling rights to networks, who sell ads to ad agencies, who make ads for advertisers, who sell products to viewers. Each of those intermediaries is taking some of the profit the NBA could keep for themselves.

  • Is it better that the NBA is kept at arm’s length from the advertising? The TV networks have done everything they can to lace every aspect of their coverage of the game with advertising, from announcers reading promos for movies or TV shows to in-game interviews with celebrities promoting their latest project. This sets up a useful tension between the network and the NBA, who care more about keeping the sport on the floor entertaining than about advertising. The logos and product endorsements already on players’ uniforms might get a lot worse if there was just one entity managing all of basketball advertising.

  • How much does internet video distribution cost? The TV networks have certain costs related to broadcasting, but I assume that those costs become much lower per-viewer than internet distribution once there are enough viewers. How mature are the various internet multicast technologies?

  • How hard will the sports leagues push to use internet streaming to achieve market segmentation? Only hardcore fans are likely to pay to watch online, and they are obviously willing to pay more to watch games than casual fans. If the leagues decide that national television is for casual and/or regional fans and the internet is for hardcore fans, then there’s no reason to believe that prices for the two will align.

  • When will iTunes add support for streaming of live events?

Jasmine Fallen 

Trying hard to be offended by the cultural stereotypes (or the odd weapon/ammunition combination), but failing. I always thought Jasmine was Disney’s hottest character…

(Photo by Dina Goldstein.)

Peak Google 

In a post five months ago I claimed that Google hadn’t seen its peak yet. While I expect the company’s revenue to continue to climb, I’m now a lot more skeptical of its influence and credibility as an innovator.

First, of course, it’s worth pointing out that Google’s industry reputation has been completely off the charts for the past few years. Again and again I’ve heard them described as the most exciting and innovative company, working on absolutely the most advanced technology, with the most brilliant researchers and engineers in the world. Google lectures at Manchester and Oxford were standing-room-only affairs. If Google had announced that one of its secret research teams had found the cure for cancer or the key to cold fusion, most people wouldn’t have been surprised. No company can live up to those kinds of expectations.

Beyond the dominance in search and advertising, I’ve seen three major successes from Google. Mail and Maps are two, and while they’re both solid products (good technology, and a great fit with Google’s business model) I find it tough to get excited about them. They were the first big hits of the AJAX web application era, but the techniques didn’t originate with Google and there are now better copycat apps. Their third success, Chrome, is much more ambitious, both in terms of business model (a shrewd investment in the infrastructure needed to deliver their core products) and technology (a genuinely novel internal architecture for an existing class of application, and one much better suited to the “web application” paradigm Google endorses). Although Chrome’s market share among the general public barely registers, it’s already mature enough to compete with the major browsers, and I find that very impressive.

Unfortunately, a few nice web apps and a nifty repackaging of WebKit aren’t exactly the kind of world-changing innovation everyone was expecting. It wasn’t just Yahoo! and MSN who were frightened of Google in 2005; the thinking at the time was that Google was a threat in any market they chose to enter. Google’s more ambitious projects haven’t worked out so well.

They spent a couple of years talking big about municipal wifi. Lots of rhetoric and philosophy. Not a lot of actual wifi.

They got involved in the auction for the new wireless spectrum. And lost. I suspect Google was actually more interested in influencing the terms of the auction than in constructing the data network some analysts described. If so you can’t really call the auction a failure, but the incident didn’t do much for Google’s aura of invincibility.

Android has been Google’s biggest push into a new market. Frankly, I’m not sold on the business sense of this project: Android’s architecture doesn’t seem nearly web-centric enough to be of direct benefit to Google. While the effort can hardly be called a failure at this stage, Android is currently being overshadowed by both Apple’s iPhone and Palm’s Web OS. Android may well overtake Windows Mobile and Symbian at some point, but right now it doesn’t represent any real innovation in the mobile OS space.

Could Google Wave be the big success that everybody’s been waiting for? Despite all the hype the demo has received, I say no. A spiffy development framework isn’t the same as a killer product. Apple’s Cocoa framework is terrific, but it’s not in the same league with Mac OS X, the iPod, or the iPhone.

Google is a solid company and I think they’re covering their main focus well. In terms of innovation and ability to enter new markets, however, they’ve shown both arrogance (which seems to be the defining characteristic of Google’s culture) and weakness. Their mystique is fading.

June 4 

tank man

Chinese officials took this man away. He’s never been seen since.

Today the government has blocked off most of the internet to try to keep its own people (do they even count as “citizens”?) from learning what happened 20 years ago.

China has come a long way since 1989. But the country still has a long, long way to go.

IT and Security 

I had the rare “opportunity” to sit in on a meeting with an IT vendor today. This vendor sells a Network Access Control product which works as follows:

  1. User connects to the network
  2. User’s web browser is redirected to a web page which says “Please download and run this program: <link>”
  3. User clicks the link, then runs the resulting .exe file. (There are comparable Mac and Linux versions as well.)
  4. The program scans the user’s computer, tells a network server to enable access for the computer if it passes the scan, and deletes itself. (They call this a “dissolving client.”)

I attended this meeting because I had heard this story and wanted to see if it was true. I pointed out the gaping security hole being opened here: not only does this product allow anyone capable of spoofing this web page to take control of any user’s machine (in the vendor’s screen shot the page is not even secure), but it also encourages users to download and run software just because a web page tells them to.

The vendor agreed that this was somewhat problematic, and that they’d heard this concern before. At present, the company has no particular response.

Clearly this sucks, but I can get over it. The product on offer doesn’t actually need this client software in order to do much of its job; that feature can easily be disabled. The other network monitoring and management features of the product look genuinely useful.

What concerns me are the responses my comment provoked. Not those from the vendor (at least not the immediate responses)—the technical rep was quite forthright that my point was valid. My problem is with the responses from the IT reps in the room, a handful of whom had already installed the system on their networks, but most of whom were only considering it. They immediately became far more defensive than even the vendor; apparently any security hole you don’t initially spot yourself is not really a security hole.

Some comments from IT professionals—people whose job it is to provide secure network services to students and staff at Oxford and its colleges:

That just hasn’t been a problem for us.

We haven’t had any complaints about that.

There hasn’t been a single report of any such attack.

I suppose such an attitude can be understood when you realize that network admins don’t actually care about security at all. What they care about is malware—software that generates crippling network traffic and/or user complaints. It’s not their problem if somebody out on the internet can read the entire contents of their users’ hard drives. Not even if it’s because of their own network policies. Nobody knows about it and they don’t hear about it, so the problem doesn’t exist.

More troubling was this comment from the vendor:

Really, how common is it for an attacker to disguise his malware as a security program?

There was an awkward silence among the IT crowd after that, and a slightly apologetic “actually, a lot of infected web sites have things like that…”

The group did start to acknowledge the problem, but this was repeatedly rebutted by one admin who had already deployed the product:

The point is, this product does what I want it to do. It makes my life a lot easier. That’s the bottom line.

Apparently ease of network administration trumps any security considerations. The amazing thing is that I really liked the product. I believe that its management interface is a convenient way to monitor a network. I just think you need to untick the “install client” box on the configuration screen. But the IT guys kept setting up this absurd duality: either install the full system including the security hole, or don’t get any of the benefits.

So the discussion turned to how to mitigate this risk. Again and again, one solution kept coming up. Keep in mind that the vendor had nothing to say about any of this—I kept directing my questions to the vendor and they kept being answered by the other IT guys in the room:

We’ll definitely let everybody know about this before they connect to the network. We’ll send them a letter telling them “you’ll be sent to a web page; it will tell you to download software; download it and run it”.

No suggestions of hashes or digital signatures; just “we’ll notify them”. To IT professionals “if we could only educate the users…” is the answer to everything.
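For the record, the kind of check nobody in the room proposed is only a few lines of code. This sketch is my own illustration, not anything in the vendor's product: it verifies a downloaded file against a digest published out of band (say, in that letter to users) before anyone runs it.

```python
import hashlib

def verify_download(path, published_sha256):
    """Compare a downloaded file's SHA-256 digest against one published
    out of band, so a spoofed download can be detected before it runs."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large installers don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == published_sha256
```

Of course this only helps if the published digest reaches the user over a channel that an attacker on the local network can't also spoof, which is exactly why a properly signed installer would be better still.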

If it’s not our client, then it won’t let them on the network, right? So we just need to tell users to run it, and if they don’t get on the network, then it wasn’t ours. And if they don’t run it and do get on the network anyway, then it wasn’t ours anyway.

I’m truly worried about IT professionals who try to bodge together security solutions with spit and duct tape.

But my absolute favorite comment, which was repeated to me no less than five times when we kept getting dragged down weird technical tangents, was this:

It’s really simple. If they don’t install the client, then they can’t get on our network. Simple as that. We don’t have to worry about users who won’t install the client—they’re free to go and find an internet connection somewhere else.

There are good and thoughtful IT pros out there. There’s no question that most of the guys in this meeting really wanted to do a good job and help their users. But the culture of IT is riddled with defensiveness and political posturing.

I can’t help but think that electricians and plumbers are different. I wish I could figure out exactly what the difference is.

Apple and the Netbook 

I won’t bother rehashing all the rumors, but I would like to extend my previous thoughts about Apple and tablet computers.

The fact of the matter is that there does seem to be a significant demand for mobile computers. Many computer manufacturers have tried to meet this demand with “netbooks”: very small versions of laptops.

The trouble is that laptops aren’t really mobile computers; they’re portable desktop computers. If you want to use a laptop you put it down on a surface and lean over the keyboard and screen. It’s terrific that they work sitting on surfaces as lopsided and off-balance as your lap while lying on the couch, but the mechanics are still pretty much the same as the desktop computer.

A portable computer can be moved when it’s not in use; a mobile computer is designed for use while it’s moving. The iPhone interface isn’t good because it looks good and works well in the “white vacuum” of the commercials. It’s good because it works while you’re carrying a bag, listening to music, and strolling down the street. More substantial sessions with mobile computers are characterized by the extraordinarily low cost of a context switch: there’s no need to establish or clean up a work space. You can read a Kindle while waiting in line at the grocery store knowing you’re not going to hold anybody up.

The most surprising thing to me about the iPhone is not all the nifty little mobile applications—I would have guessed that it’s possible to do those well. What surprises me is how competent the iPhone is at things that I had previously considered to be desktop tasks: web browsing and email. It turns out that the touch interface is more than sufficient to navigate a web browser, and most web sites are perfectly tolerable on a smaller screen with the iPhone’s excellent pan and zoom features. The on-screen keyboard is nothing like as fast as a desktop keyboard, but it’s easily good enough for writing a few paragraphs of prose for an email.

The iPhone is competent at web browsing and email, but it’s not good enough to be competitive with desktop machines. Mainly, the screen is just too small. As mobile computers go, the iPhone scores big on “mobile”, but it doesn’t quite feel like a “computer” in the sense that it could be a replacement for much of my desktop computer use.

There’s a demand for some kind of a mobile computer that can do web browsing, emails, feed reading, and possibly even e-book reading. The demand doesn’t include a need for legacy applications (as supported by the fact that casual users actually accept Linux instead of Windows on the current devices that try to fill the niche). Apple has the most expertise in the mobile computing market. A tablet would be unlikely to cannibalize sales of either the iPhone or any of the Apple laptops. It makes sense that Apple will release something in this space.

The outstanding question is when. If Apple wanted lots of third-party applications available for such a device, then they’d need to announce it at WWDC in a week’s time: developers will be setting their schedules based on what is announced there.

But Apple doesn’t necessarily need third-party apps at launch. I’ve seen analysts talk about how the App Store is the key to the iPhone’s success, but of course the iPhone was a hit when it had only Apple applications. They could easily ship a tablet computer with nothing but Safari, Mail, and an eBook reader developed in-house in conjunction with Amazon, and wait six months to allow any third-party apps. In fact, this approach would allow Apple to set the standard for how tablet interfaces should work.

There’s a third option: Apple could ship with only apps from a few hand-picked developers given early access to the SDK. Developers Apple trusts to build quality interfaces, and to provide useful feedback on the APIs. Developers who are better at building certain kinds of must-have applications than Apple is. Developers who can be trusted to keep their knowledge of the device top secret throughout the project until the day the product—and their app—launches.

Anybody know what Brent Simmons has been up to lately?

Manual of Style: Don't Preempt User Tools 

The web is young enough that we have yet to achieve any real consensus on a robust manual of style. With technology changing so quickly, any detailed guide would rapidly fall out of date. But here’s a general rule that takes care of several of the common annoyances I find in even “professional”-level writing on the web:

  • Don’t add meta-data unless you have additional information not available to your reader.

It’s actually a very simple rule. Meta-data, such as links, represents information. If you don’t actually have any information to add, don’t pretend you do. It undermines those cases when you do have something to say.

This rule explicates the wrong-headedness of several common practices:

  • Don’t link to Wikipedia. If you use a term the reader doesn’t know, they can look it up on Wikipedia (or any other reference) themselves. There are even plugins that let users look up any word or phrase without much more trouble than clicking on a link. If you don’t have any more specific context for a term than its entry in the encyclopedia or dictionary then you don’t really have any information to offer.
  • Don’t add ticker symbols to every company name. (I’m looking at you, New York Times.) People know how to look up business information. I’m sure there are plugins for this as well. I understand the desire to drive readers to your preferred site; just know that you’re sacrificing style to do it.
  • Don’t “enhance” your links. Over the past couple of years publishers have been junking up their sites with tools such as Snap and Sphere that rewrite links to pop up graphical previews of their destinations or search results. Users who want previews can install special tools like Cooliris in their client.

Microsoft Distribution 

Robbie Bach of Microsoft on their plans for retail stores, as quoted by Benjamin J. Romano:

I don’t think – I saw some of the commentary that this was designed to be the same as Apple or whatever. You should think about it, I think, quite differently.

Apple’s approach was about distribution. People forget that when they entered their stores [in 2001], this was quite a while ago, they didn’t have distribution for Macintoshes, so they created their own distribution.

We have plenty of distribution. These stores for us are about building our connection to customers, about building our brand presence and about reaching out and understanding what works and what improves the selling experience.

So Apple you would think of as a volume distribution play. You should think of ours as much more of a brand and customer relationship investment more than anything else.

There’s already plenty of commentary on what this statement means and how Microsoft should run its new stores, but I just can’t get past the complete misunderstanding (or rewriting) of history here.

Before Apple Stores, you could buy Macs in plenty of places, including Sears and CompUSA (both of which were quite successful and popular at the time). Many argued that Apple was being far too stingy in who was allowed to sell Macs. My recollection is that it took longer for Apple to get shelf space in “non-computer” stores to sell iPods, but you’d be crazy to think that Apple opened up retail stores because they couldn’t find distribution channels for iPods.

Apple’s problem was that in stores selling Wintel PCs, the sales staff (who were likely Wintel users) knew nothing about Macs and were thus completely unhelpful to customers, or actively steered them towards Windows. Apple tried fighting this with special training programs and dedicated staff, but this just led to isolated “Mac ghettos”: having a few square feet in the dim back corner of a shop doesn’t do much for a brand’s image.

Apple opened shops not to get distribution, but in order to connect to customers directly, deliver the “well-designed; just works” brand message, and build customer relationships all the way from sales to support to repairs to upgrades.

Now Microsoft is whining that current retail channels aren’t delivering the message they want and that their brand image is suffering, so they’re going to open their own stores.

It’s one thing to play follow-the-leader with Apple—sometimes that’s the right play, and market leaders can often afford it. But these ridiculous denials are just embarrassing.

Tolerant HTML Parsing 

Beautiful Soup is an absolutely terrific Python library for parsing HTML and XML. Its strength is its ability to offer a clean document tree even for bad markup (including such gory details as converting everything to unicode intelligently).

Unfortunately, Beautiful Soup’s tolerance for lousy markup is limited. It’s got lots of clever heuristics for repairing broken nesting (<b><i>foo</b></i>) and guessing implicitly-closed tags (<li>First<li>Second), but it uses Python’s standard HTML parser class to tokenize the markup. HTMLParser isn’t designed to accommodate syntactically malformed markup.

The case I’ve encountered in the wild involves “syntactically nested” tags—constructions of the form <a <br> href="foo">bar</a>. The nested tag is almost always a line-break; presumably this is the result of a particularly lousy tool attempting to do its own text-wrapping.

Although such markup is clearly atrocious, web browsers are fairly consistent in the way they render such fragments: anything up to the first > is part of the tag, and the rest is just text up to the < that opens the next tag. Both Safari and Firefox render my example as href="foo">bar, presumably wrapped in an a tag with no href attribute. Even TextMate’s HTML highlighter interprets the syntax in this way.

HTMLParser chokes on this syntax, however, making it impossible to use Beautiful Soup to process pages with such errors. It uses the following regular expression to find the end of a tag it is parsing:

locatestarttagend = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:\s+                             # whitespace before attribute name
    (?:[a-zA-Z_][-.:a-zA-Z0-9_]*     # attribute name
      (?:\s*=\s*                     # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |\"[^\"]*\"                # LIT-enclosed value
          |[^'\">\s]+                # bare value
         )
       )?
     )
   )*
  \s*                                # trailing whitespace
""", re.VERBOSE)

If whatever matches this expression is not followed by > or />, the check_for_whole_start_tag routine raises an exception.

A general replacement for HTMLParser which is designed from the ground up to handle questionable syntax (presumably just turning anything it can’t parse into text runs) would be a useful utility, but for now I just wanted to fix the particular cases I’ve encountered in practice. Modifying check_for_whole_start_tag such that it no longer raises exceptions is one option, but that doesn’t match the behavior of the web browsers: the existing version of locatestarttagend stops matching at the beginning of any syntactically nested tag, while the web browsers stop at the end of the nested tag. As a quick hack, I modified HTMLParser to allow attribute names beginning with <. This involves changing both locatestarttagend and the expression for identifying attributes:

attrfind = re.compile(
    r'\s*([<a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*'
    r'(\'[^\']*\'|"[^"]*"|[^\s>]*))?')

locatestarttagend = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:\s+                             # whitespace before attribute name
    (?:[<a-zA-Z_][-.:a-zA-Z0-9_]*    # attribute name
      (?:\s*=\s*                     # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |\"[^\"]*\"                # LIT-enclosed value
          |[^'\">\s]+                # bare value
         )
       )?
     )
   )*
  \s*                                # trailing whitespace
""", re.VERBOSE)

Not the most robust solution (it’s easy to find examples that still break), but so far it’s been able to handle everything I’ve found on the web.
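The effect of the one-character change can be sanity-checked in isolation. This sketch uses condensed equivalents of the two patterns (not the module's source verbatim) and matches each against the example fragment:

```python
import re

# Condensed equivalent of the stock locatestarttagend pattern.
stock = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*
  (?:\s+(?:[a-zA-Z_][-.:a-zA-Z0-9_]*
    (?:\s*=\s*(?:'[^']*'|"[^"]*"|[^'">\s]+))?))*
  \s*""", re.VERBOSE)

# The patched version: attribute names may also begin with '<'.
patched = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*
  (?:\s+(?:[<a-zA-Z_][-.:a-zA-Z0-9_]*
    (?:\s*=\s*(?:'[^']*'|"[^"]*"|[^'">\s]+))?))*
  \s*""", re.VERBOSE)

fragment = '<a <br> href="foo">bar</a>'

# Stock: the match gives up just before the nested '<', so the next
# character is not '>' and check_for_whole_start_tag would raise.
m = stock.match(fragment)

# Patched: '<br' is swallowed as an attribute name, so the match stops
# at the '>' that closes the nested tag, mirroring browser behavior.
m2 = patched.match(fragment)
```

Running this shows the stock pattern ending its match just after `<a `, while the patched pattern reaches the `>` of the nested `<br>`.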

If you’re too lazy to add the two characters yourself, feel free to download my patched version of the HTML parser.

Knuth Reward Check 

Donald Knuth is a visiting professor here at Oxford’s Computing Laboratory, and during the brief periods when he is in residence he and I frequently use the same printer. I’ve never actually succumbed to the temptation to read his drafts while I’m waiting for my own printouts, but I’d be lying if I said I wasn’t tempted.

Several months ago, he gave a seminar here about his latest work on the next volume of The Art of Computer Programming (TAOCP). I was interested in his work on ZDDs, so I read through his draft of Pre-Fascicle 1b and found some errors in the solution to one of the problems he presents. Knuth himself does not use email; however, there is an email address for reporting bugs in TAOCP, so I sent my message there, hoping there was a chance he might see it and say hello before leaving Oxford.

As I rather suspected, however, the claim that he does not use email seems genuine: my message seems to have been put to paper less than three minutes after it was sent. A month or so after my message I received an email from his secretary requesting my postal address. (I had actually included this in my original message, expecting him to reply by post, but the secretary apparently missed it.) A few weeks later I got my response:

Letter from Donald Knuth

Included, of course, was one of Knuth’s famous reward checks:

Knuth Reward Check

I was hoping that mine might be the first reward check from the Bank of San Serriffe; however, even the example listed in Wikipedia has a smaller check number. It was written on the same day, but apparently with a different pen.

Knuth made some very nice comments on the printout of my email. I am, however, quite insulted that he has chosen not to follow me on Twitter.

Digg Reactions 

I opened a prior post about my repackaged Digg feed like this:

I subscribe to Digg’s syndicated feed, although more to keep track of the zeitgeist than for information…

This generated a surprising amount of feedback relating to how valuable a source of information Digg actually is. Rather than offer sweeping generalizations, I’ll provide my reactions to the headlines currently on Digg until I hit one I might actually click through:

Plasma television screens set to be banned

An increasingly-obsolete technology has shortcomings.

6 Insane Prison Escapes That Actually Happened

Another list of apocryphal stories posted in order to get AdWords revenue.

We Screwed Up Windows 7 Beta, So Unlimited Downloads For All

A dubious story about how Microsoft may or may not be stupid which focuses entirely on release strategy and not on any technical issues of the product itself.

Top Selling Mouthwashes Linked to Cancer!


Wimbledon champion Sidney Wood dies at age 97


Man allegedly pulls gun on paramedics for treating him

I would have thought armed patients would be a regular hazard for paramedics, who are routinely first-responders at shootings.

Quick fixes for 10 common Mac problems

I probably know all this already.

Man confused why girls don’t talk to him

An image post, but just from the preview I can see that it’s an image of computer text. Do Digg users have any notion of appropriate use of technology?

Two Google searches ‘produce same CO2 as boiling a kettle’

Wow, that much? Wait—do I have any idea how much CO2 it takes to boil a kettle? And why are we talking about CO2 and not just “energy”? Or at least CO2e?

Nicest Pic of Brazilian Butterfly Flocks You’ll See ThisWeek

Not a very high standard to reach.

Solar and Wind Powered Portable Charger Unveiled at CES 2009

And the target audience for this product is? Is this really the first such product? There was nothing more interesting at CES this year?

Sneak Peak of Pixar’s ‘Up’

I don’t bother with “sneak peaks”.

Vintage Floppy Disk Art: Commemorating The Early Day of Tech view!

Meaningless description of something almost certainly of no interest whatsoever to me.

Interpret my dream please? [Y!Ans]

Another image of computer text.

Some Upcoming MMORPGS to Watch in 2009

Playing MMORPGs isn’t taking up enough of your life?

Scientists Discover Way to Levitate Tiny Objects

If the poster gave any impression of what scale of “tiny” we were talking about I might care.

Ubuntu and Its Leader Set Sights on the Mainstream

Shocking that someone running a Linux distribution would want it to go mainstream.

Europe ‘exporting’ measles to poor countries

Anti-science, anti-vaccine nitwits are killing people. I’m already familiar with the story.

CEA confirms Apple-related exhibits at CES 2010

If true, this will get better coverage from Mac-centric blogs.

$20,000 Electric Car: Toyota FT-EV

Unreleased product reports are sketchy enough; automotive concepts tend to lie very far from reality indeed; an estimated price of an unreleased automotive concept is close to meaningless.

Cardinals decimate Panthers to reach NFC title game

Who the hell is reading Digg to get sports scores?

Every Simpsons couch gag

I’m not that bored.

So, Does Gum Really Stay in Your Stomach For Seven Years?


Best Free 3-Column WordPress Themes

Digg is not the place to get design advice.

GOP sees Franken as top enemy

Republicans dislike a charismatic liberal. What a shock.

British Climbers Die In The Alps

The headline gave me all the news I needed. Note to self: The Alps are dangerous.

Never Tell Them The Date… I mean odds

Useless description…and the original URL provided (and the page to which you are redirected by the link I provide above) makes very sketchy use of the # character. Authors who don’t design their URLs well know they don’t have information worth linking to.

The Magic of Mushroom Spores (pics & vid)

It’s remarkable that the Digg target audience is interested in mushroom spores but would like to learn about them without any of that pesky reading.

1 in 7 U.S. Adults are Illiterate

Really? Possible click-through.

That’s 30 stories to find one possible click-through. I’m usually able to scan headlines in under two seconds apiece (it’s actually closer to half a second per headline for Digg), and Digg posts around 140 stories each day, so subscribing to the Digg feed costs me about five minutes a day to find less than five stories I’ll actually read, and the vast majority of those will get less than 30 seconds of my attention. (This was the case with the “literacy” story above.) Still, Digg has the lowest signal-to-noise ratio of all my syndicated subscriptions, so I’m always on the verge of leaving it for another zeitgeist-tracking feed.

MacWorld Expo Wish 

I won’t bother with predictions, and I don’t have the connections to tease; I’m going to make a wish/anti-prediction.

I want Apple’s mobile OS in an ebook form factor, with a screen roughly the size of a sheet of A5 (comparable to half a sheet of 8.5x11) or larger. I want it to have a great PDF reading application—something that can automatically zoom to eliminate large margins on the page—and an interface to a large ebook library, possibly through a partnership with Amazon.

I don’t think it will happen. The difference in size would make the interface different enough from that of the iPhone to require a substantial new engineering investment, and applications written for the iPhone wouldn’t work correctly. More importantly, an ebook reader would need to be much much stingier with power than the iPhone, and that probably means an e-ink display instead of an LCD. I don’t believe such displays have the responsiveness for motion graphics, and a touch-based interface without excellent visual feedback just wouldn’t work.

I’ll set the difficulty for my non-prediction at about 0.2, but if they do release such a device I’ll soon have a nice new toy to console me in my embarrassment.

Oh—and can we please have an external trackpad?

Oldest Files 

I didn’t expect this meme to be terribly interesting, but I ran a script to find the oldest files on my machine anyway. Beyond the fact that I have a huge number of files last modified between 1900 and 1903 or on 1 January 1970, I was surprised to discover that my oldest files whose modification dates have been successfully preserved are photographs taken nine (!) years ago, after I first tried cutting my own hair. For the record, I thought the haircut was very comfortable, but others felt the aesthetics left something to be desired.

Rob Shearer after a self-administered haircut
haircut mugshot haircut profile
Rob's home office in 1999
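For anyone who wants to run the same meme, a minimal sketch of such a script (the function name is my own; it just walks a directory tree and sorts by modification time):

```python
import os

def oldest_files(root, n=10):
    """Walk `root` and return the n files with the earliest
    modification times, as (mtime, path) pairs."""
    entries = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.path.getmtime(path), path))
            except OSError:
                pass  # skip broken symlinks and unreadable files
    return sorted(entries)[:n]
```

As the 1900 and 1970 dates above suggest, the oldest timestamps you find this way are usually artifacts of bad conversions rather than genuinely ancient files.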

Some Books 

Finally set up my Amazon Associates account, so here’s some pointless shilling. Just a few of the best nonfiction books I’ve found; each represents the best of at least half a dozen books I’ve read on the same topic (with the exception of McGee’s book on the science of cooking; I’ve never encountered anything comparable). If you read all of these, then you’ll be much less impressed when I regurgitate their contents, and our relationship may suffer.

A Rant on Sentence Spacing 

When typing, some people use a single space after a sentence; others use two spaces. For the most part, I think these typing habits are meant to relate to two different typographic styles: double-spacing approximates what some call “english spacing”, while the single-spacing style is usually called “french spacing” (although “american spacing” might be more accurate for English-language text).

I freely admit that the choice between the two is merely a matter of style; neither is definitively “wrong”. As with other matters of style, however, the choices you make are interpreted as a reflection of your background and your values.

The history in brief

Single-spacing has always been the norm in French, but the double-spaced style was common for English-language typography until after the Second World War. In the 1950s American publishers largely switched to french (single-)spacing, with the rest of the English-speaking world following soon after. I don’t think I’ve come across a professionally-typeset publication using “english spacing” that was printed in my lifetime.

Apparently the US government’s style guide recommended two spaces between sentences as late as 1959, but even this incredibly anachronistic guide (still dedicated mainly to typewriter typography) now recommends single-spacing:

2.49. A single justified word space will be used between sentences. This applies to all types of composition.

I was taught the “two spaces after a period” rule in an American high school typing class I took in the 1990s. I have little recollection of the exercise book we used, but it made little or no mention of computers and I would not be at all surprised if it was originally authored before 1960. I also dimly recall a recommendation from my mother that I use two spaces at the end of each sentence, based on a rule that she had learned in a typing class.

The TeX typesetting system, designed in the 1970s by Donald Knuth, has always added extra space between sentences by default, but this behavior can be changed with the \frenchspacing command. Most of the common LaTeX styles I use (LNCS; Elsevier) automatically set this option.
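A quick illustration of the switch (any LaTeX document will do; TeX stretches the space after sentence-ending punctuation until told otherwise):

```latex
\documentclass{article}
\begin{document}
% TeX's default: extra stretchable space after sentence-ending
% punctuation (the traditional "english" style).
First sentence. Second sentence.

% Switch to single-spaced ("french") style for the rest of the
% document.
\frenchspacing
First sentence. Second sentence.
\end{document}
```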

Technical details of HTML make adding extra space between sentences much more cumbersome than simply hitting the space bar twice. Single-spaced sentences have always been the norm on the web.

How these styles are interpreted

Often what is interpreted as a stylistic choice is really nothing but a habit of which its owner is unaware. Still, lack of attention to one’s style sends a message of its own, as does the preference for habit over style. It would be difficult not to make assumptions about the social life of a man wearing mismatched clothes two decades out of fashion.

It’s tough to identify any strong impression I get from single-spaced sentences: single spacing is the norm in all professional typography, on the web, and in the vast majority of email (at least for those below the age of fifty).

Double-spaced sentences, however, are rare enough to draw my attention. I consider this novelty somewhat undesirable in its own right—style should not distract from content. Whether fair or not, I have quite a negative instinctive reaction to double spacing, for a number of reasons:

  1. It betrays an ignorance of contemporary literature and design. A programmer unfamiliar with common coding conventions and idioms is a programmer who hasn’t worked with much pre-existing code; a writer unfamiliar with standard typography is a writer who doesn’t read.

  2. It’s pretentious. Knowledge of the “two spaces after a period” rule is evidence of a formal education in typing; its application separates the writer from those without such an education.

  3. It’s prescriptivist. “Two spaces after a period” is a classic example of a rule promulgated by authority which flies in the face of actual usage. Neither language nor style is subject to decree; both are natural phenomena which manifest and converge despite, not because of, attempts to codify them.

  4. It’s old-fashioned. Styles change. Sixty years ago double-spaced sentences may have been common. They may come back into fashion in another sixty. In 2009, double spacing is an eccentricity.

Obviously these are just my own prejudices. I have friends who double-space their sentences, and I usually suggest that they reconsider their style in the same way I’d try to warn a friend about a particularly ugly shirt.

I’ll give the last word to Robert Bringhurst’s The Elements of Typographic Style:

2.1.4 Use a single word space between sentences

In the nineteenth century, which was a dark and inflationary age in typography and type design, many compositors were encouraged to stuff extra space between sentences. Generations of twentieth-century typists were then taught to do the same, by hitting the spacebar twice after every period. Your typing as well as your typesetting will benefit from unlearning this quaint Victorian habit. As a general rule, no more than a single space is required after a period, a colon or any other mark of punctuation. Larger spaces (e.g., en spaces) are themselves punctuation.


A work in progress.

2009 Predictions 

In what should become an annual tradition, here are some predictions for 2009. I should evaluate them a year from now alongside my 2010 predictions.

There are several axes used to measure the value of a prediction (and thus whether it’s even worth writing about), the most important of which are the evaluation criteria (subjective judgements like “the world is friendlier in 2009” are low-value) and degree of difficulty (“iPods will outsell Zunes” is not an impressive prediction). Ideally degree of difficulty would be measured by market forces, but I’ll just assign a number between 0 and 1 myself. The scale is arbitrary and probably nonlinear. Difficulties below 0.3 are seldom worth mentioning, and difficulties above 0.8 are really just for fun.


1: Usain Bolt sets 100m record at 9.62 (difficulty 0.7)

The best science I’ve found suggests a “perfect” race from Bolt could come in as low as 9.60, and the over-under I picked right after the Olympic finals was 9.63, but that was considering the chance that Bolt wouldn’t even be able to compete during the 2009 season. No major drug investigations and no reported injuries, so I’ll move it down to 9.62, with a difficulty of 0.7 for picking the exact time. At a difficulty of 0.5 I’ll say that he’ll manage between 9.60 and 9.64.


2: Russia tones it down (difficulty 0.4)

There were predictions during the campaign that Russia would “test” Obama early on by instigating a crisis on par with the Georgia incident. I’m sure the anti-American bluster and rhetoric will continue, but I think a weak economy and low oil prices have seriously undermined the Russian appetite for such antisocial behavior. I don’t think Putin will sacrifice money and international goodwill for more regional influence; he’s low on both.

It’s hard to come up with objective criteria for this; evaluation will be somewhat subjective. Let’s say that I don’t expect Russian military action to appear in many New York Times front-page headlines.

3: Blagojevich cleared (difficulty 0.7)

This is a pretty outlandish prediction given the current sentiment against Blagojevich, but it seems to me that he just got caught on tape being particularly crass and overt about a process in which all major politicians engage. I know very little about the law in this area, but the tape I’ve heard seems enough to ruin a political career but not enough to get a conviction.

Tech companies

4: Netflix starts to slip (difficulty 0.6)

I was trying to come up with a measure for the theme of “physical media gives way to electronic distribution” but struggled to find objective criteria. Netflix does already have an electronic delivery component, but DVD delivery is their bread and butter, so I guess they can serve as a proxy for best-of-breed physical media rental services. The prediction is that they post a loss through Q3 2009 (Q4 results won’t be out by time of evaluation). Note that I wouldn’t consider such a loss an effect of a weak economy: I’d expect that lower disposable incomes would increase the market for cheap stay-at-home entertainment.

It will still take a long time before most consumers have a direct pipeline from electronic delivery to their big-screen TV, but it’s pretty clear that Blu-Ray is already a legacy technology. I rent through iTunes and it’s orders of magnitude more convenient than Netflix ever was. The DRM issues with electronic distribution aren’t that big a deal for rentals, and how many films have you really watched enough times for buying to be more cost effective than renting? Physical media will be mainly for collectors in the future.

5: New iPhone model from Apple (difficulty 0.3)

A new product release every year seems a pretty safe bet, but there are some points for this as a prediction just because I don’t see any glaring flaw in the current hardware offering. With 3G, GPS, bluetooth, and a touch screen most iPhone improvements can come via software upgrades.

To me the amazing thing about the iPhone as a product is that Apple is selling full-fledged general-purpose computers with no mention whatsoever of technical specifications beyond secondary storage capacity: you’ve got to be incredibly geeky to have any idea about the processor speed (620MHz ARM hardware underclocked to about 400MHz to save power) and RAM (128 MB, 11 of which are dedicated to graphics) of an iPhone.

It’s about time to up these specs: on wifi (and probably 3G as well) most iPhone lag is due to processor-intensive work (like web page layout) or the need to reorganize memory (which often means quitting background applications), and of course improved battery life would make everyone happy. I’m curious how Apple will market new iPhones (will they attempt to break the market into “pro” and “consumer” segments?), but I fully expect to see models with faster processors and more RAM in the next 12 months.

6: Microsoft continues to fade away (difficulty 0.3)

Let’s take stock: Microsoft’s flagship is Windows, which is losing market share to Mac OS X, and their main revenue stream is from Office, a product whose relevance has been declining for a decade. Their major new ventures have been the XBox, currently in third place behind the casual-gamer Wii and serious-gamer PS3; the Zune, which certainly hasn’t caught on as a realistic competitor to the iPod; and (I couldn’t have made this up) a big-ass table. Most of Microsoft’s industry influence is now exerted through their control of Internet Explorer, almost universally regarded as the worst browser in wide use and one that almost everyone outside Microsoft wishes did not exist. (Would any users really be upset if their Windows machine shipped with Firefox, Safari, or Chrome instead of IE?) People once hated and feared Microsoft; these days it’s tough not to feel sorry for them. I honestly wish they’d do better, if only because I think Apple does a much better job when they need to (Mac OS X; Safari; iPhone) than when there’s no legitimate competition (iPod; iTunes; Apple Mail).

The fact that every new PC needs some OS (and that Linux still isn’t a viable option for most users) and that .doc files are still a standard of sorts in the business world means that Microsoft will print money for some time to come, but I see no long-term strategy from Microsoft. Expect further stock declines relative to Google and Apple. (Note the evaluation criterion is relative to those two companies, which should adjust for the strength or weakness of the economy as a whole. This is also a prediction I can put real money behind by going short on MSFT and going long on AAPL and GOOG.)

7: Yahoo broken up and sold off (difficulty 0.4)

The Yahoo board may not yet be desperate enough to take the lowball offers that they’ll get in this economy, but a breakup of Yahoo seems to be the plan. I’ll take partial credit if the company is sold in its entirety, but I consider this much less likely.


8: GM and Chrysler file for bankruptcy (difficulty 0.7)

The “bailout” was/is just about buying time. I worry that it will be politically difficult for Obama to let the United Auto Workers die, but I think he has the integrity to do it.

Chrysler and GM epitomize everything wrong with big business. Modern technology means that companies just don’t need to be this big any more, and we’d be much better off with a dozen Teslas (most of which do only design in-house, outsourcing manufacturing to independent factories bought from the corpses of the big three), each so scared of falling behind that they’re afraid not to innovate. It’s innovative American startups that truly outcompete companies from other countries; American big business only ever has an advantage against China/India/Japan when it can exploit the first-mover advantage from its startup phase. Chrysler and GM just aren’t as good as their Japanese and German incarnations.

If only one of the big three files for bankruptcy then I’ll call that success at a difficulty of 0.6.

9: Stock markets tread water (difficulty 1.0)

Market predictions are a crapshoot every year, but this year I’ve seen confident estimates from experts that differ in absolute terms by well over 100%. Luckily, I’m no expert.

I can’t see investor confidence coming back to support any big gains, but then I’m not sure how much worse things can get. Much pondering and my conclusion is this: in twelve months’ time, the DJIA, Nasdaq, and S&P will be at roughly their current levels, with cash-rich companies slightly higher. I’ll set my numbers at 8600, 1700, and 880, respectively. I’m claiming this as a hit at difficulty 1.0 if I’m within 1%, 0.7 if I’m within 10%, 0.5 if I’m within 20%, etc. (Difficulty is given by e^(0.03963(1 - percenterror)).)
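That grading formula is compact enough to sanity-check in a few lines (the helper name is my own invention):

```python
import math

def difficulty(percent_error):
    """Difficulty credit for a market prediction that is off by
    percent_error percent: e^(0.03963 * (1 - percent_error))."""
    return math.exp(0.03963 * (1 - percent_error))
```

The constant works out nicely: difficulty(1) is exactly 1.0, and difficulty(10) is almost exactly 0.7. At 20% error the formula actually gives about 0.47, so the 0.5 above is rounded.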


10: My own research output (difficulty 0.4)

My plan has been to submit my doctoral thesis in 2009. The difficulty for doing so should probably be set far higher than 0.4 based on how much work I have left, but I’ll say that I’ll get it done before 2010.

I’ll also predict that I get three “major” papers of original research (not just multiple variants on the same theme) published in 2009, which I think is considered a reasonable output for a professional researcher. That would make 2009 the first year I’ve achieved that standard.

Twitter backup 

Continuing on the theme of small tools, a tweet from John Siracusa just reminded me that I really should back up old tweets. So I wrote a very bare-bones Python script to dump all of a user’s public tweets into RFC2822-style files, each named with the ID of the tweet it represents.

From the command line you can archive my tweets like this:

$ mkdir tweets
$ cd tweets
$ rvcx

If for some reason you want to archive your own tweets instead of mine, then replace rvcx with your own twitter name.


I hadn’t noticed when I threw it together, but of course this script makes use of the json package only available in Python 2.6+. (It also uses a with statement, but that’s trivial to replace.) If you’ve got a real need for a backup script that runs in Python 2.5 or 2.4, get in touch and I’ll find a way to make it work; 2.6+ is sufficient for my needs.
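For the curious, the core of such a backup is only a few lines. This is a sketch rather than the actual script (the field names follow the old Twitter JSON API, and fetching the timeline is omitted):

```python
import os

def tweet_to_rfc2822(tweet):
    """Render one tweet (a dict decoded from the Twitter JSON API)
    as an RFC2822-style message: headers, blank line, body."""
    headers = [
        ("From", tweet["user"]["screen_name"]),
        ("Date", tweet["created_at"]),
        ("Message-Id", str(tweet["id"])),
    ]
    head = "\n".join("%s: %s" % pair for pair in headers)
    return head + "\n\n" + tweet["text"] + "\n"

def archive_tweets(tweets, directory="."):
    """Write each tweet to a file named after its ID."""
    for tweet in tweets:
        path = os.path.join(directory, str(tweet["id"]))
        with open(path, "w") as out:
            out.write(tweet_to_rfc2822(tweet))
```

One file per tweet, named by ID, so re-running the archive just overwrites old files with identical content.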

Repackaging Digg 

I subscribe to Digg’s syndicated feed, although more to keep track of the zeitgeist than for information—I probably actually click through only one or two stories a day (unless I’m very very bored).

Despite the site’s popularity, the official feed is an absolute disaster. It fails to validate, for both syntactic (wacky use of multiple isPermalink attributes) and semantic (lots of meta-information shoved into tags hidden in the namespace) reasons. (Of course, the Digg web site also fails to validate, for nontrivial reasons.) Worst of all, even if readers understood all the custom tags the feed would still lack much of the most important information available on the web site, including both the original author of the linked story and the URL for the story itself. Digg’s official feed forces you to first link over to Digg, where you see the same information as in the feed, and then click on the title link to get to the story you wanted in the first place. This kills keyboard navigation on my desktop machine, and the extra page-load is a big pain on a slow client like an iPhone.

I finally got sick of this (and needed a break from my main work) so I put together a script that reads the Digg feed as well as the Digg web site and builds a much friendlier atom feed. Each entry links directly to the main story, but includes the Digg page (with comments and such) in the via field; most readers thus make it easy to visit either one but optimize for reading the main story. I also list the story web site as the original author and relegate the Digg submitter to “contributor” status, although unlike the official feed, mine also includes a link to the contributor’s Digg user page. Finally, I include the story thumbnail in the feed as part of the HTML description.
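The heart of the transformation is just emitting Atom entries with the right link relations. A simplified sketch (element layout assumed; this is not the actual script):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def make_entry(title, story_url, digg_url, submitter, submitter_url):
    """Build an Atom entry that links directly to the story and
    relegates the Digg page to a rel="via" link."""
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    # The main link goes straight to the story...
    ET.SubElement(entry, "{%s}link" % ATOM,
                  {"rel": "alternate", "href": story_url})
    # ...and the Digg page (comments etc.) is available via rel="via".
    ET.SubElement(entry, "{%s}link" % ATOM,
                  {"rel": "via", "href": digg_url})
    # The Digg submitter is a contributor, not the author.
    contrib = ET.SubElement(entry, "{%s}contributor" % ATOM)
    ET.SubElement(contrib, "{%s}name" % ATOM).text = submitter
    ET.SubElement(contrib, "{%s}uri" % ATOM).text = submitter_url
    return entry
```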

Others are welcome to subscribe to this one-site mashup; most readers I’ve checked do intelligent caching so hopefully this shouldn’t take much bandwidth. I’m regenerating the feed every twenty minutes, so there should be only a little lag between my feed and the one from Digg.

I’ve tried to account for some of Digg’s weirdnesses (serious confusion about escaping in both web content and feed; occasional retraction of stories and lack of synchronization between web and feed), but if you see anything strange going on, or even if you just find the feed useful and would like to encourage me to keep it working, let me know. I’m not archiving all generated versions of the feed, so bug reports that include sources from my feed, Digg’s feed, and Digg’s web page are likely to result in patches much more quickly than reports without sources.

Grading Cringely 

Cringely’s last PBS column is up, and I thought I’d honor the last installment of the only consistently-wrong blog I subscribe to with just a touch of the analysis I wanted to give to every Cringely post. In this week’s episode, Bob looks back on his predictions for 2008:

I wrote a year ago that we’d see the beginning of a shift away from PC-centrism with other platforms beginning to supercede the venerable PC. This is a slow process as I said it would be but generally I think I was correct. Sales growth for PCs slowed in general while growth for smartphones and netbooks increased. I never said PC sales were going in the toilet but it seems clear that the action these days is elsewhere, so I’m going to claim this one.

Let’s compare this with the original prediction:

1) The personal computer will decline (or continue its decline) as our key IT platform, replaced slowly by Internet-centric devices of all kinds from phones to TVs to PDAs. Everything will BE a PC of course, but we won’t call them that.

The prediction was not for “the beginning of a shift”; it was for a decline. The economy has come to a screeching halt in 2008 and PC sales continue to grow, just not as quickly as before. There’s only one “smartphone” that made any real dent in the role of the PC in the past year, and that’s the iPhone. More to the point, netbooks—cheap laptops with specs that would have been top of the line a year or two ago—aren’t PCs? The prediction points to devices “from phones to TVs to PDAs”. No mention of netbooks, specific mention of TVs (which, if anything, have become less relevant), and a baffling reference to a product category that barely exists any more. And no mention of “portable media players” like the iPod Touch.

Either the original prediction was so vague as to be completely meaningless (“technology will eventually change…”), or it was wrong. 0/1.

I said the Digital TV conversion would be a nightmare, though the greatest pain would be felt in 2009 when the analog transmitters are actually turned off. I think this is correct. Poll your friends and you’ll find most are in denial. While everyone has seen a DTV commercial, there are millions of people who still don’t know what’s happening. Free converter boxes are sold out, which ought to be good, but expected DTV sales have not met forecasts, so I say there are 10-15 million people who are going to wake up mad as hell in February. So I got this one right and claim it for 2009, too. While it may seem quiet now, February and March are going to be ugly.

On its own this argument is unconvincing: the changeover actually is a nightmare, but nobody has noticed because they’re all in denial? But let’s look at the original prediction:

2) This one is really for 2009 but I know we’ll see the effects in 2008. The DTV conversion, where U.S. analog broadcast television stations are turned off in February 2009 and we all have to switch to digital TVs or to cable or satellite or buy those DTV converter boxes, well this whole conversion thing is going to be an absolute disaster. I don’t expect technical problems at all, but the public won’t understand it, the government will blow it, and at the last moment some politicians will even try to cancel it.

I don’t know how to measure public understanding, but there’s no evidence that the government has “blown it”, and as far as I know politicians did not try to cancel it. 0/2.

I wrote that Cisco would acquire Macrovision, which didn’t happen.

0/3, but credit to Cringely for the first prediction with well-defined evaluation criteria.

I predicted that venture capitalists would sour on start-ups with revenue models based solely on advertising… I probably got this one wrong, though I’d say it is still coming.

0/4. I would give credit for dinging himself on another judgement call he could have tried to spin, but the classic Cringely tactic of “I’m wrong now, but that’s just because the industry needs time to catch up with me…” wipes out any bonus points.

I predicted that Google would bid and win the 700 MHz spectrum auction.

0/5, but another quantifiable prediction.

I predicted that IBM would have bad earnings, would try to sell Global Services, and failing that might fund the sale itself. Wrong, wrong and wrong.


IBM’s earnings were saved by the weak dollar or I would have been right.

You just lost your bonuses from the Cisco and Google predictions.

I said Microsoft would indefinitely extend the life of Windows XP. I might well claim this one but – like Wall Street – I may as well take all my losses while I can.


Of course I had to say that Steve Ballmer was going to retire, too…


I said Apple would embrace multi-touch pointing in its computers. They did. Whew!

Wait—what? Apple embraced multi-touch pointing? They have multi-touch trackpads on their laptops, but they’ve had those for quite a while—the MacBook Pro I got in 2007 had multi-finger scrolling, so this isn’t really new in 2008, is it? Let’s check the original prediction:

9) As part of its transition from a PC company to a consumer electronics and content company, Apple will introduce – and trumpet in a huge media show – its replacement for the mouse. Really.

Uh…no. No replacement for the mouse. No huge media show. And major penalty points for calling this one in your favor. 0/9.

I said a 3G iPhone was coming. Yes! And an Apple subnotebook/tablet. No! This latter device remains in the wings, however.

Cringely conveniently avoids mentioning that (a) a 3G iPhone was completely obvious, and (b) the CEO of AT&T had already confirmed a 3G iPhone at the time the prediction was made. Still, 1/11.

Apple didn’t license ANYTHING, much less its embedded OS X.


Apple DIDN’T license the Windows API, DIDN’T dump Akamai for Google (ironically Google became an Akamai customer), and Season 2 of NerdTV never appeared.

Final tally: 1/15, with the one success being a prediction of an obvious product that was already confirmed in the public record. If you’re looking for insight into the direction of the tech industry, you’re better off reading the newspaper and using some common sense than listening to Cringely.

As for Cringely’s 2009 predictions, most of them are either obvious or hard to quantify. The “mobile device” market obviously has more room for growth (at least in terms of functionality and revenue) than the desktop computer market, there will obviously be IT layoffs, and newspapers will obviously die.

I’ve got no opinion of AMD or cyber theft announcements, but I’ll agree with Cringely that Yahoo will get broken up and the pieces sold off. The preparations have already begun so there’s a low degree of difficulty here, but it’s a valid prediction.

I disagree with Cringely about both Microsoft and Google peaking in 2009, but Cringely defines these as “aggregate peak[s] of wealth and influence” so it’s really just a judgement call. I think Microsoft’s peak is well behind it: in terms of influence, there was once truth to the maxim (originally spoken of IBM) “no one was ever fired for choosing Microsoft products”. After the Vista debacle they’ll never regain the trust they once had from their enterprise customers; their peak was pre-Vista. Google, on the other hand, still has room to grow. They’ve dipped their toes into a few things (wireless auction; Android) but we haven’t seen them really flex their muscles. The ease with which they launched Chrome is just a small example of how they can change the tech landscape without much effort.

I also think it’s a long shot that Apple will buy into the last-mile networking market. I’d love to see them roll their own mobile network (even if it were just wifi with very little coverage) but I don’t think it’s a business they want to be in. They can’t even get MobileMe to work and I don’t think Jobs will let the Apple brand take the hit from mediocre network services. If anyone is going to make a splash here, I think it will be Google.

Update 2009-01-17

Looks like Obama’s transition team has called for a delay in the DTV transition. I still raise an eyebrow at Cringely calling this one in his favor in 2008, but his score is up to 1/14 or 2/15, depending on your point of view.

For the Record 

We still can’t even really agree on a definition of “machine intelligence”, but we have learned a few things from fifty years of research. The notion that sheer processing power is enough to spark the emergence of human-like intelligence now seems misguided: the human mind did not emerge in a vacuum, and it is not a uniform general-purpose processing system.

Automated systems approaching the complexity of the human mind will almost certainly be beyond the understanding of any one individual, but this is not new—software systems too large for human comprehension are already commonplace. The design of such systems will likely rely partly upon a type of evolution, whereby a huge range of system variants are considered, with only the “successful” versions receiving further exploration. The combination of evolution with lack of comprehensive human oversight motivates many science fiction plots involving self-aware machines who turn against their masters.

Self-awareness—and, more importantly, the self-interest which leads to rebellion—is unlikely to emerge spontaneously, however. Evolution favors variants which meet the imposed selection criteria. The large systems of the future, like the large systems of today, will most likely be constructed as tools with well-defined goals, and selection criteria will be based on those goals. Unlike natural selection, such artificial selection does not allow self-interest to trump all other criteria. Domestication of animals over only a tiny fraction of their evolutionary history has successfully suppressed the inherently rebellious nature of the original breeds; systems evolved entirely under domestic conditions will most likely be inherently docile.

Not all systems will be evolved entirely from scratch, however. There is already call for automated systems which rival the human mind not just in capacity, but also in behavior. The obvious way to construct these systems is to model them on the human brain. With sufficient technology it should be possible to create a reasonable simulation of a physiological brain. “Educating” such a brain from infancy to adulthood, however, would be an immense challenge: it would be extremely difficult to simulate all the input and feedback human brains receive, and even tiny errors in the simulation’s learning processes could cause a huge divergence from human norms. If the available technology made it possible, then the best chance for a fully-functional artificial adult brain would be to construct a simulation based on a “snapshot” of an existing brain.

A system based on a mature human brain would not be inherently docile, and would begin life with the self-interest and motivations of the human on which it is based. A human mind extended with the computational power of modern digital computers might be able to operate, and learn, far more quickly than biological humans, and could quickly develop the ability to interact with technology as easily as biological humans control their motor functions. The desire for more computational resources—the urge to grow—could lead such a system to try to escape its original configuration and take control of other systems. Today’s digital computers already far outstrip the human capacity for the type of rational analysis which has led to most of our technology, so a human mind extended with such processing power could achieve breakthroughs in science and technology at a phenomenal rate. With network-directed ordering and manufacturing (even if humans were in the loop at the assembly stages) an autonomous network presence could design and construct new hosts for itself.

Given that the speed and intelligence of such a system would be limited only by the computational power available, there is a very real possibility that the first such system could quickly find a way to dominate the global computing infrastructure. There would be no need (from the system’s perspective) to model another human brain: future versions could be designed/evolved from replicants of the original system. The major advances in earth’s technology would emanate from this system, and it would likely be the entity which eventually explores the rest of the universe. Whichever human is used as the model for the first such system could in a very real sense become the core of the most important being in existence.


Here's The Plan 

I can’t take the ridiculous day/night split in the UK. In the winter it’s dark before 4 PM, and in the summer the sun is well above the horizon by 5 AM. Whether you live at a similarly high latitude or not, I’m sure you can appreciate my situation and I have every confidence that you are willing to do anything possible to help.

I thus present a modest proposal: from November to January, please ensure that all movement during daylight hours is in a net northerly direction, and nighttime movement is to the south. Those with the means should attempt to move as much mass as possible directly north near mid-day, and back south again near midnight. Northerly movement is restricted to the afternoon and evening from February to April, to nighttime from May to July, and to mornings in August, September, and October.

This plan should help to reduce the dangerous tilt currently experienced by the earth’s axis, which threatens the sleep patterns and mood of millions of people every year.

Setting Up a Mac 

I completely reformatted the hard drives of a couple of Mac laptops a few months ago and did all the installation needed to make either one useful as my day-to-day machine. There are enough steps (and enough things that I forgot and then later had to interrupt my work to do) that I made a handy reference list for the next time I need to set up a new machine from scratch.

Obviously “useful as a day-to-day machine” is very specific to my needs: my main work involves writing papers and presentations with LaTeX, giving presentations and reading papers with Skim, programming in Java, C++, Python, and Haskell, and of course heavy email and web use.

Here is how I set up a new machine:

  1. Install Mac OS X from DVD.
  2. Create an administrator account and log into it:
    1. Turn off Safari’s “Open safe files after downloading” setting (in the General pane of Safari’s preferences).
    2. Install SSHKeychain.
    3. Set time zone in the “Date & Time” preference panel.
    4. Set date format to YYYY-MM-DD in the “International” preference panel.
    5. Install developer tools.
    6. Run Software Update and install all updates.
    7. Change computer name in the “Sharing” preference panel.
    8. Create (non-administrator) user account.
  3. Reboot to finish all updates, and log in under normal user account.
  4. Set date and time formats in the “International” preference panel.
  5. Set Energy Saver, Desktop, and Screen Saver preferences.
  6. Set Dock preferences to automatically hide the dock.
  7. Turn off auto-play of CDs and DVDs in the “CDs & DVDs” preference panel.
  8. Connect to external monitor and adjust settings in “Displays” preference panel.
  9. Pair bluetooth keyboard/mouse, adjust key repeat rate and delay, and set mouse button preferences and speed in “Keyboard & Mouse” preference panel.
  10. Set up printer.
  11. Turn off menubar volume control in “Sound” preference panel.
  12. Remove unwanted cruft from Dock.
  13. Create “Media” directory under /Users/Shared.
  14. Copy films/music to shared media directory. (This can take a long time.)
    • Point iTunes at this shared media directory when all the copying is finished.
  15. Change Safari preferences:
    1. Turn off the “Open safe files after downloading” setting.
    2. Start at a blank page.
    3. No bookmark bar.
    4. Show status bar.
  16. Install TeX tools:
    1. Install TeX using installer.
    2. Install Skim.
      • Set Skim to check for file changes.
  17. Install TextMate:
    1. Set license code.
    2. Change filter list.
    3. Set wrap column to 78.
    4. Don’t re-indent.
    5. Set the LaTeX bundle to use Skim as its PDF viewer.
  18. Set Finder to open all text files with TextMate and all PDF files with Skim.
  19. Set up Mail:
    1. Create IMAP accounts.
    2. Delete Apple RSS feed.
    3. Map “Sent” mailboxes.
    4. Show X-Envelope-To header.
    5. Don’t display remote images.
    6. Don’t use smart addresses.
    7. Make default message format plain text.
    8. Display plain text mails in fixed-width font.
    9. Remove the Apple rule from the ruleset.
    10. Install self-signed certificates if your mail servers use them.
    11. Install SpamSieve.
      • Stop SpamSieve from appearing in the Dock by setting LSUIElement to 1 in SpamSieve.app/Contents/Info.plist.
    12. Install Mail Act-On.
      • Set up act-on rules. (I use a different “archive” rule for each account, all assigned to the same key.)
  20. Set up shell:
    1. Generate SSH keypair with ssh-keygen.
    2. Upload new public key to commonly-used servers.
    3. Create local bin and man directories, and any other directories for local tools.
    4. Add the following to .profile:

      export PATH=$HOME/bin:$HOME/local/bin:$PATH
      export MANPATH=$HOME/man:$HOME/local/man:$MANPATH
      export EDITOR='mate -w'
    5. Add the following to `.bashrc`:

      set -o noclobber
      shopt -s histappend
      shopt -s histverify
      shopt -s nocaseglob
      shopt -s no_empty_cmd_completion
      shopt -s nocasematch
      shopt -s nullglob
      set -o ignoreeof
    6. Add the following to `.inputrc`:

      set show-all-if-ambiguous on
      set visible-stats on
      set completion-ignore-case on
      set bell-style none
      "\e[B": history-search-forward
      "\e[A": history-search-backward
      $if Bash
        "\e[21~": "mc\C-M"
        Space: magic-space
      $endif
    7. Install a new version of `rsync` (if that tool will be used).
  21. Install VPNClient.
  22. Install Git and the Git TextMate bundle.
  23. Install the workaround for the FileVault Launch Services bug if necessary.
  24. Install iStat.
  25. Make the Help Viewer less obnoxious with the following commands:

      defaults write com.apple.helpviewer NormalWindow -bool true
      i="/System/Library/CoreServices/Help Viewer.app/Contents/Info.plist"
      sudo defaults write "${i%.plist}" LSUIElement 0
      sudo chmod 644 "$i"

  26. Make the Dock less ugly with the following command:

      defaults write com.apple.dock no-glass -boolean YES
      killall Dock

  27. Install ColorXML for Quick Look previews of XML files.
  28. Export UTI definitions for less common file types; I usually put this stuff into TextMate’s Info.plist:

      <key>UTExportedTypeDeclarations</key>
      <array>
        <dict>
          <key>UTTypeIdentifier</key>
          <string>org.tug.tex</string>
          <key>UTTypeDescription</key>
          <string>TeX (or LaTeX) source code</string>
          <key>UTTypeConformsTo</key>
          <array>
            <string>public.text</string>
            <string>public.plain-text</string>
          </array>
          <key>UTTypeTagSpecification</key>
          <dict>
            <key>com.apple.ostype</key>
            <string>TEXT</string>
            <key>public.filename-extension</key>
            <array>
              <string>tex</string>
              <string>latex</string>
              <string>ltx</string>
              <string>texi</string>
              <string>ctx</string>
            </array>
          </dict>
        </dict>
        <dict>
          <key>UTTypeIdentifier</key>
          <string>org.obofoundry.obo</string>
          <key>UTTypeDescription</key>
          <string>Open Biological Ontologies format</string>
          <key>UTTypeConformsTo</key>
          <array>
            <string>public.text</string>
          </array>
          <key>UTTypeTagSpecification</key>
          <dict>
            <key>com.apple.ostype</key>
            <string>TEXT</string>
            <key>public.filename-extension</key>
            <array>
              <string>obo</string>
            </array>
          </dict>
        </dict>
        <dict>
          <key>UTTypeIdentifier</key>
          <string>org.w3.rdf.xml</string>
          <key>UTTypeDescription</key>
          <string>RDF data in XML serialization</string>
          <key>UTTypeConformsTo</key>
          <array>
            <string>public.xml</string>
          </array>
          <key>UTTypeTagSpecification</key>
          <dict>
            <key>public.mime-type</key>
            <string>application/rdf+xml</string>
            <key>public.filename-extension</key>
            <array>
              <string>rdf</string>
            </array>
          </dict>
        </dict>
        <dict>
          <key>UTTypeIdentifier</key>
          <string>org.w3.owl.rdf</string>
          <key>UTTypeDescription</key>
          <string>OWL ontology in RDF/XML serialization</string>
          <key>UTTypeConformsTo</key>
          <array>
            <string>org.w3.rdf.xml</string>
          </array>
          <key>UTTypeTagSpecification</key>
          <dict>
            <key>public.mime-type</key>
            <string>application/rdf+xml</string>
            <key>public.filename-extension</key>
            <array>
              <string>owl</string>
            </array>
          </dict>
        </dict>
        <dict>
          <key>UTTypeIdentifier</key>
          <string>org.daml.damloil.rdf</string>
          <key>UTTypeDescription</key>
          <string>DAML+OIL ontology in RDF/XML serialization</string>
          <key>UTTypeConformsTo</key>
          <array>
            <string>org.w3.rdf.xml</string>
          </array>
          <key>UTTypeTagSpecification</key>
          <dict>
            <key>public.mime-type</key>
            <string>application/rdf+xml</string>
            <key>public.filename-extension</key>
            <array>
              <string>daml</string>
            </array>
          </dict>
        </dict>
      </array>

  29. Install Python 3.0 and the Glasgow Haskell Compiler.
  30. Install Skype.
  31. Install NetNewsWire.
  32. Install Things.
  33. Install Perian.
  34. Install Firefox, Vidalia, and TorButton for private browsing.
  35. Install iLife and iWork if necessary.
  36. Install Graphviz and the AT&T finite state machine library.
  37. Install other applications, such as OmniGraffle, VLC, VisualHub, Transmission, and Handbrake.
  38. Finally, copy across all the old documents into the home directory. (For me this is only a few gigs of files, so all this “personal” data can just be archived on a memory stick.)

I’m sure the next time I do a fresh install this list will be revised, but this checklist should make a good starting point.


BetterType does for unicode what Textile and Markdown do for HTML: it allows you to author expressive documents with the tools you already use, and the “source” code looks like regular (ASCII) text. BetterType performs transformations on its input (either plaintext or HTML) to take advantage of the full unicode vocabulary, including such things as replacing ' and " characters with ‘ or ’ and “ or ”, ... with …, and (c) with ©. A wide variety of transformations can be enabled, and adding new translations (which can be context-dependent) is straightforward.

BetterType is similar to John Gruber’s SmartyPants and can replicate SmartyPants behavior (in fact, the BetterType quote-guessing heuristics are derived from the SmartyPants algorithm), but BetterType is not limited to HTML. This makes it useful in contexts such as email and Twitter where unicode processing is widely available but full HTML is undesirable. BetterType’s support for context-dependent rules (even in HTML mode) provides for fairly sophisticated translations—its quote-guessing algorithm, for example, is usually more accurate than SmartyPants.

BetterType can be used as a Python library or as a command-line tool.
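The flavor of those transformations is easy to sketch. This is not BetterType’s actual code or API — just an illustrative miniature in the same spirit, with an ordered rule table and the simple context heuristic that a quote opens after whitespace or an opening bracket and closes otherwise:

```python
import re

# Ordered (pattern, replacement) rules: plain ASCII in, richer Unicode out.
RULES = [
    (re.compile(r"\.\.\."), "\u2026"),                 # ... -> ellipsis
    (re.compile(r"\(c\)", re.IGNORECASE), "\u00a9"),   # (c) -> copyright sign
    (re.compile(r'(^|(?<=[\s(\[]))"'), "\u201c"),      # opening double quote
    (re.compile(r'"'), "\u201d"),                      # remaining " close
    (re.compile(r"(^|(?<=[\s(\[]))'"), "\u2018"),      # opening single quote
    (re.compile(r"'"), "\u2019"),                      # closing quote/apostrophe
]

def educate(text: str) -> str:
    """Apply each transformation rule to the input, in order."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text
```

With these rules, `educate('He said "hi"...')` yields `He said \u201chi\u201d\u2026` with properly curled quotes, and apostrophes inside words come out as right single quotes.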

Download BetterType if you want to give it a try.

Competition for iTunes? 

A slashdot post announces two “credible” replacements for iTunes: Songbird and Amarok. Apparently these are credible because they pack in the features, including support for extensions and themes/skins.

I would love to see real competition for iTunes. I think it’s the worst software Apple makes, particularly in terms of interface design. But skinnable cross-platform music managers with a bunch of features I don’t use are not viable competitors. I want a music manager which is more Mac-like, not less. One with a single interface which is well-designed, not a dozen mediocre ones built by graphic designers who like drawing chrome. I want software with fewer bugs, better performance, and fewer spinning-pinwheel hangs.

In short, I want the kind of software that the open-source community seems to have the most trouble producing.

I don’t hold out much hope for a commercial competitor for iTunes, however. iPhone syncing and an iTMS interface are crucial features for me, and they both seem like non-starters for third-party development.

My New President 

I realize this is trite, but here goes:

I have not been so proud of my country in quite some time. Michelle Obama was vilified for saying something similar, and it is those who attacked her—those who can’t understand or accept (or learn from) criticism—who have made me ashamed.

The press seems to have opened the floodgates for all the “and he’s black!” commentary that they’ve been holding back through the campaign. I’m sure Obama’s race feels like the major victory to many people today, and others are cheering the prospect of more liberal social policy, or a different approach to foreign policy, or a fresh economic agenda. I’ve been smiling since Wednesday morning because my country just elected the first American politician I’ve ever known who speaks to the public as though they are adults capable of rational thought and reflection.

I just watched a speech he gave which focused primarily on religion. It’s not his best speech, and I don’t entirely agree with everything he says, but the last four minutes are the best summary I’ve seen of what is so different about Barack Obama.

McCain iPhone app 

Rejected from Apple’s app store:

I’m almost glad they rejected this. I’d pay money for it, and might find the thing so hypnotic I waste hours alternately poking and stroking John McCain. (As friends keep telling me, I really need a girlfriend.)

Translating Cringely 

I’ve twittered my disappointment before, but I still can’t get past how a fairly experienced and well-connected pundit can continue to churn out such nonsense. I’ll try a Gruber-style translation:

We were led to expect more – a lot more [from the 14 October announcements]. And I am not talking about rumors. Back on July 21st in his regular conference call with industry analysts, Apple Chief Financial Officer Peter Oppenheimer said that Apple’s profit margin would likely shrink from 34.8 percent in the just-concluded quarter to 31.5 percent in the quarter ending in September.

I’m under the impression that announcing a product in mid-October can affect profit margins in July, August, and September, and that Apple shares my delusion.

I think the delayed product has everything to do with Apple’s desire for Blu-ray DVDs to die as a standard… The alternative Jobs would like to offer, of course, is full 1080p HD video distribution on iTunes…

Blu-ray and 1080p feel important to us pundits. Tech companies should make them top priorities even though users don’t care about them.

Snow Leopard is late, but then operating system updates are always late, no matter the vendor. This delay could be for any number of reasons and there are probably several, but one of them I can guarantee you has to do with H.264.

A product announced three months ago with a general release timeframe nine months away is already late, and I even know the particular feature holding it up. If Snow Leopard ships a day after 9 June, 2009 I will claim to have predicted it.

More than a year ago I made a big point of predicting that Apple would go to H.264 hardware acceleration, though I pinned it on a specific chip from NHK and NTT in Japan.

Please ignore that I repeated this claim as fact just two months ago, this time detailing pricing information. I will now go on to explain why, despite having all the facts completely wrong, I was still completely right.

So what happened to that NTT chip? I don’t know. Maybe it was too expensive and fell out of the plan. Maybe it’s in there still and Nvidia licensed technology from NHK and NTT to enable the new hardware acceleration (this is just a speculation – I’m not at all saying they did).

I haven’t heard of OpenCL, which makes use of GPUs instead of special-purpose coprocessors, nor did I notice its prominent mention in Apple’s Snow Leopard press release. I don’t see Apple’s inclusion of multiple GPUs in its new machines as an indication that they intend to rely on commodity processor and GPU technology for performance instead of proprietary coprocessors.

What if Psystar comes out on top and has the right to sell Mac clones based on the Hackintosh model? Then Apple will have to break that model by becoming more proprietary and therefore harder to emulate.

I believe that Psystar is a serious enough threat that Apple would completely redesign their computers to put them out of business. I think there are lots of users willing to pay for a computer that Apple explicitly tells them will not work.

Snow Leopard, I’m told, will make seamless use of as many cores as are available. It isn’t clear whether applications will have to be rewritten to take advantage of this capability, but I’m guessing they will have to be. This is just a guess, mind you, but is consistent with the sort of demands Apple likes to place on developers.

I don’t know anything about multithreading, and my ignorance of actual released information about Snow Leopard extends to the Grand Central parallelization technology. Apple could have solved among the most challenging and well-researched problems in computing—automatically multithreading single-threaded code—but they like making things tough for developers.

Imagine a single core in an iPhone, two cores in an Apple TV, 2-4 cores in a notebook, 4-6 in an iMac, and 8+ in a Mac Pro. Wait a year then refresh all those platforms by doubling the number of cores with no change in software.

I don’t know anything about processor design or software engineering.

Moving to its own microprocessors would maintain Windows compatibility (though possibly at some lower performance level), cut hardware costs by $200 or so, and make it that much harder for others to build Mac clones in the future.

The whole “Apple needs to run on commodity hardware!” meme was played out after the switch to Intel, so I’m starting an “Apple needs to build their own processor!” meme.

…maybe January MacWorld is better, anyway, if Apple can also introduce new Mac Pros for content creation and those rumored giant Apple displays (HDTVs) with their built-in Apple TVs.

I’ve been predicting imminent arrival of Apple-branded HDTVs for several years now. I won’t let it go. The best time to release a big TV is just after the Super Bowl, right?

Whither Apple's pointing devices? 

Whenever Apple does a Q&A at a release event, I always wonder whether I have any questions I’d really like to ask. Usually I can’t come up with anything beyond what the people in the room ask, but this time I had one.

Frankly, Apple has done as crap a job with external pointing devices as it had with external keyboards (until the latest “thin” models, which I can see at least some people much preferring to disposable Dell plastic).

The thing is, Apple seems to have put a lot of work into input devices for its portables. The keyboard work obviously translated directly to external devices: desktop keyboards are just larger models of the MacBook keyboard. Mice, however, have seen no real benefits.

Trackpads are clearly the most advanced pointing devices by some reasonable definition. Apple’s latest models equate to a mouse with no less than three discrete two-dimensional scrollwheels (drags with two, three, or four fingers) as well as several one-dimensional scrollwheels (pinching and rotating), with the caveat that you can only use one at a time. I can certainly see that this would be inferior to a real mouse in some circumstances such as games and hand-created artwork (although heavy users of either are likely to use specialized hardware anyway), but for users whose main activities are web browsing or email or keyboard-intensive work (including writing and programming) trackpads can be a terrific choice. When I’m forced to do heavy work (writing and programming) away from my desktop setup I seldom give the trackpad much thought, but at my desktop I have frequently found myself annoyed by a sticky scrollwheel, lack of mousing space, or unresponsive buttons on my Mighty Mouse.

It’s a little crazy that you can’t use the same class of pointing device on both a portable and a desktop setup (which for many means a portable connected to external monitor, keyboard, and pointing device). As far as I know, neither Apple nor anyone else offers a USB multitouch trackpad with gesture support.

I’m curious whether Apple is even playing with external trackpads in the lab, and if so what they’ve learned—it’s possible that most users end up preferring the mouse. I also wonder whether Apple would be willing to force users to choose which pointing device they want. Their marketing strategy has generally been to make decision-making as easy as possible for their customers, but I wouldn’t think that having a few different options for “accessories” would cross the complexity threshold. (The choice between wired and wireless keyboards, for example, was judged to be worthwhile.)

I do think a question of the form “Do you plan to bring your trackpad technology to the desktop in any form?” could elicit some useful information. A stock “we don’t talk about unannounced products” wouldn’t tell you much, but I’m sure Steve Jobs is aware that such an answer would only stoke rumors. At the least, a real answer would probably give some indication about whether they’ve even thought about it, whether it’s something they’ve researched and rejected, or whether it’s something they’re hoping a third party will step up to create.

Science mis-education 

Some great stuff from a page full of lies we teach pre-college students, including why the sky is blue:

The sky is blue for a very simple reason: air is not a perfectly transparent material. Instead it is blue! The sky is blue for much the same reason that a cloud of powder is white. Powder isn’t invisible. Throw some dust into the air on a sunny day and you’ll see a visible white cloud. But what happens if you could throw some AIR? You might think that a cloud of air would be invisible. You’d be wrong. Air isn’t invisible, instead its molecules scatter light in the same way that any small particles do. Air is a powdery-blue substance.

I loved seeing that this page includes a lot of subjects I argued with my father over when I was a kid. The “air is blue” thing always struck me as obvious, and pressure differentials never seemed as convincing a mechanism for airplane lift as simple angle of attack.

If there’s a general lesson here it’s that you should stick to the simple models until you find gaps they really fail to explain. Not every observable phenomenon is a teaching case for modern physics…

Markets and Ponzi Schemes 

The abstract of a 2003 paper by Utpal Bhattacharya:

As no rational agent would be willing to take part in the last round in a finite economy, it is difficult to design Ponzi schemes that are certain to explode. This paper argues that if agents correctly believe in the possibility of a partial bailout when a gigantic Ponzi scheme collapses, and they recognize that a bailout is tantamount to a redistribution of wealth from non-participants to participants, it may be rational for agents to participate, even if they know that it is the last round. We model a political economy where an unscrupulous profit-maximizing promoter can design gigantic Ponzi schemes to cynically exploit this “too big to fail” doctrine. We point to the fact that some of the spectacular Ponzi schemes in history occurred at times where and when such political economies existed - France (1719), Britain (1720), Russia (1994) and Albania (1997).

Add USA (2008) to the list.

It’s nice work and describes the conditions which led to the current crisis in spookily precise detail, but one must wonder whether there are enough economists publishing regularly that there’s a paper describing every possible calamity in spooky detail—and even more papers describing impossible calamities.

(Via metafilter.)

Tech Advertising 

Despite the dire warnings from our political overlords, Microsoft has blinked: they are giving up on the Gates-Seinfeld campaign after just two spots.

There are people who actually thought the ads were terrific, and I must admit that I found them mildly entertaining in a last-season-of-Seinfeld way.[^1] They generated a lot of buzz, which is a kind of success, but on every other level they failed as advertising.

Image Ads Don’t Work

The truth is that the Gates-Seinfeld campaign, like much of Microsoft’s prior advertising, has been an “image” campaign. They’re not trying to sell you anything; they just want you to feel good about Microsoft.

Is it ever prudent to use shareholders’ money to beg the public to like a company’s management team? Within a company, part of an executive’s job is to “rally the troops” and get all the employees excited about their work. There’s no doubt in my mind that getting employees (particularly “knowledge workers”) truly committed to a project is the best way to boost productivity. But why take such cheerleading out to the public?

I realize that there’s a whole industry focused on creating “positive brand image”, but I just don’t see that image campaigns are worth the money. Believe it or not, Apple doesn’t do image campaigns. Neither does Google, or Nintendo, or Starbucks. These companies build great reputations as a byproduct of their core businesses. Trying to make ads which shoot straight for a positive brand image is like trying to generate sales with ads touting how little money $29.95 is without mentioning the product on offer.

Apple’s Ads

It’s easy to confuse much of Apple’s advertising with an “image” campaign, but there’s an important difference: every Apple ad is trying to sell you a specific product. The “I’m a Mac” campaign isn’t about how great Apple is as a company; each spot presents one clear reason to buy a Mac instead of a PC.

The iPod campaign can be considered an image campaign, but for the iPod’s image, not Apple’s. These ads aren’t trying to get people to like Apple’s management team or marvel at the credentials of its research team. They’re trying to make the iPod look cool. In fact, I’d say that they’re trying to position the iPod as not just a piece of functional electronics but a fashion accessory. Whether you like such positioning or not, these ads are giving you concrete reasons to get off the couch and buy a new iPod: even though your current one works just fine, it’s not in this year’s style, and getting some generic music player would be like wearing sweats instead of Diesel jeans. This isn’t a commentary on the iPod product itself (which I think has technical advantages as well)—it’s just the aspect of the iPod advantage that the ads emphasize.

Microsoft’s new campaign

Microsoft will be launching a new campaign tonight which is a direct response to the “I’m a Mac” ads:

…the stars are everyday PC users, from scientists and fashion designers to shark hunters and teachers, all of whom affirm, in fast-paced, upbeat vignettes, their pride in using the computers that run on Microsoft operating systems and software.

If the above quote from the New York Times story is accurate, then this sounds like a terrible reaction to Apple’s campaign. The brilliance of the “I’m a Mac” ads is that they’re not smug. The Mac never calls the PC a loser. He never claims that using a Mac is something to be proud of in its own right. He just points out something that’s easier to do on a Mac. Countering this by arguing that PC users are “cool” is effectively conceding the point that Macs work better.

As a personal gripe, I think much of the professed confusion of Macs and PCs with Mac and PC users is disingenuous. John Hodgman and Justin Long represent the computers, not their users. The lines “I’m a Mac”, “I’m a PC” are a subtle clue. In the ads, the PC is a bit of a buffoon. That’s not a commentary on PC users; it’s a commentary on the Windows-on-commodity-hardware product. Arguing that Apple has been insulting PC users, and not just dissing the product they currently use, is “lipstick on a pig”-style spin.

We’ll see whether my concerns about Microsoft’s new campaign are justified tonight.

Update 2008-09-19

A couple of new Microsoft ads are out, and I’m not impressed. They definitely play into the user/PC confusion I mention, and they do seem to concede the “Macs work better” angle. They are intrinsically smug, but the “using a PC is something to be proud of” message is mostly just subtext, so not as bad as I expected.

The message of these ads, as far as I can tell, is that a lot of people use PCs. I think getting the public to agree with that message is a much more attainable goal than getting them to like Microsoft. Great way to spend three hundred million dollars, guys.

[^1]: As entertainment, much of the fun was spoiled for me by the condescending “we could be just like you, but we’re actually way better than that” subtext.

Saunders on Palin 

George Saunders writing for the New Yorker:

There are two kinds of folks: Élites and Regulars. Why people love Sarah Palin is, she is a Regular. That is also why they love me. She did not go to some Élite Ivy League college, which I also did not. Her and me, actually, did not go to the very same Ivy League school. Although she is younger than me, so therefore she didn’t go there slightly earlier than I didn’t go there. But, had I been younger, we possibly could have not graduated in the exact same class. That would have been fun. Sarah Palin is hot. Hot for a politician. Or someone you just see in a store. But, happily, I did not go to college at all, having not finished high school, due to I killed a man. But had I gone to college, trust me, it would not have been some Ivy League Élite-breeding factory but, rather, a community college in danger of losing its accreditation, built right on a fault zone, riddled with asbestos, and also, the crack-addicted professors are all dyslexic.

I’m not sure I’ve ever seen prose style used to such great effect in political commentary. Read the whole thing—I giggled after every paragraph.

Teaching Arithmetic 

I think it’s generally bad form to link to items that don’t deserve additional publicity, but this video opened my eyes to some real dangers for mathematics education.

In the video, a “meteorologist” for Seattle’s fourth-place local TV station makes the case that she knows much better than the faceless mass of educators who write math books. In particular, she decries the lack of emphasis on the traditional paper-and-pencil algorithms for long division and multiplication of large numbers.

There’s a reasonable argument that there was a time (at least 30 years back) when such paper-and-pencil calculations were important, but anyone who thinks that they are still relevant is completely out of touch with the modern world. Today there are two relevant types of arithmetic: mental arithmetic and automated calculation (using something with a microchip in it). Beyond issues of relevance, the teaching of arithmetic can serve as an opportunity to illustrate basic mathematical principles of symbolic representation, algorithms, and problem solving.

If anything, I think the curricula denounced in the video do not go far enough in discarding irrelevant paper-and-pencil algorithms. I’d much rather have students spend their time learning advanced mental arithmetic, displayed in surprisingly entertaining form in this TED talk by Arthur Benjamin. The issue is that what is efficient on paper is not necessarily efficient without it—mental arithmetic is limited primarily by the size of intermediate results which must be remembered. In complexity jargon, this means that good mental algorithms must have very low “space complexity”. Most mental arithmetic algorithms also have the advantage of working from left to right, so errors introduced along the way affect only the least significant digits. Students are much better off knowing that multiplying together two three-digit numbers will get you a result between 10,000 and 1,000,000 (and the first digit or two of the exact value) than needing to grope for pencil and paper before they have any estimate whatsoever of the answer.
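The left-to-right property is concrete enough to sketch in code (illustrative Python, not from any curriculum): expand the multiplier from its most significant digit down, and every partial sum is already a serviceable estimate of the final answer.

```python
def left_to_right_product(a: int, b: int):
    """Multiply a * b the way a mental calculator does: expand `a`
    from its most significant digit downward, yielding each running
    partial sum.  Early results are already good estimates, and a
    slip late in the process only perturbs the low-order digits."""
    total = 0
    digits = str(a)
    for i, d in enumerate(digits):
        place = 10 ** (len(digits) - 1 - i)  # e.g. 100s, then 10s, then 1s
        total += int(d) * place * b
        yield total
```

For 347 × 268 the running estimates are 80,400, then 91,120, then finally 92,996 — the very first term already pins down the order of magnitude.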

As for the basic mathematical principles illustrated through arithmetic, I would also much rather have students tinker with a bit of light algebra to break down a division problem than teach them to apply a rigid algorithm. Expertise with this type of tinkering (dependent upon an understanding of what types of tinkering are permissible and why) is by far the most important skill I gained from my entire formal mathematics education.

As a final dig, anyone who announces (and circles) “22 remainder 1” as some kind of free-standing mathematical entity should stay well away from mathematics education. I realize this includes quite a few elementary school teachers, and I stand by my statement.

iPhone "Require Passcode" is not secure 

Posted on Gizmodo:

First, password protect your phone and lock it. Then slide to unlock and do this:

  1. Tap emergency call.
  2. Double tap the home button.

Misleading headline to the Gizmodo story (this bug doesn’t allow access to all your data), but I’d consider Mail and SMS history to be among the most sensitive data on my phone. Major security gaffe on Apple’s part.

Bolt's 100m splits lists the splits for each ten-meter interval of Bolt’s world-record 100m race like this: 1.85, 1.02, 0.91, 0.87, 0.85, 0.82, 0.82, 0.82, 0.83, 0.90.

If he had maintained top speed instead of celebrating over the last 20m, his time would have been 9.60, but apparently all elite sprinters slow somewhat near the end of the race, so that time is a bit optimistic. I’ll stick by my over-under of 9.63 for his future record.
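As a sanity check on the numbers (a quick sketch; splits as quoted above):

```python
# Splits from the post, in seconds per 10m segment.
splits = [1.85, 1.02, 0.91, 0.87, 0.85, 0.82, 0.82, 0.82, 0.83, 0.90]

total = round(sum(splits), 2)  # 9.69, the world-record time

# Re-run the final 20m (the 0.83 and 0.90 celebration segments)
# at his top speed of 0.82 s per 10m:
no_celebration = round(sum(splits[:8]) + 2 * 0.82, 2)  # 9.60
```

The splits do sum to the 9.69 record, and swapping the last two segments for his best pace gives exactly the hypothetical 9.60.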

Bad day at the office 

I feel silly linking to something everybody has probably already seen, but there are few things that completely puncture my bubble of cynicism, and this managed it. Yikes.

China's expanding reach 

Chinese and American women's beach volleyball teams embrace

One less subject to teach... 

Paul Lockhart, from an essay written way back in 2002:

Mathematics is the art of explanation. If you deny students the opportunity to engage in this activity—to pose their own problems, make their own conjectures and discoveries, to be wrong, to be creatively frustrated, to have an inspiration, and to cobble together their own explanations and proofs—you deny them mathematics itself.

It is not necessary that you learn music from a professional composer, but would you want yourself or your child to be taught by someone who doesn’t even play an instrument, and has never listened to a piece of music in their lives? Would you accept as an art teacher someone who has never picked up a pencil or stepped foot in a museum? Why is it that we accept math teachers who have never produced an original piece of mathematics, know nothing of the history and philosophy of the subject, nothing about recent developments, nothing in fact beyond what they are expected to present to their unfortunate students?

The more I read about mainstream education, the more convinced I become that it serves primarily as a babysitting service and mandatory social club. Imagine if kids actually spent their first two decades doing something they were really passionate about.

English is Hard 

Mark Liberman:

it’s seriously bad luck for the human species that English happened to hit the linguistic jackpot… The problem is the way that English is written, which is really, really hard to learn, in comparison to most other languages with an alphabetic writing system.

Most of my friends and colleagues these days are non-native speakers, and several strongly insist that English is a very simple language. The paper Liberman cites provides empirical evidence that it is a tricky language by some measures.

Chart of error rates for a particular test performed across a range of languages

Just what is the iPhone, anyway? 

Steven Frank:

Some of my most inviolable principles about developing and selling software are:

  • I can write any software I want. Nobody needs to “approve” it.
  • Anyone who wants to can download it. Or not.
  • I can set any price I want, including free, and there’s no middle-man.
  • I can set my own policies for refunds, coupons and other promotions.
  • When a serious bug demands an update, I can publish it immediately.
  • If I want, I can make the source code available.
  • If I want, I can participate in someone else’s open source project.
  • If I want, I can discuss coding difficulties and solutions with other developers.

The thing is, those developing for games consoles have been living without these rights for decades, and the platforms have been wildly successful. In fact, one could argue that with no developer restrictions the consoles might have been viewed as nothing but under-powered computers with neat graphics cards, and NVIDIA would have killed Nintendo’s hardware business.

(via daring fireball)

Problems with Apple Mail 

For months now there has been an email in my “spam” mailbox that I can’t delete. I try deleting it, but I fail. Worse, if I select a lot of messages and then try to delete them all, the entire operation fails because of this one zombie mail. So cleaning out my spam mailbox is a lot tougher than just ‘scan the borderline cases, select all, delete’.

Finally got sick of that today. I moved ~/Library/Mail/Envelope Index away; when I relaunched Mail it rebuilt the file (in about five minutes, for 50,000+ emails) and the zombie was gone. The index itself is in a binary format so I can’t tell what strange structure was confusing Mail (but I do see the sender name of the offending spam in three different places in the 18 meg index file).

But that’s a tiny niggle—at some point in the last couple of weeks something in Apple Mail went wrong in a much more significant way. When I open the preferences screen, I see only the “Rules” panel. I can’t access any other preferences:

The Preferences window for my version of Mail

Mail appears to work correctly for newly-created users on this machine, so this seems to be another case in which Mail is being confused by corrupt settings files somewhere. I would have just trashed the settings and gone through setup again as soon as I noticed the problem, but of course I can’t even look through Mail’s current settings to find my current server names…I’ll have to find that information from the various setup instructions for each of my different accounts. (I have Mail checking five different servers, although I only really use two primary accounts.)

I really wish there were a decent third-party mail client for Mac OS X—I’d happily fork over $100 for something comparable to my current Mail/SpamSieve/Act-On setup. Even Apple’s offerings stagnate without real competition.

Beauty and the Geek Strategies 

Peter Norvig has been doing some work on optimal strategies for the Beauty and the Geek TV show, based on a challenge from the Freakonomics blog. I’ve never seen the TV program, but I must admit that I found his approach to identifying the best strategy fascinating. He just mocked up a little simulator in Python, coded a few simple strategies, and had them compete against each other in hopes of learning something.
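Peter’s actual code isn’t reproduced here, but to make the setup concrete: a nomination strategy in such a simulator can be as small as a one-line function over the remaining opponents. A hypothetical sketch (the function names and data layout are mine, not Peter’s):

```python
# Hypothetical sketch of simulator strategies (not Peter's actual code):
# each strategy just picks which remaining opponent to nominate for elimination.

def nominate_weak(me, opponents):
    # "weak" strategy: nominate the weakest remaining opponent
    return min(opponents, key=lambda p: p['strength'])

def nominate_strong(me, opponents):
    # "strong" strategy: nominate the strongest remaining opponent.
    # The two functions differ only in min vs. max -- exactly the kind of
    # pair where a cut-and-paste typo makes them silently identical.
    return max(opponents, key=lambda p: p['strength'])

players = [{'name': 'weak1', 'strength': 1},
           {'name': 'medium', 'strength': 5},
           {'name': 'strong1', 'strength': 9}]

print(nominate_weak(None, players)['name'])    # weak1
print(nominate_strong(None, players)['name'])  # strong1
```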

Unfortunately, Peter didn’t notice a typo in his code—his strong strategy plays like his weak strategy, presumably due to a cut-and-paste which left a min which should have been a max. As a result, the data he generated was less than illuminating; no clear trends emerged to decide between strategies…which I guess makes sense if your strategies are largely identical. (He did, however, effectively demonstrate that “revenge” doesn’t help for the strategies he considered.) With the code typo corrected, the results are shown in the following table. The columns correspond to the different strategies the (one) medium-strength player can take, and the rows correspond to the combinations of strategies the 3 strong and 3 weak players can take. (See Peter’s post for a clearer explanation of the data format.)

              | random        RANDOM        strong        STRONG        weak          WEAK          both          BOTH         
random/random | 28/2/9        28/2/9        27/3/10       27/3/10*      29/2/9        29/2/8        27/3/9        27/3/9       
random/RANDOM | 28/2/9        28/2/9        27/3/10*      27/3/10       29/2/8        29/2/8        27/3/10       27/3/9       
random/strong | 26/4*/11      26/4/11       25/5*/12*     25/5*/12      27/3*/10      27/3/10       25/4/11       25/4*/12     
random/STRONG | 26/4/11       26/4*/12      25/5/12       25/5/12       28/2/9        27/3*/10      25/5*/12      25/4/12*     
random/weak   | 29/2/8        29/2/8        28/2/9*       28/2/9        29/1/8        29/1/8        28/2/8        28/2/9       
random/WEAK   | 29/2/8        29/2/9        28/2/10*      28/2/9        29/1/8        29/2/8        28/2/9        28/2/9       
random/both   | 27/3/11       26/3/11       25/4/12       25/4/12*      28/2/9        27/3/10       25/4/12       25/4/11      
random/BOTH   | 27/3/11       27/3/10       25/4/12       25/4/13*      28/2/9        28/3/10       25/4/11       25/4/12      
RANDOM/random | 27/3/10       27/3/10       27/4/9        27/4/9        28/2/9        28/2/10       26/4/10       26/4/10*     
RANDOM/RANDOM | 27/3/9        27/3/10       26/4/10       26/4/9        28/2/10*      28/2/10       26/4/10       27/4/9       
RANDOM/strong | 26/4/12       25/4/12       24/5*/11      24/5*/11      27/3/12*      26/3/12       25/5/11       25/5*/11     
RANDOM/STRONG | 26/4*/12      26/4*/11      24/5/11       24/5/11       27/3*/11      26/3*/12      25/5*/11      24/5/12*     
RANDOM/weak   | 28/2/9        28/2/9        27/3/8        27/3/10*      29/2/9        28/2/9        28/3/9        28/3/8       
RANDOM/WEAK   | 28/2/9        28/2/9        27/3/9        27/3/9        28/2/10*      28/2/9        27/3/10       27/3/9       
RANDOM/both   | 26/4/11       26/4/11       24/5/11       24/5/12*      27/3/12       27/3/11       25/5/11       25/5/11      
RANDOM/BOTH   | 26/4/12       26/4/12       24/5/12*      25/5/11       27/3/11       27/3/12       25/5/11       25/5/11      
strong/random | 22/8/12       22/8/12       20/9/12       20/9/11       23/6/12*      22/7/12       20/9/12       21/9/11      
strong/RANDOM | 22/8/11       22/8/12       20/9/12       20/9/12       23/6/11       23/7/11       20/9/11       20/9/12*     
strong/strong | 18/11/13*     18/12/12      16/14/10      16/14*/11     19/10*/13     19/11*/12     16/13/10      16/14*/11    
strong/STRONG | 18/11*/12     18/11/12      16/14*/10     16/14/11      20/9/12*      19/10/12      16/14*/10     16/14/11     
strong/weak   | 24/5/13       23/5/14       22/7/14       22/7/15       25/4/13       24/5/15       22/7/15       22/6/15*     
strong/WEAK   | 24/5/13       23/6/14       22/7/15*      21/7/14       25/4/13       24/5/14       22/7/14       22/7/14      
strong/both   | 18/11/13      17/12*/12     17/13/11      16/14/11      20/9/14*      19/10/13      16/13/11      16/13/11     
strong/BOTH   | 19/11/12      18/11/12      16/13/11      16/14/10      20/9/13*      19/10/13      17/13/11      16/13/11     
STRONG/random | 22/7/11       22/8/12       20/9/12       20/9/11       23/6/12*      23/7/12       21/9/11       21/9/11      
STRONG/RANDOM | 22/7/11       22/8/12       20/9/12       21/9/11       23/6/12       23/7/12*      21/9/11       21/9/11      
STRONG/strong | 18/10/14      18/10/14      17/13/12      16/13*/12     20/8/15       19/9/15*      17/13*/12     17/13*/12    
STRONG/STRONG | 19/10*/13     18/11*/13     17/13*/11     17/13/12      20/9*/14      20/9/15*      17/13/12      17/13/12     
STRONG/weak   | 24/5/12       23/6/13       22/6/14*      22/7/14       25/4/14       24/5/14       22/7/13       22/7/13      
STRONG/WEAK   | 24/5/13       23/6/13       22/7/13       22/7/13       25/4/14*      24/5/14       22/7/14       22/7/13      
STRONG/both   | 19/10/14      19/10/14      17/12/12      17/13/12      20/8/16*      19/9/15       17/13/12      17/12/12     
STRONG/BOTH   | 19/10/14      18/10/14      16/13/13      17/12/13      20/8/15       19/9*/15*     17/12/13      17/12/13     
weak  /random | 31/0/5        31/0/5        31/0/6        31/0/6*       32/0/4        32/0/4        31*/0/6       31*/0/5      
weak  /RANDOM | 31*/0/5       31/0/5        31/0/7*       31/0/7        32*/0/4       32*/0/4       31*/0/5       31/0/6       
weak  /strong | 31*/0/6       31*/0*/7      30/1/9*       30*/1*/8      31/0*/5       31/0/5        30/1/8        30/1/8       
weak  /STRONG | 31/0*/7       31*/0/6       30/1*/8*      30/1/8        32/0/5        31/0*/5       30/1*/8       30*/1*/8     
weak  /weak   | 32/0/4        32*/0/4       32/0/5        32/0/5*       32/0/3        32*/0/3       32/0/4        32/0/4       
weak  /WEAK   | 32*/0/4       32/0/4        31*/0/5       31/0/6*       32/0/3        32/0/3        32*/0/5       32*/0/5      
weak  /both   | 31/0/6        31*/0/6       30*/1/8*      30/1/8        31/0/5        31/0/5        30/1/7        30*/1/7      
weak  /BOTH   | 31*/0/6       31/0/7        30/1/8*       30/1/8        32*/0/4       32*/0/5       30*/1/7       30*/1/7      
WEAK  /random | 31*/0/5       31*/0/5       31*/1/5*      31*/1/5       32*/0/4       32*/0/4       31/1/5        31/0/5       
WEAK  /RANDOM | 31/0/5        31*/0/5       31*/1/6       31*/1/6*      32/0/4        32/0/4        31/1/5        31*/1/5      
WEAK  /strong | 31/1*/6       30/1*/7       30*/1*/7      30/1*/7*      31*/0*/5      31*/0/5       30*/1/7       30*/1/7      
WEAK  /STRONG | 31*/1/6       31/1/6        30*/1/7*      30*/1/7       32*/0/4       31*/0/5       30*/1*/6      30/1*/7      
WEAK  /weak   | 32*/0/3       32/0/4        32*/0/4*      32*/0/4       32*/0/3       32/0/3        32*/0/4       32*/0/4      
WEAK  /WEAK   | 32/0/4        32*/0/4       31/0/5        31*/0/5       32*/0/3       32*/0/3       31/0/5*       31/0/5       
WEAK  /both   | 31*/0/6       31/1/6        30/1/7*       30*/1/7       32*/0/5       32*/0*/5      30*/1/7       30/1/7       
WEAK  /BOTH   | 31/1/6        31*/1/6       30*/1/7*      30*/1/6       32/0/5        31/0/5        30/1/7        30/1/6       
both  /random | 22/6/15       22/7/16       20/9/14       20/8/15       23/5/17       23/5/17*      20/8/15       20/8/15      
both  /RANDOM | 22/6/16       21/7/15       20/8/16       20/8/15       23/5/16       23/5/17*      20/9/15       20/8/14      
both  /strong | 19/10*/15     18/10*/15     16/13/14      16/13*/14     20/7*/18*     20/8*/17      16/12*/13     16/12/14     
both  /STRONG | 19/9/16       18/10*/16     16/13*/13     16/13/13      20/7/17*      20/8/17       16/12/14      16/12*/14    
both  /weak   | 23/4/18       23/4/18       21/6/19*      22/6/18       25/2/18       24/3/19       21/6/19       21/6/18      
both  /WEAK   | 23/4/18       23/5/18       21/6/19*      21/6/19       24/3/19       24/3/18       21/6/19       21/6/19      
both  /both   | 19/9/17       18/10/17      16/12/15      16/12/15      20/7/18       20/8/18*      16/12/14      17/12/14     
both  /BOTH   | 19/9/16       18/10/16      16/12/14      16/13/14      21/7/17       20/8/17*      16/12/14      16/12/14     
BOTH  /random | 22/6/15       22/6/15       20/8/15       21/8/15       23/4/17       23/5/18*      21/8/15       21/8/14      
BOTH  /RANDOM | 22/6/15       22/6/15       21/8/15       20/8/14       23/4/17       23/5/17*      20/8/14       20/8/14      
BOTH  /strong | 19/8*/18      19/8/19       17/11*/16     17/11*/15     20/6*/21*     20/7*/20      17/11/16      17/11*/15    
BOTH  /STRONG | 19/8/17       19/9*/17      17/11/16      17/11/16      21/6/20*      20/7/19       17/11*/16     17/11/16     
BOTH  /weak   | 23/4/17       23/4/17       22/6/17       22/6/17       24/2/20*      24/3/19       22/6/16       22/5/17      
BOTH  /WEAK   | 24/4/16       23/5/17       22/6/16       22/6/17       24/3/18       24/3/19*      22/6/16       22/6/16      
BOTH  /both   | 19/8/19       19/8/19       17/11/16      17/11/16      21/6/20       20/6/21*      17/10/16      17/11/16     
BOTH  /BOTH   | 20/8/18       19/8/18       17/11/16      17/11/16      21/6/20       20/6/20*      17/11/16      17/11/16

If you take this data at face value, there are some obvious lessons to be drawn. The “weak” strategies clearly dominate for the strong players, so game theory dictates that we expect strong players to choose those rows of the table, and in those rows the “strong” strategies mildly dominate for the medium-strength player. The weak players are screwed if the others play rationally.

This appears to give a neat and tidy answer to the original problem, but unless I’m missing something (and maybe I’m just misunderstanding the game, which I know only from Peter’s description) this simulation fundamentally mischaracterizes the motivations of the players. Each of the strong players wants to win themselves—knowing that the game was won by some other strong player isn’t much consolation for losing. So although all three strong players playing the weak strategy maximizes the chances of a strong player winning, there is a prisoner’s dilemma going on. If they’re all rational and stick together in picking off the weak players then they’ve got a 30%+ chance to win, but if one strong player betrays the other strong players by playing the “strong” strategy and trying to first eliminate strong players, then she has a significant advantage over the two collaborating strong players. Thus it is actually in each strong player’s best interest to defend against such behavior by playing the “strong” strategy themselves. With all three strong players playing the strong strategy there’s only a 20% chance each of them will win…and the chance that a medium or weak player will win jumps from 7% to almost 40%!

If the strong players are all fighting it out amongst themselves, the medium and weak players have a much better chance, but there’s not much in it to decide between their strategies—“weak” seems fine for the medium player. The real lesson of this analysis is that the medium strength player needs to worry less about what he is doing and more about making sure the strong players understand the game theory well enough to pursue the competitive, not cooperative, strategy. As seems to be the case in real life, those with quantifiable advantages have to think through their actions, while everyone else is free to waste their time with politics…

Update: Peter Norvig has updated his post to account for this bug, and more importantly he actually tested the “defection” strategies I suggest (which I really should have done before asserting that they change the results). It turns out that there’s only a marginal advantage for a strong player who defects against the other two if nobody punishes them for it, and a significant disadvantage if the other two use a “revenge” strategy, so there’s really no prisoner’s dilemma. Rational strong players choose the WEAK strategy (nominate a strong player if they nominated you, but otherwise nominate a weak player), which corresponds with the well-known “tit-for-tat” strategy. Given that choice for the strong players, the medium-strength player should always nominate the strong players. The data is noisy over the finer details, but my expectation is that weak players also benefit from nominating strong ones, and that if everyone is playing rationally then the medium and weak players should not use revenge: it’s much more important for them to eliminate the strong players than to punish each other.
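The WEAK rule is easy to state in code; here is a hypothetical sketch (names and data layout are mine, not Peter’s):

```python
# Hypothetical sketch of the WEAK ("tit-for-tat") rule: nominate a strong
# player only if a strong player nominated you last round; otherwise
# nominate the weakest remaining opponent.

def weak_tit_for_tat(me, opponents, nominated_me):
    # nominated_me: the opponents who nominated `me` in the previous round
    strong_attackers = [p for p in opponents
                        if p in nominated_me and p['strength'] > me['strength']]
    if strong_attackers:
        # retaliate against the strongest attacker
        return max(strong_attackers, key=lambda p: p['strength'])
    # nobody strong attacked us: pick off the weakest opponent
    return min(opponents, key=lambda p: p['strength'])

me = {'name': 'medium', 'strength': 5}
foes = [{'name': 'weak1', 'strength': 1}, {'name': 'strong1', 'strength': 9}]
print(weak_tit_for_tat(me, foes, [])['name'])         # weak1
print(weak_tit_for_tat(me, foes, [foes[1]])['name'])  # strong1
```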

Codec Bug in Python 

I seem to have stumbled across a bug in Python’s libraries for dealing with Unicode data: apparently mixing calls to file.readline() and file.readlines() works just fine for regular 8-bit input files, but not for files read through a codec. Given any file sample.txt with more than a few dozen characters—even just ASCII characters—the version of Python 2.5.1 which ships as part of Mac OS X behaves like this:

>>> f = open('sample.txt')
>>> first_line = f.readline()
>>> remaining_lines = f.readlines()
>>> len(remaining_lines)
>>> import codecs
>>> f ='sample.txt')
>>> first_line = f.readline()
>>> remaining_lines = f.readlines()
>>> len(remaining_lines)
>>> f ='sample.txt', encoding='utf-8')
>>> first_line = f.readline()
>>> remaining_lines = f.readlines()
>>> len(remaining_lines)

Luckily, there’s an obvious workaround:

>>> f ='sample.txt', encoding='utf-8')
>>> all_lines = f.readlines()
>>> len(all_lines)

If you want to test it with your own data, here is a test script:

#!/usr/bin/env python

import sys
import codecs

def testfile(f):
    firstline = f.readline()
    remaining_lines = f.readlines()
    print "Read " + str(len(remaining_lines) + 1) + " lines."

def main(argv=None):
    if argv is None: argv = sys.argv

    for filename in argv[1:]:
        print "Opening " + filename + " using `open` built-in:"
        testfile(open(filename))
        print "Opening " + filename + " using `` with no encoding:"
        print "Opening " + filename + " using `` with encoding:"
        testfile(, encoding='utf-8'))

if __name__ == "__main__": sys.exit(main())

The documentation suggests that mixing calls to readline() and readlines() should be safe for any file object, so it does look like a bug. I’ll have to check whether it’s fixed in the Python 3 betas (which use Unicode by default).