The standardization of measurement is a prerequisite to a (global) distributed economy. And the grounding of measurement units in physical constants of the universe—time measured by a reliable property of caesium, distance by combining this with the speed of light (in a vacuum), and so on—is one of the great achievements of science, unmooring the definitions of units from any particular physical artifact. This standardization is an unalloyed good. The International System of Units (“the metric system”), in which this standardization first occurred, however, is not terribly well designed when compared with more traditional measures.

Prefixes, scale, and conversions

The metric system (SI) is a “system”, and not just a handful of units, mainly in the sense that its many units are designed to work together. One of its great innovations was the introduction of prefixes for each power of 1000, which turns a single measure, eg the meter for length, into units at many different scales: the kilometer for travel distances, the millimeter for fine measurement, and micro- and nanometers for microscopic measurement.1 The “derived” SI units, such as the watt, are also defined as straightforward combinations of base units, instead of as arbitrary independent units (such as horsepower), reflecting a clear modern understanding of how different units relate to each other.

The ability to easily convert between these different units using only powers of ten as constants is the main (only?) reason given for the superiority of SI units over fully-standardized versions of non-SI units. The meter and kilometer are “better” than the foot and the mile because it’s easy to convert kilometers to meters (multiply by 1000), while converting miles to feet (multiply by 5280) is arithmetically awkward.

It’s obvious that such conversions are easier in metric, but it’s not clear that such conversions pose a problem in common use. Humans have no natural intuition for quantities that differ in magnitude by factors of 1000 (or more); it is completely disingenuous to claim that mile users would stand to gain any benefit from knowing the distance to the neighboring town in feet. Travelers develop intuitive understandings of miles and/or kilometers; tailors develop intuitive understandings of centimeters and/or inches. Traveling tailors learn no intuitive connection between the two, in either system.

In short, the primary (only?) advantage of metric units accrues only to specialists: scientists and engineers who perform mathematics to bridge between and beyond intuitive limits. And in fact the advantages of conversions that require only the movement of decimal points are fairly minimal even to them when virtually all such conversions are done with the aid of computers, which gain no benefit from arithmetic using powers of ten and also store “obscure” conversion constants effortlessly.2
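
To a computer, the “awkward” constant is no harder than the tidy one. A minimal sketch in Python (the function names are purely illustrative; both constants are exact by definition):

    # Multiplying by 5280 is no more work for a computer than multiplying by 1000.
    METERS_PER_KILOMETER = 1000   # exact by definition
    FEET_PER_MILE = 5280          # also exact by definition

    def kilometers_to_meters(km):
        return km * METERS_PER_KILOMETER

    def miles_to_feet(miles):
        return miles * FEET_PER_MILE

    print(kilometers_to_meters(3))   # 3000
    print(miles_to_feet(3))          # 15840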

Units as vocabulary

While standardization of units is crucial, the actual choice of the units themselves is, in some sense, arbitrary. “The metric system” would work just as well if one meter were the length of your arm, or if it were the circumference of the earth. And most defenses of SI dismiss any critiques of metric on exactly these grounds: anyone can get used to any unit, so SI units are at least as good as any other.

This misses the point that units are vocabulary, and not all vocabulary is created equal. The fact that every concept is expressible in every language doesn’t mean that every concept is equally accessible in every language. Schadenfreude and hygge and majime can all be translated into “standard” English, but the direct availability of these words provides a richer vocabulary not just for expression, but for thinking about the world. The patterns you recognize are in part a product of your vocabulary.

For all their many (pre-standardization) limitations, “traditional” units were designed on exactly this basis. An acre was not chosen as an arbitrary unit of measure: it was the amount one person (and a team of oxen) could plow in a day—a natural and intuitive way to measure farmland. A league was the distance a person could walk in an hour—a natural way to measure travel distances. The intuition behind the use of units trumped arbitrary regularity between them: measuring farmland in terms of how long it would take to walk around it if the land were square is an absurd contrivance.

You can see the yearning for vocabulary even more clearly in modern measures that have defied conversion to metric. Few have any intuition for how far “150 gigameters” is…but “astronomical unit” (the average distance from earth to sun) is a valuable bit of vocabulary. The same goes for 9.5 petameters versus the light-year.

To declare the choice of unit as arbitrary is to blind yourself to the reality that units have always been designed by humans for human use.

The meter was chosen poorly

The meter was originally defined as roughly one forty-millionth of the circumference of the earth, but surely it was clear even then that what really mattered (why not one thousandth or one billionth?) was that the unit was about a yard long. And I argue that was not a particularly good choice. Unless you’re measuring cloth.

I mentioned above that SI offers prefixes for powers of 1000, but actually it offers a few more for other powers of ten. Only one of these gets common use, and for only one unit. It so happens that humans have an intuitive need for the centimeter.

Or the inch! I’m not arguing that the exact value of a unit matters, but rather that when your units are naturally spaced a factor of a thousand apart, there are orders of magnitude that are more useful than others. We seem to find it very useful to have a measure about on par with a finger’s width. Perhaps that is because that’s a good scale for measuring, with whole numbers, sizes that are relevant to human bodies: clothing, furniture, tools…anything that humans directly interact with. A difference of a millimeter is irrelevant in most such cases; a difference of a meter (or even the rarely-used decimeter) is too coarse. But sizing things to the nearest centimeter or inch is often good enough.

If the centimeter had been the base unit, would that have ruined any of the other units? It would have made a “kilometer” ten meters; I’d argue that if that’s the way it had always been, we’d consider it a core feature of the measurement vocabulary: there is a natural intuition for the kind of distance you can estimate visually, but not for our current kilometer. A “megameter” would be the equivalent of a current 10k (10 km), again an order of magnitude that seems to have some historical appeal: you can run it in an hour or walk it in two. But to speak more directly to the initial complaint, if we had units equal to the current centimeter, 10 m, and 10 km, I’m skeptical there would be widespread use of centi-, deci-, deca-, or hecto- prefixes for distance. There was a right order of magnitude for the base unit of distance, and SI got it wrong.
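
To make the arithmetic concrete, here is a small Python sketch of a hypothetical system whose base length unit is the current centimeter (the prefix names are just the standard ones reapplied):

    # Hypothetical system: the base unit of length is the current centimeter.
    BASE_UNIT_IN_CURRENT_METERS = 0.01

    for prefix, factor in [("kilo", 1_000), ("mega", 1_000_000)]:
        print(f"1 {prefix}-unit = {factor * BASE_UNIT_IN_CURRENT_METERS:g} current meters")

    # 1 kilo-unit = 10 current meters    (a distance you can estimate visually)
    # 1 mega-unit = 10000 current meters (i.e. 10 km, roughly a “10k”)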

Mass

There is one base unit that even SI more or less confesses it got wrong. The base SI unit for mass is not the gram. It’s the kilogram. Perhaps this is, again, the desire for units on par with human experience: a difference of a single gram is completely undetectable to human senses, while a kilogram is noticeable. If I had to pick a single order of magnitude, I probably would have gone with 100g as a natural place to round off the weights of human-manipulable objects, because rounding to the kilo is just a bit too coarse. (It is no coincidence that there’s a traditional unit of weight right between the two: the pound.)

Volume

This is where the SI “it’s just powers of ten!” story really falls apart. Area is measured in square meters (or square kilometers, which is very different from kilo-square-meters). But volume is measured in cubic…decimeters.

If the claim is that SI makes unit conversions easy, then ask the average user of the system how many liters are in a cubic meter. I’m extremely skeptical that “obviously one cubic meter is one kiloliter!” will be the most common answer.
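
For the record, the conversion itself is simple once you know that a liter is defined as a cubic decimeter; a quick check in Python:

    # A liter is one cubic decimeter, and a meter is ten decimeters.
    DECIMETERS_PER_METER = 10
    LITERS_PER_CUBIC_METER = DECIMETERS_PER_METER ** 3
    print(LITERS_PER_CUBIC_METER)   # 1000 -- so one cubic meter is indeed one “kiloliter”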

Force/acceleration

There’s no need to dive into every derived unit, but it’s worth noting another of the non-SI measures that still gets wide use: acceleration is widely expressed as some number of “gees”, multiples of the acceleration due to gravity at the surface of the earth. If the base unit of length were the equivalent of the current 0.98 cm (so that its kilo- multiple were 9.8 m), the base unit of mass were the equivalent of 1kg (but without the kilo- prefix), and the second remained the same, then one gee would be exactly one kilo-length-unit per second squared, and the weight of a unit of mass would be exactly one of the correspondingly defined units of force (name it whatever you like). This seems a much more natural physical basis for unit design than the circumference of the earth (which is frustratingly difficult to perceive, let alone measure).
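
A quick arithmetic check of that claim, in Python (the 0.98 cm base unit is hypothetical, and 9.8 m/s² is standard gravity rounded to two figures; exact fractions sidestep any floating-point noise):

    from fractions import Fraction

    # Hypothetical units: base length = 0.98 cm, base mass = 1 kg, time = 1 s.
    STANDARD_GRAVITY = Fraction(98, 10)    # 9.8 m/s^2, rounded
    BASE_LENGTH = Fraction(98, 10000)      # 0.98 cm, expressed in meters
    KILO_LENGTH = 1000 * BASE_LENGTH       # the “kilo-” multiple: 9.8 m

    print(STANDARD_GRAVITY / BASE_LENGTH)  # 1000 base units per second squared
    print(STANDARD_GRAVITY / KILO_LENGTH)  # 1 “kilo-length-unit” per second squared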

Temperature: Celsius vs Fahrenheit

This brings us to what is likely the most contentious measure: temperature. Technically the SI measure is kelvin, but it is Celsius that is in common use in combination with other SI units.

The sole argument for the Celsius scale is that it’s somehow natural for the zero-to-one-hundred range of the scale to exactly span the temperatures of water’s liquid state. It’s a bizarre contention. For one thing, it’s not even really scientific: while the freezing/melting point of water is relatively stable across human experience, the boiling point varies considerably with altitude; where I currently live, water boils at just over 90°C. It’s simply an absurd value to treat as a “constant”.

I grant that the freezing point of water is extremely salient to human experience…but in fact it is so salient that it loses value as the basis for a temperature scale. There is not a single Fahrenheit user who doesn’t know the temperature at which water freezes; enshrining this in the scale itself is not a “helpful reminder” to anyone at all. And in fact putting the freezing point at zero is quite problematic, since (unlike all other “metric” scales) it forces the common use of negative numbers.

I’d argue that there is only one significant context in which humans have intuitive understandings of temperature: atmospheric/air temperatures.3 Fahrenheit maps the zero-to-one-hundred scale very well to a “normal” range of such temperatures. 0°F is really (life-threateningly) cold; 100°F is really (life-threateningly) hot. In fact, every Fahrenheit user quickly adopts vocabulary for every ten-degree increment across the range, which correspond astonishingly well to the different ways to dress for the weather. (It’s rather telling that once you’re dressing for <0°F or >100°F there aren’t all that many differences. My -40° mountaineering getup is almost exactly what I’d wear at any temperature below zero Fahrenheit.)

People who have only ever used Celsius don’t seem to even realize just how impoverished their temperature vocabulary is. We can theorize all we want about saying “the low 20s” the way Fahrenheit users say “the 70s”, but the reality (based on experience living in Celsius-using countries) is that they simply don’t employ such usage in the same way. The scale just makes it very awkward. The language works fine for the 20s, gets a little iffy for the 10s (“tens”? “teens”?), feels verbose for the 0s (compare “high single digits” to “forties”) and is barely English for the -0s (it’s not even clear which are the “high negative single digits” and which are the “low negative single digits”; in Fahrenheit these are the 10s and the 20s). There is little more arrogantly self-absorbed than a Celsius user claiming that a rich vocabulary for air temperatures is irrelevant merely because they’ve never had one available.
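
As a rough illustration of how the two scales carve up the same range of air temperatures, a quick Python sketch (the decade labels are just the casual vocabulary described above):

    # Each Fahrenheit “decade” and the Celsius range it corresponds to.
    def f_to_c(f):
        return (f - 32) * 5 / 9

    for low in range(0, 100, 10):
        print(f"the {low}s °F  ~  {f_to_c(low):5.1f} to {f_to_c(low + 9):5.1f} °C")

    # e.g. “the 70s” °F comes out as roughly 21 to 26 °C -- an awkward span to name.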

Time

Time is a perfect example of unit standardization done right, and demonstrates the absurdity of the rationalizations given for metric in other contexts. SI provides a definition of the second grounded in physical properties of the universe, but makes no pretense of replacing intuitive, context-specific units with arbitrary prefix-based derivations. The arithmetic required to convert “one week” to 604.8 kiloseconds is awkward…but it is immediately obvious that such conversions simply don’t matter in practice. “Years” don’t even have a consistent conversion to a number of seconds.4 Humans organize time at scales larger than the second according to solar days and annual climate patterns. It is frustrating that these two systems (as well as the lunar cycle and the culturally-dominant seven-day week) are inherently incompatible, but the collection of bodges that have been cobbled together to formalize a complex translation between these scales is clearly preferable to some unrealistic declaration that we discard solar days and years. What’s more, the metric pretense that subdivision into groups of (powers of) ten is inherently superior to other groupings has been tested with both calendars and wall clocks; suffice it to say that any alleged advantages were utterly dwarfed by the downsides in practice.
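
The arithmetic in question, for what little it is worth (a trivial Python check):

    SECONDS_PER_WEEK = 7 * 24 * 60 * 60
    print(SECONDS_PER_WEEK)        # 604800, i.e. 604.8 kiloseconds

    # Even ignoring leap seconds, “a year” has no single length in seconds:
    print(365 * 24 * 60 * 60)      # 31536000 (common year)
    print(366 * 24 * 60 * 60)      # 31622400 (leap year)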

Conclusions

I’ve pointed out some of the ways in which SI (“the metric system”) is poorly designed: mainly, it prioritizes rare use cases (conversions) and was designed with little to no consideration for the “vocabulary” that historical unit systems offer at intuitive scales.

But it’s worth highlighting what I have not said. I haven’t said SI is completely unusable (in any context). I haven’t said people can’t paper over some of its shortcomings—the embrace of centimeters being the prime example. I haven’t said people shouldn’t use metric.

The huge advantage that SI offers over competitors is standardization—not that its units are formally defined (this has been done for almost all unit systems), but that SI is in wide use everywhere in the world. Its use in science and international commerce means that even in countries with official allegiance to different unit systems, SI is well-known enough to be relatively accessible.

My main point is that wide adoption does not necessarily imply superior design; in fact often one must choose between the two. There are very few units in SI that are better designed than any of their historical counterparts. Some are not significantly worse (centimeters and kilometers aren’t substantially worse than inches and miles), some are a bit worse (kilograms get the scale just a bit wrong), and some are substantially worse (Celsius is simply a worse vocabulary for describing air temperatures than Fahrenheit). Transition costs aside (which is a huge caveat), I think a US that went entirely metric would probably be a net good: sharing standards with the rest of the world would outweigh using slightly less-well-designed units. I would, however, mourn the loss of Fahrenheit. (In fact, I wouldn’t be at all surprised if that transition simply didn’t take.)

It is a huge shame, however, that our single universal system of units was not designed with a more modern understanding of human factors and use cases in mind.


  1. These prefixes have proven particularly useful in describing digital storage sizes, with common usage growing from thousands of bytes (kB) to millions of bytes (MB) to billions and trillions of bytes (GB; TB). There are few (if any) other measurements in common use whose values routinely span so many orders of magnitude (and it’s little surprise that there are no “traditional” non-SI units for measuring such quantities). The irony is that standardization for these units is inconsistent: the prefixes are sometimes used to refer to the standard metric powers of 1000 (kilo-, mega-, giga-, etc), but sometimes instead used for powers of 1024 (2 to the power of 10). “Officially” these should be called kibibytes, mebibytes, and gibibytes, written KiB, MiB, and GiB, but in practice kB, MB, and GB are ambiguous.
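
     A quick illustration of the ambiguity, in Python (the two readings of “one gigabyte”):

         GB_DECIMAL = 1000 ** 3   # SI prefix: 1,000,000,000 bytes
         GIB_BINARY = 1024 ** 3   # binary prefix: 1,073,741,824 bytes
         print(GIB_BINARY / GB_DECIMAL)   # ~1.074 -- roughly a 7% discrepancy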

  2. In fact, computers actually have significant problems with powers of ten: decimal fractions cannot be represented exactly in binary floating point, so a system using single-precision floats stores “one tenth” as roughly 0.10000000149 (double precision gets much closer, but is still not exact). While this is a small error on its own, the accumulation of such errors over many calculations can lead to substantial inaccuracy. Financial systems, in particular, entail quite a bit of expensive (and error-prone) engineering to ensure that prices are represented exactly rather than approximately.
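
     The classic demonstration, in Python (decimal.Decimal standing in for the kind of exact representation financial code reaches for):

         from decimal import Decimal

         print(0.1 + 0.2)                                   # 0.30000000000000004
         print(0.1 + 0.2 == 0.3)                            # False

         print(Decimal("0.10") + Decimal("0.20"))           # 0.30
         print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True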

  3. Temperatures are also used widely for cooking—ie oven temperatures—but in my limited exposure humans never really develop much intuition for this. We learn that a 230°C/450°F oven is very hot and a 150°C/300°F oven is relatively cool for cooking, but to me at least such numbers always seem arbitrary and beyond direct experience. 

  4. The use of “leap seconds” to account for Earth’s very-slightly-unpredictable speed of rotation means that exact conversion from years to seconds isn’t even predictable in advance. This lack of predictability is such an inconvenience to automated systems that we’re probably giving up on leap seconds, and merely accepting that civil clocks will slowly drift away from perfect solar time over the course of millennia. If the worst consequence is that solar noon is a few minutes “off” ten thousand years from now, it will be hard to claim that the legacy date-and-time system was anything but astonishingly successful.