State
Sunday, January 1st, 2017

Happy New Year, everyone! I tripped over a well-concealed hole and sprained my ankle while carrying my daughter across the grass at Austin’s New Year’s festival, so I’m now ringing in 2017 lying in bed immobilized, which somehow seems appropriate. At least Lily is fine, and at least being bedridden gives me ample opportunity to blog.
Another year, another annual Edge question, with its opportunity for hundreds of scientists and intellectuals (including yours truly) to pontificate, often about why their own field of study is the source of the most important insights and challenges facing humanity. This year’s question was:
What scientific term or concept ought to be more widely known?
The example given was Richard Dawkins’s “meme,” which jumped into the general vernacular and became a meme itself.
My entry, about the notion of “state” (yeah, I tried to focus on the basics), is here.
This year’s question presented a particular challenge, which scientists writing for a broad audience might not have faced for generations. Namely: to what extent, if any, should your writing acknowledge the dark shadow of recent events? Does the Putinization of the United States render your little pet debates and hobbyhorses irrelevant? Or is the most defiant thing you can do to ignore the unfolding catastrophe, to continue building your intellectual sandcastle even as the tidal wave of populist hatred nears?
In any case, the instructions from Edge were clear: ignore politics. Focus on the eternal. But people interpreted that injunction differently.
One of my first ideas was to write about the Second Law of Thermodynamics, and to muse about how one of humanity’s tragic flaws is to take for granted the gargantuan effort needed to create and maintain even little temporary pockets of order. Again and again, people imagine that, if their local pocket of order isn’t working how they want, then they should smash it to pieces, since while admittedly that might make things even worse, the odds are also at least 50/50 that things will magically improve. In reasoning thus, people fail to appreciate just how exponentially more numerous the paths downhill, into barbarism and chaos, are than the few paths further up. So thrashing about randomly, with no knowledge or understanding, is statistically certain to make things worse: on this point thermodynamics, common sense, and human history are all in total agreement. The implications of these musings for the present would be left as exercises for the reader.
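To make the counting argument concrete, here’s a toy illustration of my own (not from any of the essays): identify “states of society” with 100-bit strings, and “order” with having very few 1s. The well-ordered states are astronomically outnumbered by the disordered ones:

```python
from math import comb

n = 100  # a toy "society" described by 100 bits; order = few 1s

# Well-ordered states (at most five 1s) vs. near-maximal-entropy
# states (roughly half 1s):
ordered = sum(comb(n, k) for k in range(6))
disordered = sum(comb(n, k) for k in range(45, 56))

print(f"ordered: {ordered:.2e}  disordered: {disordered:.2e}")
print(f"ratio: {disordered / ordered:.2e}")  # on the order of 10^22
```

With odds like these, “smash it and see” isn’t a 50/50 bet; a random step from an ordered state is overwhelmingly likely to land somewhere more disordered.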
Anyway, I was then pleased when, in a case of convergent evolution, my friend and hero Steven Pinker wrote exactly that essay, so I didn’t need to.
There are many other essays that are worth a read, some of which allude to recent events but the majority of which don’t. Let me mention a few.
- Nicholas Humphrey on referential opacity: while I didn’t know the term before, this is precisely the reason why, even if P=NP, that still wouldn’t imply P^A=NP^A for all oracles A.
- Rebecca Newberger Goldstein on scientific realism.
- Dawkins himself on “The Genetic Book of the Dead.”
- Jim Holt on invariance, and why Einstein’s real greatest blunder was to call it “relativity theory” rather than “invariant theory.” (Holt stole another of my essay ideas!)
- Sean Carroll on Bayes’ Theorem.
- Seth Lloyd on the virial theorem.
- My former algorithms professor Jon Kleinberg on digital representation.
- Peter Norvig on counting.
- Bruce Schneier on class breaks.
- Joichi Ito on neurodiversity.
- Adam Waytz on the illusion of explanatory depth.
- Brian Eno on confirmation bias (maybe the shortest entry, but one of the best!).
- Seth Shostak on Fermi problems.
- Lee Smolin on variety.
- Jennifer Jacquet on the Anthropocene.
- Abigail Marsh on alloparenting.
- Steve Omohundro on costly signalling.
- Chiara Marletto on the notion of “impossible.”
- Elizabeth Wrigley-Field on length-biased sampling.
- Carlo Rovelli on relative information.
- Raphael Bousso on the cosmological constant.
- Max Tegmark on substrate independence.
- Politically incorrect section: Greg Cochran on the breeder’s equation and Helena Cronin on sex.
- Gregory Benford on antagonistic pleiotropy.
- Richard Thaler on premortems.
- Nancy Etcoff on supernormal stimuli.
- John Tooby on coalitional instincts.
- Kurt Gray on relative deprivation.
- Jason Wilkes on functional equations (while the content was fine, the things that he strangely calls “functional equations” should really be called “axioms” or “postulates”).
- Linda Wilbrecht on sleeper sensitive periods.
Let me now discuss some disagreements I had with a few of the essays.
- Donald Hoffman on the holographic principle. For the point he wanted to make, about the mismatch between our intuitions and the physical world, it seems to me that Hoffman could’ve picked pretty much anything in physics, from Galileo and Newton onward. What’s new about holography?
- Jerry Coyne on determinism. Coyne, who’s written many things I admire, here offers his version of an old argument that I tear my hair out every time I read. There’s no free will, Coyne says, and therefore we should treat criminals more lightly, e.g. by eschewing harsh punishments in favor of rehabilitation. Following tradition, Coyne never engages the obvious reply, which is: “sorry, to whom were you addressing that argument? To me, the jailer? To the judge? The jury? Voters? Were you addressing us as moral agents, for whom the concept of ‘should’ is relevant? Then why shouldn’t we address the criminals the same way?”
- Michael Gazzaniga on “The Schnitt.” Yes, it’s possible that things like the hard problem of consciousness, or the measurement problem in quantum mechanics, will never have a satisfactory resolution. But even if so, building a complicated verbal edifice whose sole purpose is to tell people not even to look for a solution, to be satisfied with two “non-overlapping magisteria” and a lack of any explanation for how to reconcile them, never struck me as a substantive contribution to knowledge. It wasn’t when Niels Bohr did it, and it’s not when someone today does it either.
- I had a related quibble with Amanda Gefter’s piece on “enactivism”: the view she takes as her starting point, that “physics proves there’s no third-person view of the world,” is, to put it mildly, controversial among those who know the relevant physics. (And even if we granted that view, surely a third-person perspective exists for the quasi-Newtonian world in which we evolved, and that’s what’s relevant for the cognitive-science questions Gefter then discusses.)
- Thomas Bass on information pathology. Bass obliquely discusses the propaganda, conspiracy theories, social-media echo chambers, and unchallenged lies that helped fuel Trump’s rise. He then locates the source of the problem in Shannon’s information theory (!), which told us how to quantify information, but failed to address questions about the information’s meaning or relevance. To me, this is almost exactly like blaming arithmetic because it only tells you how to add numbers, without caring whether they’re numbers of rescued orphans or numbers of bombs. Arithmetic is fine; the problem is with us.
- In his piece on “number sense,” Keith Devlin argues that the teaching of “rigid, rule-based” math has been rendered obsolete by computers, leaving only the need to teach high-level conceptual understanding. I partly agree and partly disagree, with the disagreement coming from firsthand knowledge of just how badly that lofty idea gets beaten to mush once it filters down to the grade-school level. I would say that the basic function of math education is to teach clarity of thought: does this statement hold for all positive integers, or not? Not how do you feel about it, but does it hold? If it holds, can you prove it? What other statements would it follow from? If it doesn’t hold, can you give a counterexample? (Incidentally, there are plenty of questions of this type for which humans still outperform the best available software!) Admittedly, pencil-and-paper arithmetic is both boring and useless—but if you never mastered anything like it, then you certainly wouldn’t be ready for the concept of an algorithm, or for asking higher-level questions about algorithms.
- Daniel Hook on PT-symmetric quantum mechanics. As far as I understand, PT-symmetric Hamiltonians are equivalent to ordinary Hermitian ones under similarity transformations. So this is a mathematical trick, perhaps a useful one—but it’s extremely misleading to talk about it as if it were a new physical theory that differed from quantum mechanics.
- Jared Diamond extols the virtues of common sense, of which there are indeed many—but alas, his example is that if a mathematical proof leads to a conclusion that your common sense tells you is wrong, then you shouldn’t waste time looking for the exact mistake. Sometimes that’s good advice, but it’s pretty terrible applied to Goodstein’s Theorem, the muddy children puzzle, the strategy-stealing argument for Go, or anything else that genuinely is shocking until your common sense expands to accommodate it. Math, like science in general, is a constant dialogue between formal methods and common sense, where sometimes it’s one that needs to get with the program and sometimes it’s the other.
- Hans Halvorson on matter. I take issue with Halvorson’s claim that quantum mechanics had to be discarded in favor of quantum field theory, because QM was inconsistent with special relativity. It seems much better to say: the thing that conflicts with special relativity, and that quantum field theory superseded, was a particular application of quantum mechanics, involving wavefunctions of N particles moving around in a non-relativistic space. The general principles of QM—unit vectors in complex Hilbert space, unitary evolution, the Born rule, etc.—survived the transition to QFT without the slightest change.
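Returning to the Devlin item: the kind of question I have in mind—“does this hold for all positive integers, and if not, can you give a counterexample?”—is easy to make concrete. Here’s a classic instance (my own choice of example), using Euler’s prime-generating polynomial:

```python
def is_prime(m: int) -> bool:
    """Trial-division primality check, fine for small m."""
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# Euler's polynomial n^2 + n + 41 is prime for every n from 0 to 39...
assert all(is_prime(n * n + n + 41) for n in range(40))

# ...but does the statement hold for ALL n? No, and a counterexample
# settles the matter:
first_failure = next(n for n in range(1000) if not is_prime(n * n + n + 41))
print(first_failure)  # 40, since 40^2 + 40 + 41 = 41^2
```

Forty confirming cases aren’t a proof, and one counterexample is a refutation: that’s the clarity of thought I’d want math education to instill.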
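On the PT-symmetry item: the real-spectrum phenomenon itself is easy to check by hand in the standard 2×2 example. The specific parameter values below are my own illustrative choice (the spectrum is real in the PT-unbroken regime, when s² > r² sin²θ):

```python
import cmath

# Bender's standard 2x2 PT-symmetric family: H = [[a, s], [s, conj(a)]]
# with a = r e^{i theta}. Parameter values chosen for illustration.
r, s, theta = 1.0, 2.0, 0.5
a = r * cmath.exp(1j * theta)
d = a.conjugate()
assert a != d  # so H is NOT Hermitian (its diagonal entries are complex)

# Eigenvalues of a 2x2 matrix from its trace and determinant:
tr = a + d                    # = 2 r cos(theta), purely real
det = a * d - s * s           # = r^2 - s^2, purely real
disc = tr * tr - 4 * det      # = 4 (s^2 - r^2 sin^2(theta)) > 0 here
lam_plus = (tr + cmath.sqrt(disc)) / 2
lam_minus = (tr - cmath.sqrt(disc)) / 2
print(lam_plus.imag, lam_minus.imag)  # both (numerically) zero: real spectrum
```

Since the eigenvalues are real and distinct, this H is diagonalizable with a real diagonal form—that is, similar to a Hermitian matrix—which is exactly why it’s misleading to present such Hamiltonians as a rival physical theory.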
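As for the muddy children puzzle mentioned in the Diamond item: the formally correct answer—k muddy children all step forward on exactly round k—is a fine example of a conclusion that common sense rejects right up until it expands. A brute-force possible-worlds simulation (my own sketch) confirms it:

```python
from itertools import product

def muddy_children(actual):
    """Possible-worlds simulation of the muddy children puzzle.

    `actual` is a tuple of 0/1 foreheads (1 = muddy), at least one 1.
    Returns the round on which the muddy children first step forward.
    """
    n = len(actual)
    assert any(actual)
    # The announcement "at least one child is muddy" is common knowledge,
    # so the all-clean world is eliminated from the start.
    W = [w for w in product((0, 1), repeat=n) if any(w)]

    def knowers(w, worlds):
        # Children who, in world w, can deduce their own forehead: every
        # world consistent with what they see agrees on their own bit.
        return [i for i in range(n)
                if len({v[i] for v in worlds
                        if all(v[j] == w[j]
                               for j in range(n) if j != i)}) == 1]

    for rnd in range(1, n + 1):
        if knowers(actual, W):
            return rnd
        # Public fact: nobody stepped forward this round. Eliminate every
        # world in which somebody WOULD have known.
        W = [w for w in W if not knowers(w, W)]

print(muddy_children((1, 1, 1, 0)))  # 3: three muddy children, round 3
```

No child learns anything “new” from the announcement in the obvious sense, yet the rounds of public silence are exactly what drives the deduction—shocking until your common sense gets with the program.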
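The surviving principles in the Halvorson item can even be stated in a few lines of code—a minimal sketch of my own, using a single qubit:

```python
import math

# States are unit vectors in a complex Hilbert space:
state = [1 / math.sqrt(2), 1j / math.sqrt(2)]  # a unit vector in C^2
assert abs(sum(abs(a) ** 2 for a in state) - 1) < 1e-12

# Evolution is unitary: apply the Hadamard gate (1/sqrt 2)[[1,1],[1,-1]].
h = 1 / math.sqrt(2)
evolved = [h * (state[0] + state[1]), h * (state[0] - state[1])]
assert abs(sum(abs(a) ** 2 for a in evolved) - 1) < 1e-12  # norm preserved

# The Born rule turns amplitudes into measurement probabilities:
probs = [abs(a) ** 2 for a in evolved]
print(probs)  # each outcome has probability ~0.5
```

Nothing in this skeleton—unit vectors, unitarity, the Born rule—had to change in the passage from QM to QFT; only the particular Hilbert space and Hamiltonians did.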