{"id":1823,"date":"2014-05-30T15:04:45","date_gmt":"2014-05-30T19:04:45","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=1823"},"modified":"2016-12-10T04:26:41","modified_gmt":"2016-12-10T09:26:41","slug":"giulio-tononi-and-me-a-phi-nal-exchange","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=1823","title":{"rendered":"Giulio Tononi and Me: A Phi-nal Exchange"},"content":{"rendered":"<p>You might recall that last week I <a href=\"https:\/\/scottaaronson.blog\/?p=1799\">wrote a post<\/a> criticizing Integrated Information Theory\u00a0(IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough\u00a0size, bring into being a consciousness vastly exceeding our own. \u00a0On Wednesday Giulio Tononi, the creator\u00a0of IIT, was kind\u00a0enough to send me\u00a0a fascinating\u00a014-page rebuttal, and to give me permission to share it here:<\/p>\n<p><span style=\"font-size: large;\"><strong><a href=\"http:\/\/www.scottaaronson.com\/tononi.docx\">Why Scott should stare at a blank wall and reconsider (or, the conscious grid)<\/a><\/strong><\/span><\/p>\n<p>If you&#8217;re interested in this subject\u00a0at all, then I strongly recommend reading Giulio&#8217;s response before continuing further. \u00a0 But for those who want the tl;dr: Giulio, not one to battle strawmen,\u00a0first restates\u00a0my own\u00a0argument\u00a0against IIT with crystal clarity. \u00a0And while he has some minor quibbles (e.g., apparently my calculations of \u03a6 didn&#8217;t use\u00a0the most recent, &#8220;3.0&#8221; version\u00a0of IIT), he wisely sets those aside in order to focus on the core question: <strong>according to IIT, are all sorts of simple expander graphs conscious?<\/strong><\/p>\n<p>There,\u00a0he doesn&#8217;t\u00a0&#8220;bite the bullet&#8221; so much as\u00a0devour a\u00a0bullet hoagie with mustard. 
\u00a0He affirms\u00a0that, yes, according to IIT,\u00a0a large\u00a0network of XOR\u00a0gates\u00a0arranged in a simple expander graph is\u00a0conscious. \u00a0Indeed, he\u00a0goes further, and says that the &#8220;expander&#8221; part is superfluous: even a network of XOR\u00a0gates\u00a0arranged in a 2D square grid is\u00a0conscious. \u00a0In my language, Giulio is simply pointing out here that a \u221an\u00d7\u221an square grid\u00a0has <em>decent<\/em> expansion:\u00a0good enough to produce a \u03a6-value of about \u221an, if not the information-theoretic maximum of n (or n\/2, etc.) that an expander graph could achieve. \u00a0And apparently, by Giulio&#8217;s\u00a0lights, \u03a6=\u221an is sufficient\u00a0for consciousness!<\/p>\n<p>While\u00a0Giulio never mentions this, it&#8217;s interesting\u00a0to observe\u00a0that logic\u00a0gates arranged in a <em>1-dimensional line<\/em> would produce a tiny \u03a6-value (\u03a6=O(1)). \u00a0So even by IIT\u00a0standards,\u00a0such a linear array would <em>not<\/em>\u00a0be conscious. \u00a0Yet the jump from a\u00a0line to a two-dimensional grid\u00a0is enough to light the spark of Mind.<\/p>\n<p>Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads. \u00a0When the critics point out, &#8220;if your theory were\u00a0true, then the Moon would be made of peanut butter,&#8221; he doesn&#8217;t try to wiggle out of\u00a0the prediction,\u00a0but proudly replies,\u00a0&#8220;yes, <em>chunky<\/em> peanut butter&#8212;and you forgot to add that\u00a0the Earth\u00a0is made of Nutella!&#8221;<\/p>\n<p>Yet even as we\u00a0admire Giulio&#8217;s honesty\u00a0and consistency, his stance\u00a0might also prompt\u00a0us, gently, to take another look at this peanut-butter-moon theory, and at what grounds we had\u00a0for believing\u00a0it in the first place. 
\u00a0In his response essay, Giulio offers four arguments\u00a0(by my count) for accepting\u00a0IIT despite, or even because\u00a0of, its conscious-grid prediction: one &#8220;negative&#8221; argument and three &#8220;positive&#8221; ones. \u00a0Alas, while your \u03a6-lage may vary, I didn&#8217;t find <em>any<\/em> of the four\u00a0arguments persuasive. \u00a0In the rest of this post, I&#8217;ll\u00a0go through them one by one and explain why.<\/p>\n<p><strong>I. The\u00a0Copernicus-of-Consciousness\u00a0Argument<\/strong><\/p>\n<p>Like many\u00a0commenters\u00a0on my last\u00a0post, Giulio heavily criticizes my appeal to &#8220;common sense&#8221; in rejecting IIT. \u00a0Sure, he says, I might find it &#8220;obvious&#8221; that a huge Vandermonde matrix, or its physical instantiation, isn&#8217;t conscious. \u00a0But didn&#8217;t people also find\u00a0it &#8220;obvious&#8221; for millennia that the Sun orbits the Earth? \u00a0Isn&#8217;t the entire\u00a0point of science to <em>challenge<\/em>\u00a0common sense? \u00a0Clearly, then, the test of a theory of consciousness is not how well it upholds\u00a0&#8220;common sense,&#8221; but how well it fits the facts.<\/p>\n<p>The above position sounds\u00a0pretty\u00a0convincing: who could dispute\u00a0that observable facts trump personal intuitions? \u00a0The trouble is,\u00a0what <em>are<\/em> the observable facts when it comes to consciousness? 
\u00a0The anti-common-sense view\u00a0gets all its force by pretending that we&#8217;re in a relatively late stage of research&#8212;namely, the stage of taking an agreed-upon scientific\u00a0definition of consciousness, and applying\u00a0it to test our intuitions&#8212;rather than in an extremely early stage, of <em>agreeing on what the word\u00a0&#8220;consciousness&#8221; is even supposed to mean<\/em>.<\/p>\n<p>Since I think this point is extremely important&#8212;and of general interest,\u00a0beyond just IIT&#8212;I&#8217;ll expand on it with\u00a0some analogies.<\/p>\n<p>Suppose I told you that, in my opinion,\u00a0the \u03b5-\u03b4 definition of continuous functions&#8212;the one you learn\u00a0in calculus class&#8212;failed to capture the true meaning\u00a0of continuity. \u00a0Suppose I told you that I had a new, <em>better<\/em> definition of continuity&#8212;and amazingly, when I tried out my definition on some examples, it turned out that\u00a0\u230ax\u230b\u00a0(the floor function) was continuous, whereas x<sup>2<\/sup>\u00a0 had discontinuities, though only at\u00a017.5 and 42.<\/p>\n<p>You would probably ask what I was smoking, and whether you could have some. \u00a0But why? \u00a0Why <em>shouldn&#8217;t<\/em> the study\u00a0of continuity produce counterintuitive results? \u00a0After all, even the <em>standard<\/em> definition of continuity leads to\u00a0some famously weird results, like that\u00a0<a href=\"http:\/\/oregonstate.edu\/instruct\/mth251\/cq\/Stage5\/Practice\/Images\/pinchTopSine.gif\">x sin(1\/x)<\/a>\u00a0is\u00a0a continuous function, even though\u00a0<a href=\"http:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/a4\/Sin(1x).svg\">sin(1\/x)<\/a>\u00a0is discontinuous. \u00a0And it&#8217;s not as if\u00a0the standard definition is God-given: people had been using words like\u00a0&#8220;continuous&#8221; for centuries\u00a0before Bolzano, Weierstrass, et al. 
formalized the\u00a0\u03b5-\u03b4\u00a0definition, a definition\u00a0that millions of calculus students still find\u00a0far from intuitive. \u00a0So why <em>shouldn&#8217;t<\/em> there be a different, better definition of &#8220;continuous,&#8221; and why shouldn&#8217;t it reveal that a step function is continuous while a parabola is not?<\/p>\n<p>In my view, the\u00a0way\u00a0out of this conceptual jungle is to realize that, before\u00a0any formal definitions, any \u03b5&#8217;s and\u00a0\u03b4&#8217;s, we start with an\u00a0<em>intuition<\/em>\u00a0for what we&#8217;re trying to capture by the\u00a0word\u00a0&#8220;continuous.&#8221; \u00a0And if we press hard enough on what that intuition involves, we&#8217;ll find that it largely\u00a0consists of\u00a0various &#8220;paradigm-cases.&#8221; \u00a0A <em>continuous function<\/em>, we&#8217;d say, is a function like 3x, or x<sup>2<\/sup>, or sin(x), while a <em>discontinuity<\/em> is the kind of thing that the function 1\/x has at x=0, or that\u00a0\u230ax\u230b has at every integer point. \u00a0Crucially, we use the paradigm-cases to guide our choice of a formal definition&#8212;not vice versa! \u00a0It&#8217;s true that, once we <em>have<\/em>\u00a0a formal definition, we can then apply it to &#8220;exotic&#8221; cases like x sin(1\/x), and we might be surprised\u00a0by the results. \u00a0But the paradigm-cases are different. \u00a0If, for example, our definition told us\u00a0that x<sup>2<\/sup>\u00a0was discontinuous, that wouldn&#8217;t be a &#8220;surprise&#8221;; it would just be\u00a0evidence that we&#8217;d picked\u00a0a bad definition. \u00a0The definition failed at the only task for which it could have succeeded: namely, that of capturing what we meant.<\/p>\n<p>Some people might say that this is all well and good in\u00a0pure math, but empirical science has no need for squishy intuitions and paradigm-cases. \u00a0Nothing could be further from the truth. 
\u00a0Suppose, again, that I told\u00a0you that\u00a0physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition. \u00a0And, when I built a Scott-thermometer that measures\u00a0<em>true<\/em> temperatures, it delivered the shocking result that <em>boiling water is actually colder than ice<\/em>. \u00a0You&#8217;d probably tell me where to shove my Scott-thermometer. \u00a0But wait: how do you know that I&#8217;m not the Copernicus of heat, and that future generations won&#8217;t celebrate my breakthrough while scoffing at your small-mindedness?<\/p>\n<p>I&#8217;d say\u00a0there&#8217;s an excellent\u00a0answer: because what we\u00a0<em>mean<\/em> by heat is &#8220;whatever it is\u00a0that boiling water has more of\u00a0than ice&#8221; (along with dozens of other paradigm-cases). \u00a0And because, if you use a thermometer to check whether boiling water is hotter than ice, then the term for what you&#8217;re doing is\u00a0<em>calibrating your\u00a0thermometer<\/em>. \u00a0When the clock strikes 13, it&#8217;s time to fix the clock, and when the thermometer says boiling water&#8217;s colder than ice, it&#8217;s time to replace\u00a0the thermometer&#8212;or if needed, even the entire theory on which the thermometer is based.<\/p>\n<p>Ah, you say, but doesn&#8217;t\u00a0modern physics define\u00a0heat in a completely different, <em>non<\/em>-intuitive way, in terms of\u00a0molecular motion? \u00a0Yes, and that turned out to be a superb\u00a0definition&#8212;not only\u00a0because it was precise, explanatory, and applicable to cases far beyond our everyday\u00a0experience, but crucially, <em>because\u00a0it matched\u00a0common sense\u00a0on the paradigm-cases<\/em>. 
\u00a0If it <em>hadn&#8217;t<\/em> given sensible results for boiling water and ice, then the only possible conclusion would be that, whatever new quantity physicists had defined, <em>they shouldn&#8217;t call it\u00a0&#8220;temperature,&#8221;<\/em> or claim that their quantity\u00a0measured the amount of &#8220;heat.&#8221; \u00a0They should call their new thing something else.<\/p>\n<p>The implications for the consciousness debate are\u00a0obvious. \u00a0When we consider whether to accept IIT&#8217;s equation of\u00a0integrated information with consciousness,<em>\u00a0we don&#8217;t start with\u00a0any agreed-upon, independent notion\u00a0of consciousness against which the new notion\u00a0can be compared<\/em>. \u00a0The main\u00a0things we start with, in my view, are certain paradigm-cases that\u00a0gesture\u00a0toward what we mean:<\/p>\n<ul>\n<li>You are conscious (though not when anesthetized).<\/li>\n<li>(Most) other people appear\u00a0to be conscious, judging from their behavior.<\/li>\n<li>Many\u00a0animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious).<\/li>\n<li>A rock is not conscious. \u00a0A wall is not conscious. \u00a0A Reed-Solomon code is not conscious. \u00a0Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably <em>would<\/em> be).<\/li>\n<\/ul>\n<p>Fetuses, coma patients, fish, and hypothetical AIs are the x sin(1\/x)&#8217;s of consciousness: they&#8217;re the tougher cases, the ones where we might actually <em>need<\/em> a formal definition to adjudicate the truth.<\/p>\n<p>Now, given\u00a0a proposed formal definition for an intuitive concept, how can we\u00a0check whether the definition is talking about the same thing we\u00a0were trying to get at before? 
\u00a0Well, we\u00a0can check whether the definition at least agrees\u00a0that parabolas are continuous while step functions are not, that boiling water is hot while ice is cold, and that we&#8217;re conscious while Reed-Solomon decoders\u00a0are not. \u00a0If so, then the definition\u00a0<em>might<\/em>\u00a0be picking\u00a0out the same thing that\u00a0we\u00a0meant, or were trying\u00a0to mean, pre-theoretically\u00a0(though we\u00a0still\u00a0can&#8217;t\u00a0be certain). \u00a0If not, then the definition is <em>certainly<\/em> talking about something else.<\/p>\n<p>What else can we do?<\/p>\n<p><strong>II. The Axiom Argument<\/strong><\/p>\n<p>According to Giulio, there <em>is<\/em> something else we can do, besides relying on paradigm-cases.\u00a0 That something else, in his words,\u00a0is to lay down &#8220;<em>postulates<\/em> about how the physical world should be organized to support the essential properties of experience,&#8221; then use those postulates to derive a consciousness-measuring quantity.<\/p>\n<p>OK, so what are IIT&#8217;s postulates? \u00a0Here&#8217;s how Giulio states the five postulates leading to \u03a6\u00a0in his response essay (he &#8220;derives&#8221; these from earlier &#8220;phenomenological axioms,&#8221; which you can find in the essay):<\/p>\n<ol>\n<li>A system of mechanisms exists intrinsically if it can make a difference to itself, by affecting the probability of its past and future states, i.e. 
it has causal power (existence).<\/li>\n<li>It is composed of submechanisms each with their own causal power (composition).<\/li>\n<li>It generates a conceptual structure that is the specific way it is, as specified by each mechanism&#8217;s concept &#8212; this is how each mechanism affects the probability of the system&#8217;s past and future states (information).<\/li>\n<li>The conceptual structure is unified &#8212; it cannot be decomposed into independent components (integration).<\/li>\n<li>The conceptual structure is singular &#8212; there can be no superposition of multiple conceptual structures over the same mechanisms and intervals of time.<\/li>\n<\/ol>\n<p>From my standpoint, these postulates have\u00a0three problems. \u00a0First, I don&#8217;t really understand them. \u00a0Second, insofar as I do understand them, I don&#8217;t necessarily accept their truth. \u00a0And third, insofar as I do accept their truth, I don&#8217;t see how they lead to\u00a0\u03a6.<\/p>\n<p>To elaborate a bit:<\/p>\n<p><em>I don&#8217;t really understand the postulates.<\/em>\u00a0 I realize that the postulates are explicated further in the many papers on IIT. \u00a0Unfortunately, while it&#8217;s possible that I missed something, in all of the papers that I read, the definitions never seemed to &#8220;bottom out&#8221; in mathematical notions that I understood, like functions mapping finite sets to other finite sets. \u00a0What, for example, is a &#8220;mechanism&#8221;? \u00a0What&#8217;s a &#8220;system of mechanisms&#8221;? \u00a0What&#8217;s &#8220;causal power&#8221;? \u00a0What&#8217;s a &#8220;conceptual structure,&#8221; and what does it mean for it to be &#8220;unified&#8221;? \u00a0Alas, it doesn&#8217;t help to define these notions in terms of other notions that I also don&#8217;t understand. 
\u00a0And yes, I agree that all these notions can\u00a0be <em>given<\/em> fully rigorous definitions, but there could be many different ways to do so, and the devil could lie\u00a0in the details. \u00a0In any case, because (as I said) it&#8217;s entirely possible that the failure is mine, I place much less weight on this point than I do on the two points to follow.<\/p>\n<p><em>I don&#8217;t necessarily accept the postulates&#8217; truth.<\/em> \u00a0Is consciousness a &#8220;unified conceptual structure&#8221;? \u00a0Is it &#8220;singular&#8221;? \u00a0Maybe. \u00a0I don&#8217;t know. \u00a0It sounds plausible. \u00a0But at any rate, I&#8217;m far less confident about any of these postulates&#8212;<em>whatever<\/em> one means by them!&#8212;than I am about\u00a0my own &#8220;postulate,&#8221; which is\u00a0that you and I are conscious while my toaster is not. \u00a0Note that my postulate, though\u00a0not phenomenological, does have\u00a0the merit of constraining candidate theories of consciousness in an unambiguous way.<\/p>\n<p><em>I don&#8217;t see how the postulates lead to \u03a6.<\/em>\u00a0 Even if one accepts the postulates, how does one deduce\u00a0that the &#8220;amount of consciousness&#8221; should be measured by \u03a6, rather than by some other quantity? \u00a0None of the papers I\u00a0read&#8212;including the ones Giulio linked to in his response essay&#8212;contained anything that looked to me like a derivation of\u00a0\u03a6. \u00a0Instead, there was general discussion of the postulates, and then \u03a6 just sort of appeared at some point. \u00a0Furthermore, given the many\u00a0idiosyncrasies of \u03a6&#8212;the minimization over all bipartite (why just bipartite? why not tripartite?) 
decompositions of the system, the need for normalization (or something else in version 3.0) to deal with highly-unbalanced partitions&#8212;it would be quite a surprise were it possible to derive its specific form from postulates of such generality.<\/p>\n<p>I was going to argue for that conclusion in more detail, when I realized that Giulio had kindly done the work for me already. \u00a0Recall that Giulio chided me for not using the &#8220;latest, 2014, version 3.0&#8221; edition\u00a0of \u03a6\u00a0in my previous post. \u00a0Well, if the postulates uniquely determined the form of \u03a6, then\u00a0what&#8217;s with all these upgrades? \u00a0Or has\u00a0\u03a6&#8217;s definition\u00a0been changing from year to year because the\u00a0postulates <em>themselves<\/em> have been changing? \u00a0If the latter, then maybe one\u00a0should wait for the situation to stabilize before trying to form an opinion of the postulates&#8217; meaningfulness,\u00a0truth, and completeness?<\/p>\n<p><strong>III. The Ironic Empirical\u00a0Argument<\/strong><\/p>\n<p>Or maybe not. \u00a0Despite all the problems noted above with the IIT postulates, Giulio argues in his essay that there&#8217;s a good reason to accept them: namely,\u00a0they\u00a0<em>explain various empirical facts from neuroscience, and lead to confirmed predictions. \u00a0<\/em>In his words:<\/p>\n<p style=\"padding-left: 30px;\">[A] theory&#8217;s postulates must be able to <span style=\"text-decoration: underline;\">explain, in a principled and parsimonious way, at least those many facts about consciousness and the brain that are reasonably established and non-controversial<\/span>. \u00a0For example, we know that our own consciousness depends on certain brain structures (the cortex) and not others (the cerebellum), that it vanishes during certain periods of sleep (dreamless sleep) and reappears during others (dreams), that it vanishes during certain epileptic seizures, and so on. 
\u00a0Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts. \u00a0Such empirical facts, and not intuitions, should be its primary test&#8230;<\/p>\n<p style=\"padding-left: 30px;\">[I]n some cases we already have some suggestive evidence [of the truth of the IIT postulates&#8217; predictions]. \u00a0One example is the cerebellum, which has 69 billion neurons or so &#8212; more than four times the 16 billion neurons of the cerebral cortex &#8212; and is as complicated a piece of biological machinery as any. \u00a0Though we do not understand exactly how it works (perhaps even less than we understand the cerebral cortex), its connectivity definitely suggests that the cerebellum is ill suited to information integration, since it lacks lateral connections among its basic modules. \u00a0And indeed, though the cerebellum is heavily connected to the cerebral cortex, removing it hardly affects our consciousness, whereas removing the cortex eliminates it.<\/p>\n<p>I hope I&#8217;m not alone in noticing the irony of this move. \u00a0But just in case, let me spell it out: Giulio\u00a0has stated, as &#8220;largely uncontroversial facts,&#8221; that certain brain regions\u00a0(the cerebellum) and certain states (dreamless sleep) are not associated with our consciousness. \u00a0He then views it as a victory\u00a0for IIT, if those regions and states turn out to have\u00a0lower information integration than the regions and states that he <em>does<\/em> take to be associated with our consciousness.<\/p>\n<p>But <em>how does Giulio know that the cerebellum isn&#8217;t conscious?<\/em>\u00a0 Even if it doesn&#8217;t produce &#8220;our&#8221; consciousness, maybe the cerebellum has its own consciousness, just as rich as the cortex&#8217;s but separate from it. 
\u00a0Maybe removing the cerebellum destroys that other consciousness, unbeknownst to &#8220;us.&#8221; \u00a0Likewise, maybe &#8220;dreamless&#8221; sleep brings about its own form of consciousness, one that (unlike dreams) we never, ever remember in the morning.<\/p>\n<p>Giulio might take the\u00a0implausibility\u00a0of those ideas\u00a0as obvious, or at least\u00a0as &#8220;largely uncontroversial&#8221; among neuroscientists. \u00a0But here&#8217;s the problem with that: <strong>he just told us that a 2D square grid is conscious!<\/strong>\u00a0 He told us that we must <em>not<\/em>\u00a0rely on &#8220;commonsense intuition,&#8221; or on any popular consensus,\u00a0to say\u00a0that if a square mesh of wires is just sitting there XORing some input bits, doing nothing at all\u00a0that we&#8217;d want to call intelligent, then it&#8217;s probably safe to conclude\u00a0that the mesh isn&#8217;t conscious. \u00a0So then why shouldn&#8217;t he say the same\u00a0for\u00a0the cerebellum, or for the brain in dreamless sleep? \u00a0By Giulio&#8217;s own rules (the ones he used for the mesh), we have no<em> a-priori <\/em>clue\u00a0whether those systems\u00a0are conscious or not&#8212;so even if IIT predicts that they&#8217;re not conscious, that can&#8217;t be counted as any sort of success\u00a0for IIT.<\/p>\n<p>For me, the point is even\u00a0stronger: I, personally, would be a million<em>\u00a0<\/em>times more inclined to ascribe consciousness to the human cerebellum, or to dreamless sleep, than I would to the mesh of XOR gates. \u00a0For it&#8217;s not hard to imagine neuroscientists of the future discovering &#8220;hidden forms of intelligence&#8221; in the cerebellum, and all but impossible\u00a0to\u00a0imagine them doing the same for the mesh. 
\u00a0But even if you put those examples on the same footing, still the take-home message\u00a0seems clear: <strong>you can&#8217;t\u00a0count it as\u00a0a &#8220;success&#8221; for\u00a0IIT if it predicts that the cerebellum is unconscious, while at the same time\u00a0denying that it&#8217;s a &#8220;failure&#8221;\u00a0for IIT if it predicts that a square\u00a0mesh of XOR gates\u00a0is conscious.<\/strong> \u00a0If the unconsciousness of the cerebellum can be considered an\u00a0&#8220;empirical fact,&#8221; safe enough for\u00a0theories of consciousness to\u00a0be judged against it, then <em>surely <\/em>the unconsciousness of the mesh can also be considered\u00a0such a fact.<\/p>\n<p><strong>IV. The Phenomenology Argument<\/strong><\/p>\n<p>I now come to, for me, the strangest and most surprising part of Giulio&#8217;s response. \u00a0Despite his earlier claim that IIT need not dovetail with &#8220;commonsense\u00a0intuition&#8221; about which systems are\u00a0conscious&#8212;that it can <em>defy<\/em>\u00a0intuition&#8212;at some point, Giulio valiantly tries to <em>reprogram<\/em>\u00a0our\u00a0intuition, to make us <em>feel<\/em>\u00a0why a 2D grid could be\u00a0conscious. \u00a0As best I can understand, the argument seems to be that, when we stare at a blank 2D screen, we form a rich experience in our heads,\u00a0and that richness must be mirrored by a corresponding &#8220;intrinsic&#8221; richness in 2D space itself:<\/p>\n<p style=\"padding-left: 30px;\">[I]f one thinks a bit about it, <span style=\"text-decoration: underline;\">the experience of empty 2D visual space is not at all empty, but contains a remarkable amount of structure<\/span>. \u00a0In fact, when we stare at the blank screen, quite a lot is immediately available to us without any effort whatsoever. \u00a0Thus, we are aware of all the possible locations in space (&#8220;points&#8221;): the various locations are right &#8220;there&#8221;, in front of us. 
\u00a0We are aware of their relative positions: a point may be left or right of another, above or below, and so on, for every position, without us having to order them. \u00a0And we are aware of the relative distances among points: quite clearly, two points may be close or far, and this is the case for every position. \u00a0Because we are aware of all of this immediately, without any need to calculate anything, and quite regularly, since 2D space pervades most of our experiences, we tend to take for granted the vast set of relationship[s] that make up 2D space.<\/p>\n<p style=\"padding-left: 30px;\">And yet, says IIT, given that our experience of the blank screen definitely <em>exists<\/em>, and it is precisely the way it is &#8212; it <em>is<\/em> 2D visual space, with all its relational properties &#8212; there must be physical mechanisms that specify such phenomenological relationships through their causal power &#8230; One may also see that the causal relationships that make up 2D space obtain whether the elements are on or off. \u00a0And finally, one may see that such a 2D grid is necessary not so much to <em>represent<\/em> space from the extrinsic perspective of an observer, but to <em>create<\/em> it, from its own <em>intrinsic<\/em> perspective.<\/p>\n<p>Now, it would be child&#8217;s-play\u00a0to\u00a0criticize\u00a0the above line of argument for conflating\u00a0<em>our<\/em> consciousness of the screen with the alleged\u00a0consciousness of the screen itself. \u00a0To wit: \u00a0Just because it feels like something to\u00a0<em>see<\/em> a wall, doesn&#8217;t mean\u00a0it feels like something\u00a0to <em>be<\/em>\u00a0a wall. 
\u00a0You can smell a rose, and the rose can smell good, but that doesn&#8217;t mean the rose can smell you.<\/p>\n<p>However, I actually prefer\u00a0a different tack in criticizing\u00a0Giulio&#8217;s &#8220;wall\u00a0argument.&#8221; \u00a0Suppose I accepted that my mental image\u00a0of the relationships between certain entities was relevant to assessing whether those entities had\u00a0<em>their own<\/em>\u00a0mental life, independent of me or any other observer. \u00a0For example, suppose I believed that, if my experience of 2D space\u00a0is rich and structured, then that&#8217;s evidence that 2D space <em>is<\/em> rich and structured enough to be conscious.<\/p>\n<p>Then my question is this: <strong>why shouldn&#8217;t the same be true of 1D space?<\/strong> \u00a0After all, my experience of staring at a rope is <em>also<\/em> rich and structured, no less than my experience of staring at a wall. \u00a0I perceive\u00a0some points on the rope as being toward the left, others as being toward the right, and some points as being <em>between<\/em> two other points. \u00a0In fact, the rope even has a structure&#8212;namely, a natural total ordering on its points&#8212;that the wall lacks. \u00a0So why does IIT cruelly deny subjective experience to a row of logic gates strung along a rope, reserving it only for a mesh of logic gates pasted to a wall?<\/p>\n<p>And yes, I know the answer: because the logic gates on the rope aren&#8217;t &#8220;integrated&#8221; enough. \u00a0But who&#8217;s to say that the gates in the\u00a02D mesh <em>are<\/em>\u00a0integrated enough? \u00a0As I mentioned before, their \u03a6-value grows only as\u00a0the square root of the number of gates, so that the ratio of integrated information to total information tends to 0 as the number of gates increases. 
\u00a0And besides, aren&#8217;t what Giulio calls &#8220;the facts of phenomenology&#8221; the real\u00a0arbiters\u00a0here, and isn&#8217;t my perception of the rope&#8217;s structure a phenomenological fact? \u00a0When you cut a rope, does it not split? \u00a0When you prick it, does it not fray?<\/p>\n<p><strong>Conclusion<\/strong><\/p>\n<p>At this point, I fear we&#8217;re at a philosophical impasse. \u00a0Having learned that, according to IIT,<\/p>\n<ol>\n<li>a square\u00a0grid\u00a0of XOR\u00a0gates is conscious, and your experience of staring at a blank wall provides evidence for that,<\/li>\n<li>by contrast, a linear array\u00a0of XOR\u00a0gates is\u00a0<em>not<\/em> conscious, your experience of staring at a rope notwithstanding,<\/li>\n<li>the human cerebellum is <em>also<\/em> not conscious (even though a\u00a0grid of XOR gates is), and<\/li>\n<li>unlike with the XOR gates, we don&#8217;t need a theory to <em>tell<\/em> us the cerebellum is unconscious, but can simply accept it as &#8220;reasonably established&#8221; and &#8220;largely uncontroversial,&#8221;<\/li>\n<\/ol>\n<p>I personally feel completely safe in saying that this is not the theory of consciousness for me. \u00a0But I&#8217;ve also learned that other people, <em>even after understanding the above<\/em>, still don&#8217;t\u00a0reject IIT. \u00a0And you know what? \u00a0Bully for them. \u00a0On reflection, I firmly believe that a two-state solution is possible, in which we simply adopt different words for the different things that we mean by &#8220;consciousness&#8221;&#8212;like, say, consciousness<sub>Real<\/sub> for my kind and consciousness<sub>WTF<\/sub> for the IIT kind. \u00a0OK, OK, just kidding! 
\u00a0How about &#8220;paradigm-case consciousness&#8221; for the one and &#8220;IIT consciousness&#8221; for the other.<\/p>\n<hr \/>\n<p><strong>Completely unrelated announcement:<\/strong> Some of you might enjoy <a href=\"http:\/\/www.nature.com\/news\/theoretical-physics-complexity-on-the-horizon-1.15285\">this <em>Nature News<\/em> piece<\/a> by Amanda Gefter, about black holes and computational complexity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>You might recall that last week I wrote a post criticizing Integrated Information Theory\u00a0(IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough\u00a0size, bring into being a consciousness vastly exceeding our own. \u00a0On Wednesday Giulio Tononi, the creator\u00a0of IIT, was kind\u00a0enough to send me\u00a0a fascinating\u00a014-page rebuttal, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[12,11],"tags":[],"class_list":["post-1823","post","type-post","status-publish","format-standard","hentry","category-metaphysical-spouting","category-nerd-interest"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/1823","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/post
s"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1823"}],"version-history":[{"count":24,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/1823\/revisions"}],"predecessor-version":[{"id":1849,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/1823\/revisions\/1849"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1823"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
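A crude way to see the scaling the post describes (Φ=O(1) for a 1D line of gates, about √n for a √n×√n grid, up to Θ(n) for an expander) is to use minimum bisection width as a stand-in for "integration": the fewest wires you must cut to split the network into two equal halves. This is emphatically not Φ itself (IIT 3.0's measure is far more involved), just a toy proxy brute-forced for n=16 nodes:

```python
from itertools import combinations

def min_bisection(n, edges):
    """Smallest number of edges crossing any split of the n nodes
    into two equal halves -- a crude proxy for how hard the network
    is to 'cut apart', which is what Phi roughly tracks."""
    best = float("inf")
    for half in combinations(range(n), n // 2):
        side = set(half)
        cut = sum((u in side) != (v in side) for u, v in edges)
        best = min(best, cut)
    return best

n, k = 16, 4  # 16 nodes: a path, and a 4x4 grid

path_edges = [(i, i + 1) for i in range(n - 1)]
grid_edges = (
    [(r * k + c, r * k + c + 1) for r in range(k) for c in range(k - 1)]  # horizontal
    + [(r * k + c, (r + 1) * k + c) for r in range(k - 1) for c in range(k)]  # vertical
)

print(min_bisection(n, path_edges))  # 1: a line falls apart after one cut
print(min_bisection(n, grid_edges))  # 4 = sqrt(16): a grid needs ~sqrt(n) cuts
```

The path is split by cutting a single edge, while the 4×4 grid needs √16=4 cuts (sever one column boundary); an expander on the same 16 nodes would need a constant fraction of all its edges cut. That cut-width gap is the combinatorial fact behind the post's point that a 2D mesh has "decent expansion" while a 1D array does not.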