Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)

Happy birthday to me!

Recently, lots of people have been asking me what I think about IIT—no, not the Indian Institutes of Technology, but Integrated Information Theory, a widely-discussed “mathematical theory of consciousness” developed over the past decade by the neuroscientist Giulio Tononi.  One of the askers was Max Tegmark, who’s enthusiastically adopted IIT as a plank in his radical mathematizing platform (see his paper “Consciousness as a State of Matter”).  When, in the comment thread about Max’s Mathematical Universe Hypothesis, I expressed doubts about IIT, Max challenged me to back up my doubts with a quantitative calculation.

So, this is the post that I promised to Max and all the others, about why I don’t believe IIT.  And yes, it will contain that quantitative calculation.

But first, what is IIT?  The central ideas of IIT, as I understand them, are:

(1) to propose a quantitative measure, called Φ, of the amount of “integrated information” in a physical system (i.e. information that can’t be localized in the system’s individual parts), and then

(2) to hypothesize that a physical system is “conscious” if and only if it has a large value of Φ—and indeed, that a system is more conscious the larger its Φ value.

I’ll return later to the precise definition of Φ—but basically, it’s obtained by minimizing, over all subdivisions of your physical system into two parts A and B, some measure of the mutual information between A’s outputs and B’s inputs and vice versa.  Now, one immediate consequence of any definition like this is that all sorts of simple physical systems (a thermostat, a photodiode, etc.) will turn out to have small but nonzero Φ values.  To his credit, Tononi cheerfully accepts the panpsychist implication: yes, he says, it really does mean that thermostats and photodiodes have small but nonzero levels of consciousness.  On the other hand, for the theory to work, it had better be the case that Φ is small for “intuitively unconscious” systems, and only large for “intuitively conscious” systems.  As I’ll explain later, this strikes me as a crucial point on which IIT fails.

The literature on IIT is too big to do it justice in a blog post.  Strikingly, in addition to the “primary” literature, there’s now even a “secondary” literature, which treats IIT as a sort of established base on which to build further speculations about consciousness.  Besides the Tegmark paper linked to above, see for example this paper by Maguire et al., and associated popular article.  (Ironically, Maguire et al. use IIT to argue for the Penrose-like view that consciousness might have uncomputable aspects—a use diametrically opposed to Tegmark’s.)

Anyway, if you want to read a popular article about IIT, there are loads of them: see here for the New York Times's, here for Scientific American’s, here for IEEE Spectrum’s, and here for the New Yorker’s.  Unfortunately, none of those articles will tell you the meat (i.e., the definition of integrated information); for that you need technical papers, like this or this by Tononi, or this by Seth et al.  IIT is also described in Christof Koch’s memoir Consciousness: Confessions of a Romantic Reductionist, which I read and enjoyed; as well as Tononi’s Phi: A Voyage from the Brain to the Soul, which I haven’t yet read.  (Koch, one of the world’s best-known thinkers and writers about consciousness, has also become an evangelist for IIT.)

So, I want to explain why I don’t think IIT solves even the problem that it “plausibly could have” solved.  But before I can do that, I need to do some philosophical ground-clearing.  Broadly speaking, what is it that a “mathematical theory of consciousness” is supposed to do?  What questions should it answer, and how should we judge whether it’s succeeded?

The most obvious thing a consciousness theory could do is to explain why consciousness exists: that is, to solve what David Chalmers calls the “Hard Problem,” by telling us how a clump of neurons is able to give rise to the taste of strawberries, the redness of red … you know, all that ineffable first-persony stuff.  Alas, there’s a strong argument—one that I, personally, find completely convincing—why that’s too much to ask of any scientific theory.  Namely, no matter what the third-person facts were, one could always imagine a universe consistent with those facts in which no one “really” experienced anything.  So for example, if someone claims that integrated information “explains” why consciousness exists—nope, sorry!  I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be.  Granted, maybe such zombies can’t exist in the actual world: maybe, if you tried to create one, God would notice its large Φ-value and generously bequeath it a soul.  But if so, then that’s a further fact about our world, a fact that manifestly couldn’t be deduced from the properties of Φ alone.  Notice that the details of Φ are completely irrelevant to the argument.

Faced with this point, many scientifically-minded people start yelling and throwing things.  They say that “zombies” and so forth are empty metaphysics, and that our only hope of learning about consciousness is to engage with actual facts about the brain.  And that’s a perfectly reasonable position!  As far as I’m concerned, you absolutely have the option of dismissing Chalmers’ Hard Problem as a navel-gazing distraction from the real work of neuroscience.  The one thing you can’t do is have it both ways: that is, you can’t say both that the Hard Problem is meaningless, and that progress in neuroscience will soon solve the problem if it hasn’t already.  You can’t maintain simultaneously that

(a) once you account for someone’s observed behavior and the details of their brain organization, there’s nothing further about consciousness to be explained, and

(b) remarkably, the XYZ theory of consciousness can explain the “nothing further” (e.g., by reducing it to integrated information processing), or might be on the verge of doing so.

As obvious as this sounds, it seems to me that large swaths of consciousness-theorizing can just be summarily rejected for trying to have their brain and eat it in precisely the above way.

Fortunately, I think IIT survives the above observations.  For we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious.  Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization.  The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are not conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.

The reason it’s so important that the theory uphold “common sense” on these test cases is that, given the experimental inaccessibility of consciousness, this is basically the only test available to us.  If the theory gets the test cases “wrong” (i.e., gives results diverging from common sense), it’s not clear that there’s anything else for the theory to get “right.”  Of course, supposing we had a theory that got the test cases right, we could then have a field day with the less-obvious cases, programming our computers to tell us exactly how much consciousness is present in octopi, fetuses, brain-damaged patients, and hypothetical AI bots.

In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science.  Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness.  Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward.  But I also regard IIT as a failed attempt on the problem.  And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ.  Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about.  Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t).  Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists.  And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x_1,…,x_n)∈S^n, where S is a finite alphabet (the simplest case is S={0,1}).  We imagine that the system evolves via an “updating function” f:S^n→S^n.  Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa.  If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.

More formally, given a partition (A,B) of {1,…,n}, let us write an input y=(y_1,…,y_n)∈S^n to f in the form (y_A,y_B), where y_A consists of the y variables in A and y_B consists of the y variables in B.  Then we can think of f as mapping an input pair (y_A,y_B) to an output pair (z_A,z_B).  Now, we define the “effective information” EI(A→B) as H(z_B | A random, y_B=x_B).  Or in words, EI(A→B) is the Shannon entropy of the output variables in B, if the input variables in A are drawn uniformly at random, while the input variables in B are fixed to their values in x.  It’s a measure of the dependence of B on A in the computation of f(x).  Similarly, we define

EI(B→A) := H(z_A | B random, y_A=x_A).

We then consider the sum

Φ(A,B) := EI(A→B) + EI(B→A).

Intuitively, we’d like the integrated information Φ=Φ(f,x) to be the minimum of Φ(A,B), over all 2^n−2 possible partitions of {1,…,n} into nonempty sets A and B.  The idea is that Φ should be large if and only if it’s not possible to partition the variables into two sets A and B in such a way that not much information flows from A to B or vice versa when f(x) is computed.

However, no sooner do we propose this than we notice a technical problem.  What if A is much larger than B, or vice versa?  As an extreme case, what if A={1,…,n-1} and B={n}?  In that case, we’ll have Φ(A,B) ≤ 2 log_2 |S|, but only for the boring reason that there’s hardly any entropy in B as a whole, to either influence A or be influenced by it.  For this reason, Tononi proposes a fix where we normalize each Φ(A,B) by dividing it by min{|A|,|B|}.  He then defines the integrated information Φ to be Φ(A,B), for whichever partition (A,B) minimizes the ratio Φ(A,B) / min{|A|,|B|}.  (Unless I missed it, Tononi never specifies what we should do if there are multiple (A,B)’s that all achieve the same minimum of Φ(A,B) / min{|A|,|B|}.  I’ll return to that point later, along with other idiosyncrasies of the normalization procedure.)
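To make the definition concrete, here is a brute-force sketch in Python (my own toy code, with my own function names, not anything from the IIT literature).  It computes EI and Φ exactly as defined above for systems with a handful of variables, by enumerating inputs and bipartitions; note that ties between bipartitions are broken arbitrarily, which is precisely the ambiguity just mentioned.

    import itertools
    from collections import Counter
    from math import log2

    def entropy(counts):
        """Shannon entropy (in bits) of a distribution given by outcome counts."""
        total = sum(counts.values())
        return sum(-(c / total) * log2(c / total) for c in counts.values())

    def EI(f, x, A, B, S):
        """EI(A->B): entropy of the B-outputs of f when the A-inputs are drawn
        uniformly at random and the B-inputs are clamped to their values in x."""
        outcomes = Counter()
        for vals in itertools.product(S, repeat=len(A)):
            y = list(x)
            for i, v in zip(A, vals):
                y[i] = v
            z = f(tuple(y))
            outcomes[tuple(z[j] for j in B)] += 1
        return entropy(outcomes)

    def phi(f, x, n, S):
        """Unnormalized Phi(A,B) of whichever bipartition (A,B) minimizes
        Phi(A,B)/min(|A|,|B|), found by brute force over all bipartitions.
        Ties are broken arbitrarily (the first minimizer encountered wins)."""
        best = None
        for r in range(1, n // 2 + 1):
            for A in itertools.combinations(range(n), r):
                B = tuple(i for i in range(n) if i not in A)
                val = EI(f, x, A, B, S) + EI(f, x, B, A, S)
                norm = val / min(len(A), len(B))
                if best is None or norm < best[0]:
                    best = (norm, val)
        return best[1]

    S = (0, 1)
    x = (1, 0, 1, 1)
    print(phi(lambda y: y, x, 4, S))                         # identity map: Phi = 0
    print(phi(lambda y: (y[3], y[0], y[1], y[2]), x, 4, S))  # cyclic shift: Phi > 0

Even at this toy scale you can see the exponential blowup looming: the loops range over all |S|^|A| inputs and over all bipartitions, which is exactly the computational issue I’ll return to below.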

Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense.  He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas.  Ambitiously, he even speculates at length about how a large value of Φ might be connected to the phenomenology of consciousness.

To be sure, empirical work in integrated information theory has been hampered by three difficulties.  The first difficulty is that we don’t know the detailed interconnection network of the human brain.  The second difficulty is that it’s not even clear what we should define that network to be: for example, as a crude first attempt, should we assign a Boolean variable to each neuron, which equals 1 if the neuron is currently firing and 0 if it’s not firing, and let f be the function that updates those variables over a timescale of, say, a millisecond?  What other variables do we need—firing rates, internal states of the neurons, neurotransmitter levels?  Is choosing many of these variables uniformly at random (for the purpose of calculating Φ) really a reasonable way to “randomize” the variables, and if not, what other prescription should we use?

The third and final difficulty is that, even if we knew exactly what we meant by “the f and x corresponding to the human brain,” and even if we had complete knowledge of that f and x, computing Φ(f,x) could still be computationally intractable.  For recall that the definition of Φ involved minimizing a quantity over all the exponentially-many possible bipartitions of {1,…,n}.  While it’s not directly relevant to my arguments in this post, I leave it as a challenge for interested readers to pin down the computational complexity of approximating Φ to some reasonable precision, assuming that f is specified by a polynomial-size Boolean circuit, or alternatively, by an NC^0 function (i.e., a function each of whose outputs depends on only a constant number of the inputs).  (Presumably Φ will be #P-hard to calculate exactly, but only because calculating entropy exactly is a #P-hard problem—that’s not interesting.)

I conjecture that approximating Φ is an NP-hard problem, even for restricted families of f’s like NC^0 circuits—which invites the amusing thought that God, or Nature, would need to solve an NP-hard problem just to decide whether or not to imbue a given physical system with consciousness!  (Alas, if you wanted to exploit this as a practical approach for solving NP-complete problems such as 3SAT, you’d need to do a rather drastic experiment on your own brain—an experiment whose result would be to render you unconscious if your 3SAT instance was satisfiable, or conscious if it was unsatisfiable!  In neither case would you be able to communicate the outcome of the experiment to anyone else, nor would you have any recollection of the outcome after the experiment was finished.)  In the other direction, it would also be interesting to upper-bound the complexity of approximating Φ.  Because of the need to estimate the entropies of distributions (even given a bipartition (A,B)), I don’t know that this problem is in NP—the best I can observe is that it’s in AM.

In any case, my own reason for rejecting IIT has nothing to do with any of the “merely practical” issues above: neither the difficulty of defining f and x, nor the difficulty of learning them, nor the difficulty of calculating Φ(f,x).  My reason is much more basic, striking directly at the hypothesized link between “integrated information” and consciousness.  Specifically, I claim the following:

Yes, it might be a decent rule of thumb that, if you want to know which brain regions (for example) are associated with consciousness, you should start by looking for regions with lots of information integration.  And yes, it’s even possible, for all I know, that having a large Φ-value is one necessary condition among many for a physical system to be conscious.  However, having a large Φ-value is certainly not a sufficient condition for consciousness, or even for the appearance of consciousness.  As a consequence, Φ can’t possibly capture the essence of what makes a physical system conscious, or even of what makes a system look conscious to external observers.

The demonstration of this claim is embarrassingly simple.  Let S=F_p, where p is some prime sufficiently larger than n, and let V be an n×n Vandermonde matrix over F_p—that is, a matrix whose (i,j) entry equals i^(j-1) (mod p).  Then let f:S^n→S^n be the update function defined by f(x)=Vx.  Now, for p large enough, the Vandermonde matrix is well-known to have the property that every submatrix is full-rank (i.e., “every submatrix preserves all the information that it’s possible to preserve about the part of x that it acts on”).  And this implies that, regardless of which bipartition (A,B) of {1,…,n} we choose, we’ll get

EI(A→B) = EI(B→A) = min{|A|,|B|} log_2 p,

and hence

Φ(A,B) = EI(A→B) + EI(B→A) = 2 min{|A|,|B|} log_2 p,

or after normalizing,

Φ(A,B) / min{|A|,|B|} = 2 log_2 p.

Or in words: the normalized information integration has the same value—namely, the maximum value!—for every possible bipartition.  Now, I’d like to proceed from here to a determination of Φ itself, but I’m prevented from doing so by the ambiguity in the definition of Φ that I noted earlier.  Namely, since every bipartition (A,B) minimizes the normalized value Φ(A,B) / min{|A|,|B|}, in theory I ought to be able to pick any of them for the purpose of calculating Φ.  But the unnormalized value Φ(A,B), which gives the final Φ, can vary greatly across bipartitions: from 2 log_2 p (if min{|A|,|B|}=1) all the way up to n log_2 p (if min{|A|,|B|}=n/2).  So at this point, Φ is simply undefined.
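For concreteness, here is a small numerical check of that claim (a sketch of my own, in Python; the helper names rank_mod_p and normalized_phi are mine, not anything from the IIT literature).  For a linear map f(y)=My over F_p, the distribution of the B-outputs under random A-inputs is uniform over an affine subspace, so EI(A→B) is just rank(M[B,A]) log_2 p; the whole calculation reduces to submatrix ranks.  Taking p to be the Mersenne prime 2^127 - 1, so large that no integer minor of this small Vandermonde matrix can vanish mod p, every bipartition gives the same normalized value 2 log_2 p:

    from itertools import combinations
    from math import log2

    def rank_mod_p(M, p):
        """Rank of a matrix (list of rows) over the prime field F_p, by Gaussian elimination."""
        M = [row[:] for row in M]
        rank = 0
        n_rows, n_cols = len(M), len(M[0]) if M else 0
        for c in range(n_cols):
            piv = next((r for r in range(rank, n_rows) if M[r][c] % p != 0), None)
            if piv is None:
                continue
            M[rank], M[piv] = M[piv], M[rank]
            inv = pow(M[rank][c], -1, p)          # modular inverse (Python 3.8+)
            M[rank] = [(v * inv) % p for v in M[rank]]
            for r in range(n_rows):
                if r != rank and M[r][c] % p != 0:
                    M[r] = [(a - M[r][c] * b) % p for a, b in zip(M[r], M[rank])]
            rank += 1
        return rank

    def normalized_phi(M, A, B, p):
        """Phi(A,B)/min(|A|,|B|) for the linear map f(y)=My over F_p, using
        EI(A->B) = rank(M[B,A]) * log2(p) and EI(B->A) = rank(M[A,B]) * log2(p)."""
        sub = lambda rows, cols: [[M[r][c] for c in cols] for r in rows]
        ei_ab = rank_mod_p(sub(B, A), p) * log2(p)
        ei_ba = rank_mod_p(sub(A, B), p) * log2(p)
        return (ei_ab + ei_ba) / min(len(A), len(B))

    n = 6
    p = 2**127 - 1                                # a prime vastly larger than any minor of V
    V = [[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)]   # (i,j) entry = i^(j-1)

    values = set()
    for r in range(1, n // 2 + 1):
        for A in combinations(range(n), r):
            B = tuple(i for i in range(n) if i not in A)
            values.add(round(normalized_phi(V, A, B, p), 6))
    print(values)          # a single value, 2*log2(p): every bipartition looks the same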

On the other hand, I can solve this problem, and make Φ well-defined, by an ironic little hack.  The hack is to replace the Vandermonde matrix V by an n×n matrix W, which consists of the first n/2 rows of the Vandermonde matrix each repeated twice (assume for simplicity that n is a multiple of 4).  As before, we let f(x)=Wx.  Then if we set A={1,…,n/2} and B={n/2+1,…,n}, we can achieve

EI(A→B) = EI(B→A) = (n/4) log_2 p,

Φ(A,B) = EI(A→B) + EI(B→A) = (n/2) log_2 p,

and hence

Φ(A,B) / min{|A|,|B|} = log_2 p.

In this case, I claim that the above is the unique bipartition that minimizes the normalized integrated information Φ(A,B) / min{|A|,|B|}, up to trivial reorderings of the rows.  To prove this claim: if |A|=|B|=n/2, then clearly we minimize Φ(A,B) by maximizing the number of repeated rows in A and the number of repeated rows in B, exactly as we did above.  Thus, assume |A|≤|B| (the case |B|≤|A| is analogous).  Then clearly

EI(B→A) ≥ (|A|/2) log_2 p,

while

EI(A→B) ≥ min{|A|, |B|/2} log_2 p.

So if we let |A|=cn and |B|=(1-c)n for some c∈(0,1/2], then

Φ(A,B) ≥ [c/2 + min{c, (1-c)/2}] n log_2 p,

and

Φ(A,B) / min{|A|,|B|} = Φ(A,B) / |A| ≥ [1/2 + min{1, 1/(2c) - 1/2}] log_2 p.

But the above lower bound is strictly greater than log_2 p whenever c<1/2, and equals log_2 p when c=1/2.  Hence the normalized integrated information is minimized essentially uniquely by setting A={1,…,n/2} and B={n/2+1,…,n}, and we get

Φ = Φ(A,B) = (n/2) log_2 p,

which is quite a large value (only a factor of 2 less than the trivial upper bound of n log_2 p).
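If you’d prefer to check that numerically rather than take the proof’s word for it, the same rank reduction works; the sketch below assumes the rank_mod_p and normalized_phi helpers from the earlier Vandermonde check are still in scope, and uses n=8 so the brute force over bipartitions is instant:

    # Assumes rank_mod_p and normalized_phi from the earlier Vandermonde sketch are defined.
    from itertools import combinations
    from math import log2

    n = 8                                          # any multiple of 4 works
    p = 2**127 - 1
    # W: the first n/2 Vandermonde rows, each repeated twice
    W = [[pow(i, j, p) for j in range(n)]
         for i in range(1, n // 2 + 1) for _ in (0, 1)]

    results = []                                   # (normalized Phi, unnormalized Phi)
    for r in range(1, n // 2 + 1):
        for A in combinations(range(n), r):
            B = tuple(i for i in range(n) if i not in A)
            norm = normalized_phi(W, A, B, p)
            results.append((norm, norm * min(len(A), len(B))))

    best = min(norm for norm, _ in results)
    winners = {round(phi_val, 3) for norm, phi_val in results if abs(norm - best) < 1e-9}
    print(best / log2(p))       # 1.0: the minimum normalized value is log2(p)
    print(winners)              # a single value, (n/2)*log2(p), shared by every minimizing bipartition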

Now, why did I call the switch from V to W an “ironic little hack”?  Because, in order to ensure a large value of Φ, I decreased—by a factor of 2, in fact—the amount of “information integration” that was intuitively happening in my system!  I did that in order to decrease the normalized value Φ(A,B) / min{|A|,|B|} for the particular bipartition (A,B) that I cared about, thereby ensuring that that (A,B) would be chosen over all the other bipartitions, thereby increasing the final, unnormalized value Φ(A,B) that Tononi’s prescription tells me to return.  I hope I’m not alone in fearing that this illustrates a disturbing non-robustness in the definition of Φ.

But let’s leave that issue aside; maybe it can be ameliorated by fiddling with the definition.  The broader point is this: I’ve shown that my system—the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ.  Indeed, this system’s Φ equals half of its entire information content.  So for example, if n were 10^14 or so—something that wouldn’t be hard to arrange with existing computers—then this system’s Φ would exceed any plausible upper bound on the integrated information content of the human brain.

And yet this Vandermonde system doesn’t even come close to doing anything that we’d want to call intelligent, let alone conscious!  When you apply the Vandermonde matrix to a vector, all you’re really doing is mapping the list of coefficients of a degree-(n-1) polynomial over F_p to the values of that polynomial at the n points 1,2,…,n.  Now, evaluating a polynomial on a set of points turns out to be an excellent way to achieve “integrated information,” with every subset of outputs as correlated with every subset of inputs as it could possibly be.  In fact, that’s precisely why polynomials are used so heavily in error-correcting codes, such as the Reed-Solomon code, employed (among many other places) in CDs and DVDs.  But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness.  It doesn’t even hint at such a thing.  All it tells us is that you can have integrated information without consciousness (or even intelligence)—just like you can have computation without consciousness, and unpredictability without consciousness, and electricity without consciousness.
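For readers who want the polynomial picture spelled out, here is a tiny illustration of my own (with evaluation points 1,…,n, matching the i^(j-1) entries above): applying V to a coefficient vector is literally the encoding step of a Reed-Solomon code.

    p, n = 2**127 - 1, 6

    def poly_eval(coeffs, t, p):
        """Horner evaluation of sum_j coeffs[j] * t^j over F_p."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * t + c) % p
        return acc

    x = [7, 1, 0, 3, 2, 5]                 # an arbitrary coefficient vector in F_p^n
    V = [[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)]
    Vx = [sum(V[i][j] * x[j] for j in range(n)) % p for i in range(n)]
    # Applying V is nothing but evaluating the degree-(n-1) polynomial at the points 1,...,n.
    assert Vx == [poly_eval(x, t, p) for t in range(1, n + 1)]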

It might be objected that, in defining my “Vandermonde system,” I was too abstract and mathematical.  I said that the system maps the input vector x to the output vector Wx, but I didn’t say anything about how it did so.  To perform a computation—even a computation as simple as a matrix-vector multiply—won’t we need a physical network of wires, logic gates, and so forth?  And in any realistic such network, won’t each logic gate be directly connected to at most a few other gates, rather than to billions of them?  And if we define the integrated information Φ, not directly in terms of the inputs and outputs of the function f(x)=Wx, but in terms of all the actual logic gates involved in computing f, isn’t it possible or even likely that Φ will go back down?

This is a good objection, but I don’t think it can rescue IIT.  For we can achieve the same qualitative effect that I illustrated with the Vandermonde matrix—the same “global information integration,” in which every large set of outputs depends heavily on every large set of inputs—even using much “sparser” computations, ones where each individual output depends on only a few of the inputs.  This is precisely the idea behind low-density parity check (LDPC) codes, which have had a major impact on coding theory over the past two decades.  Of course, one would need to muck around a bit to construct a physical system based on LDPC codes whose integrated information Φ was provably large, and for which there were no wildly-unbalanced bipartitions that achieved lower Φ(A,B)/min{|A|,|B|} values than the balanced bipartitions one cared about.  But I feel safe in asserting that this could be done, similarly to how I did it with the Vandermonde matrix.

More generally, we can achieve pretty good information integration by hooking together logic gates according to any bipartite expander graph: that is, any graph with n vertices on each side, such that every k vertices on the left side are connected to at least min{(1+ε)k,n} vertices on the right side, for some constant ε>0.  And it’s well-known how to create expander graphs whose degree (i.e., the number of edges incident to each vertex, or the number of wires coming out of each logic gate) is a constant, such as 3.  One can do so either by plunking down edges at random, or (less trivially) by explicit constructions from algebra or combinatorics.  And as indicated in the title of this post, I feel 100% confident in saying that the so-constructed expander graphs are not conscious!  The brain might be an expander, but not every expander is a brain.
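Here is a rough toy version of that claim, with random wiring standing in for a real LDPC or explicit expander construction (the code and its parameters are my own illustration, not a coding-theory design).  Each of n XOR gates reads just 3 randomly chosen inputs; for a GF(2)-linear map, the effective information across a cut is the rank (in bits) of the corresponding cross block, and it typically comes out as a large constant fraction of the n/2 maximum despite the constant fan-in.

    import random

    def rank_gf2(rows):
        """Rank over GF(2); each row is an int whose set bits mark its nonzero columns."""
        basis = {}                          # highest set bit -> reduced basis row
        for row in rows:
            while row:
                hb = row.bit_length() - 1
                if hb not in basis:
                    basis[hb] = row
                    break
                row ^= basis[hb]
        return len(basis)

    random.seed(0)
    n = 1000
    # Each of the n outputs is the XOR of 3 randomly chosen inputs: constant fan-in wiring.
    M = [sum(1 << i for i in random.sample(range(n), 3)) for _ in range(n)]

    for _ in range(3):                      # a few random balanced bipartitions
        A = set(random.sample(range(n), n // 2))
        B = [i for i in range(n) if i not in A]
        mask_A = sum(1 << i for i in A)
        mask_B = sum(1 << i for i in B)
        # For a GF(2)-linear map, EI(A->B) is the rank of the cross block M[B,A], in bits.
        ei_ab = rank_gf2([M[r] & mask_A for r in B])
        ei_ba = rank_gf2([M[r] & mask_B for r in sorted(A)])
        print(ei_ab, ei_ba, "out of a possible", n // 2)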

Before winding down this post, I can’t resist telling you that the concept of integrated information (though it wasn’t called that) played an interesting role in computational complexity in the 1970s.  As I understand the history, Leslie Valiant conjectured that Boolean functions f:{0,1}^n→{0,1}^n with a high degree of “information integration” (such as discrete analogues of the Fourier transform) might be good candidates for proving circuit lower bounds, which in turn might be baby steps toward P≠NP.  More strongly, Valiant conjectured that the property of information integration, all by itself, implied that such functions had to be at least somewhat computationally complex—i.e., that they couldn’t be computed by circuits of size O(n), or even that they required circuits of size Ω(n log n).  Alas, that hope was refuted by Valiant’s later discovery of linear-size superconcentrators.  Just as information integration doesn’t suffice for intelligence or consciousness, so Valiant learned that information integration doesn’t suffice for circuit lower bounds either.

As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it.  But our intuition is wrong.  If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.

I should mention that I had the privilege of briefly speaking with Giulio Tononi (as well as his collaborator, Christof Koch) this winter at an FQXi conference in Puerto Rico.  At that time, I challenged Tononi with a much cruder, handwavier version of some of the same points that I made above.  Tononi’s response, as best as I can reconstruct it, was that it’s wrong to approach IIT like a mathematician; instead one needs to start “from the inside,” with the phenomenology of consciousness, and only then try to build general theories that can be tested against counterexamples.  This response perplexed me: of course you can start from phenomenology, or from anything else you like, when constructing your theory of consciousness.  However, once your theory has been constructed, surely it’s then fair game for others to try to refute it with counterexamples?  And surely the theory should be judged, like anything else in science or philosophy, by how well it withstands such attacks?

But let me end on a positive note.  In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.  Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

[Endnote: See also this related post, by the philosopher Eric Schwitzgebel: Why Tononi Should Think That the United States Is Conscious.  While the discussion is much more informal, and the proposed counterexample more debatable, the basic objection to IIT is the same.]


Update (5/22): Here are a few clarifications of this post that might be helpful.

(1) The stuff about zombies and the Hard Problem was simply meant as motivation and background for what I called the “Pretty-Hard Problem of Consciousness”—the problem that I take IIT to be addressing.  You can disagree with the zombie stuff without it having any effect on my arguments about IIT.

(2) I wasn’t arguing in this post that dualism is true, or that consciousness is irreducibly mysterious, or that there could never be any convincing theory that told us how much consciousness was present in a physical system.  All I was arguing was that, at any rate, IIT is not such a theory.

(3) Yes, it’s true that my demonstration of IIT’s falsehood assumes—as an axiom, if you like—that while we might not know exactly what we mean by “consciousness,” at any rate we’re talking about something that humans have to a greater extent than DVD players.  If you reject that axiom, then I’d simply want to define a new word for a certain quality that non-anesthetized humans seem to have and that DVD players seem not to, and clarify that that other quality is the one I’m interested in.

(4) For my counterexample, the reason I chose the Vandermonde matrix is not merely that it’s invertible, but that all of its submatrices are full-rank.  This is the property that’s relevant for producing a large value of the integrated information Φ; by contrast, note that the identity matrix is invertible, but produces a system with Φ=0.  (As another note, if we work over a large enough field, then a random matrix will have this same property with high probability—but I wanted an explicit example, and while the Vandermonde is far from the only one, it’s one of the simplest.)
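(A tiny check of that last point, using my own toy encoding of the identity map: for any bipartition, both cross blocks of the identity matrix are all-zero, so both effective informations, and hence Φ, vanish.)

    n = 6
    I = [[int(i == j) for j in range(n)] for i in range(n)]     # the identity map
    A, B = [0, 1, 2], [3, 4, 5]                                 # one bipartition; any works
    cross = [I[r][c] for r in B for c in A] + [I[r][c] for r in A for c in B]
    print(all(v == 0 for v in cross))    # True: EI(A->B) = EI(B->A) = 0, hence Phi = 0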

(5) The n×n Vandermonde matrix only does what I want if we work over (say) a prime field F_p with p >> n elements.  Thus, it’s natural to wonder whether similar examples exist where the basic system variables are bits, rather than elements of F_p.  The answer is yes. One way to get such examples is using the low-density parity check codes that I mention in the post.  Another common way to get Boolean examples, and which is also used in practice in error-correcting codes, is to start with the Vandermonde matrix (a.k.a. the Reed-Solomon code), and then combine it with an additional component that encodes the elements of F_p as strings of bits in some way.  Of course, you then need to check that doing this doesn’t harm the properties of the original Vandermonde matrix that you cared about (e.g., the “information integration”) too much, which causes some additional complication.

(6) Finally, it might be objected that my counterexamples ignored the issue of dynamics and “feedback loops”: they all consisted of unidirectional processes, which map inputs to outputs and then halt.  However, this can be fixed by the simple expedient of iterating the process over and over!  I.e., first map x to Wx, then map Wx to W^2x, and so on.  The integrated information should then be the same as in the unidirectional case.


Update (5/24): See a very interesting comment by David Chalmers.

261 Responses to “Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)”

  1. Woett Says:

    Happy Birthday Scott! Glad to hear you’re finally coming to grips with the fact that you’re a mathematician 🙂

  2. Scott Says:

    Thanks, Woett! Yeah, there’s nothing like the suggestion that I “not think like a mathematician” to cause me to stand up and be counted as one. 😉

  3. James Gallagher Says:

    The fact that I’m also wishing you Happy Birthday! proves consciousness exists and can’t be analysed mathematically. 🙂

  4. gasarch Says:

    Valiant disproved his own approach and moved on; alas, if only IIT people could do the same. One key is that Valiant was willing to be proven wrong. Are the IIT people stuck in a Kuhnian paradigm?

  5. srp Says:

    I’ve seen a simulation by Bosse, Jonker, and Treur of Antonio Damasio’s theory of core consciousness, but since it hasn’t swept the field or made any news I’m guessing that people in the field weren’t convinced. From my point of view, that direction seems a priori more plausible than IIT.

  6. Mayra Montrose Says:

    Happy Birthday Scott! Glad to read between the lines that your trip to Puerto Rico was fun and productive! Best wishes, Mayra

  7. JimV Says:

    My own half-baked ideas on consciousness are only useful for answering dualists who claim that there is no possible “materialistic” explanation for consciousness, but I see some vague correspondences with this excellent post, so with apologies, here they are:

    1) What is the function of consciousness?

    I think consciousness is like the operating system of a computer (Windows, unix, etc.), mediating between external inputs and outputs and internal applications. That is, Windows does not know how to do spreadsheet calculations, but it can start Excel, feed external data to it, and show the results on a monitor screen. As far as Windows knows, the numbers it puts on the screen might have been produced by magic. Similarly, there are no nerves which monitor the brain, so ideas seem to come out of nowhere.

    This leaves a lot of details to be figured out, such as: if the brain is a sort of computer, what is its programming language? Is it process-oriented, like Fortran and Basic; object-oriented, like C++ and Java; list-oriented, like Lisp and Scheme; or something else? This could be a very difficult problem, like trying to decipher an alien computer, but in principle it is doable.

    2) Why does consciousness feel the way we experience it?

    I guess this is the “hard problem of consciousness”, but it seems not very important to me, no more than the hard problem of why a rose smells like a rose. By which I mean, although we know the chemicals which produce the smell of a rose and how they are processed by olfactory nerves, that doesn’t explain why a rose’s chemicals don’t produce the smell of an orange; switch their scents and the same processes would be involved. Things we experience have to have some distinctive feel/scent/etc. in order for us to experience them, and the way they do is the way such things feel/smell, etc., in this universe.

    So I don’t think there is an answer to 2), regardless of whether one is a naturalist or a dualist.

  8. ungrateful_person Says:

    Happy B’day Scott.

    #LameLegPullAttempt Send about $5 to all your blog readers so that they can treat themselves to an ice cream on this occasion.

    Enjoy the day!

  9. Sicco Naets Says:

    The argument that any theory of consciousness must inherently produce results that are in accordance with our common sense understanding of consciousness is absurd.
    How well does that sort of thing work in other fields of science? How “intuitive” is quantum mechanics? Chaos theory? Relativity theory?

    With all due respect, your whole argument is circular. You’re essentially saying: “I have a preconceived notion of what consciousness is, and in my mind it’s not reducible to a deterministic process. Any theory that doesn’t fall in line with my preconceived notion is inherently flawed.”

  10. Luke Muehlhauser Says:

    gasarch,

    Interesting. In what work did Valiant introduce his own approach to consciousness, and in what work did he disprove it? I’m not familiar.

  11. Pete Says:

    Did you ever have that coffee with Max Tegmark? Did it end in a mathematical structure isomorphic to coffee poured on your shirt? 🙂

  12. marc Says:

    happy birthday! http://qcplayground.withgoogle.com/

  13. kwan Says:

    i can’t follow the mathematics, but agree in general with the point that if there exist, or are easy to build, systems with large phi which we don’t intuitively call conscious, then phi becomes useless, or at least fluffy and malleable.

    unless it simply is true. maybe a computer built to specifically “integrate information”, even in a trivial way, can achieve a given (large) value of phi and turn conscious. maybe such a computer already exists and is silently cursing its creators.

  14. Blake Stacey Says:

    Happy birthday!

  15. Timothy Gowers Says:

    Very nice post. It seems to me that it is obvious in advance that the definition just can’t be right, as your counterexample confirms, because it doesn’t (as far as I can see) say anything about self-reflection. Roughly what I mean by that is that if you have a brain process that is automatic that can be interrupted by some higher-level brain process, then the higher-level process is in some sense reflecting on the lower-level one. And you can have several layers of this, and the more layers you have, the more conscious the system is. I’m not proposing a numerical measure here, of course. I just think that any good theory of consciousness should include something in it that looks like the picture I’ve just sketched, which the IIT theory doesn’t.

  16. domenico Says:

    Happy birthday to Scott!
    I think that a definition of consciousness must be universal, so that it can be understood by each person, whatever their knowledge: a simple vocabulary definition.
    A brain has consciousness, in that a single thought is a chemical reaction caused by a sensory input, or a reflection: there is a change of the inner state, with energy that carries information.
    The photodiode is a good example: if it is connected to a little circuit it changes its inner state, but a blind photodiode doesn’t have “consciousness,” whereas a man without sensory input does have consciousness.
    The chemical net that gives consciousness (a brain) can be reduced to ever smaller dimensions, and the chemical net still has consciousness (insects); the DNA of a bacterium has chemical reactions like a brain, in that there is a change of an inner state because of external stimuli, or inner stimuli.
    If I had to write a vocabulary definition of consciousness, I would write: a state of matter (a memory with multiple phase transitions) with an inner release of energy (a fire, a star, a smartphone, a virus, a single neuron); otherwise put, a net of interacting matter and energy is conscious.

  17. Michal Says:

    Happy Birthday, Scott! I think IIT is interesting. I like the idea that the brain is just a unique one-way hard disk, like a fingerprint, that human beings use to access and feed into a larger consciousness. Like without the senses at large punched in and online, feeding from the brain tissue into the consciousness, the brain tissue is useless and cannot be read offline.

  18. Mike Says:

    Happy birthday Scott.

  19. Scott Says:

    Sicco #9: I think you misunderstood this post. First of all, nowhere did I say that consciousness is not reducible to a deterministic process. But more important, it’s extremely common in science that when a new theory is proposed, with pretensions to generality (and to telling us even about exotic situations far removed from what we know), people first check that the theory gives sensible results for simple cases where they thought they already knew the answers. If it doesn’t, then the burden is squarely on the theory’s advocates to explain why that’s OK!

    So, to take your examples: when quantum mechanics came along, one of the first things physicists checked was that it reduced to familiar classical physics in the limit when ℏ was negligible. Likewise with relativity, which reduces to Newtonian physics when all speeds are tiny compared to that of light. And chaos theory had better not have predicted that chaos is so pervasive that it applies even to (say) the motion of Halley’s comet over the timescale of human civilization! Because we know the latter is predictable.

    Closer to our topic, suppose a philosopher announced a new theory of morality with some counterintuitive results: the theory said that murdering people for thrills was OK (or even a moral obligation), whereas helping old ladies across the street was the ultimate evil. Now, would you be inclined to take seriously what this theory said for more complicated, ambiguous moral dilemmas, like the trolley problems?

    If you wouldn’t—well then, that’s all I’m saying about theories of consciousness. Suppose your theory of consciousness predicts that a bag of Cheetos is conscious, whereas you and I are not. Then either you need to convince me that my pre-theoretic intuitions about what “consciousness” even means and what sorts of systems we should ascribe it to were so badly broken that I should trust your theory over those intuitions, or else I don’t particularly care what your theory says about the “harder” cases, like the consciousness of a fetus, a frog, or an AI.

    And crucially, if we’re not allowed to apply the above sort of sanity check—the “Cheetos test,” if you will—to proposed theories of consciousness, then what sort of sanity check are we allowed to apply?

  20. me Says:

    Thanks for the post. I always wondered what a professional computer scientist has to say about the theory since calculating Phi seems to be a CS problem. Now one of the best ones thought about it and immediately found a counterexample. I read interviews of Giulio Tononi in which he mentioned the computational difficulty of calculating Phi, maybe it would have been wise of him to seek the dialog with theoretical computer scientists at that point. The counterexample seems absolutely valid, saying the (mathematically formulated) theory shouldn’t be approached by mathematics sounds like saying the theory shouldn’t be tested by logical reasoning.

  21. Scott Says:

    Luke #10: Leslie Valiant, as it happens, is extremely interested in computational models for the mind, but in the case gasarch and I were talking about, back in the 1970s, he was only thinking about a particular approach to proving circuit lower bounds. For more details, see the fourth-to-last paragraph in the OP.

  22. JS Says:

    Hi, isn’t Dan Dennett’s Fable of Two Blackboxes a counterexample to your claim you cannot simultaneously disagree with “hard problem of consciousness” and not have everything explained by reduction to neuroscience? In particular, I think what he calls “intentional stance” addresses it.

    His position is interesting, IMHO. He thinks there is no “hard problem”, it’s just a bunch of magic tricks. However, the explanation (and understanding) of these requires a high-level description of the emergent processes, and not just understanding what each of the individual neurons do.

    Maybe I read you incorrectly. I particularly like Dennett because he looks at empirical data about consciousness, doesn’t just blindly theorize about it.

  23. Scott Says:

    JS #22: Dennett is the example par excellence of someone who thinks that there’s no “hard problem,” and that problems of that sort can only be “dissolved” (i.e., made to go away, explained to be non-problems) rather than solved. And yes, that’s a perfectly consistent position, and I think the fact that it’s consistent is consistent with what I said. I only said that you can’t simultaneously claim that there’s no hard problem, and that your favorite theory solves the problem.

  24. Scott Says:

    Pete #11:

      Did you ever have that coffee with Max Tegmark? Did it end in a mathematical structure isomorphic to coffee poured on your shirt?

    There are people like Lubos, who I often disagree with intellectually and in addition find to be spiteful human beings. Then there’s a much larger number of people who I disagree with intellectually on this or that issue, but have nothing against as people: like they say, “it’s just business.” Max is someone who I disagree with intellectually on several issues but actively like as a person. We had a very pleasant meeting (though not, as it happens, over coffee, so there was none to spill).

  25. James Cross Says:

    Scott,

    Thanks for writing this. I need to reread this but some brief comments here.

    I wrote on Tegmark’s paper on my own blog:

    http://broadspeculations.com/2014/02/17/consciousness-state-of-matter/

    I come at this from a completely different perspective – somewhat more of an evolutionary-biological perspective.

    Tegmark and Tononi both seem to approach the problem of consciousness in an abstract manner disconnected from living matter, which is the only material we can be reasonably confident is (or might be) capable of consciousness. I argue that living matter itself possesses integrated information, and that the difference between living matter that possesses little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate in near real time. In this view consciousness is a potential property of living material. Integrated information might be necessary but is not sufficient to produce consciousness.

  26. jonas Says:

    Hi, Scott!

    I don’t understand your argument with the expander graph. It seems too general to prove anything. I think that if you have a random expander graph of size Omega(n^(1+epsilon)), then you can find any bounded degree graph of size n on this expander graph as a topological subgraph, which means you can program the expander graph to compute any logical circuit of size n if you just put the right logical gates on its nodes. Thus, your expander graph could represent some useless random function, your Vandermonde transformation function from the previous example, or a Turing-complete computer.

    As everything depends on what logical gates you choose to put on the expander graph, this doesn’t seem to prove anything new that you couldn’t previously prove. Worse still, an expander graph is something that you might not be able to realize physically as wires, because there’s simply not enough space for all the edges. As such, it’s more difficult to imagine an expander graph as a real physical system than to just imagine a circuit that computes the Vandermonde transformation.

  27. Scott Says:

    Jonas #26: Sorry, I meant that we could achieve pretty-good information integration using a bipartite expander graph together with some fixed, simple choice of logic gates at the nodes: for example, XOR gates. Indeed, that’s exactly what’s done with LDPC codes.

    And you’re absolutely right that expander graphs can be difficult to realize physically, because of the problem of there not being enough space for all the edges. In fact, the architecture of the human brain faces precisely that problem: there’s a reason why the actual cognition goes on in the “gray matter” near the surface, and the “white matter” in the middle is basically just for connecting different parts of the gray matter to each other! If you’ve ever seen the inside of a Cray supercomputer, it has the same design strategy: processing units on the outside, enormous tangle of wires in the middle for relaying signals between the processors.

    But the above illustrates a crucial point for this discussion: namely, whatever geometric difficulties there are in physically realizing expander graphs, those difficulties apply to the brain just as much as to artificial systems! And hence, if those difficulties impose some effective upper bound on Φ in the one case, then they do the same in the other.

  28. Rahul Says:

    I loved the concluding part:

    Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

    I get the feeling that more and more of frontier science (especially in physics) is falling in the non-falsifiable or practically non-falsifiable categories. And that is annoying.

  29. Scott Says:

    Rahul #28: I understand the feeling, but submit that it’s at least partly due to selection bias. There are plenty of areas of science, and subareas within each science, where people understand exactly what they’re talking about and make definite progress at a rapid rate. But, for precisely that reason, those areas are not the most fun ones to argue about on blogs. 🙂

  30. John Kubie Says:

    Great post.
    The arguments against IIT are piling up. I really like the concept of the “pretty-hard problem of consciousness”. Given these objections, the one thing that IIT might be OK at is saying how much consciousness a human brain has in different states. We could call that the “limited pretty-hard problem”. In terms of real-world tests, this seems to be what Tononi is doing.

  31. John Kubie Says:

    In response to Sicco #9.

    IIT relies heavily on intuition. One of the things I like about IIT is that it takes “intuitive”, subjective properties of consciousness as “postulates”. Tononi and Koch accuse other computational theories of ignoring “facts” of subjective experience and working solely from the ground up. (In addition, IIT has “axioms” that are physics.)

    One objection I have to IIT is its list of postulates. It’s very heavy on the perceptual side, with nothing on the output end. It also does not account for a self-world boundary (nor, as I see it, does IIT).

    In addition to postulates and axioms, Tononi describes a critical “identity”:

    brain states map to conscious states (specific quale)

    This is clearly materialist. It also solves the hard problem by assuming it.

    Glad to see a computer scientist quash this. Tononi says that computers don’t have consciousness because they are feed-forward systems and have no feedback. Seems to me, a non-computer scientist, preposterous and simplistic.

  32. Sandro Says:

    Namely, no matter what the third-person facts were, one could always imagine a universe consistent with those facts in which no one “really” experienced anything.

    Accepting the possibility of p-zombies necessarily implies dualism. Any dualist theory of this sort is necessarily less parsimonious than a monist theory. Ergo, I should never prefer the dualist theory.

    Thus, irrespective of p-zombies’ metaphysical emptiness, they are simply bad epistemology. Although I happen to also agree with Dennett that p-zombies are logically incoherent.

    When you apply the Vandermonde matrix to a vector, all you’re really doing is mapping the list of coefficients of a degree-(n-1) polynomial over Fp, to the values of the polynomial on the n points 0,1,…,n-1. […] But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness. It doesn’t even hint at such a thing.

    Stated as definitive conclusions, this argument is begging the question. You don’t know what consciousness is or how it works, but somehow conclude that the above process is not consciousness.

    Your point about the large value of phi of this mathematical model as compared to the brain is a much stronger point against IIT. By which I mean that we can gauge the plausibility of IIT given the relative phi of the brain and Vandermonde matrices, but we can’t definitively conclude that high-phi “doesn’t hint at [consciousness]”, and that active DVD players are not conscious to some extent.

    I’m so tempted to call this Scott’s hi-phi stereo argument… 😉

  33. Sandro Says:

    Oh, I also disagree with your connecting p-zombies and the hard problem of consciousness. You can still accept and attempt to address the hard problem of consciousness seriously without accepting the validity of p-zombies.

  34. Jay Says:

    Scott #0, on Hard problems.

    Would you agree with the following proposition?

    “Physics might provide some interesting theories, but it will never solve the Hard problem of Solipsism”

    If no, what is the difference? To me that’s an example of a “Hard problem” where physics had it both ways, in the sense that we never defeated solipsism on logical grounds, but by showing that one hypothesis (there’s a universe beyond our consciousness) leads to far more interesting stuff than the opposite hypothesis.

    In other words, don’t you think that the day we will solve the “pretty hard problem of consciousness”, almost no one will still care about the “Hard one with a big H”?

    Sicco Naets #9,

    There’s a more charitable reading, especially as Scott said “common sense” not common sense. Suppose one theory says that persons in a persistent vegetative state are not conscious, and some day later one shows that using some new neuroimaging technology we can communicate with these patients. Then this theory will go against “common sense” if not common sense.

    Scott #0, on ITT.

    IIT makes three main assertions:
    1) consciousness is a normal physical process, e.g. it respects the physical version of the Church-Turing thesis
    2) consciousness is an intrinsic property, e.g. we can calculate its amount without taking the outside into account

    (srp #5 mentioned one well-known alternative theory that challenges this assertion, as do your own writings about free will)

    3) Phi is the specific computation measuring the amount of consciousness

    To me you’ve just shown that 3) is wrong, and that’s great thank you. However I doubt this will convince Koch, for example, because most persons interested in IIT are interested because of 1) and 2) rather than the particular algorithm 3). So, maybe the interesting question now is: can you generalize your proof to any computation that assumes point 2)?

    Scott #0, happy birthday!

  35. Scott Says:

    Jay #34: Yes, like I said, you always have the option of dismissing the “hard problem” as meaningless, or at least as scientifically sterile (like most of us do with the solipsism problem). And yes, if the “pretty-hard problem” were someday solved, it seems likely that many of the people who thought they cared about the hard hard problem would actually find themselves totally satisfied (though I imagine that other people would continue arguing about the hard problem 🙂 ).

    Now, regarding IIT: yes, of course I confined myself to arguing that Tononi’s Φ—i.e., the thing he named his book after—doesn’t work as a consciousness-meter, nor does anything like it (i.e., anything that sets out to measure “integrated information”). And of course I can’t rule out the possibility that some other intrinsic consciousness-meter will someday be discovered that works, thereby resolving the Pretty-Hard Problem.

    Having said that, your “defense” reminds me a bit of the Freudians who reply to their critics by saying: “yes, OK, but you’ve merely shown that all the specific things Freud and his thousands of followers talked about for their entire careers—Oedipal complexes, transference, repressed memories, castration anxiety, reaction formation—were loads of made-up hooey. And that’s great thank you. However, most Freudians didn’t become Freudians because of any of the specifics, but simply because they wanted to understand the mind. And you still haven’t refuted Freud’s central contention, which was that there’s some way of understanding the mind.”

    What are the critics to do in response, except smile and agree? 😉

  36. Scott Says:

    Sandro #32: Fine, so let’s all just agree that we’re discussing the Pretty-Hard Problem and not the Hard Hard Problem. Why we’re not discussing the Hard Hard Problem is a subsidiary question that need not concern us right now.

    I’m curious: in your view, if a theory of consciousness can’t be definitively rejected for predicting that your DVD player is more conscious than you are, then on what grounds can such a theory be rejected? Are there any grounds, short of logical inconsistency?

  37. Felix Lanzalaco Says:

    I would like to echo the previous points here that we should stick only to constructing mathematical arguments that we can conceive of as physical systems. So as Scott shows if we use a matrix computation in isolation we achieve the process of information integration and we can also conceive of the physical system.

    The refutation itself does not integrate all the parts of the theory such as its evolution over time. i.e. Phi autoregressive describes how the current version of a system predicts its previous state, in such a way that it improves the predictions of the operation over the partitioned states in isolation. The Cray computer has some crude analogy to brain topology, but is low in Phi by means of the fact its connection to CPU ratio is lower than the brain, but if it were topologically similar to a brain, say like the Spinnaker system.. why would it not be conscious ? Regression of the integration mechanism would have to improve outcomes over its previous physical state. i.e. Be able to compare on the difference between current state and what would occur if it were to lower its Phi and become more partitioned. Is there a physically conceivable integration example for this we could say is not conscious ?

    Phi could have a thermodynamic definition in line with simulated annealing, or a clearer topological description, neither of which would be out of line with its current form, so it’s a framework in progress. In summary, I think Phi provides a starting framework. I don’t see it completely disproved by taking particular aspects, such as integration, in isolation, but of course it’s good to see it tested like this, as the process has to be fruitful whatever happens.

  38. Dan Fitch Says:

    Scott, thank you for writing this.

    I don’t have much to contribute other than to say I’m glad these discussions are happening across all these disciplines, because the Pretty Hard Problem is pretty damned interesting to discuss and research.

    Keep digging, everybody!

  39. Gary Williams Says:

    Hi Scott,

    Thanks for this post. I agree with the spirit of your criticism.

    However, I would be curious what you think of the following paper, which claims to have indirectly solved the practical computation problem of measuring phi in a real brain.

    King et al (2013) “Information Sharing in the Brain Indexes Consciousness in Noncommunicative Patients” Current Biology

    Abstract: “Neuronal theories of conscious access tentatively relate conscious perception to the integration and global broadcasting of information across distant cortical and thalamic areas. Experiments contrasting visible and invisible stimuli support this view and suggest that global neuronal communication may be detectable using scalp electroencephalography (EEG). However, whether global information sharing across brain areas also provides a specific signature of conscious state in awake but noncommunicating patients remains an active topic of research. We designed a novel measure termed “weighted symbolic mutual information” (wSMI) and applied it to 181 high-density EEG recordings of awake patients recovering from coma and diagnosed in various states of consciousness. The results demonstrate that this measure of information sharing systematically increases with consciousness state, particularly across distant sites. This effect sharply distinguishes patients in vegetative state (VS), minimally conscious state (MCS), and conscious state (CS) and is observed regardless of etiology and delay since insult. The present findings support distributed theories of conscious processing and open up the possibility of an automatic detection of conscious states, which may be particularly important for the diagnosis of awake but noncommunicating patients”

    I mention this study because it satisfies your “common sense” criterion for theoretical validation: the algorithm satisfies the “gut-check” that vegetative patients are less conscious than MCS patients, who are less conscious than normal people, etc.
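
    (For concreteness, here is a rough sketch of the kind of symbolic mutual-information computation the paper describes. It is a simplified, unweighted version; the actual wSMI adds a weighting over symbol pairs that discounts trivial couplings, which is omitted here, and the function names are just illustrative.)

      import numpy as np
      from collections import Counter

      def ordinal_symbols(x, k=3, tau=1):
          # Map a signal to a sequence of ordinal-pattern "symbols" of length k.
          n = len(x) - (k - 1) * tau
          return [tuple(np.argsort(x[i:i + k * tau:tau])) for i in range(n)]

      def symbolic_mutual_information(x, y, k=3, tau=1):
          # Empirical mutual information (in bits) between the symbol sequences of x and y.
          sx, sy = ordinal_symbols(x, k, tau), ordinal_symbols(y, k, tau)
          joint, px, py, n = Counter(zip(sx, sy)), Counter(sx), Counter(sy), len(sx)
          return sum(c / n * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
                     for (a, b), c in joint.items())

      rng = np.random.default_rng(0)
      shared = rng.standard_normal(2000)
      x = shared + 0.5 * rng.standard_normal(2000)  # two "channels" driven by a common source
      y = shared + 0.5 * rng.standard_normal(2000)
      print(symbolic_mutual_information(x, y))                          # well above 0
      print(symbolic_mutual_information(x, rng.standard_normal(2000)))  # near 0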

    Also, I really like the “Pretty hard problem of consciousness”.

    I believe, though, that Ned Block came up with a similar problem he called the “Harder Problem of consciousness”, which he described as the epistemic analog of the normal Hard Problem.

    http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/HarderProblem.pdf

    Also, with regard to people complaining about the use of “common sense” as a check-point on our theories, the obvious analogy is with “fixed points” in thermometry. In his book Inventing Temperature Hasok Chang shows the importance of “fixed points” for the measurement of temperature. At the earliest stages, the fixed points are common sense – if a thermometer says that ice is hotter than fire then we know by common sense the thermometer must be ill-calibrated or made of poor thermometric materials. Eventually scientists landed on boiling water as a fixed point – but it’s really interesting how long it took for this to be accepted and there are also deep epistemic circularities involved that Chang thinks cannot be directly solved.

    It turns out the direct measurement of anything is wildly difficult, even so mundane a thing as the temperature of boiling water. But it raises a problem Chang calls the problem of nomic measurement:

    1. We want to measure unknown quantity X.

    2. Quantity X is not directly observable, so we infer it from another quantity Y, which is directly observable.

    3. For this inference we need a law that expresses X as a function of Y, as follows: X = f(Y).

    4. The form of this function f cannot be discovered or tested empirically because that would involve knowing the values of both Y and X, and X is the unknown variable that we are trying to measure.

    Chang formulated the problem of nomic measurement for temperature but it also straightforwardly applies to attempts to measure consciousness.

  40. Nicolas McGinnis Says:

    Sandro, #32, you say:

    “You don’t know what consciousness is or how it works, but somehow conclude that the above process [a DVD player] is not consciousness.”

    Suppose IIT were correct, and that a high Φ value was sufficient and necessary for ‘consciousness’: then DVD players and human minds have something in common, a new predicate ‘Φ-consciousness.’ This challenges our ‘pre-theoretic’ conception of what consciousness is, but so what about our intuitions? All kinds of theories are counter-intuitive.

    Still, some scientists remain perversely interested in the differences between DVD players and people. In particular, it seems that people (and many animals) have certain properties that DVD players lack: call this set of properties {ψ}, which may include, for instance, certain kinds of creative abilities or intentional states. (You can be as behaviorist as you like defining the set {ψ}.)

    Now, it appears that the set of ψ-properties is independent of Φ-consciousness in the following sense: even if IIT is true, while the conditional (1) is true, (2) is not:

    (1) If IIT is true, then [ψ-properties → Φ-consciousness].

    (2) If IIT is true, then [Φ-consciousness → ψ-properties].

    Which is what Aaronson is saying: (high values of) Φ-consciousness might be a necessary condition for ψ-properties, but it is clearly not sufficient. The problem is that we’re not interested in Φ by itself; we’re after an explanation of ψ.

    So even if you are right, and both people and DVDs have Φ-consciousness, we’re only going to need a new concept (‘person-consciousness’) to explore the peculiar properties people (and animals, etc.) have. The ones we’re interested in. The ones DVDs lack.

    (It’s like trying to explain fire by showing us that oxygen is always present during a fire, then theoretically reducing fire to the presence of oxygen. Someone points out that H2O has O in it; so water is part ‘fire.’ This seems counter-intuitive. But, aha, you claim that water *must* be fire since water has high oxygen values. Sure, our intuitions tell us no, but we have a great theory of fire, finally!)

    You *could* object that, for all we know, DVD players have {ψ}. This would amount to a kind of global skepticism about explanation: for all we know, any phenomenon has ‘hidden’ properties that make our favourite theories come true (see: Quine-Duhem thesis).

  41. Bob Hearn Says:

    Excellent post — thank you.

    I have exactly the same complaint of trying to “have their brain and eat it too” about Dehaene’s otherwise excellent book “Consciousness and the Brain”. (In fact I was going to use that phrase in an Amazon review I haven’t yet written.)

    More generally, I am put off by most “consciousness” research, because I’m unconvinced the term means anything specific, or is even worth investigating as such. 99% of things written about consciousness can be immediately dismissed because they fail to define what they are even talking about, assuming that our intuitive notion of “consciousness” (1) actually means something, and (2) is consistent from person to person. In my view the word is too overloaded with fundamentally different meanings, many of them inherently vague, to be useful as a field of study.

    Dehaene’s book pretends to address this problem, but doesn’t really. The actual neuroscience research discussed in the book, relating to the things most of us would associate with the term “consciousness”, is extremely fascinating and well presented, and will give anyone who is not very current on the literature food for thought. (I say this as someone who has done postdoctoral work in neuroscience-inspired AI, but been out of the field for a few years.) This is the type of consciousness research that is interesting: what does the brain actually *do*, in various scenarios in which we think we are (or are not) conscious? But the “consciousness research” label really only serves, in my opinion, to make the research sound sexier, and get more grants. (Somewhat like “quantum teleportation” in the quantum-information theory world — which has nothing to do with intuitive notions of teleportation.)

    For the above reasons, the one place I might disagree with you is that your “Pretty-Hard Problem of Consciousness” is “one of the deepest, most fascinating problems in all of science”. Again, because it seems ill-defined to me. Our intuitive notions of what should and should not be viewed as conscious likely relate to somewhat arbitrary aspects of our world view, which exist for the sole reason that they are useful ways for us to slice up the world, and not because there actually is such a thing as “consciousness” that our notions actually refer to. Or, if there is a reasonable, concrete thing that can be meant by “consciousness” (as Dehaene tries to argue), then there is really nothing special about it compared to any other question about how the brain accomplishes task X, Y, or Z.

  42. Sasho Nikolov Says:

    At the Simons Institute, Les Valiant talked about a model of the brain, and, as far as I recall, expansion played an important role. (Here is a video: http://simons.berkeley.edu/talks/leslie-valiant-2013-05-29.)

    It sounds to me like IIT is just taking one property of the human brain and making a huge leap by declaring it a defining property. To give an imperfect analogy, we have been studying graph-theoretic properties of social networks and we have found many interesting things. However, we do not go off and claim that, say, a high transitivity ratio defines a “society”, do we?

    So then wouldn’t the IIT line of research be a lot more fruitful if the IIT people claimed a much more modest (and sane) goal of studying one aspect of consciousness? I am sure there are interesting and hard questions about how and why tight integration arises in the brains of complex organisms, and these questions can be studied without claiming integrated information to be a characterization of consciousness. You know, baby steps before grand theories.

  43. Scott Says:

    Nicolas #40: That’s very well-said, thanks!

    In particular, I think you’ve provided the key to resolving my disagreement (if it is one) with Bob Hearn #41. Namely, if one doesn’t like the term “consciousness” (because it’s too overloaded, vague, etc.), then one should simply drop it from the discussion, and say instead:

    “I’m interested in a principled theory of whatever the quality is that most humans appear to have, at least judging from their behavior; that goes away under anesthesia; that dolphins, cats, and frogs also appear to have, though to progressively lesser degrees; that develops gradually from infancy; and that electrons, bananas, and DVD players appear not to have, or to have only negligible amounts of. I’d like to know, for example, whether a suitably-programmed computer would also have this quality, or whether it would have the quality if I programmed it in certain ways or built it out of certain materials, but not otherwise.”

  44. Scott Says:

    Sasho #42:

      So then wouldn’t the IIT line of research be a lot more fruitful if the IIT people claimed a much more modest (and sane) goal of studying one aspect of consciousness? I am sure there are interesting and hard questions about how and why tight integration arises in the brains of complex organisms, and these questions can be studied without claiming integrated information to be a characterization of consciousness. You know, baby steps before grand theories.

    Completely agree; couldn’t have said it better!

  45. Anon Says:

    Does it make sense to calculate Φ for entangled qubits? Or quantum states?

  46. Scott Says:

    Anon #45: I’m sure you could define a quantum generalization of Φ, as you can define a quantum generalization of just about anything. But I’ll leave that as a challenge for others. 😉

  47. Anon Says:

    Scott #43: Your paragraph raises a question that has bothered me about this topic:

    In what sense is “the thing that goes away under anesthesia” ALSO “the thing found progressively less and less in animals”?

    Having seen both animals and humans anesthetized I can’t think of any good way to reconcile these two notions. Less anesthetic needed -> less consciousness? Seems absurd.

  48. Anon Says:

    You don’t want to be known as the inventor of QIIT?

  49. Koray Says:

    I’m in total agreement with Sicco #9. There’s no reason to suggest that consciousness is a thing, even a measurable quantity, nor is there a reason to think that we “intuitively” know even a rough “strict” ordering. We don’t even know what the word actually stands for.

    This is similar to an ancient Egyptian suggesting that “capacitance” is a real phenomenon and intuitively clay pots have more of it than cows, then slapping some equations together to justify his intuition.

  50. Stiz Says:

    Hi Scott,
    I generally agree with you that IIT is something like a “good guess, but wrong” situation, but I think so for reasons drastically different from the logic you presented here, which I must take issue with. You seem quite comfortable asserting that the premise for this argument holds because “no sane person” would deem these complex systems conscious. I can’t, though, see why you would be so sure such systems aren’t conscious. Indeed I can’t envision any reason to have ANY idea whatsoever which entities might have consciousness and which might not.

    Before you drop me in the “not a sane person” bucket, let me try to explain 🙂 As far as I can tell, you seem to be, in part, conflating the presence of consciousness with the presence of an animate physical system. I can understand why one would jump to such a conclusion, given that we are clearly familiar with entities contained in the overlap, but I see no physical or philosophical reason to presume that conscious things are a subset of animate things. In each of the systems you describe, we could study the physical structure of the system down to the atomic level, and it’d be perfectly clear from such an analysis that these systems are completely incapable of changing information other than in their respective designed ways.

    So what exactly is it you would expect from these systems as indications of their consciousness? To me, the assumption that if they aren’t changing their environment or trying to communicate, they must not be experiencing it seems no different from making the same assumption about a hypothetical blind, deaf, and mute quadriplegic. We don’t give them the ability to do so, so they just don’t have it. And unless one believes in “The Secret”-type quantum voodoo, they can’t evolve to develop the ability either.

    I’m quite good at missing the obvious, so if that’s happening here, please feel free to point it out directly!

  51. Scott Says:

    Koray #49 and Stiz #50: See clarification (3) and comment #19 (as well as Nicolas McGinnis’s comment #40). One more time:

    If you tell me you’re fine with a “theory of consciousness” that predicts that my Vandermonde contraption would be billions of times more conscious than you are, then I’ll tell you to help yourself to that theory! 🙂 But I’ll also doubt that you’re using the word “consciousness” to mean anything resembling what I mean. While I admittedly don’t know what “consciousness” really is, I take it to be something for which the external evidence typically involves intelligent behavior, of a sort that humans have, dolphins have to a lesser degree, and the Vandermonde matrix does not have.

    So, if you want me to pick a different word for this other quality that interests me—say, “blorfness”—while you take “consciousness” for the thing the Vandermonde matrix maximizes, that’s fine with me. But crucially, I suspect Tononi and the other IIT researchers would then agree that they, too, were interested all along in “blorfness,” and not in this other quality that you’ve insisted on calling “consciousness”! For otherwise, why would they have spent so much time talking about the human brain, whose Φ-value is mediocre at best? Why haven’t they concentrated their intellectual energies on systems with better Φ-values, like Vandermonde matrices?

  52. Kenny Tilton Says:

    Interesting post. “This approach has nothing to offer but let’s shoot it down anyway.” Allow me to be as careless with my time: IIT has nothing to offer on the Hard problem, but let’s see if we can save it anyway.

    To a degree I am with Tononi (a pure maths conjuring up of mega phi is not a challenge to phi arising in a natural system) but in the end you are right: you provided a counterexample. IIT needs to fold up its tent or tighten up the definition of phi to exclude DVD players.

    Now I had to cheat off my dorm roommate to squeak through Calc III, so I will leave it to the smart guys, but is there a mathematical way to differentiate between massive interdependence arising from cleverly selected matrix operations (see what I mean about my math aptitude?) and interdependence/integration that arises over time in a system B (endowed initially only with local integration) when B is exposed to an environment W, such that B becomes able to operate with increasing effectiveness (defined, say, as the ability to survive long enough in W to spin off other Bs)?

    Is that the first derivative I am after? This growth in phi is crucial, because I think Tononi can rightly object that your counterexample mechanically generates mega but uninteresting phi; the subject, after all, is still consciousness, and what we are after is the interesting kind of phi that lets us play ping pong while drinking beer. The DVD player is simply built with all this phi. It defeats the current definition of IIT, but that just means we need to mathematically limit counterexamples to those in which phi arises unguided as an emergent property, given merely local integration, and crucially produces a more effective system requiring integration that is supported by but otherwise unrelated to the local integration (such that one could aspire to building an equivalent system using, say, integrated circuits instead of nerve cells).

    Success with that maths would align nicely with the common-sense view of consciousness. It would describe a B that starts as local connections fed by data from W and becomes B’ with higher-order connections, with phi measured at the higher order.

    Would that save IIT before we brush it aside?

  53. Scott Says:

    Oh, and Koray #49:

      This is similar to an ancient Egyptian suggesting that “capacitance” is a real phenomenon and intuitively clay pots have more of it than cows, then slapping some equations together to justify his intuition.

    Your example strikes me as a good one, for the exact opposite of the point you were trying to make! If your ancient Egyptian had pursued his “capacitance” idea further—e.g., by trying to create a quantitative theory of the water-carrying capacities of different sizes and shapes of clay pots, a theory that could tell him which pots had more “capacitance” than which others—and if he’d also been Sicilian rather than Egyptian, he would’ve been Archimedes.

  54. Sicco Naets Says:

    Scott: My major point of dissent with what you wrote is that you accord a level of merit to intuitive, layman’s interpretations of consciousness that those interpretations simply don’t deserve.

    Using the analogy of physics, you equate our intuitive understanding to Newtonian physics and a more comprehensive theory of consciousness with quantum mechanics. But that is an unfair analogy: Newtonian physics is “pretty accurate” for a variety of applications; for instance, if you’re an architect, Newtonian physics is sufficient for constructing a house that doesn’t fall down. You don’t need quantum mechanics until you start trying to explain certain behavioral characteristics of photons, for instance. So in a nutshell – Newtonian physics is “good enough” for a lot of technical problems, and you don’t need quantum mechanics until you deal with some fringe problems. Newtonian physics also forms an internally consistent system that makes consistent predictions and can be mathematically modeled (even if those predictions don’t always line up with reality).

    Now looking at consciousness – we don’t have anything that is even close to a Newtonian definition of consciousness. A layman’s intuitive “sense” about consciousness isn’t a model; instead, what we have is a pre-scientific understanding of consciousness, akin to when people thought the earth was flat and you would fall off if you travelled too far towards the edge. The basic sense that you just “know” that a human is conscious and a rock is not isn’t a model, it’s not a theory, you can’t build an algorithm with it; you can’t even test it. It’s just that: an intuition.

    And you don’t need a more workable theory to show that everyone’s intuitive understanding of consciousness is seriously flawed – simple observations of specific neurological phenomena show our intuitions about consciousness are wrong. Examples: split-brain conditions, the way your brain masks the blind spot in your vision, the bunny-hop experiment showing consciousness lags intent; or, more generally, any of the books by Oliver Sacks will show you that a layman’s “understanding” (and that’s being generous) completely fails when presented with actual test cases.

    I don’t fundamentally have a problem with the main drive of your argument against IIT; I agree that it merely measures the amount of “inter-connectedness” of a given information system and is at best a correlate of consciousness, rather than offering a profound explanation for the nature of consciousness. But I think the theory isn’t completely useless, and I’ll try to explain why.

    I personally think that the person who has come closest to defining what consciousness is, is Douglas Hofstadter. Hofstadter (and I’m not going to do this justice, but I’ll take my best stab at it) suggests that a core characteristic of consciousness is the ability of the system to reflect on its own reflections. That is, there’s a certain type of feedback loop, where at least parts of the system are involved in creating representational states of its own internal workings. But one key part of consciousness also seems to be that it has a certain amount of… depth… to the experience. Part of the richness of consciousness lies in the fact that I can throw out the word “boat”, and it conjures a myriad of images in both of our minds: sails, water, wind, sunshine, waves, etc. Concepts you can choose to expand on in your mind (essentially following the connections, which lead to further connections, and so on); or you can leave these concepts unexplored (i.e. “John, are you coming by car or boat?”).

    And IIT seems to touch on this aspect: it posits that a key component of consciousness is the “richness” of the experience, which it defines as the number of connections between information states. In that sense, I think Hofstadter’s notion of a recursive feedback loop and IIT’s notion of high amounts of interconnectivity together form a pretty good starting point for exploring WHAT consciousness is – to approximate what the human brain is capable of, you need both: you need a system that can reflect on its own ponderings AND that has high amounts of semantic data to give these ponderings … meaning.

  55. Maze Hatter Says:

    You say:

    “I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be.”

    That is like saying:

    “X is something, but nothing is X”.

    That’s a contradiction.

  56. Scott Says:

    Kenny #52: My guess is that it would be unbelievably difficult to “patch” the definition of Φ in any way that got rid of all counterexamples like mine. It’s true that, depending on the exact definition, you’ll need to tweak the counterexample—as I did, for example, in switching from the original Vandermonde matrix V to the repeated-rows variant W (to escape an ambiguity in Tononi’s definition). But the basic problem that the counterexamples illustrate is not one of detail.

    So for example, you say we should only count integrated information if it “emerges unguided,” rather than being put in from the start? The trouble is, I could easily design a system that started out with local connections only, then slowly changed its connections in a random way so as to evolve into an expander graph. There still wouldn’t be anything in the system that you’d want to call “intelligent,” but it would have an “organically” large Φ.
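
    (A toy sketch of what I mean, with nothing IIT-specific about it and with purely illustrative names: start from a cycle, whose connections are all local, add random long-range edges one at a time, and watch the Laplacian spectral gap, a standard proxy for expansion, climb far above the cycle’s value.)

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      A = np.zeros((n, n), dtype=int)
      for i in range(n):                      # purely local wiring to start: a cycle
          A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

      def spectral_gap(adj):
          # Second-smallest Laplacian eigenvalue; larger means better expansion.
          L = np.diag(adj.sum(axis=1)) - adj
          return np.sort(np.linalg.eigvalsh(L))[1]

      print(0, spectral_gap(A))               # roughly 0.001 for the bare cycle
      perm = rng.permutation(n)               # a random matching, added one edge at a time
      for t, (a, b) in enumerate(zip(perm[0::2], perm[1::2]), 1):
          A[a, b] = A[b, a] = 1
          if t % 50 == 0:
              print(t, spectral_gap(A))       # the gap climbs far above the cycle's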

    To me, then, equating consciousness or intelligence with Φ is sort of like defining “humans” as “featherless bipeds.” When someone brings in a plucked chicken, the right response isn’t “OK, but maybe if I just add some additional conditions, like that the featherless biped has to be at least 4 feet tall, and not have a beak…” It’s better to rethink the whole idea that any definition along these lines can possibly give what you want.

  57. Michael Says:

    Excellent critique. With regard to the numerical examples, there is IIT version 3 now (http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003588), which is formulated without the normalization that you use to blow up the numbers.

  58. Scott Says:

    Michael #57: Thanks; I’d missed that version! Would you happen to know what they now use in place of the normalization, to get rid of wildly-unbalanced partitions? (Yes, I should RTFP, but asking you is faster. 🙂 )

  59. Scott Says:

    Maze Hatter #55: Not only is it not a contradiction, but the rest of my post gives an explicit example of what I consider to be one such being—namely, the Vandermonde contraption (which can have Φ as large as you want).

    In fairness, though, maybe you meant something different: more like “supposing, hypothetically, that we had a solution to the Pretty-Hard Problem—i.e., a reliable numerical measure of a physical system’s ability to convince other systems of its consciousness—it would be contradictory to imagine a system that scored high on the measure but wasn’t actually conscious.”

    If that’s what you meant, then I’ll count you as a Dennettian zombie-denier and Hard-Problem-dissolver, which I regard as a perfectly-reasonable philosophical stance (though not the only reasonable stance).

  60. jonas Says:

    I was re-thinking what I said about expander graphs, and how they can be realized in space, and it turns out that I was wrong in what I said.

    A random expander graph on n nodes can be realized in O(n^2) space. This will require the edges to be quite long, about Theta(n) in length, which is not a problem here, although that was the source of my confusion.
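
    (A quick numerical spot-check of those two claims, under the naive assumption that the n nodes sit at unit spacing along a line; the random matching below just stands in for the long-range edges of a random expander, and the names are illustrative.)

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      pos = np.arange(n)                      # nodes at unit spacing on a line

      perm = rng.permutation(n)               # random matching standing in for expander edges
      lengths = [abs(int(pos[a]) - int(pos[b]))
                 for a, b in zip(perm[0::2], perm[1::2])]

      print(np.mean(lengths))                 # about n/3, i.e. Theta(n) per edge
      print(np.sum(lengths))                  # about n^2/6 total wire, hence the O(n^2) area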

    You might think that O(n^2) is not small enough, and that our computers are implemented more efficiently, but this is not the case. A real computer with n bits of memory is implemented with hardware taking up Theta(n) space (and the constant factor is quite efficient too, thanks to DRAM). This, however, comes at the price of limited parallelism: no more than O(1) bits of the memory can be accessed at any one time, so a computation that accesses all the memory must require Omega(n) time to run. Thus, if you imagine the computer’s complete evolution in time as a uniform circuit (in the sense TCS uses the word “circuit”), the whole circuit takes up Omega(n^2) volume laid out in space-time. It might be possible to break these bounds a bit: an extremely parallelized computer, such as a modern GPU, might have Theta(n) memory and do Theta(n) computation at any one time, but it definitely cannot access all that memory randomly.

    At this point, I don’t know enough to be able to tell how much space-time you need for a circuit implementing one of those linear super-concentrators with high Phi that you mention; nor do I know how much space-time you really need for a universal pointer machine (or RAM machine, these differ in only a log-factor) with a given run time bound.

  61. Michal Klincewicz Says:

    Hi Scott,

    Please write this up as an article and send it to a philosophy or philosophy/psychology journal for publication.

    All the best,

    Michal

  62. Maze Hatter Says:

    For the record, my position is that the world view on which matter exists fundamentally, and mind emerges later, is wrong, and that the Hard Problem is evidence of that.

    It is a little-known fact that Newton split time into absolute and relative. “Relative time,” he wrote, “is a measurement of duration based on the means of motion.”

    What we have to do is split matter into two, the same way.

    “Relative matter is a measurement made by an observer.”

    *absolute matter creates minds, which create relative matter.*

    This, I feel, is what Plato was talking about in the allegory of the cave, where the reality of the prisoners was the result of the pattern matching game they played with the shadows on the cave wall.

    I feel it is what Leibniz was talking about; his absolute matter was called “monads”.

    And I feel this is kind of what Hugh Everett was talking about in _The Relative State Formulation of Quantum Mechanics_.

    Basically, that rocks, gases, birds, bees, molecules, electrons, human brains are all concepts originating from a mind.

  63. lohankin Says:

    Scott,
    while you are at it, what is your view on “single-cell consciousness” idea?
    Let’s assume the following formulation:
    1) every cell is conscious
    2) “me” is a name of one particular, “central” cell. It collects inputs from others, and whatever it feels while doing so – we call “our” consciousness.
    3) there’s nothing else known to be conscious.
    (There’s a bit more to it, but you can easily reconstruct the rest).
    In this theory, consciousness is a property of a cell (= a bacterium). How this property comes into being, we don’t know. However, this definition allows us to answer some questions, e.g. “is a colony of bacteria conscious?”. It is conscious iff we can prove (experimentally) that exactly one of them is “central”.

  64. Sam Hopkins Says:

    If what you want out of a consciousness-yardstick ψ is that ψ assigns consciousness to and only to things that behave vaguely like humans, then what is the problem with taking ψ to be the Turing test?

  65. Scott Says:

    Michal #61:

      Please write this up as an article and send it to a philosophy or philosophy/psychology journal for publication.

    Yeah, I considered that! But

    (a) I probably reach a larger audience (even a larger informed audience) this way than I would through a journal,

    (b) I’d have to invest lots of time formatting the paper, adding details and references, and learning the language and culture of a philosophy or psychology journal, and

    (c) I have tenure, and don’t care about maximizing my publication count (even in theoretical computer science, let alone faraway fields).

    So what, in your view, would I gain by journal publication? I’m not just asking rhetorically; I really want to know—because I’m thinking about going “direct-to-blog-post” more often! 🙂

  66. Scott Says:

    Gary #39: Thanks; I just read the paper by Ned Block that you linked to! It took me a while to figure out what he meant by the “Harder Problem,” but now that I have, I don’t think it’s particularly similar to my “Pretty-Hard Problem” at all. For one thing, the Pretty-Hard Problem is meant to be easier than the Hard Problem, whereas the Harder Problem is meant to be … well, harder! 🙂

  67. Dániel Says:

    I believe that the concept of consciousness “does not carve reality at its joints”. It’s an anthropocentric notion, a case-law hodge-podge mixed mainly from “self-inspection” and “intelligence”, but also “agency”, “ability to communicate”, “triggering empathy”, and others. Of course humans have this strong instinctive feeling that they do have it and that it’s something important. And they really do have it, like they really do have ears and noses, and it’s really something important for them. But that doesn’t mean that it’s an especially interesting concept for anyone/anything other than humans.

    Let me turn this philosophizing into a prediction. A far-future prediction, but still. By the time we have machines with human-level intelligence, consciousness will not be a design consideration, and the ethical status of these machines will not depend in any way on their consciousness, whatever its correct definition would be for a machine. Future society will either go all the way in the behaviorist direction, focusing on the ability to interact and cooperate, or it will go in the meat-chauvinist direction, requiring specific mind organization or even mind implementation.

  68. David Says:

    The task of a good pretty hard theory can’t *just* be to produce results agreeing with commonsense intuition – that can be done by an infinite number of equally arbitrary Giant Look-up Tables, each yielding different answers for the nonobvious cases!

  69. Ryder Dain Says:

    Thanks very much for the post, Scott, and fröhlichen Geburtstag, too. The discussion here’s been delightful, and helps a lot for the less computationally savvy reading your arguments; further, maybe a point (d) for #65 is that we get to watch peer review unfold here, instead of being beholden to some editorial board’s intellectual chops.

    I have just a small question in relation to the end of Gary Williams’s post #39, on the topic of nomic measurement and fixed points. You point out (and rightly so, I believe) that a high Φ-value does not provide sufficient evidence of consciousness; but does IIT demarcate itself enough from the given evidence on coma patients and DVD players that we could make a more modest claim: that a minimal Φ-value is necessary for consciousness in the first place? In this respect Gary’s mention of having a “fixed point” from which to start a serious empirical investigation becomes more relevant, I imagine; I’m just not sure whether you’d be amenable to that sort of claim.

  70. srp Says:

    Re: the analogies with classical physics, I think some of the commenters have hit on a real question–whether our intuitions about consciousness are like Aristotle’s intuitions about mechanics. The idea that things will stop moving if you stop pushing on them is a pretty strongly observed “commonsense” regularity. Abstracting away from friction, as Galileo and then Newton did, was a conceptual leap changing the entire category structure for thinking about motion.

    So, while I really like Scott’s argument, there is the danger of, as Dániel #67 says, not carving reality at the joints when we use our intuitive notions about consciousness. The danger is not that the DVD player might be more conscious than the human but that what is holding the DVD player back from consciousness may be the equivalent of friction in classical mechanics: something that is pervasive and practically important, but conceptually non-fundamental. I personally doubt that this is the case, but my confidence level is not as high as Scott’s.

  71. Scott Says:

    Sam #64:

      If what you want out of a consciousness-yardstick ψ is that ψ assigns consciousness to and only to things that behave vaguely like humans, then what is the problem with taking ψ to be the Turing test?

    David #68:

      The task of a good pretty hard theory can’t *just* be to produce results agreeing with commonsense intuition – that can be done by an infinite number of equally arbitrary Giant Look-up Tables, each yielding different answers for the nonobvious cases!

    Your two points are closely related. And my answer to both is the same: in saying we want a “theory,” it’s implicit that we also want the theory to be reasonably “simple” and “natural” (just like with anything else in science). More specifically, we want some fairly short algorithm that takes as input a description of the physical state of a system, and that tells us how conscious that system is. The algorithm is not allowed to call a human judge as an oracle, which rules out the Turing test (while the short part rules out the giant lookup table).

  72. Stiz Says:

    I had read clarification 3, and didn’t necessarily make the connection, but I think clarification 2 (which I think I totally misinterpreted then partially ignored the first time) and comment 40 clear up what your intentions were. I suppose my ulterior motives were showing in my approach…perhaps I was trying to talk you into a more tangential conversation about what consciousness IS, instead of just what it isn’t.

    I personally think this mechanized, information-based sort of thinking is on the wrong track, and if you’re going to make consciousness ~synonymous with intelligence and creativity, then why bother talking about it?

  73. Scott Says:

    Ryder #69: As I said in the post, yes, I think it’s possible that any conscious process would necessarily generate a large Φ as a byproduct of whatever else it’s doing. Or even if not, that a large Φ would tend to be correlated in practice with consciousness, under certain conditions. Or maybe not—I don’t know, and it’s an interesting question. What I’m confident about is only that having a large Φ can’t possibly characterize consciousness, since it’s clearly not a sufficient condition.

  74. Scott Says:

    lohankin #63: I can’t tell whether your “single-cell consciousness” idea was meant seriously, but if it was … dude! what happens if the “central cell” of my brain dies, as individual cells do all the time? Do I become a zombie from that point forward?

  75. Sam Hopkins Says:

    Scott #71: But what’s interesting about your requirement is that if you already had access to artificial intelligence, then your algorithm could be the Turing test: you just instruct your AI to perform the Turing test and report its judgment back to you.

  76. Scott Says:

    Everyone: Lots of people have taken me to task for my appeal to “common sense” in stating the Pretty-Hard Problem of Consciousness—and in retrospect, yes, maybe I should’ve defined the problem more carefully.

    On the other hand, it’s striking that so far, not a single one of these critics has taken up my challenge: if we can’t reject a theory of consciousness for predicting that, let’s say, toasters are billions of times more conscious than humans, then when can we reject it? What does a theory of consciousness have to do, by your lights, in order to get itself falsified? It’s not a rhetorical question—I’d really like to know.

  77. Scott Says:

    Sam #75: Ah, but then there’s also the Occam’s Razor requirement. Ideally, I’d like my consciousness-meter to be described by a relatively short algorithm.

    Notice that, despite its failure to capture consciousness, IIT does pass the ‘algorithm’ and ‘Occam’s Razor’ requirements with excellent marks, which is why I consider it a worthy attempt on the problem.

  78. Sandro Says:

    Scott #36:

      I’m curious: in your view, if a theory of consciousness can’t be definitively rejected for predicting that your DVD player is more conscious than you are, then on what grounds can such a theory be rejected? Are there any grounds, short of logical inconsistency?

    Logical consistency, predictive/prescriptive power, axiomatic parsimony, and finally explanatory power: that’s the ordering I use for theories. Not perfect, but justifiable I think.

    Intuitions often fall under both predictive and explanatory power. A theory should be able to predict where/when the property in which we’re interested ought to be, and the theory ought to explain why we have the intuitions about said property that we do and, if applicable, explain why said intuitions are wrong. IIT fails the latter two tests, as you’ve adeptly demonstrated, so I would definitely prefer a better theory.

    Failures in explanatory power are not grounds for absolute rejection though. For instance, the Copenhagen interpretation of QM would never have passed this test either, but quite a bit of good physics was done despite its explanatory limitations.

  79. srp Says:

    Scott #76 asks: “On the other hand, it’s striking that so far, not a single one of these critics has taken up my challenge: if we can’t reject a theory of consciousness for predicting that, let’s say, toasters are billions of times more conscious than humans, then when can we reject it? What does a theory of consciousness have to do, by your lights, in order to get itself falsified? It’s not a rhetorical question—I’d really like to know.”

    I’m not a critic, per se, but as I pointed out in #70 the analogy between today’s level of understanding of consciousness and Aristotle’s understanding of motion is uncomfortably close. The worry is NOT that the DVD player is more conscious than the human, but that the true underlying consciousness-creating factors get canceled out in the DVD by the hypothetical equivalent of friction.

    Newton’s first law appears to be simply wrong in everyday life–things stop moving when we stop pushing. It takes a leap of understanding to realize that the “special” low-friction setups focused on by Galileo and Newton, e.g. ballistic motion or balls rolling down inclines or sliding ice blocks or pendulums, are the better way of approaching the problem.

    A theory of inertia predicting that a large ice block pushed across a flat wood surface had more inertia than a small piece of rubber pushed across the same surface would appear to violate common sense. “If we can’t reject a theory of inertia that says that an ice block has more inertia than a rubber block then when can we reject it?” The answer is that we have to, say, throw the large ice block and the small rubber block with the same force to perceive the true inertial difference. That removes most of the friction obscuring the underlying mass-inertia relationship. Similarly, we would need to be confident that the equivalent of friction wasn’t masking the IIT-consciousness relationship to be confident of rejecting it.

    I’m pretty confident of that myself, but the evidence for my belief isn’t what I would call airtight. It’s just that I can’t come up with a plausible friction equivalent in this case, which could just be a paucity of imagination or perspicacity.

  80. Sandro Says:

    Nicolas #40:

      Which is what Aaronson is saying: (high values of) Φ-consciousness might be a necessary condition for ψ-properties, but it is clearly not sufficient. The problem is that we’re not interested in Φ by itself; we’re after an explanation of ψ.

    Are we? What makes you so sure that conscious experience is ψ and not Φ? Mere conscious experience is the hard problem of consciousness, i.e. qualia like “redness”. Whatever other ψ-properties you’re interested in, like creative abilities or intentional states, they’re irrelevant to the hard problem.

    A DVD player playing an undamaged DVD could very well be experiencing qualia analogous to the ease of solving a problem we know how to solve, and a DVD player playing a damaged DVD could very well be experiencing something analogous to the frustration we feel when tackling a tough problem we’re not sure how to solve.

    I personally doubt it, but this is again merely an intuition about what algorithmic consciousness ultimately is.

  81. c_x Says:

    Maybe it’s filled with hubris, but why can’t consciousness just be the nerves embedded in soft physical materials, with the movement and reaction of those materials being the content of the consciousness? I don’t do the math stuff you get into, but from a pure philosophical point of view, I think that to prove that there is a hard problem of consciousness, we first have to find elements of consciousness that do not exist as any signal or motion or material in the physical world corresponding to that conscious element. And the color red or an emotion does not count. The philosophical zombie has also been debunked by this logic, I feel, since we still haven’t described all elements of consciousness within the physical nervous system and brain.

    We’re talking billions of neurons and trillions of synapses, as well as tons of ganglia, all changing values at a rapid pace in response to the physical, soft body they are embedded within, and already we’re declaring a hard problem? It doesn’t seem logical to me, not from a scientific point of view. Also, we ARE the nervous system. It’s not some extra thing that we observe; we are the body. I think IIT is an OK beginning when wanting a measurement of some sort (I couldn’t say how accurate it is), but tying it to complexity and how it’s connected seems like a good starting point.

  82. Scott Says:

    srp #79: If IIT had said: “yes, this Vandermonde contraption would’ve been trillions of times more conscious than a human, but it’s not, because of a consciousness-destroying friction that IIT also postulates and tells you how to incorporate into the calculation of Φ; and that, as you can check for yourself, acts only on your contraption and not on human brains”—well, that would’ve been fine!

    But it’s a general property of any falsified theory that it can later become “un-falsified,” if enough mitigating factors are discovered to nullify what looked like wrong predictions. As you point out, that’s basically what happened with Newton’s First Law, and I concede that in principle it could happen with IIT.

    Having said that, IIT as it stands contains no hint of any such “consciousness-destroying friction”—so it seems fair to say that the burden is now 100% on the IIT proponents’ side, not on the skeptics’.

  83. Scott Says:

    c_x #81: IIT aims to give a general criterion for deciding when a physical system is conscious and when it isn’t. That question—what I called the “Pretty-Hard Problem”—is the one that really interests me here, and it’s one that your proposal doesn’t seem to touch. For example, you talk repeatedly about consciousness arising from “nerves embedded in a soft body.” Well, is the softness actually important here? Would nerves embedded in a hard body also give rise to consciousness? If not, why not? And what about nerves is important? Could we still get consciousness if we replaced the nerves by functionally-equivalent microchips?

  84. Nicolas McGinnis Says:

    Sandro #80:

    I don’t know what to make of the suggestion that consumer-grade electronics ‘experience’ their functioning, except to worry about any moral responsibilities I might have towards my old laptop—I’ve been meaning to recycle it, but that might be murder! As a vegetarian, this leaves me perplexed.

    More seriously: I disagree that intentional states don’t factor in to the question, but I won’t pursue this line of thought. Rather, I wonder how the IIT theorist would answer the Vandermonde variant of Searle’s old Chinese-room experiment.

    Suppose you have a person in a room with a book of instructions. Paper comes in through one slot; the person, following some set of instructions, uses the input on the paper to work through some simple arithmetic, and sends the result out through the other slot. Little does this person know that the operations they are carrying out, on a piecemeal basis, implement a Vandermonde matrix—with precisely the kind of high-Φ value characteristic of consciousness.

    Now the person stuck in the Vandermonde Room has no idea that this is what they are doing. They are following a set of instructions. The symbols are meaningless cyphers. But to the outside observer, the inputs and outputs of the Room appear to have very high Φ. Therefore, we must conclude that the room is ‘conscious.’

    But what is conscious? The paper? The instructions? The slots? The mereological sum of the things in the room? (It can’t be the person, on pain of homuncularism.)

    Note that the Dennettian response doesn’t work here: Dennett’s critique of the Chinese room is, roughly, that we underestimate the massive complexity of a translation ‘program’ (which would require memory, self-amendment procedures, meta-knowledge, and more).

    The Vandermonde matrix is dead simple in comparison. We know how it works. Fourier transforms compress our MP3s. A patient person, or team of persons, could do it by hand, in pencil, over a few years. Measurable information integration will have occurred. But the theory predicts that the system of papers, pencils, and persons was ‘conscious’ and ‘experienced’ itself in the process. This simply can’t be right. (You could insist it’s *possible,* somehow; at which point I can only wish the theorist best of luck exploring such possibilia.)
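
    (To make “dead simple” concrete, here is the complex-roots-of-unity instance of a Vandermonde matrix, which is exactly the discrete Fourier transform matrix. Whatever field the matrix in the post is actually taken over, applying it is the same kind of rote multiply-and-add the Room’s occupant would be doing; the snippet below is only an illustration.)

      import numpy as np

      n = 8
      omega = np.exp(-2j * np.pi / n)              # a primitive n-th root of unity
      V = np.array([[omega ** (j * k) for k in range(n)]
                    for j in range(n)])            # Vandermonde matrix in 1, omega, ..., omega^(n-1)

      x = np.arange(n, dtype=float)                # any input vector
      print(np.allclose(V @ x, np.fft.fft(x)))     # True: applying V is just the DFT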

  85. Aaron Sheldon Says:

    It is very presumptuous to undertake any discussion of consciousness without a deep consideration of Helen Keller, The Diving Bell and the Butterfly, or any other cases of locked-in consciousness.

    Any reasonable definition of consciousness must include the necessary condition that a system is conscious only if it is able to learn a new language: ostensibly learning context, semantics, syntax, and symbols through pidgin, creole, and finally full-blown language. Even in the most severe cases of being locked in, it is at least theoretically possible to develop a language through which to communicate.

    Any measurement of consciousness in a context- and semantics-free language is absurd. Nothing points to this folly more clearly than the a priori assumption of the existence of a language made both in the IIT discipline and in this post.

    Nothing demonstrates this more plainly than human history itself. New peoples discovered each other’s consciousness chiefly through learning and developing languages.

    If you want to find a place to start defining or measuring consciousness, I can think of no better one than the process by which two physical systems would come to agreement on abstraction and context (X means Y; wait, no, X means Z; etc…).

  86. Jared Says:

    Scott #65: The reason to publish (hopefully in an open-access journal) is posterity: it will be easier for researchers 50 years from now to find out what the concrete counterexamples were to early theories of consciousness like IIT, for example. At least, it seems worth it to me to deliberately preserve the highest-quality arguments rather than just relying on archive.org. I don’t know enough about the extent to which IIT is taken seriously to know whether this is worth publishing (and even if I did, I still might not know, since I’m not a philosopher).

    Nevertheless, regarding (b) and (c), the obvious strategy with those constraints is to explain to a philosopher colleague that you’re not sure if this is interesting enough to publish, but to ask them to let you know if they know anyone who might be interested in publishing a technical refutation of IIT, making it clear that you don’t care much about credit (so you’d be happy to be the last author) but that in turn you would prefer not to learn how to write up a philosophy paper and would solely provide the technical argument. That seems like a reasonable position, and ultimately you’ll either find a lead author without excessive effort or not.

  87. Aaron Sheldon Says:

    Let’s try this again, except punchier:

    A necessary condition for consciousness is the ability, at least theoretically, to translate, or cross-compile, between any formal languages. As far as I know, there does not exist a Turing machine that can cross-compile between arbitrary choices of Turing-complete languages.

    In principle, a conscious entity given sufficient time can.

  88. Phil Says:

    Very interesting post! I have an objection to your argument, which goes like this:

    What if it is impossible to come up with a criterion for consciousness that is invulnerable to the kind of counterexample you provide? In other words, for any quantitative measure of consciousness, it is possible to construct some computer program which scores highly on that measure, but which on intuitive grounds is clearly not conscious.

    You might think this is pessimistic, but I find it highly plausible. Indeed, the series of blog posts by Schwitzgebel that you cite argues that a variety of popular theories of consciousness actually prove that the United States is conscious! [On a side note, Schwitzgebel appears to be arguing that the US actually IS conscious, though I’m happy to instead take his argument as a reductio ad absurdum of these various theories.] I also think that if you accept, for example, a Searle-style Chinese-book argument against the possibility of machine intelligence, then you must also accept that perfect quantitative measures of consciousness are impossible, since any such measure would inevitably be foiled by the proper “Chinese book.”

    Given all this, my inclination is to think that, while IIT fails to solve the (unsolvable) problem of perfectly identifying consciousness, it is still potentially a useful heuristic. We can take on faith that DVD players and Cheetos are not conscious, but still use IIT to ask whether a monkey, a fruitfly, or a nematode should be considered conscious. The above-mentioned (Gary Williams, #39) Current Biology paper seems to support this view.

    Thoughts?

  89. luca turin Says:

    Great post and happy birthday. I particularly love your statement that Tononi’s theory is head and shoulders above the “not even wrong” ones. I am puzzled by one thing: why is Giulio Tononi not part of this discussion? Is he unaware of it? If so, can you invite him?

  90. Matthew Says:

    Having thought about this for a while, I’m not convinced that you have proven your point yet. This Phi doesn’t refer to the equation, but to the system modelled by the equation.

    Now you refer to a computer as such a system, but this is like saying that if I write a programme to solve a physical equation then this programme becomes a physical system modelled by the equation on which it is based. But it seems to me that the programme is just another form of the equation. So the programme, and the computer on which it runs, is just another model for the original physical system modelled by the equation.

    Any physical property refers not to the computer but to the physical system the programme models. In order for your reasoning to hold up, the computer would have to share the relevant physical properties of the original physical system. ‘Entropy,’ for example, does not refer to the computer but to the physical system it models. Now maybe in this case you can show that the entropy of the computer running the programme is the same as that of the physical system, but does this necessarily hold for all properties of a system?

    It seems to me that to complete the argument you either need to provide an example of a physical system that is modelled by the Vandermonde matrix or else show that any physical property (or at least Phi) is held also by a computer programme and not just the physical system it models.

  91. c_x Says:

    Scott #83: Yeah, I was talking about Chalmers’s hard problem regarding the first part. I have my own pet theory about the soft tissues: I believe it would be much harder to get consciousness in a hard material, because it wouldn’t be able to change in response to the environment. It seems to me you’d need liquids, soft tissue that can bend or change shape, various chemicals and so forth, at least for a bodily experience. Maybe chips placed in soft tissue could achieve it. You can still get the signalling of neurons, I suppose, but maybe not a dynamic bodily experience. Again, just my pet hypothesis.

    As far as the pretty-hard problem, since I don’t necessarily think there is any hard problem, I’m not sure that finding out whether something is conscious or not is all that interesting. If it’s “all in the materials”, so to speak, then the experience will reveal itself with brain mapping and so forth. Like I said before, maybe Phi can be an indicator, but I have the same worries that you do: that it can be too general, and that there are variables not taken into account. But at the same time, do we know of anything as complex as the central and peripheral nervous system? Is that complexity and interconnectedness special, in that if you count the right variables, you will get a uniquely high quantity of variable states, compared to another non-living system with a lot of parts? That’s what I was thinking about, but I don’t know.

  92. James Cross Says:

    A note on the “single cell” theory and anesthetics.

    The list of chemicals that can serve as anesthetics includes some very different chemical substances – ether, nitrous oxide, chloroform, and xenon. Xenon, for example, is mostly inert – it reacts hardly at all with other elements. So how can xenon affect brain chemistry if it does not react with other elements? Here’s another interesting observation. Anesthetics administered to one-celled organisms, such as amoeba, at about the same proportional dose that would render a person unconscious, cause them to become immobilized. Are amoebas conscious?

    There is probably something in the Hameroff theory about microtubules being involved with consciousness. Perhaps microtubules involved in cell motility have been re-purposed in the nervous system during evolution.

  93. domenico Says:

    Aaron Sheldon #85

    I have a problem: if language is necessary for consciousness, then a bacterium that lives in a community, with chemical-signal communication, shows some change in behaviour; the problem is: if I translate the chemical signals into words (as a biologist does), and the behaviour into words (as a biologist does), then there is a biological semantics and syntax that describes the communication, with a bacterial language that is a chemical language.

  94. Scott Says:

    luca turin #89: I did indeed invite Giulio Tononi, Christof Koch, and Max Tegmark to this discussion. I got a nice quick email from Max, which led to clarification (5) above, but haven’t yet heard back from Giulio or Christof.

    (update added later: I’ve now received nice notes from both of them, and Giulio tells me he’s working on a response)

  95. Scott Says:

    Aaron Sheldon #85 and #87:

      It is very presumptuous to undertake any discussion of consciousness without a deep consideration of Helen Keller, The Diving Bell and the Butterfly, or any other cases of locked-in consciousness.

    And it is very presumptuous to call other people presumptuous, if they discuss consciousness without bringing up your favorite pet examples! 😉

    Seriously: your idea that consciousness requires the ability to translate between any two formal languages is worth throwing on the table, but I don’t think you’ve sufficiently thought through how you would tell whether that ability was present or absent in a given system. After all, even your laptop can translate between formal languages, if you “teach” it the languages by giving it compilers and decompilers for them! And a human needs to be taught the languages before he or she can translate them as well.

    Again, the central merit of Φ, in my view, is that at the end of the day, there’s a formula that you can more-or-less mechanically apply, and which is alleged to tell you whether your system is conscious or not. You don’t need to rely on anyone’s verbal description of what consciousness is, with all the after-the-fact “no, that doesn’t really count” lawyering that humans are infinitely prone to.

  96. Scott Says:

    James Cross #92: I don’t see how the fact that amoebas can be immobilized with anesthetics even suggests that they’re conscious. Why not say that the effect of an anesthetic is to immobilize cells, whether or not they’re conscious, and that that can sometimes have the side effect of removing consciousness?

    By analogy, shooting someone with a gun can destroy their consciousness, and shooting an apple can also destroy the apple, but that doesn’t mean apples are conscious.

  97. Scott Says:

    Matthew #90:

      It seems to me that to complete the argument you either need to provide an example of a physical system that is modelled by the Vandermonde matrix or else show that any physical property (or at least Phi) is held also by a computer programme and not just the physical system it models.

    You raise a very good objection, but it’s one that’s already answered in the post itself! (Specifically, in the 3 paragraphs beginning, “It might be objected that…”)

    To recap: while an ordinary microchip running a simulation of the Vandermonde system might not be organized in a way that produces an especially large Φ-value, we could easily imagine building a giant physical network of logic gates that was organized like an expander graph (and that had a correspondingly large Φ-value), but that still had no capabilities whatsoever except to apply the Vandermonde matrix, or to apply the encoding and decoding operations of an LDPC code. And I take it as axiomatic, if you like, that such a tangle of wires would not bring a Vandermonde-consciousness into the world.
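    As a concrete sketch of the (trivial) input/output behavior in question, here are a few lines of Python of my own (an illustration, not code from the post), assuming the standard convention that the Vandermonde matrix over F_p has entries V[i][j] = x_i^j for distinct nonzero points x_i, so that applying it just evaluates a polynomial at those points:

      # Toy model of the contraption's only "capability": apply a Vandermonde
      # matrix over F_p to an input vector, i.e. evaluate a polynomial.
      p = 101                      # a prime comfortably larger than n
      n = 5
      xs = list(range(1, n + 1))   # distinct nonzero evaluation points

      V = [[pow(x, j, p) for j in range(n)] for x in xs]   # V[i][j] = xs[i]**j mod p

      def apply_vandermonde(coeffs):
          """Return V*coeffs mod p: the polynomial with these coefficients, evaluated at each x."""
          return [sum(V[i][j] * coeffs[j] for j in range(n)) % p for i in range(n)]

      print(apply_vandermonde([3, 1, 4, 1, 5]))

    However it is wired up physically, expander-style or not, that map is all the system ever computes.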

  98. James Cross Says:

    #96

    My question about amoebas was rhetorical.

    The point is the mechanism of action of anesthetics. They are often chemically inert. They immobilize amoebas. They make us unconscious. Hameroff suggests this has to do with microtubules since they are involved with cell motility.

    I think the starting point for consciousness in evolution is probably with worms. The first worms had the same body form that we have: a mouth with a mass of nerves (brain) nearby, and a digestive tract with nerves (spinal cord) running beside it. The brain evolved to control the mouth, guide the head to food, find prey and/or avoid predators.

  99. Scott Says:

    Phil #88:

      What if it is impossible to come up with a criterion for consciousness that is invulnerable to the kind of counterexample you provide? In other words, for any quantitative measure of consciousness, it is possible to construct some computer program which scores highly on that measure, but which on intuitive grounds is clearly not conscious.

    Yes, it’s possible that there’s no solution to the Pretty-Hard Problem much simpler than, “run the Turing Test on your system and see whether it passes!” Or maybe not—I don’t know, and I’m extremely leery of passing from the fact that I can’t see how to solve a problem, and that this-or-that attempted solution failed, to the conviction that the problem is unsolvable.

    Whatever the answer, I completely agree that a good way for IIT to go would be to scale back its claims and ambitions, from explaining the nature of consciousness to exploring one aspect of information-processing that’s clearly not consciousness, but that might often be correlated with consciousness in practice.

  100. mambo_bab_e Says:

    Great post.

    I do not completely approve of Tononi’s IIT. However, I am interested in the consciousness of a photodiode, and in the integration of information. IIT is somewhat close to my own hypothesis.

    There is a risk for IIT from the very start if one doubts the existence of consciousness, because on one side the problem comes down to a question of what is believable or unbelievable.

    And the Pretty-Hard Problem seems to mean that it is difficult to understand what consciousness is.

    I already have a hypothesis of consciousness that includes meta-cognition, free will, and seamless access between the [usual] episodic memory and the memory of [genes], i.e. instinct.

    Please see my blog and twitter.

  101. lohankin Says:

    > but if it was … dude! what happens if the “central cell” of my brain dies, as individual cells do all the time?

    Not all cells, and not all the time. E.g. neurons controlling muscles are pretty tough; they are not prone to sudden death (except via trauma). Loss of such a neuron will make control of the muscle impossible. If only the axon is damaged, the neuron can grow a new one, but if the body of the cell dies there is no chance to replace it.

    Lifetime of neuron generally doesn’t have a fixed limit: http://phenomena.nationalgeographic.com/2013/02/25/neurons-could-outlive-the-bodies-that-contain-them/

    Certainly, when the “me”-cell eventually dies, for whatever reason, the prognosis is not good 🙂
    This is the only theory I’m aware of that explains our consciousness without “holistic mysticism”, and makes testable predictions. True, it brings humans to the level of bacteria, in some sense, which might be shocking to some; that’s why the theory isn’t going to become popular any time soon :-).

  102. Aaron Sheldon Says:

    To #93 and #95

    One simple retort in a thought experiment.

    If it is physically impossible to negotiate a means of communication with a system then how can you ascertain its degree of consciousness?

    Even our own isolated consciousness involves internal dialog.

    As for the compiler example: if teaching were as easy as coding a compiler for two particular well-defined languages (which really is just a mapping), our education systems would not have the shortcomings that they do. Teaching is much more a negotiation than a method of programming.

    It would seem a worthwhile and potentially fruitful hypothesis to explore that a necessary (but not sufficient) condition of consciousness is being able to negotiate novel forms of communication.

    By this criterion, a dog, a horse, a dolphin, or yes, even bacteria, have a degree of consciousness insofar as they communicate among themselves, and can learn simple novel languages with, say, human trainers.

  103. Dániel Says:

    Scott #99: Why would the Turing test be any kind of solution to the Pretty-Hard Problem? Do you really claim this, or have I misread you? To me this seems like exactly the kind of conflation I criticized in #67.

  104. Matthew Says:

    Scott,
    So, you’ve conjured up a few mathematical entities (low-density parity check codes, expander graphs, linear-sized superconcentrators) that exhibit high Phi, but that no one would deem–using common sense–as conscious. What you fail to appreciate (since you didn’t mention it in your article) is the “shape” in qualia space for which Phi is a simple metric (the “height” or absolute magnitude of the shape). The complexity of this shape corresponds to the richness of (inner) experience, so I wonder what your counterexamples look like in phase space (?). What is the repertoire of “concepts” (Tononi’s definition) of one of your counterexamples?
    As for p-zombies: IIT 3.0 (http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003588#s4) explains that a zombie is a low-Phi system, for example a network based on a purely feed-forward architecture, that produces the same output for a given input as a system with high Phi would.
    I also don’t see how you could refer to IIT as being unintuitive when it takes a handful of basic facts of our experience (phenomenology) as axioms. IIT is not panpsychism!
    Matthew

  105. Scott Says:

    Matthew #104: I don’t see how the “the ‘shape’ in qualia space for which Phi is a simple metric”—whatever on earth that means—does anything to refute my counterexamples. Could you please try to state your point more clearly?

  106. Scott Says:

    Dániel #103: Well, if you had an algorithm that could pass the Turing Test, then if nothing else, you could use that algorithm to simulate a human being’s intuitive judgments as to whether some other entity was conscious or not! So, by definition, you would then at least have an algorithm that passed the “common sense” test that I mentioned in the post. On the other hand, it’s true that such an algorithm presumably wouldn’t pass the “few bits to write down” test, and for that reason (if not for others), would be unsatisfactory as a theory of consciousness.

  107. fred Says:

    I would think that consciousness has more to do with the amount of self-similarity than with this integrated-information metric.

    Without considering self-similarity, a brain is mostly a simulation of its own environment. So the integrated information in the environment should be correlated to the one in the brain. But that’s true of any simulation (isomorphism between two systems).
    Consciousness is more about a simulation taking its own state into account in a recursive manner (the more recursion, the more conscious).

  108. fred Says:

    What would be the amount of integrated information of a system made of two brains (or one brain exhibiting multiple independent personalities) vs a system made of one brain?

  109. Sicco Naets Says:

    Scott asks: “if we can’t reject a theory of consciousness for predicting that, let’s say, toasters are billions of times more conscious than humans, then when can we reject it?”

    We reject it if it fails to make predictions about observable phenomena, or if those predictions themselves fail, the same as you would in any other field of science. The question you’re posing is not strictly related to consciousness; it’s purely epistemological in nature.

  110. Stiz Says:

    Scott, I think what Matthew is saying in #104 is that the difference between the richness of what we experience and the presumed absence of such richness in a computer or DVD player could very well have something to do with the internal interconnectedness of the information, which would be apparent if you were to construct a network schematic of the information, but clearly isn’t apparent in the “scalar” notion of Φ.

    This brings to mind Hofstadter’s GEB and the notion of strange loops and their hypothetical role in consciousness, which have been strangely absent not only from this thread, but seemingly from any discussion anywhere of IIT and its predecessors and/or contemporaries.

    I think it’s quite likely that the interconnectedness of the information plays a crucial role in distinguishing our experience from that of a complex machine, even if we take on faith that these machines are experiencing something at all.

    Still, such a new hypothesis would presumably face the same troubles in attempting to become a theory, given that verifiability would still seemingly be elusive.

    To address your question of what CAN we do to test such models, I suspect it may come down to needing to build into or onto such a machine the ability to first interpret definitions of awareness, experience, and consciousness and then to simply answer the question, “Is this happening to you?”

    Personally, I’m starting to suspect that the essence of consciousness lies in the interactions themselves, at the elementary-particle level (i.e., imagine that every boson (force-carrier particle) carries with it a quantum of conscious experience), and that our brain is only special in that the photon/info entanglement is maxed out, with biology having nothing direct to do with it. If this were the case, then there is likely some hypothetical field involved that should give rise to physical predictions. I’ve ventured a wild guess that the “dark” parts of the universe have something to do with it, but I’ll be much more likely to pursue such an outlandish notion if and when current dark matter searches come up empty.

  111. Sandro Says:

    Nicolas McGinnis #84:

      Rather I wonder how the IIT theorist would answer the Vandermonde variant to Searle’s old Chinese-room experiment.

    Not sure how they would answer, but the “Systems” response to the Chinese Room is perfectly valid here.

    Your reductionist attempt to ascribe “consciousness” to some individual component is a fallacy, because that simply leads to infinite regress (well where does *that* component’s consciousness come from?).

    An analogy might help: where does the transparency of water come from? Does it come from the hydrogen, or from the oxygen, that make up H2O? As I hope you can see, the question itself is a fallacious, loaded one. The composition of certain properties of both hydrogen and oxygen results in the property known as “transparency”, and so it is with the Vandermonde Room in IIT.

  112. Scott Says:

    Timothy Gowers #15 and fred #107: You both (like Douglas Hofstadter and others) have suggested that recursion is part of the essence of consciousness. Much like with Φ, I can easily believe that a capacity for recursion—for reflecting on oneself reflecting on oneself reflecting on, etc.—is one necessary condition for any entity that I’d want to regard as having a “high level of consciousness.” But there seem to me to be two big problems:

    (1) As I’ve said before, I believe that dogs, horses, etc. (and certainly dolphins, apes, and human toddlers) have moderate levels of consciousness, but it’s not clear that any of them have the capacity for recursive self-reflection that we’re talking about.

    (2) Even more seriously, and analogously to my objection to Φ, I could easily write a computer program that simulated itself simulating itself simulating itself, etc. … indeed, for far more levels of recursion than any human has ever achieved! … yet that didn’t do anything one would want to call conscious. (Indeed, almost any recursive algorithm, say Quicksort, would arguably fit the bill here. 🙂 ) For this reason, it seems to me that recursion—just like information integration—can’t possibly be a sufficient condition for consciousness, let alone for a high level of consciousness.
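    Just to make (2) vivid, here is an ordinary recursive quicksort in Python (my own illustration, nothing more): it invokes itself on ever-smaller pieces of its input, reaching a “depth of self-invocation” far beyond any human’s deliberate self-reflection, and yet nobody would call it conscious.

      def quicksort(xs, depth=0):
          """Ordinary recursive quicksort; depth just records how deep the self-calls go."""
          if len(xs) <= 1:
              return xs, depth
          pivot, rest = xs[0], xs[1:]
          left, dl = quicksort([x for x in rest if x < pivot], depth + 1)
          right, dr = quicksort([x for x in rest if x >= pivot], depth + 1)
          return left + [pivot] + right, max(dl, dr)

      sorted_xs, max_depth = quicksort(list(range(200, 0, -1)))
      print(max_depth)   # ~199 levels of "self-invocation", zero consciousness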

  113. Scott Says:

    Sicco #109:

      We reject it if it fails to make predictions about observable phenomena, or if those predictions themselves fail, the same as you would in any other field of science. The question you’re posing is not strictly related to consciousness; it’s purely epistemological in nature.

    OK. So then, if “DVD players (or the Vandermonde contraption, etc.) are much more conscious than humans” doesn’t count as a (failed) prediction about an observable phenomenon, then by your lights, does IIT make any predictions about observable phenomena? If so, what are those predictions? And if not, then arguing in the alternative, do you think IIT should be rejected on the ground of unfalsifiability?

  114. Scott Says:

    fred #108:

      What would be the amount of integrated information of a system made of two brains (or one brain exhibiting multiple independent personalities) vs a system made of one brain?

    That’s one of the cases that IIT does handle well. If you have two independent brains, then the integrated information will be 0 (or close to 0), since you can consider a partition (A,B) that places one brain in A and the other brain in B, so that there’s little or no dependence between the two. Ditto, but to a lesser degree, for someone whose corpus callosum has been severed.
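    As a toy numerical illustration of the partition argument (a sketch of my own, using plain mutual information across the cut rather than Tononi’s full effective-information construction): model each “brain” as a single binary unit, and compare two independent units with two perfectly coupled ones. The cut between the two independent units carries zero information, so the minimum over bipartitions, the Φ-like quantity, is zero; for the coupled pair it is not.

      import numpy as np

      def mutual_information(joint):
          """I(A;B) in bits for a 2x2 joint distribution over the two units."""
          pa = joint.sum(axis=1, keepdims=True)
          pb = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])).sum())

      # Two independent "brains": the joint distribution factorizes across the cut.
      independent = np.outer([0.5, 0.5], [0.5, 0.5])

      # One integrated system: the two units are perfectly correlated.
      integrated = np.array([[0.5, 0.0],
                             [0.0, 0.5]])

      print(mutual_information(independent))   # 0.0 -> the cut costs nothing
      print(mutual_information(integrated))    # 1.0 -> the cut destroys information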

  115. Matthew Says:

    Qualia space:
    If a set of elements forms a complex, its concept space is called qualia space.

    Concept space:
    Concept space is a high dimensional space with one axis for each possible past and future state of the system in which a conceptual structure can be represented.

    Complex:
    A set of elements within a system that generates a local maximum of integrated conceptual information Phi_max. Only a complex exists as an entity from its own intrinsic perspective.

    Concept:
    A set of elements within a system and the maximally irreducible cause-effect repertoire it specifies, with its associated value of integrated information Phi_max. The concept expresses the causal role of a mechanism within a complex.

    I’m quoting these definitions from the paper that I referred to earlier, and I guess I’m wondering what kind of complexes and/or concepts your counterexamples would exhibit granting that they exhibit high Phi. Sorry, I wouldn’t even know where to begin to do that, and yes I think Stiz #110 is reading me correctly. Not only is Phi important, but the complexes and concepts that form within qualia space.

  116. Dániel Says:

    Scott #112:

    I don’t think Douglas Hofstadter would accept plain old simulation and recursion as being enough here. (Tim and Fred, would you?) I think the kind of strange loop they are talking about is more like the ability to directly sense/inspect one’s own internal computation process. But you could of course create a counterexample for such a definition, too.

    When arguing with people who hope for some nice and tidy definition of consciousness, you just have to point to any random element from the symmetric difference between the fuzzy common-sense notion and their nice and tidy abstract version. But I’d say the more general reason these counterexamples exist is that a hodge-podge of a set can’t be identical to a nice and tidy set.

    As I wrote, I believe that consciousness is an inherently anthropocentric concept, so I like the following “definition”: X is conscious if it is not silly to ask “what is it like to be X?”. The source of anthropocentrism here, of course, is that it is humans who do the asking.

    Marvin Minsky is in the Gowers/Hofstadter camp. He wrote that “Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing.” But I would add that it’s a specific kind of summary, not just the general ability to self-inspect.

  117. Scott Says:

    Matthew #115: I don’t know what complexes or concepts my counterexamples would give rise to. If anyone who’s studied IIT more than I have wants to take a crack at that question, they’re welcome to. In order to think about it, I would need definitions of the terms much more mathematically precise than the ones you quoted (recall that it took me a while even to extract a definition of Φ clear enough for me to work with).

    But what I was really looking for, more than the definitions of terms, was the logic of an argument. In your mind, what if anything could possibly be true about “complexes” and “concepts” that would invalidate the counterexamples I gave?

  118. Ross Says:

    One could think of additional criteria one would want to more fully characterize cognition/consciousness, beyond the ‘necessary but not sufficient’ integration of information.

    But first, allow me to suggest that there should be an evolution operator, similar to a Hamiltonian, that acts as a continuous operator on the state of consciousness if we want to capture a human mind instead of a digital device.

    Okay, so what additional constraints align with our intuition about consciousness?

    First, a consciousness should be ‘aware of itself’. This means that it should not only be powerful enough to model its expected inputs/operators and model its *own functioning*, both approximately with error (think PAC), but it should also be able to separate the two models.

    We’ll call this the Quine Criterion.

    Second, a consciousness ‘function’ should be resilient both to error and to small perturbations one wouldn’t characterize as error. That is to say, a consciousness, at least one that undergoes loosey-goosey phenomenological first-person perception and experience, constantly exists in different highly related states. It’s not obvious that a ‘static’ consciousness can’t exist, but certainly we are interested in characterizing what types of consciousnesses could be like ours, and currently there are no examples of static consciousnesses to point to.

    There may be more extra criteria, but these are what I can think of.

    Now for ‘mathematizing’ these criteria.

    First we have to add the Quine Criterion: for some subset E (for ego) of outputs from F, altered slightly by the external environment (operator) as E’ and fed back into F, the internal state of F must always contain an approximation of F that is within some factor of the correlated information between E and E’. I do not know exactly how to modify Φ to achieve this, and could use some help.

    Next, for resiliency: for any small perturbation F_delta, we must have that Φ(F) ≈ Φ(F + F_delta).

  119. Matthew Says:

    What would invalidate your examples as counterexamples would be if the complexes or concepts generated by them turned out to be trivial (low in number and/or low in Phi), or to have some kind of obvious relation between them… I’m way out of my league here, but I suspect that there’s more to IIT than a high value for Phi.

  120. Scott Says:

    Matthew #119: Well, the things I read (including Koch’s book) were pretty explicit about a large value of Φ being the defining criterion for consciousness in IIT. However, if you’re right that there’s an additional criterion involving complexes or concepts, then quite possibly one would need to modify my counterexamples so that they satisfied the extra criterion as well. I don’t expect it would be that hard, but at this point, I prefer to leave it as an “exercise for the reader.” 🙂

  121. Jay Says:

    Scott #35, on the hard problem

    So well said! Half the time you talk about this I disagree, half the time I couldn’t agree more. I think the two things that bug me are: 1) you seem to think that the hard problem has more beef than solipsism; why? 2) you seem to take for granted Chalmers’s point that we have no access to what happens within others’ consciousness. We do! Or we do, unless we bother with solipsism.

    Scott #35, on IIT

    You thought I was defending IIT because you “of course” cannot extend your proof? LOL! No, I was actually, literally, and probably too naively, asking you to please extend your proof. I consider it about fifty-fifty whether consciousness is purely intrinsic. If you could extend your proof to any intrinsic computation of phi-like quantities, that would represent a very important contribution, at least to my eyes.

    Matthew #115,

    Do you understand what that means? That’s not a rhetorical question; I just have no idea how to interpret/operationalize these definitions and would be glad if you could help.

    #all,

    So interesting thread! Thx all for your comments. 😉

  122. Luke Muehlhauser Says:

    Scott #21: Ah, I thought gasarch was saying that Valiant disproved his own theory of consciousness and then moved on. Thanks for clarifying.

  123. Jr Says:

    Great post.

    Consciousness has always struck me as an unsolvable problem, just like the question of why there is something rather than nothing. Unfortunately, not all scientists agree, and this can cause them to say some pretty silly things.

  124. Rahul Says:

    Scott:

      I understand the feeling, but submit that it’s at least partly due to selection bias. There are plenty of areas of science, and subareas within each science, where people understand exactly what they’re talking about and make definite progress at a rapid rate. But, for precisely that reason, those areas are not the most fun ones to argue about on blogs.

    That, yes. But also, I feel journalists & popular-science authors keep getting drawn to these grand questions, or theories of fantastic reach or profundity.

    And there seems to be a correlation: these tend to be the very same theories that have all the lack-of-falsifiability pathologies.

    Sometimes, it almost seems like funding agencies suffer from the same bias.

  125. David Chalmers Says:

    Scott — this post is great. I’d love to see an information-based account of consciousness perhaps along the lines of IIT work out, but this gives me some pause. Giulio tells me he’ll respond here soon, so I’ll wait to see what he says before forming a definite view.

    For now, here are some thoughts on your “Pretty Hard Problem”, which I like. It seems to me that there are a few different problems in the vicinity that are worth distinguishing.

    The first distinction is between PHP1: “Construct a theory that matches our intuitions about which systems are conscious” and PHP2 “Construct a correct theory that tells us which systems are conscious”. Here I’d say that PHP2 is more important and fundamental than PHP1. As scientists and philosophers we want our theories to be correct. One might think that intuitions are our best guide to a correct theory here, but that’s far from obvious, and in any case even on this view we only care about intuitions as a potential guide to the facts. And certainly I’d say that IIT is intended as an answer to PHP2 rather than PHP1.

    The second ambiguity is between PHP3: “Construct a theory that tells us which systems are conscious” and PHP4: “Construct a theory that tells us which systems have which states of consciousness”. Here I’d say PHP4 is more important and fundamental. Obviously PHP4 subsumes PHP3 but not vice versa. We’d like a theory of consciousness to do more than just give a yes/no judgment, and even IIT goes beyond that by telling us about degrees of consciousness. (Its answer to PHP3 is pretty trivial: almost all physical systems are conscious.)

    Recombination of these two ambiguities gives us at least four potential versions of PHP. But I’d say the PHP of most interest is a combination of PHP2 and PHP4 above. That is, the canonical PHP should be: construct a theory that tells us, for any given physical system, which states of consciousness are associated with that system.

    An answer to the PHP so construed will be a universal psychophysical principle, one which assigns a state of consciousness (possibly a null state) to any physical state. One can then see IIT as a partial answer to the PHP. It’s partial in that it gives us only a degree of consciousness rather than a full specification of a state of consciousness. Presumably two systems could have the same degree of consciousness even though e.g. one has only visual consciousness and the other only auditory consciousness, and insofar as phi just measures degree of consciousness, it won’t distinguish these systems.

    The PHP so construed is somewhat harder than the versions involving PHP1 and PHP3. But it’s still easier than the original hard problem at least in the sense that it needn’t tell us why consciousness exists in the first place, and it can be neutral on some of the philosophical issues that divide solutions to the hard problem.

    Most philosophers and scientists should accept that the PHP is at least a meaningful problem with better and worse answers. As long as one accepts that consciousness is real, one should accept that there are facts (no matter how hard to discover) about which systems have which sort of consciousness (and which potential systems would have which sort of consciousness). And presumably some formulations of those facts will be simpler and more universal than others, so that the question of finding the simplest universal principles that encapsulate those facts will be a meaningful one (at least given a measure of simplicity). Such a principle seems a worthy thing to aim for even on many different philosophical views.

    This sort of principle is more or less what I was calling a “fundamental theory” of consciousness in my book _The Conscious Mind: In Search of a Fundamental Theory_. Of course given such a principle, one may take different philosophical views of its status. A nonreductionist like me will hold that it’s a fundamental law of nature connecting two different things (a physical state and a state of consciousness), while a reductionist may say that it’s an identity claim saying that a state of consciousness just is a certain physical state. But both sides can agree that such a principle is something to aim for.

    I take it that both sides can also agree that something like IIT is at least a candidate partial answer to the PHP. Right now it’s one of the few candidate partial answers that are formulated with a reasonable degree of precision. Of course as your discussion suggests, that precision makes it open to potential counterexamples.

    Your reasoning suggests that IIT isn’t a great answer to problems akin to PHP1 (matching our intuitions about specific systems), but perhaps Tononi could say that our intuitions here are misleading, so that IIT may still be a good answer to problems akin to PHP2 (matching the facts). Now, if our intuitions about specific systems were our only guide to the facts, then if IIT gets PHP1 wrong we should also lose confidence in it as an answer to PHP2. But now a lot depends on to what extent there are other guides to PHP2 that can trump our intuitions about specific systems. Presumably here Tononi could appeal to introspective evidence from phenomenology, and also to factors such as the simplicity/elegance/plausibility of underlying principles. In any case, at least formulating reasonably precise principles like this helps bring the study of consciousness into the domain of theories and refutations.

  126. Scott Says:

    Jay #121:

      Half the time you talk about this I disagree, half the time I couldn’t agree more. I think the two things that bugs me is 1) you seem to think that the hard problem has more beef than solipcism, why? 2) you seem to take for granted Chalmer’s point that we have no access to what happens within other’s consciousness. We do! Or we do, except if we bother with solipcism.

    There’s a set of philosophical questions that (for want of a better word) I’ll call the “vertiginous questions”:

      Why is there something rather than nothing?

      How can a clump of neurons, performing computations in accord with the laws of physics, possibly give rise to first-person subjective experience?

      How can one person ever truly understand the subjective experience of another person—even a close companion—or know whether that person’s phenomenology was similar to or different from her own?

    Even if one thinks the vertiginous questions are ‘scientifically contentless’ and therefore dead-ends, I believe the questions deserve to be treated with respect, if for no other reason than their audacity. I’d even say that the ability to formulate this sort of question is part of what makes us human. It’s possible that a good novelist is better-placed to defend that proposition than a computer scientist like me—I’d especially recommend one of my favorite novels, Rebecca Goldstein’s The Mind-Body Problem.

    But then there’s the “solipsism question”:

      How can I know that I’m not the only entity in the universe with subjective experience, and that all other people—no matter how similar their behavior is to mine—are zombies who don’t really experience anything, and who I can therefore treat purely as means to ends?

    Despite its superficial similarity to the vertiginous questions, the solipsism question also has an arrogance to it (or even moral monstrosity) that prevents me from according it the same respect.

  127. James Cross Says:

    Regarding recursion there is the radical plasticity thesis which argues “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”

    http://srsc.ulb.ac.be/axcwww/papers/pdf/07-PBR.pdf

    I could also add that for organisms we presume to be more conscious (for example, ones that can pass the mirror test) a key part of the learning is the interaction with other conscious beings. In other words, they tend to be social creatures.

    In this view I think we could think of consciousness not as simple recursion but more as something that is a product of the feedback required for an organism to interact with its environment – both the non-living and living parts of it. Some of it may be hard-wired (“learned” through evolution) and some learned through experience. Humans with their relative neurological underdevelopment at birth are probably the least hard-wired of creatures.

    Consciousness, while it may require some threshold of information storage/processing capacity, may not actually be directly dependent on it. For example, the Eurasian Magpie passes the mirror self-recognition test and shows other behaviors indicating consciousness.

    http://en.wikipedia.org/wiki/Eurasian_magpie

  128. Ninguem Says:

    Maybe I don’t understand what you mean by submatrix, but the minor of the Vandermonde matrix with rows (or is it columns?) [1,p-1] and [1^3,(p-1)^3] has determinant zero modulo p.

  129. Luca Turin Says:

    David Chalmers #125
    Completely agree with your categories. I would also add that we may eventually need a proper zero for the consciousness scale, and this may be hard to get. Imagine we had some sort of meter connected to a pointy thing at the tip of which was a consciousness sensor. Would the meter start to budge before we brought the thing up to our head? Would we have to go through the skin and into the brain to get a signal? Would it register something if stuck in our arm? This may end up a bit like electrochemistry, where relative potentials are easy and absolute ones hard. Delighted to hear Tononi will chime in soon, btw.

  130. Gil Kalai Says:

    Nice post! Scott, do you have counterexamples to Tononi’s informational Φ when you restrict yourself just to “biological systems”, like human organs (other than the brain), a single cell, a community of bees, etc., or even to more general physical systems in nature (which are neither man-made nor man-imagined)?

    (Also- what is the evidence that Φ is indeed large for the human brain? Of course, even if Φ is useful in some class of systems to distinguish conscious systems it does not mean that it captures the essence (or an essential element of) consciousness.)

  131. Scott Says:

    Ninguem #128: Good catch! I’ve edited the post to clarify that, if you want the full-rank property, then p should be not merely larger than n but sufficiently larger.

    However, note that, even if p is not sufficiently larger than n, you should still “almost” have what you want, in the sense that all sufficiently large submatrices will have close to full rank. I need to run now, but will expand on that as soon as I have time (unless someone else wants to save me the trouble 🙂 ).
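    Ninguem’s observation itself is easy to check numerically (assuming the natural convention that entry (i, j) of the matrix is x_j^i mod p): the 2×2 minor on powers {1, 3} and evaluation points {1, p-1} has determinant (p-1)^3 - (p-1) = p(p-1)(p-2) ≡ 0 (mod p).

      p = 13                 # any odd prime; illustrative choice
      powers = [1, 3]        # the two rows Ninguem picked
      points = [1, p - 1]    # the two columns

      M = [[pow(x, e, p) for x in points] for e in powers]
      det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p
      print(M, det)          # det == 0 mod p, confirming the rank deficiency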

  132. Alex Mennen Says:

    David Chalmers #125: I realize that I have little background in philosophy of mind and you’re the expert, but I’m still going to have to object to the PHP1 versus PHP2 distinction. It seems to me that PHP2 relies on the existence of an unambiguously defined thing that corresponds to what we call consciousness (e.g. the “systems are conscious if and only if God has endowed them with a soul” theory, and subtler variants). I’m sure there must be a term for theories of this sort, but I don’t know what it is.

    PHP2 asks for a correct, unambiguous characterization of what has consciousness, which only makes sense if there is a definitive notion of how such a characterization may be correct or incorrect.

    However, perhaps some part of PHP2 may be rescued even without that. I propose PHP1.5: “Construct a theory that correctly predicts what our intuitions would be about whether a system is conscious or not if we understood the system in detail and had reflected sufficiently about what we really mean by consciousness.”

  133. Anon Says:

    Scott #126,

    I’d like to defend Solipsism (or at least a version of it that has been bothering me).

    First of all, there seems to be a huge asymmetry between me and everyone else in the world: namely, only I’m me, and I’ve only ever seen the world from my perspective. If everyone else was replaced with p-zombies, then by definition I wouldn’t be able to tell. Given that, how can I even define other people’s consciousness? I don’t see how to state the Hard Problem for other people.

    I don’t support using other people purely as a means to an end (although it’s hard for me to justify why). However, I think this asymmetry *can* have practical consequences in the following setting. Suppose someone builds a Mars teleportation device that works by creating a copy of you on Mars and killing the original (in particular, I’m assuming here that the freebit picture is false). Then I’m not sure if I would go into such a device, but I would definitely encourage my friends and family to go on a trip to Mars. It seems irrational not to.

  134. Scott Says:

    Gil #130: I believe the evidence that Φ is large for the brain is basically that the brain, from what’s known about it, appears to have excellent expansion; and (as I argued in the post) good expansion will typically produce a large Φ-value, unless something specifically prevents it from happening. But really, I’m giving IIT the benefit of the doubt on this point. Obviously, if Φ turned out to be small for the brain, that would only strengthen the skeptical position.

    And yes, it’s possible that there are also biological or other naturally-occurring systems with large Φ-values that are “obviously unconscious” intuitively (e.g., a highly-connected liver). But I can’t think of any slam-dunk examples—possibly because I’m not a biologist, or possibly because there aren’t any. So, this is a second point on which I’m willing to give IIT the benefit of the doubt. The one thing I can say with great confidence is that it wouldn’t be hard to construct an artificial system that had an enormous Φ-value, but that was “obviously unconscious.”

  135. Maxwell Rebo Says:

    Scott,

    Great post! It’s good to see some mathematical arguments applied to these questions, both for stress testing the theory and just for good pondering material.

    While the claims of IIT as it pertains to consciousness are bold, I still find its phi value (integrated information) a rather useful metric, for the following reason: it gives you some clues about which topologies of a system maximize the uniqueness of inner representations of inputs. Whether it has anything to do with capital-C Consciousness or not is, in my humble view, immaterial. If it provides some useful insight into system design, then that is good enough for the curious engineer.

    That said, the debate about its true nature and implications is interesting. Carry on, gentlemen.

    Maxwell

  136. Scott Says:

    David Chalmers #125: Thanks very much for the comment! As I once told you, reading The Conscious Mind as a teenager had a significant impact on my thinking, so it’s an honor to have you here on my blog.

    Now, regarding your distinction between PHP1 and PHP2: for me, like for Alex Mennen #132, the key question is whether it’s possible to articulate a sense in which a solution to the Pretty-Hard Problem could “still be verifiably correct,” even though it rendered absurd-seeming judgments about consciousness or unconsciousness in cases where we thought we already knew the answers. And, if so, what would the means of verification be?

    It’s worthwhile keeping in front of us what we’re talking about. We’re not talking about a scientific hypothesis that contradicts common sense in some subtle but manifestly-testable way, in a realm far removed from everyday experience—like relativity telling us about clocks ticking slower for the twin on the spaceship. Rather, we’re talking about a theory that predicts, let’s say, that a bag of potato chips is more conscious than you or me. (No, I have no reason to think a bag of potato chips has a large Φ-value, but it will suffice for this discussion.)

    I don’t know about you, but if the world’s thousand wisest people assured me that such a theory had been shown to be correct, my reaction wouldn’t be terror that I had gravely underestimated the consciousness of potato-chip bags, or that I’d inadvertently committed mass murder (or at least chipslaughter) at countless snacktimes. My reaction, instead, would be that these wise people must be using the word “consciousness” to mean something different than what I meant by that word—and that the very fact that potato-chip bags were “conscious” by their definition was virtually a proof of that semantic disagreement. So, following a strategy you once recommended, I’d simply want to ban the word “consciousness” from the discussion, and see whether the wise people could then convey to me the content of what had been discovered about potato-chip bags.

    By contrast, suppose there were an improved consciousness-measure Φ’, and suppose Φ’ assigned tiny values to livers, existing computers, and my Vandermonde system, large values to human brains, and somewhat smaller but still large values to chimpanzee and dolphin brains; and after years of study, no one could construct any system with a large Φ’ that didn’t seem to look and act like a brain. In that case, I wouldn’t be able to rule out the hypothesis that what people were referring to by large Φ’ was indeed what I meant by consciousness, and would accordingly be interested in knowing the Φ’-values for fetuses, coma patients, AIs, and various other interesting cases.

    What I’m missing, right now, is what sort of state of affairs could possibly convince me that (a) potato-chip bags have property X, and (b) property X refers to the same thing that I had previously meant by “consciousness.”

  137. Mark H. Says:

    For me, the reason for consciousness must be evolution, in a similar way as for life. Surely there must be a finite equation that indicates the degree of consciousness, and it must depend on the system’s interactions with its environment. Lazily, I see a rough hierarchy: life, intelligence, consciousness. For me, it is possible that a future robot will have some quality superior to consciousness.. :/

  138. Shmi Nux Says:

    Scott,

    In the spirit of measurability, I think one quantity worth analyzing is “introspection/self-awareness level”, which can be defined (for an algorithm) in absolute terms, as, say, the number of bits of its own code available for analysis, and in relative terms as the ratio of that to the total number of bits in the algorithm.

    One can then conjecture that the higher the two numbers are, the more conscious the algorithm is. Of course, one has to define what “available for analysis” means. For example, it could be as “simple” as being able to use its own code as an input. Then programs like quines, with a 100% relative introspection level, are still not very conscious since they are not complex enough in absolute terms, and animals like insects are not very conscious because they don’t introspect much in relative terms. Whereas humans’ prefrontal cortex allows for a much higher relative, not just absolute, introspection value.

    Hopefully something like that could be testable and falsifiable. For example, if there are complicated algorithms which can do something non-trivial with their own code as an input, yet are clearly not conscious according to the common definition, such a criterion would be falsified.
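    To make the quine case concrete, here is the classic two-line Python quine (just for illustration): it reproduces its own source exactly, i.e. 100% “relative introspection” in the above sense, yet it obviously does nothing with that self-knowledge.

      # A standard Python quine: running it prints its own source code verbatim.
      s = 's = %r\nprint(s %% s)'
      print(s % s)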

  139. Jay Says:

    #134 #130, Tononi would disagree, but the cerebellum is likely the slam-dunk example you’re looking for.

    According to IIT the cerebellum is an example of a low Φ-value because its structure is highly modular. But primary sensory cortices are also very modular, and thought to include at least part of the conscious theater. Moreover, there is recent evidence that cerebellar activity is involved in long-range synchronization of cerebral activity (in the gamma band, which is also our best NCC to date). So, it’s becoming hard to defend the claim that cerebellar activity has a much lower Φ-value than the cerebrum’s.

    http://www.jneurosci.org/content/33/15/6552.full

  140. Peli Grietzer Says:

    Scott #136: How about a theory that’s simple, assigns the correct *specific* states of consciousness to brains (e.g. you can mathematically derive from this simple theory that such-and-such neural activity corresponds to the phenomenology of smelling roses, and this neural activity is in fact the one that our equipment registers when people hooked to electrodes smell roses), and also assigns states of consciousness to bags of potato chips?

  141. Abel Says:

    It’s really interesting to consider what else one could do to evaluate a solution to the PHP other than prior intuition (and therefore be able to recognize a solution to PHP2 that is not a solution to PHP1)… The only thing that comes to mind is the fact that different humans/cultures do indeed have different intuitions about the topic, so if a theory disagreed with our intuition, but at least agreed with some others’, that could be a plus.

    It might also be instructive to look at previous situations where a theory that countered previous intuition has been accepted. For example, looking at physics, one could consider the case of Galileo’s thoughts on inertia, and most of quantum mechanics. But here it wasn’t too hard to produce experiments that result in perceptions that support the new theories and go against our previous intuition. It’s not clear at all that this would extrapolate to the topic of consciousness…

  142. Gil Kalai Says:

    Scott (#134)

    So if Tononi’s thesis is actually:

    “For natural biological or physical systems, consciousness is characterized by a high Φ-value. ”

    Then your critique and conclusion lose much of their relevance.

  143. Anonymous Programmer Says:

    lohankin’s idea (#63) of a central consciousness neuron is worth consideration. The idea that the brain might be a fancy quantum computer was dismissed in the past because neuron-to-neuron communication was thought to be too slow, since neurotransmitters have to traverse a synapse.

    But if intercellular quantum computers are hard to imagine, intracellular quantum computers might be much easier to imagine. Theoretically, quantum computers of thousands of qubits could be quite small, easily fitting in a microtubule or other small structure — am I right? Evolution had billions of years to find a way.

    Also most neurons do live exactly as long as you do. More important neurons might be better protected and repaired.

    The central neuron might have many more qubits than the supporting neurons keeping it in charge.

    A conscious free-will quantum computer would be different from a textbook quantum computer because it would be applying its own unitary operators to itself, entangling and partially reducing its own wave function, using its own free will to think and decide. A quantum computer that can experience pain and pleasure and have its own agenda.

    If a quantum computer with n qubits can really store 2^n complex amplitudes, then storage isn’t a problem and neither is processing speed.

    If the above is true then it would have astounding implications! You could move your microtubule from your central neuron to a carefully designed body of your choosing — essentially cheating death. You could design a robot body that works great in the vacuum of space or deep underwater and transfer your central-neuron microtubule to it — to completely control it. Essentially a soul transplant. A future that promises no limits on lifespan and you get to pick your own body — so I hope it is true!

  144. mambo_bab_e Says:

    I am interested in IIT, in the consciousness of a photodiode, and in the integration of information. On the other hand, it is difficult to understand the zombie. Tononi might think it necessary to explain why a zombie has no consciousness. However, it is necessary to be aware that a zombie becomes conscious if the system changes to become a little more complicated. Chalmers’ philosophical zombies were interesting. But it is not necessarily important to distinguish zombies from conscious ones.

    And I felt IIT implied that a photodiode was conscious if it changed its sensor sensitivity after input. Is such a complicated constraint necessary? Was it necessary for the phi calculation of non-consciousness? Or was it necessary for the existence of zombies?
    Prerequisite: whales, fish, octopuses, ants, China, the USA, the internet, and the photodiode are also conscious. Even microbes are.

    Lastly, I wanted an explanation of the memory system as well as of consciousness, because it is closely connected to consciousness. I expect it next time. And I wanted more explanation of the red triangle. Can it be irreducible while associated memories work for consciousness?

  145. domenico Says:

    Is life consciousness?

  146. Maze Hatter Says:

    It might be worth pointing out that making a mathematical model of a mind would include making a model of something that observes and measures.

    A single model, solving both the hard problem and the measurement problem.

  147. Scott Says:

    Peli Grietzer #140: That’s a very good example! Yes, I suppose that such a theory, if it existed, would indeed provide evidence that potato-chip bags were conscious—and amusingly, it would also let us calculate what it was that the potato-chip bags were thinking about! On the other hand, such a theory strikes me as a tall order verging on the impossible—because, if you wanted to calculate (say) the extent to which a given physical system was thinking about roses, then you’d first need a mathematical encoding of the concept of “rose,” and it’s not clear how you’d get that without referring to the very brain state on which you wanted to do the measurement. So maybe it would be more “reasonable” to hope for a mathematical theory that only told you the extent to which a given physical system was thinking about some mathematical concept—like, say, 17? But, when you think about the complicated, messy way the brain represents a concept like “17,” even that seems extremely unreasonable…

  148. Scott Says:

    Gil #142:

      So if Tononi’s thesis is actually:

      “For natural biological or physical systems, consciousness is characterized by a high Φ-value. ”

      Then your critique and conclusion lose much of their relevance.

    Yes, but Tononi explicitly states his thesis more broadly—talking (for example) about why computers designed as we design them now have small Φ and therefore aren’t conscious.

    (Also, as I said, and as Jay #139 affirms, it’s far from clear to me that there aren’t “natural” counterexamples. Right now, I can only say with certainty that there are artificial counterexamples, but that might have as much to do with my math/CS background as with the facts.)

  149. Jay Says:

    Anonymous Programmer #143,

    The idea of a central consciousness neuron goes against much evidence, such as blindsight following large damage to V1.

  150. rrtucci Says:

    How about: self-consciousness of a being = how hard said being fights for self-survival.

  151. Scott Says:

    rrtucci #150: So, the Zealots of Masada (or others who chose death throughout history) weren’t conscious?

  152. James Cross Says:

    Scott #148

    Natural counter example.

    The Eurasian Magpie is a highly intelligent bird in the crow family. It is about 17-18 inches long, but half of that is the tail. Yet Eurasian Magpies demonstrate intelligent behavior indicative of consciousness. They pass the Mirror Self-Recognition test, which seems to be the gold standard for measuring animal intelligence. They use tools, they store food across seasons, and they have elaborate social rituals including expressions of grief. Eurasian Magpies with their small brain should have significantly less integrated information than chimpanzees and bonobos, yet they seem to have capabilities similar to apes. Emery in a 2004 article actually refers to them as “feather apes”. So we might presume that Eurasian Magpies possess some level of consciousness, perhaps roughly equivalent to that of an ape.

    Of course, the more proper measure is not total brain size but brain size relative to body size. Eugene Dubois devised a formula to relate body mass and brain size. Roughly speaking, as body mass increases, brain size increases at the ¾ power. When organisms are plotted on a graph with this relation, they fall somewhere on or near the line that represents this relation. Humans, apes, dolphins, dogs, cats, and squirrels, for example, fall above the line. Other organisms, such as hippos and horses, fall below. Actually the Eurasian Magpie has about as large a brain relative to its body size as an ape does to its body size. If we assume that some portion of the brain is unavailable for consciousness, still the amount left over for the Eurasian Magpie must be significantly smaller than the amount available to the ape.

    It might be possible to argue that a smaller amount of “extra” brain in the Eurasian Magpie might be as integrated as a larger amount of “extra” brain in the ape. But this argument would undermine the observation that the proper measure is not total brain size but brain size relative to body size. There would be no reason that small brains relative to body size could not be highly conscious. Yet when we look at where species fall on the Dubois line, species below the line really do seem to be less conscious and species above it seem to be more conscious.
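    To put rough numbers on this (my own back-of-the-envelope sketch: the 3/4 exponent from above, round order-of-magnitude masses, and results reported relative to the human value so that any normalization constant cancels):

      # Brain mass divided by the brain mass "expected" from body mass,
      # with expected ~ body_mass**0.75; figures are rough round numbers.
      animals = {              # (body mass in kg, brain mass in kg)
          "human":  (65.0,  1.35),
          "chimp":  (45.0,  0.40),
          "magpie": (0.22,  0.0055),
          "horse":  (500.0, 0.60),
      }

      def relative_brain(body, brain):
          return brain / body ** 0.75

      human_ref = relative_brain(*animals["human"])
      for name, (body, brain) in animals.items():
          print(f"{name:7s} {relative_brain(body, brain) / human_ref:5.2f}")

    With numbers like these, the magpie lands in the same ballpark as the chimp, and the horse well below both, which is the pattern described above.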

  153. Aaron Sheldon Says:

    I suspect testing for consciousness is like catching a liar. You have to ask a lot of detail questions, and make careful observations, and even then you can still be fooled.

    Put a little more concretely, any algorithm that takes as its inputs the stimuli and responses from an entity and returns a measure of consciousness can be fooled by a Turing machine. The degree to which it can be fooled depends on the complexity class that the inverse problem belongs to: the problem of taking the desired degree of consciousness and producing the necessary responses to the stimuli.

    Sounds like there could be a nice little no-go theorem in there.

  154. Scott Says:

    James #152: Well, that’s a possible counterexample, but in order to make it convincing you would need to tell me something about the actual internal structure of the Eurasian Magpie and human brains, that would convince me that Φ is much smaller for the former than for the latter. After all, part of the whole idea of Φ is that it’s not measuring something quite as crude as brain weight, or (brain weight)/(body weight)^(3/4). So one can’t lay the failures of the latter measures at Φ’s feet, at least without doing a calculation that suggests that they’re basically proportional in practice anyway.

  155. Peli Grietzer Says:

    Scott #147: Can’t you run the same argument to prove the impossibility of physics? I agree that we have to replace ‘mathematically derive’ with something more nuanced here, but I think the same descent between levels of explanation that allows us to have physics even though we don’t have a mathematical coding for ‘Johnny throws a ball into a hoop’ (and we couldn’t efficiently use fundamental physics to calculate Johnny’s throw even if we did) should work in the PHP4 case.

    (I think that a more realistic version of my example might be something like: a complete or partial ‘fundamental psychophysics’ theory that implies that bags of potato-chips are highly conscious, possibly entailing some specifics about what they’re thinking but not necessarily allowing us to efficiently derive these implications, and theoretically grounds and unifies the patchy but effective ‘macro-psychophysics’ theories that we developed over the next hundred years of studying brains.)

  156. MadRocketSci Says:

    Interesting idea for measuring consciousness.

    “You are hereby summoned to answer the complaint that you are a philosophical zombie, and reply with your answer to the plaintiff’s attorney in….”

    My own ideas about consciousness:

    It seems to me that for something to be ‘experiencing’ the world, at least to the extent that we can point to something in the physical world and call that an experience, there has to be some sort of internal representation of the external situation/world. (This is a bare-minimum sort of thing, not a statement that X is conscious.) Also, there has to be some sort of evaluation going on of that representation. This internal representation of the external world (or simulations thereof) I’ve always identified with the “mind’s eye”. (Criterion A)

    In consciousness as we know it, the purpose of all that brain matter is to take that input and figure out what to do about it (even if what to do includes simply ‘thinking certain things’ about it or adjusting certain internal memories/states). The simplest life that reacts to its environment doesn’t do a lot of thinking about it, but it does have some impression of the world held in its nervous system and uses that to actuate whatever simple motions it needs to react. (Criterion B)

    This is sort of similar to the “integrated information” metric given above, but I would divide the system into something I could identify as the environment and something I could identify as the entity in question. (This is in opposition to some minimum over all possible partitions.) It seems to me that phi, in asking if a dog is conscious, would try to evaluate, among all the other subsets, a third of its kidney and a pie wedge of its water dish.

    Before the entity acts, the environment has no real reason to gain information from the entity, but the entity should be taking in info from the environment.

    I suppose a digital camera would register to some extent via criterion A – it is definitely forming an internal representation of what its sensor detects. It doesn’t do a lot of complicated things with it, or react to the environment though (criterion B) in any interesting way. It just stores pictures.

    If the loop with a digital camera were closed – say you had a digital camera, an image evaluation algorithm, and a control routine trying to control a robot arm to do something (catch a ball, make sure the arm held a bucket in place when being loaded, etc) – then you have something a little more towards what we would think of as conscious. (Still nowhere near what we regard as aware, sub insect but more than a rock.) Your dog might react to such a thing as if it were alive for a while (before getting bored).

    In your matrix example – you’ve got some universally scrambled thing – no partition is identified as the environment or the entity. I suppose something floating in an abstract Platonic void might be conscious in some strange way, but only of things inside its own head.

  157. MadRocketSci Says:

    Intuitively, we’d like the integrated information Φ=Φ(f,x) to be the minimum of Φ(A,B), over all 2^n−2 possible partitions of {1,…,n} into nonempty sets A and B. The idea is that Φ should be large, if and only if it’s not possible to partition the variables into two sets A and B, in such a way that not much information flows from A to B or vice versa when f(x) is computed.

    What if you draw a box around an entity whose integrated information you are attempting to evaluate, but that box falls across some barrier into a region (however small!) that said entity doesn’t experience or interact with? It can’t see around that corner, it is locked in a soundproofed room; you include, along with your target entity and its immediate environment, a small bit of something it couldn’t possibly interact with (a rock on Pluto). Then it would seem to me that for that particular partition Phi(A,B) would have to be zero, and the minimum would then have to be zero, even if A contains a working human being and his immediate environment.
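
    A toy sketch of this observation (in Python, and not Tononi’s actual Φ): define a crude “integration” score as the minimum, over bipartitions (A,B), of the mutual information flowing across the cut in each direction, under a uniform input distribution. Adding one isolated bit that neither affects nor is affected by the rest (the rock on Pluto) drives the minimum to zero.

      from itertools import combinations, product
      from collections import Counter
      from math import log2

      def mutual_information(pairs):
          """I(U;V) in bits, from a list of (u, v) outcomes assumed equally likely."""
          n = len(pairs)
          joint = Counter(pairs)
          pu = Counter(u for u, _ in pairs)
          pv = Counter(v for _, v in pairs)
          return sum((c / n) * log2((c / n) / ((pu[u] / n) * (pv[v] / n)))
                     for (u, v), c in joint.items())

      def crude_phi(f, n):
          """Minimum over bipartitions of the two-way cross mutual information,
          for a map f from n bits to n bits, with uniform inputs."""
          xs = list(product((0, 1), repeat=n))
          ys = [f(x) for x in xs]
          best = float("inf")
          for k in range(1, n):
              for A in combinations(range(n), k):
                  B = tuple(i for i in range(n) if i not in A)
                  a_to_b = mutual_information([(tuple(x[i] for i in A),
                                                tuple(y[i] for i in B)) for x, y in zip(xs, ys)])
                  b_to_a = mutual_information([(tuple(x[i] for i in B),
                                                tuple(y[i] for i in A)) for x, y in zip(xs, ys)])
                  best = min(best, a_to_b + b_to_a)
          return best

      gate = lambda x: (x[0] & x[1], x[0] | x[1])   # both outputs depend on both inputs
      print(crude_phi(gate, 2))                     # about 0.62 bits: no cut is informationless

      with_rock = lambda x: gate(x[:2]) + (x[2],)   # same system plus one isolated bit
      print(crude_phi(with_rock, 3))                # 0.0: the {rock} vs {rest} cut kills the minimum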

  158. Scott Says:

    MadRocketSci #157: Yes, that’s a good observation! I should’ve mentioned this in the post, but for precisely that reason, Tononi includes a prescription in which you “snap” to the subsystems for which Φ is large, and ignore the larger systems containing those subsystems for which Φ is small. In other words, Tononi would say that the system {you, rock on Pluto} does indeed have no consciousness over and above the consciousness of the “you” component, and that the calculation of Φ confirms that intuition. I think formalizing this is the point of Tononi’s notion of “complexes” (someone correct me if I’m wrong here).

  159. Peli Grietzer Says:

    Scott #147: Oy, I missed an aspect of your comment – I thought you were objecting to the doability of strong PHP4 theories in general, not just of ones that welcome potato chips. But you weren’t talking just about the *complexity* of ‘thinking about a rose’ but about the fact that we can only access human-brain examples of thinking about a rose, so how could we possibly come up with potato-chip-inclusive criteria for thinking about a rose, right? That said, I think my argument above might still hold: it seems possible that we can start from a theory that just retrodicts the intuitive structural relations between the phenomenal states affiliated with various brain states, move on to prediction (using drugs and neural manipulation and first-person reports), and through generalization and reduction and unification end up with a fundamental theory that has a lot to say about potato chips.

    (Isn’t this in some sense the way things turned out with physics, with certain aspects of cosmology and QM having untestable unintuitive implications about things you wouldn’t think you’d ever need to talk about when you set out to explain why balls fall to the ground at the same speed? You expressed uninterest in discussing some of these implications unless they can show a payoff, but I think that’s very different from saying that implications of this kind disqualify a theory or that implications like this won’t arise. )

  160. James Cross Says:

    Scott #154

    You are arguing what I already argued against in the last paragraph of #152.

    All observational evidence of consciousness suggests that the brain-weight-to-body-weight ratio is related to consciousness. I know that since you are a mathematician among other things you feel compelled to argue this in mathematical terms.

    Let me ask you a question.

    This post contains a bit of mathematical argument among other more generally logical or experiential arguments.

    Are these arguments – mathematical and otherwise – a product of consciousness or a product of unconscious activity remembered in consciousness?

    I think the latter is more likely the case. Consciousness may be more like an appendage – a third leg so to speak – in our neurological activity.

  161. Scott Says:

    James #160: OK, but obviously, if the brain-to-body-weight heuristic fails in some particular case (e.g. European magpies), then you can’t blame that failure on a different heuristic like Φ! You need to consider each heuristic on its own terms.

    Also, I could tell you that I (usually) feel pretty conscious when I blog, but if you doubt such things at all, then you’d have no particular reason to trust my self-report. Was your comment written consciously or unconsciously? 🙂

  162. Jay Says:

    James #160, nope, the brain-to-body mass ratio fails in several cases. The ¾ power trick helps minimize the oddities, but it can’t help with the fact that ants should then be far cleverer/more conscious than humans.

  163. Darrell Burgan Says:

    Hi Scott – I loved your post. The whole idea of studying consciousness rigorously is fascinating to me.

    One question, though. In your post, you wrote:

    “I conjecture that approximating Φ is an NP-hard problem, even for restricted families of f’s like NC0 circuits—which invites the amusing thought that God, or Nature, would need to solve an NP-hard problem just to decide whether or not to imbue a given physical system with consciousness!”

    My question is: doesn’t nature do this continuously on even trivial phenomena? If one believes Brownian motion deterministically arises from more fundamental physics, then calculation of the trajectory of any given particle is computationally intractable. But nature seems to do it effortlessly. So is computational complexity truly fundamental, or does nature possess some kind of shortcut that allows it to calculate things quickly that we cannot?

  164. Christos G Says:

    A discussion as interesting as the article itself. I really enjoyed it. By the way, being an alien I am still learning your language and I would like to know, what is information?

    Yesterday, I landed in a rainy place. I got out of my ship and when 5 raindrops hit me I opened my umbrella. Were the raindrops information? What was their entropy (I needed 5 to open the umbrella)? What was the status of the raindrops that did not reach me after I opened the umbrella? Are they still information?

  165. Clive Says:

    It seems likely to me that the specific kinds of conscious experiences that I have (“qualia”, etc.) have been selected for as part of my evolutionary history.

    Without trying to pin down the details of how such experiences are generated, if we allow that it could be possible that a “pleasurable experience” may have arisen from, say, something like holding your hand in a fire, then we might also expect that evolution would have tended to weed out the individuals that had that kind of “brain”.

    Also, intuitively at least, I have a very strong sense that my consciousness is not an epiphenomenon. In other words it plays some causal part in determining my actions in the world.

    From this it seems unlikely to me that consciousness arises in any completely deterministic system, and that something like a kind of “free will” and consciousness are intertwined.

    IIT appears to skip merrily past these kinds of issues, providing (at best) a net that may provide some kind of starting point for indentifying possibly consciousness systems but certainly not (IMO) being sufficient by itself.

  166. Scott Says:

    Christos #164: If your civilization was capable of building spacecraft capable of bringing you to Earth, then you’re obviously just as qualified to speculate about the nature of information as we humans are. I presume, in particular, that you have analogues of Shannon’s information theory, Turing’s computability theory, etc., which you could use as a starting point. If you want to ask humans about something where they actually know more than you do, try asking about (e.g.) kittens, the waxing of body hair, or Game of Thrones.

  167. Scott Says:

    Darrell #163:

      If one believes Brownian motion deterministically arises from more fundamental physics, then calculation of the trajectory of any given particle is computationally intractable. But nature seems to do it effortlessly.

    You’re confusing computational intractability with just ordinary unpredictability, due to our not knowing the initial state to sufficient precision. Brownian motion, chaos, etc. in physical systems supply plentiful examples of the latter, but not (we don’t think) of the former.

    A scalable quantum computer, if we could build one, ought to provide examples of the former—letting us solve problems (like factoring integers) that we plausibly believe are classically intractable.

    If you want to know whether your favorite physical system provides examples of computational intractability, the crucial question to ask is this: could I actually use this system to solve some well-defined mathematical problem—a problem for which I could easily feed my laptop all the relevant input data, but my laptop would nevertheless take an astronomically long time? If so, why don’t I try it?

    Crucially, if the only example of a problem you can give is “predict what this system will do,” and if your laptop isn’t even given a “fair shot” at the problem (because it doesn’t have complete knowledge of all the relevant aspects of the initial state), then you’re not talking about computational intractability, but about chaos or something else.

  168. James Cross Says:

    Scott #161

    The brain-to-body-weight heuristic does not fail for any case that I know of. So anybody thinking Φ is the critical measurement has to argue that the Φ of the magpie is approximately equivalent to the Φ of an ape. Possible? Yes. Likely? I think not.

    The radical plasticity theory I linked to earlier, especially when tied to observation that much of the learning of the more conscious species is social learning, explains a lot:

    1- More social organisms will have larger brain to body weight ratios.
    2- More social organisms will be more conscious.
    3- Our subjective sense of self-awareness is learned through interaction with other conscious entities. (mirror neurons?)
    4-We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
    5- Most of what the brain does is unconscious (Freud maybe was right after all). This includes developing math theorems. It also conforms to observational evidence in brain scans that neurons involved in recognition or decision-making fire before consciousness becomes aware of the corresponding thought.
    6- In learning something new we use a great deal of consciousness but once it is learned it requires less consciousness. I concentrated a lot learning to ride a bike. Now I ride and think about consciousness.

  169. James Cross Says:

    Jay #162

    Just saw your comment about ants. See my points 1 and 2 above.

    Clever and conscious are not exactly equivalent but clearly ants and bees are among the most intelligent of insects. Bees have complex communication patterns (waggle dance) and some species of ants “farm” mold.

  170. Christos G Says:

    Scott #166: My ship’s name is “empty bottle”, I am navigating the waves of ideas in search of a seaside where someone will pick me up, not to read my non-existing message but to give me one to carry.

    Our civilisation is advanced, but we are zombies; it is just our civilisation as a physical phenomenon, and not us, that has insights and speculations. Everyone knows more than me, no one more than the Multivac, and Multivac bet me $5 that if you define consciousness upon measurement of something whose nature you can only speculate about, then you have more issues than the boundary conditions of the formulas exposed in your article.

    I have asked many subjects if kittens are conscious, but I got different answers.

  171. J Says:

    Actually, Scott, what do you think about the Indian Institutes of Technology? Does the hype that 90% of their alumni are overachieving geniuses hold up?

  172. Jay Says:

    James #168,

    You said the brain-to-body-weight heuristic does not fail for any case that you know of. All I’m saying is that there exist many counterexamples:
    - several bee species exist, some social, some solitary; same brain-to-body-weight ratio
    - Heterocephalus glaber (the naked mole-rat) forms colonies with some surprising social-insect-like features, unlike solitary moles; same brain-to-body-mass ratio
    - elephants are considered more clever than rats, despite a lower brain-to-body-weight ratio
    - horses are considered more clever than frogs, despite a lower brain-to-body-weight ratio
    - ants and small birds are not believed to be far cleverer than most humans, despite the latter’s lower brain-to-body-weight ratio

    …hope it can help.

  173. Scott Says:

    J #171:

      Actually, Scott, what do you think about the Indian Institutes of Technology?

    I’m all for them! Some of my best friends are IIT grads.

      Does the hype that 90% of their alumni are overachieving geniuses hold up?

    I don’t think there’s ever been an institution in human history, 90% of whose graduates were “geniuses.”

  174. James Cross Says:

    Jay

    The ratio is a useful guideline not a law.

    It stands to reason it might break down somewhat at the lower or higher extremes of body mass.

    What is fairly clear, however, is that species that we think to be highly conscious have a relatively high brain to body mass ratio.

    At any rate the Eurasian Magpie seems to exhibit signs of conscious behavior and is clearly intelligent. It has a much smaller brain than apes, so, unless the information in the magpie brain is a lot more integrated than the information in the ape brain, the amount of integrated information has no direct relationship with consciousness, with the possible exception that some threshold amount might be required to enable consciousness at all.

  175. Darrell Burgan Says:

    Scott #167: you said:

    “… you’re not talking about computational intractability, but about chaos or something else.”

    If I’m understanding correctly, if initial conditions really are known precisely, and nature really is deterministic, there is no such thing as intractability because the incremental changes from one state of the entire system to the next are always tractable?

    But I thought one of the fundamental changes about quantum theory was that outcomes are not deterministic for the same inputs? Doesn’t nature have to do some kind of “probability computation” every time a measurement is made?

  176. Scott Says:

    Darrell #175: If the only issue is that your desired output is probabilistic, then that’s not an issue of computational intractability either. You simply need to phrase the problem appropriately: either ask for a computation of the probabilities, or ask for a sample from the distribution (in which case, of course you’ll need a randomized algorithm to have any chance, but that’s commonly considered and not a big deal).

    Quantum mechanics does raise questions of computational efficiency, but the reason it raises those questions is not because of the mere occurrence of probabilities in the theory! Rather, it’s because of the appearance of amplitudes, which are the things you square to get probabilities, but which evolve in a totally different way than probabilities before a measurement is made. A quantum computer would take advantage of precisely those differences—using amplitudes to solve problems that we think are intractable even for a classical probabilistic computer.

    In general, there’s only ever an issue of computational intractability if you have inputs and outputs, and there’s a well-defined mathematical relationship between the two, and the issue is just that getting to the outputs given the inputs, although possible in principle with a computer, requires an inordinately large number of steps. Anything else is something other than computational intractability. Accept no substitutes! 🙂

  177. Niek Says:

    Dear Scott,

    I hope that all “IIT researchers” of this world regain “conscienceless” after reading your post with its elegant counterexample, and realize that they should start working on something else.

    Somewhere in your post you make a side note about the complexity of computing the entropy (“calculating entropy exactly is a #P-hard problem”)

    I am unfamiliar with this result and I’d like to learn about it. Do you have a pointer to a reference where this result is explained/proved?

  178. Niek Says:

    correction: i wanted to write “regain consciousness”.
    (I am not a native English speaker, and my ios autocorrect function made it even worse)

  179. quax Says:

    Scott, you seem to misunderstand how birthdays are supposed to work, you are supposed to receive gifts, rather than share such a nice one as this IIT take-down with the world 🙂

    Anyhow, thanks for this great post, very impressive how exquisitely polite you are about it. In my own mind I always translated IIT into another two letter acronym that starts with a B and ends in S.

    But at least they were mathematical about it and so allowed you to work your magic.

  180. Scott Says:

    Niek #177: A canonical #P-complete problem is counting the number S of satisfying assignments to a Boolean formula φ(x_1,…,x_n). Now, it’s easy to produce a single bit that equals 1 with probability p = S/2^(n+1): to do so, simply flip a coin, and if it lands tails then output 0, and if it lands heads then pick a uniformly random assignment to φ and output 1 if and only if the assignment is satisfying. But then, from exact knowledge of the Shannon entropy H(p), you could work backwards to determine p, and therefore S. This proves that calculating the Shannon entropy of an efficiently-samplable probability distribution is a #P-hard problem.
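
    To make the reduction concrete, here is a toy sketch in Python, brute-forcing a 3-variable formula so that every quantity can be checked by hand. The point is just the “work backwards from H(p) to p” step: H is strictly increasing on [0, 1/2], and p = S/2^(n+1) ≤ 1/2.

      from itertools import product
      from math import log2

      def H(p):
          """Binary Shannon entropy in bits."""
          if p in (0.0, 1.0):
              return 0.0
          return -p * log2(p) - (1 - p) * log2(1 - p)

      # Example formula phi(x1, x2, x3) = (x1 or x2) and (not x3); n = 3.
      phi = lambda x: (x[0] or x[1]) and not x[2]
      n = 3

      S = sum(1 for x in product((0, 1), repeat=n) if phi(x))   # brute-force count: S = 3
      p = S / 2 ** (n + 1)                                      # the sampler's bias
      h = H(p)                                                  # the entropy an oracle would hand us

      # Invert H on [0, 1/2] by bisection to recover p, then S, from h alone.
      lo, hi = 0.0, 0.5
      for _ in range(60):
          mid = (lo + hi) / 2
          if H(mid) < h:
              lo = mid
          else:
              hi = mid
      recovered_S = round(lo * 2 ** (n + 1))
      print(S, recovered_S)   # both 3: exact entropy of the samplable distribution reveals the count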

    However, the whole point of that remark was to explain that exact computation is the wrong thing to focus on—you can show that the problem of computing Φ exactly is incredibly hard, but that hardness result is purely an artifact of demanding the answer to an absurd precision, and tells you nothing interesting about Φ itself. For that reason, a much better question to ask is the computational complexity of approximating Φ to a reasonable precision. In that case, it’s known that the problem of estimating the Shannon entropy of an efficiently-samplable probability distribution is a complete problem for SZK (Statistical Zero Knowledge)—see for example this survey by Goldreich and Vadhan. And using that, one can derive that the problem of approximating Φ is in the complexity class AM (Arthur-Merlin). I leave it as a challenge for others to show that approximating Φ is NP-hard, and also perhaps to achieve a better upper bound than AM.

  181. Scott Says:

    quax #179: Thanks! I’ve been working on being more polite in my criticisms, as I know that’s an area where I need to improve. It definitely helps when others are polite to me, as they’ve been in this discussion.

  182. J Says:

    Scott #173: Regarding the IITs in India. Thank you for your response. Since you mentioned you have friends who are IIT grads, would you take a non-IITian from India who eats chicken as a student (India is a huge country, and the IITs take in just 500 CS students every year across their different campuses, most of whom happen to be vegetarians)? All the IITians I know are vegetarians, and I assume the sample size is large enough to assume most IITians are vegetarians. I also know that IIT alumni have made good contributions to CS, and I have respect for many of them.

  183. Tyler Says:

    I have to say I’m a little confused, after skimming the Max Tegmark article linked to in the post, that Max would challenge Scott on IIT. In the article, he gives several other important properties of conscious systems (such as autonomy), and says “Just like integration, autonomy is postulated to be a necessary but not sufficient condition for a system to be conscious.”

    I think he even goes on to talk about error-correcting codes as examples of things with large Phi value, and I don’t think anyone thinks extremely long error-correcting codes are as conscious as humans. It seems clear that many IIT researchers think of integration as the core of consciousness. However, it didn’t seem like Max is one of them.
    (And yes, I might just be defending Max since I love his MUH ideas.)

  184. Arko Bose Says:

    Scott,
    I have been a silent reader of your blog for a while now, but could not resist posting my own (perhaps senseless, as would be determined by your delightful criticism 🙂 ) two cents after I read this very interesting piece on IIT.

    Unfortunately, I have not had the time to read through ALL the 181 comments (I have no clue how you manage it), so maybe I am just going to unnecessarily repeat an idea here.

    As convincingly pointed out by you, IIT seems to be a necessary condition for consciousness. So, what would be a sufficient condition? I believe it is the condition of showing a tendency to lower the entropy of stored information in a system. Allow me to elaborate.

    Take a system A with a memory device B that can store information. With just this much definition, A can be a conscious as well as an unconscious system. However, as soon as we endow this system with the ability to “process” information, we find that this system begins to show emerging signs of consciousness.

    The essential test of consciousness (IMHO) seems to be the presence of a bias in the possible responses to stimuli. The higher the randomness in system A’s responses to stimuli, the lesser, it would appear, the degree of consciousness in A.

    But bias shows low entropy. As soon as system A begins to process information in such a way as to show a statistical bias in its responses (decisions/conclusions, etc.) to information (which is what any stimulus is, at an elementary level), I think we can agree that this system begins to show signs of consciousness. The quicker such a bias is shown towards larger and larger sets of stimuli, the higher the consciousness.

    Thus, it seems that a possible sufficient test for consciousness could be whether a system A shows a natural tendency to lower the entropy of stored information.

    Of course, one may object that from this perspective even the Google translator would seem conscious. And indeed, I think it is: to a degree, that is. Consciousness must be determined against specific sets of stimuli. The Google translator is conscious to the set of many possible input sentences in many languages. Human beings are unconscious to incident neutrinos.

    Thus, a DVD player is conscious to digitally stored information in a DVD, while human beings are not conscious to such information until a DVD player decodes it for them, allowing them to process – hence store – this information in a low-entropy form in their brains.

    Thus, to repeat (possibly fallaciously), a system’s consciousness is determined by:

    1) Specific sets of input information. (E.g., I will show very biased responses to jazz, but very random responses to a neutrino shower.)
    2) Whether this system shows a natural tendency to lower the entropy of such information after storing it in its memory device.
    3) The greater the number of sets of input information whose entropy a system can lower in its memory device, the quicker it can achieve this, and the less external intervention it needs, the more conscious the system is.

    Of course, I think it is clear that all forms and levels of consciousness will remain vulnerable to the type of recursion that Timothy Gowers referred to: consciousness does not provide an escape from sufficiently sophisticated illusions. Interestingly, the Hindu religion refers to this problem as “Maya”.

    All forms of severe criticism wholeheartedly welcome. 🙂

  185. rrtucci Says:

    What! No way! I’ve never been accused of being polite before. Scott, your insults and innuendo really cut to the quick! I’ll never read this blog again (although I may post comments in it)

  186. Darrell Burgan Says:

    Scott #176:

    Accept no substitutes!

    I guess I keep clinging to the idea that all the phenomena in nature could be completely simulated by a computer, if only we had the right algorithms, that computation is more fundamental than physics. I think my world view must be backwards. Computation emerges from nature, not the other way around.

  187. Darrell Burgan Says:

    By the way, Scott, I wanted to thank you for being such a good teacher, in addition to being such a tremendous scientist. Your patience in dealing with folks like me who are completely new to quantum computing is really appreciated.

    I’m about 1/3 of the way through your book now. Trying not to go on to the next chapter until I have a fair understanding of the previous one, but not always successful. 🙂

  188. asdf Says:

    Happy birthday Scott, and here is some new unrelated weirdness from David Deutsch and Chiara Marletto that you probably already know about:

    http://www.scientificamerican.com/article/a-new-theory-of-everything-reality-emerges-from-cosmic-copyright-law/

  189. Scott Says:

    J #182:

      would you take a non-IITian that eats chicken as a student from India

    Eating chicken certainly wouldn’t disqualify such a student. Eating quail might.

  190. Scott Says:

    Darrell #186: Nothing I said was incompatible with the view that “nature emerges from computation.” If there’s no way to exploit some phenomenon in order to solve a hard problem, then one could say: there’s no evidence that Nature itself has to solve a hard problem in order to generate that phenomenon for our viewing pleasure! The supposed “hardness” might only ever have been in a particular human representation of the phenomenon.

  191. fred Says:

    A consciousness also needs to be defined in terms of a specific environment in which it exists. An intelligence is only as “good” as its own environment – the environment needs to be stable, but also rich and dynamic.
    I think this is one of the reasons for the lack of progress in AI in these last decades – the illusion that somehow we can replicate/define an intelligence out of a vacuum.
    The patterns of the planet Earth from a billion years ago were already showing the signs of the potential for the appearance of consciousness. How far back is this true? The very fundamental laws of how the physical world assembles must carry this potential… do atoms have in themselves the intelligence to assemble cities?
    A consciousness closed on itself (imagine severing all sensory inputs on an adult brain) would degenerate into an incoherent mess of fragmented shards riddled with psychosis. The consistency of the external reality helps keep our minds together, like an anchor.

    To me one of the biggest mysteries is what it means to “implement” a consciousness (the same question applies to what it means to “implement” any computation).
    Because the state of a brain (its evolution both in space and time, including the in/out sensory input data) can always be described by a unique integer (a different one for each representation), what does it then mean to find that same integer in a different context (say, in the patterns of grains of sand on a beach)? It probably goes back to the nature of “information”, i.e. is there an absolute definition of information, or is it a strictly relative/subjective concept?

  192. Scott Says:

    asdf #188: Thanks for the link! Yes, I saw that paper; Deutsch was kind enough to send me a draft a few weeks ago. I wish I had something more insightful to say about it. I enjoyed reading it (as I enjoyed Deutsch’s previous paper on constructor theory); I very much like the idea of taking “which tasks can and can’t be achieved” as the basic notion in physics. Of course, with any paper of this kind, the real question is what payoff you get in return for accepting the starting assumptions. And so far, the payoff seems to consist entirely in the reorganization of certain concepts (like information and copyability) that many people, possibly including me, would be content just to take as primitives, or to define in particular contexts (like classical or quantum information theory) where they acquire more mathematical “meat.”

    On the other hand, had I read Deutsch’s original papers on quantum computing in the 1980s, my reaction probably would’ve been similar! “Yes, this is a very lovely idea, so it’s a shame that nothing too spectacular seems to come out of it yet…” In that case, I would’ve been right in a sense, but I should’ve been looking myself for the spectacular thing that Deutsch correctly intuited was there! 🙂

    At any rate, I think it’s a gross exaggeration to say, as the SciAm article does, that “physicists have no clear definition for what ‘quantum information’ even is or how it relates to classical information.” It’s almost exactly like saying that “physicists have no clear definition for matter or energy.” Maybe not, but physics isn’t about discovering the “metaphysical essence” of things—it’s about understanding what things do!

    And the idea that “if only we knew what quantum information truly was, we could discover more quantum algorithms” would be more convincing if there were even one example of it. It’s true that most of the quantum algorithms we know were discovered in a more-or-less haphazard fashion, but so were most of the classical algorithms we know! Why should we think there’s anything going on here other than the fact that designing nontrivial new algorithms (like proving nontrivial new theorems) is hard?

    Finally, it’s worth noting that there’s been other work on creating common frameworks for classical and quantum information, including that of Lucien Hardy, that of Chiribella et al., and that of Martin et al. on “algebraic information theory.” It would be interesting to compare constructor theory to these other frameworks.

  193. Scott Says:

    In comment #46 I wrote (in answer to a question):

      I’m sure you could define a quantum generalization of Φ, as you can define a quantum generalization of just about anything. But I’ll leave that as a challenge for others. 😉

    A less flippant answer is that people—including yours truly, as it happens—have thought about measures of the “global entanglement” of n-qubit quantum states, where you ask whether the bipartite entanglement is large across a random partition of the qubits into two subsets A and B. See, in particular, Section 5 of my old Multilinear Formulas and Skepticism of Quantum Computing paper.

    Now, the measure I use there isn’t exactly like Φ, for two reasons: first, I was only concerned with there being large entanglement across a random bipartition; I didn’t care about the entanglement being large across every balanced bipartition. And second, I was only interested in static measures of “global entanglement” for a single n-qubit state; I wasn’t looking at dynamics or evolution operators. However, what I studied there is obviously related to Φ—and in fact, in order to make the global entanglement large, I’m driven to Vandermonde matrices and error-correcting codes for pretty much the same reasons as in this post!

    The discussion in Section 5 of my paper is also relevant to the comment of Ninguem #128, about the conditions under which the Vandermonde matrix has the “information integration” property we want (e.g., it doesn’t if the field size is too small).
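
    For anyone who wants to poke at this numerically, here is a small sketch in Python (mine, not from either paper) of the property that makes Vandermonde matrices useful here: over GF(p) with distinct evaluation points the full matrix is invertible, and for a suitably large prime its square submatrices are typically invertible too, which is the kind of “no cheap cut” structure the construction relies on. Whether every submatrix is invertible depends on the field size, which is exactly Ninguem’s point.

      import random

      def vandermonde_mod(points, p):
          """V[i][j] = points[i]**j mod p."""
          n = len(points)
          return [[pow(x, j, p) for j in range(n)] for x in points]

      def rank_mod(matrix, p):
          """Rank of a matrix over GF(p), prime p, via Gaussian elimination."""
          m = [row[:] for row in matrix]
          rank, cols = 0, len(m[0]) if m else 0
          for col in range(cols):
              pivot = next((r for r in range(rank, len(m)) if m[r][col] % p), None)
              if pivot is None:
                  continue
              m[rank], m[pivot] = m[pivot], m[rank]
              inv = pow(m[rank][col], p - 2, p)           # inverse via Fermat's little theorem
              m[rank] = [(v * inv) % p for v in m[rank]]
              for r in range(len(m)):
                  if r != rank and m[r][col] % p:
                      factor = m[r][col]
                      m[r] = [(a - factor * b) % p for a, b in zip(m[r], m[rank])]
              rank += 1
          return rank

      p, n, k = 1009, 8, 4
      V = vandermonde_mod(list(range(1, n + 1)), p)
      print(rank_mod(V, p) == n)          # True: the full Vandermonde matrix is invertible

      trials, invertible = 200, 0
      for _ in range(trials):
          rows = random.sample(range(n), k)
          cols = random.sample(range(n), k)
          sub = [[V[r][c] for c in cols] for r in rows]
          invertible += rank_mod(sub, p) == k
      print(invertible, "of", trials, "random 4x4 submatrices were invertible")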

  194. J Says:

    Sorry for my silly question and thank you for your patience.

  195. Daniel Reichman Says:

    Very nice post. Thanks for writing it.

  196. Arko Bose Says:

    Scott,
    I was really hoping that you would spend a few Scott-minutes to trash my comment as meaningless, high-entropy noise. 🙂

  197. Christos Gio Says:

    Arko Bose #184: You could have a look at von Foerster’s work; he had a very similar definition of organisation based on entropy, and also Schrödinger’s negentropy or entaxy (τάξη/taxy -> order). But such a measurement fails to differentiate the consciousness of a diode (if such a concept exists) from that of a human. That is Tononi’s subject of research. Again, if you look at Tononi, that is what he essentially alludes to, but applied over networks, where entropy is reduced in the process of information flows. He tries to open the black box of entropy reduction to get some more context.

  198. Abel Says:

    fred #191: It seems like in the IIT approach, the interaction with the outside could be encoded in the updating function f (either as an additional input, or in its structure under suitable assumptions).

    As for the relation of this with AI, it seems like some of the swarm robotics approaches might take the relation with the outside (including other agents) more into account. In any case, you might enjoy reading the “Artificial Intelligence” section of this; it gives some historical context about people making these points in the past.

  199. Scott Says:

    Arko #196:

      I was really hoping that you would spend a few Scott-minutes to trash my comment as meaningless, high-entropy noise.

    Err, OK. 😉 You write:

      Thus, it seems that a possible sufficient test for consciousness could be whether a system A shows a natural tendency to lower the entropy of stored information.

    It seems to me that this criterion is open to immediate counterexamples, in the same sense that the Vandermonde matrix was a “counterexample” to IIT. You mentioned Google Translate as a possible counterexample, but why not consider something much, much simpler—like a program that just overwrites your entire hard disk with the all-0 string? Such a program has an enormous tendency to “lower the entropy of stored information.” But if it’s conscious, then again, the meaning of the word “conscious” has been stretched beyond anything I understand.
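
    For concreteness, a minimal illustration (in Python) of just how cheaply that kind of “entropy lowering” can be achieved: the byte-level empirical entropy of a buffer, before and after the all-0 program runs.

      import os
      from collections import Counter
      from math import log2

      def byte_entropy(data):
          """Empirical Shannon entropy (bits per byte) of a buffer."""
          counts = Counter(data)
          n = len(data)
          return -sum((c / n) * log2(c / n) for c in counts.values())

      buf = bytearray(os.urandom(1 << 16))   # 64 KiB of random "stored information"
      print(byte_entropy(buf))               # roughly 8.0 bits per byte

      buf[:] = bytes(len(buf))               # the "all-0 program": erase everything
      print(byte_entropy(buf))               # 0.0 bits per byte: maximal entropy reduction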

  200. Serge Says:

    In mathematics nowadays, there are not just deductions and knowledge. There are also opinions. Mathematicians make up their own opinions – their own conjectures – thanks to a psychological process which hasn’t been fully explained yet. Some people believe or disbelieve some statements such as RH or P=NP… and they’re all convinced that they’re right and the others wrong! Likewise, some people believe in God while others don’t. Some people like mathematics but others don’t, etc… How does the mind devise its own opinions? In *my* opinion, whichever theory that tries to explain that will be more than welcome. Is integrated information such a theory? I wish it was…

  201. James Cross Says:

    #196 #199

    Doesn’t consciousness increase entropy not decrease it?

  202. Christos G Says:

    James Cross #201: If I remember my cybernetics journeys correctly, it decreases entropy within the system (making sense), but increases the entropy in the surrounding environment.

  203. Arko Bose Says:

    Scott,
    This is why I love your blog: it refines my thoughts. In response to your well-founded objections, let us ask this question: “Is it possible to speak about the consciousness of a system A without first defining what it may be conscious to?”

    The reason I believe that this question is relevant may be motivated by several examples:

    1) The human ears are not “conscious” to acoustic waves whose frequencies lie outside the range 20Hz – 20kHz. Thus, no matter how many inputs my ears receive outside this range, my ears’ response to them will remain completely random, i.e., unbiased. I cannot process such inputs: my brain cannot lower such an input’s entropy, hence my brain’s response to such inputs remains completely unbiased.

    2) No human being is conscious to the constant shower of neutrinos. Again, no matter how many neutrinos pass through me, my response to them will remain completely unbiased. I cannot process this input of neutrinos: my brain cannot lower the entropy of this input, hence its response to this neutrino shower remains completely unbiased.

    3) Since I know about the existence of two languages Cantonese and Vietnamese, I am conscious that these two languages exist. However, since I understand neither of these languages, I may not profess to be conscious to any sentence in either language. Further, I may not even profess to tell the difference between the two languages. Again, my responses to both these languages and to the differences between them will remain completely unbiased (unless, with the help of some external intervention, I may begin to pick up these languages) since my brain, due to its inability to process such inputs, will not be able to lower the entropy of such inputs.

    I can keep going on, but a little thought will tell you the following crucial fact:

    If you think of ANY system that can be said to be conscious to ANY set of inputs, then it is necessarily (and sufficiently) because its response to this set of inputs will be found to be biased.

    Now, processing any input is essentially a computational procedure. Thus, the harder it is for a system A to process a set S of inputs, the lower is its consciousness to S.

    I claim that the following definition captures the essence of consciousness:

    Given a set of inputs S and a system A, A is conscious to S if and only if its response to S is biased.

    Let me offer a few more examples:

    1) I am conscious to the visible spectrum of electromagnetic waves: my response to this spectrum is not random, because my brain successfully processes such an input.

    2) My body is conscious to ATP: its amount in my body always evokes a biased response: hunger and fatigue.

    3) Everything I am conscious to evokes a very biased response in me.

    4) A stone is conscious to the “inputs” of gravity and electrostatic repulsion: no matter what the source of gravitons is, if these gravitons interact with the stone, its response will always be biased: it will always move toward the effective source of these gravitons.

    No matter what you throw at the stone, the stone will ALWAYS respond in a highly biased way: its outer electrons will ALWAYS feel the repulsion of the outer electrons of the thrown projectile.

    However, no stone is conscious to Cantonese, or any other human language, or sex, or music, etc., because no matter what the input looks like from any of these sets of inputs, the stone’s response to them will never show any bias: the stone is incapable of processing inputs from any of these – and many more – sets.

    The simple program that you spoke about, in accordance with the definition I gave above (could you offer a definition that captures the essence of consciousness better?), is indeed “conscious” to its set of instructions, but unconscious to this blog, for example.

    Indeed, my Linux distribution is pretty unconscious to any of the Windows system-files and vice-versa.

    So, unless (and I really hope that you render this “unless” dead with your arguments) you can offer me a better definition of consciousness, we see:

    Given a set S_k of inputs, a superset C of categories of all possible S_k, size Z of C, and a system B, we say:

    1) B is conscious to S_k if its responses to elements of S_k are biased (i.e., not random): if B reduces the entropy of the inputs from the set S_k in a finite time T.

    2) Let C represent the superset of all S_k for which 1) above is true. Then, the larger Z and the lower T, the higher the degree of consciousness of B to C.

    In this restricted sense, since every system will respond in a biased way to some input or other (how can anyone define a system without defining its boundaries, and how can any boundary be defined without specifying what this boundary keeps “out”? This essentially ends up defining a set of interactions: the boundary MUST recognize, i.e., process in a finite time, what it needs to exclude), evidently this system will be conscious to that specific set.

    I know this will disturb you. But think of any system that YOU think is conscious and you will immediately see that you cannot talk about its consciousness without referring to any set of inputs. You will also see that the moment you specify such a set of inputs to which your system is conscious, you will essentially find that this system’s responses to this set of inputs will always be biased: this system will essentially lower the entropy of this set of inputs.

    As always, your criticisms are warmly welcome.
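
    One possible toy formalization of this “bias” test, as a sketch in Python (only one crude reading of the criterion, among many): score a system’s responses to a stimulus set by how far their empirical distribution falls below the maximum possible entropy. Zero means the responses look unbiased; larger means more bias. This only makes the criterion computable for a given response log; it does not, of course, defend it.

      import random
      from collections import Counter
      from math import log2

      def entropy(samples):
          n = len(samples)
          return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

      def bias_score(responses, num_possible_responses):
          """Max entropy minus observed entropy of the response distribution, in bits."""
          return log2(num_possible_responses) - entropy(responses)

      stimuli = list(range(1000))
      biased_system = ["duck" if s % 2 else "cover" for s in stimuli]              # deterministic rule
      random_system = [random.choice(["duck", "cover", "shrug", "nap"]) for _ in stimuli]

      print(bias_score(biased_system, 4))   # about 1.0 bit of bias (only 2 of 4 responses ever used)
      print(bias_score(random_system, 4))   # about 0.0: responses look unbiased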

  204. Scott Says:

    Arko #203: Unfortunately, no amount of verbiage is going to convince me that, if a theory predicts that the all-0 function (i.e., a program that simply erases whatever is in memory) is “conscious”—conscious of anything, doesn’t matter what—then the theory is capturing the same thing that I’m trying to get at with the word “consciousness.” (As always, the fact that I don’t know exactly what consciousness is, doesn’t mean that I can’t be crystal-clear about what it isn’t!)

    If you insist that the all-0 function is conscious, at some point I’ll just reply, “OK, fine, but then I’m interested in a more restricted kind of consciousness—call it conscious consciousness—that the all-0 function doesn’t have. And in order to characterize conscious consciousness, you’ll clearly need to impose some additional conditions; the ones you’ve given so far (like decreasing the entropy of stored information) clearly don’t suffice.”

  205. Guy Inchbald Says:

    You are absolutely right. But your “pretty hard problem”, how to tell if a system is conscious, is trivial – ask it! Setting aside mechanical programming, I would suggest that lies, fantasies and other untruths are the hallmark of a conscious system. If it is capable of understanding and answering your question, then it is conscious.

    The distinction between true and false statements has long been a stumbling block for those who would base all information on the laws of physics. Tegmark has a theory that certain statements called counterfactuals can be rolled into the laws of physics, but I would say there is a qualitative difference between a statement that might be false (a counterfactual) and one that can never be true from the outset – a lie, a fantasy, or a simple mistake.

    What I do find pretty hard is to convince some people that the hard problem exists. Clearly, a conscious mind will have identifiable neural correlates with a 1:1 correspondence between correlates and inner experience. A good many people assume that if these are wholly correlated then they must be the same thing, therefore there is no distinction for a “hard problem” to puzzle over. Their criterion for 100% correlation is based on concurrence, not on all reported properties. This is therefore not a logical identity, but they seem to be convinced it is.

  206. Guy Inchbald Says:

    Oops, not Tegmark, I meant David Deutsch. Sorry about that.

  207. Scott Says:

    Guy #205:

      your “pretty hard problem”, how to tell if a system is conscious, is trivial – ask it!

    OK, great. And what if it’s programmed to reply “yes, I’m conscious” in response to that specific question, and has no other capabilities whatsoever? (Like a 2-line computer program that I could write this minute?)

    So presumably you meant something more like, “have a conversation with the system in which it not only affirms its consciousness, but (much more importantly) passes the Turing Test.”

    But then we’re not giving any kind of simple criterion for consciousness, anything that we could program our computers to judge. So this isn’t a solution to the pretty-hard problem at all.

  208. Arko Bose Says:

    Scott,
    I am glad you spoke about such a restriction, because it’s what I had thought about earlier too, before feeling uncomfortable with it.

    Allow me to exemplify my discomfort with such a restriction.

    Let us start with the premise that we don’t know what consciousness is. Then, let us claim that human beings are conscious. Could you offer me an intuitive argument to support this claim, without referring at all to stimuli?

    I hope you can. But in case you cannot, then aren’t all stimuli just another name for external input?

  209. Scott Says:

    Arko #208:

      Let us start with the premise that we don’t know what consciousness is. Then, let us claim that human beings are conscious. Could you offer me an intuitive argument to support this claim, without referring at all to stimuli?

    Yes. The crucial point is that the word “consciousness,” as pretty much everyone uses it, is defined largely by reference to our own example. We don’t have access to some separate, human-independent definition of consciousness, which would allow us even to frame the question of whether it’s possible that toasters are conscious whereas humans are not.

    By analogy, imagine 19th-century scientists built a thermometer that delivered the result that boiling water was colder than ice. The possibility that that was true wouldn’t even merit discussion—it would be immediately rejected in favor of an obvious alternative, that the thermometer was simply a bad thermometer, since it failed to capture our pre-theoretic notion of what temperature is even supposed to mean, which concept includes boiling water having a higher temperature than ice.

    I have to admit, this all seems so obvious to me that I’m flabbergasted at how many times I’ve had to repeat it in this thread, in slightly different words. Maybe it’s because I come from math/CS, where formal definitions of informal concepts automatically get judged better or worse according to how well they capture the original concepts. No one asks whether maybe the calculus definition of continuity is all wrong, so that floor(x) is a continuous function whereas 2x is not. If your definition of continuity delivered that result, then your definition could be rejected for that very reason.

  210. sf Says:

    some quick remarks on the literature:

    new! preprint:

    http://arxiv.org/ftp/arxiv/papers/1405/1405.7089.pdf
    Consciousness: Here, There but Not Everywhere
    Giulio Tononi, Christof Koch
    (Submitted on 27 May 2014)

    but no allusion found there to Scott’s counterexample,
    and it isn’t clear what’s new in this arxiv preprint.

    Maybe these passages are meant to address Scott’s counterexample indirectly (re the hard problem):
    “But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them. This is the approach taken by Integrated Information Theory (IIT)… an evolving framework that provides a principled account for what it takes for consciousness to arise, offers a parsimonious explanation for the empirical evidence, makes testable predictions, and permits inferences and extrapolations. (iv)”

    Footnote (iv):
    “If it is not outright wrong, IIT most likely will have to be refined, expanded, and adjusted. However, in its current form (IIT 3.0), it explains and predicts a wide range of phenomena…”

    So it seems the best account of IIT may still be “Integrated information theory of consciousness: an updated account,” G. Tononi, Archives italiennes de biologie, 2011 – but it is NOT open access.

    Note that starting on p. 14 it has a section, “Matching” (which also discusses “capture”), relating neural dynamics to the external environment, which many commenters asked about.

    The more recent, and open access, IIT 3.0:
    http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003588
    (discussed already by Matthew #104, Stiz #110, etc.) seems to be not as complete (re Matching etc.) as the “updated account”. All this is based on a first impression, after a cursory look through the papers.

    IIT 3.0 does contain the ploscompbiol supplement:

    Text S1 Main differences between IIT 3.0 and earlier versions

    Some points don’t differ from the “updated account”, so it seems that the “updated account” has one leg in the 3.0 era.

    Also, points 3–7 of S1 do sketch the new-look Phi, based on the earth mover’s distance (EMD):

    http://en.wikipedia.org/wiki/Earth_mover's_distance
    http://en.wikipedia.org/wiki/Wasserstein_metric

    “takes into account the similarity or dissimilarity of states between whole and partitioned distributions. Moreover, an extended version of EMD is applied to measure the distance between whole and partitioned constellations of concepts in concept space”

    interesting aside:
    http://en.wikipedia.org/wiki/Leonid_Kantorovich

    A more detailed new-look Phi is in the plos supplement, Text S2 Supplementary methods, but it promises to be much more complicated – it may be (first guess) intractable to evaluate, or even estimate, as it uses in some sense all irreducible substructures… so one may need some good luck to find a counterexample.

    Also provided there:
    Text S3 Some differences between integrated information and Shannon information.
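
    Since the new-look Phi is built on the earth mover’s distance, a minimal sketch in Python may help. It uses the standard 1-D shortcut: the EMD between two distributions on an ordered set of states equals the sum of absolute differences of their cumulative distributions. IIT 3.0 actually applies an extended EMD over high-dimensional “constellations of concepts”, so this is only the basic ingredient, and the two distributions below are made up for illustration.

      from itertools import accumulate

      def emd_1d(p, q):
          """Earth mover's distance between two distributions on states 0, 1, ..., k-1."""
          assert len(p) == len(q) and abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
          return sum(abs(cp - cq) for cp, cq in zip(accumulate(p), accumulate(q)))

      whole       = [0.70, 0.10, 0.10, 0.10]   # hypothetical distribution of the whole system
      partitioned = [0.25, 0.25, 0.25, 0.25]   # hypothetical distribution after a partition

      print(emd_1d(whole, partitioned))        # how much "probability mass" must be moved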

  211. Andrew H Says:

    I think this is a really interesting problem, and I really enjoy some of the comments above. I’m an IIT fan, and I’m working with a version of the theory right now (I’m a psych/neurosci postdoc), so I want to make a defense that I don’t think has showed up yet:

    [First off, the definition of ‘consciousness’ that I adopt, and that I take the IIT to target: consciousness is the simultaneous, unified, complex state within which experience takes place; for humans, the *contents* of this state are obvious, and the object of e.g. experimental psychology. I’m not sure, but we may diverge right there.]

    The point made in the post is well taken, but ultimately I think it’s in error. It confuses the system with a description of the system. If I understand the argument correctly, it’s claiming that since an apparently mundane and uninteresting numerical structure (the Vandermonde matrix, from the POV of consciousness science) can be used to produce any desired value of Phi, the Phi concept must be vacuous, since this seriously violates our expectation that a conscious system must be something complex and interesting. There are some other issues in there that seem like finer points that I don’t quite get, so I hope I’m not missing the whole point.

    To make a measurement of Phi, we have to transform the system into a set of numbers and ways of transforming those numbers; so here in the counterexample you have a matrix and a set of specified operations. Since the matrix can be of any size, it can produce any value of Phi. So, if we knew the Phi of human consciousness, we could set a human consciousness next to an appropriately scaled Vandermonde with the same Phi, and see for ourselves that they don’t have anything interesting in common.

    But right there I think there’s an extreme disconnect between concepts; the Vandermonde system is a mathematical object or, at best, a computer program, while the human consciousness is a *thing*, a material system, i.e. some substructure of a human brain (I am not a Tegmarkian!). We should put them on equal terms if we want to compare our intuitions about their consciousness qualifications. One way to do that would be to transform the human consciousness into a mathematical object like the Vandermonde system: a matrix of terms and a specified way of transforming this matrix from iteration to iteration (this might actually be the way we would measure the human Phi in the first place).

    If we do that, we now have two matrices, side by side, with different structure (probably of different dimensionality, etc), but producing the same Phi when we carry out the calculations. Does it seem so absurd now that they are both conscious? Really, our intuitions will say that neither of these is a conscious system – they’re both representations or descriptions of systems, and we’re going to accept the nominal consciousness of the human system only because we know what it represents: memory, emotion, perception, etc. If we didn’t already know that, we’d probably think that they were both similarly distant from consciousness.

    But then, what if we do the opposite? Take the human material system (the brain), and set it next to a material system that instantiates the Vandermonde system? What would that be? I don’t know – I figure that there are infinitely many ways to build a material representation of such a system. But say there is one that is biological, and that carries out its transformations through biochemical electrical circuits. You set that next to the human brain, and then, does it still seem so absurd that this thing could be conscious? Of course, it’s not conscious in the psychological sense – it doesn’t have memory, perception, emotion, etc. We don’t know what its consciousness is ‘like’. But there we have… something, a complex, inscrutable material system, integrated and with complex internal structure, doing something that produces a lot of Phi. It’s not psychological, but I will maintain that for a system to be psychological (having certain types of functionally defined mental states) is different in kind from the system being conscious, although to our knowledge the two things are only ever seen hand-in-hand.

    When we look at it this way, I don’t think it’s so obviously absurd. It does require that ‘consciousness’ be detached from normal psychological notions – but this is part of the point of the “inside out” approach of the IIT: generate a theory based on what consciousness seems to be, rather than how it functions, or what it is for. It is probably true that psychological functions are especially good at generating consciousness, but I don’t see how consciousness is obviously *defined by* there being a set of psychological functions (I am not a Dennettian). On this point, I don’t think the ‘reductio ad mathematicum’ is a fatal challenge to the IIT.

  212. Marko H. Says:

    If consciousness can be defined, then other “less” vague concepts such as love, justice, etc. may be defined “more easily”, and that would be the (unavoidable?) end of philosophy.

    It is likely that the equation would have to be defined by a kind of deep-learning machine or something like it. Probably it must be related to something where the machine is able to learn about its own state. If this machine had zillions of sensors (sight, smell, etc), actuators (desire, imagination, etc) and a complex semantics (mother, maths, etc), it would correspond to another ordinary conscious being.

  213. Jay Says:

    sf #210,

    Thx for the link. What’s new are clarifications of the interpretation of IIT, such as:

    “IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.”

    “for the simple reason that the brain is real, but a simulation of a brain is virtual (including, sensu stricto, its simulated value of Φmax)”

    I feel like I’m discussing etiquette with the Queen Mother, and then she farts.

  214. András Says:

    Scott, thanks for a lucid discussion and your careful responses.

    The “argument” facing you seems to have the form: given a Contraption, which requires some kind of percussive force to operate, though the details of how that force is to be applied are messy and ill-understood; take this handy hammer; and finally claim that the things the hammer is good at hitting are Contraptions. This attempt to appropriate a perfectly reasonable, if fuzzy, prior meaning of “Contraption” raises my hackles. I want to shout: just call the darn thing “Phi-consciousness”, move on, and do some science!

    In contrast, your patient responses to the comments are an inspiration to us all.

  215. Ben Standeven Says:

    @Arko Bose: Maybe it would help to present Scott’s argument in a different form?

    Just like you, I would have said that I am not conscious of Chinese or Vietnamese; but by your proposed definition, I am conscious of them: No matter what is said to me in those languages, my response would be, “I’m sorry, I can’t understand you.” But, as a constant function, this is maximally biased, hence maximally conscious.

    @Guy Inchbald:
    But mental experiences and neural correlates are equivalent (in fact, identical) in their reported properties: if I report a mental experience, my report is caused by the neural correlate of that experience. So in fact, the experience is not only perfectly correlated with my report of my neural state; it actually is my report of the neural correlate.

  216. Arko Bose Says:

    @ Ben Standeven: When I spoke of biased responses, I should have elaborated. Let me do so now.

    When we respond to some statement by saying: “I cannot understand the statement”, it is not actually a response TO the statement, it is actually a meta-statement: we are simply articulating the fact that our brain is unable to process the statement.

    By a biased response I mean this: Let us say you are in the middle of a crowd of people consisting of groups which speak different languages. To which groups will your responses be biased? Those, whose speakers are using languages that you understand. To make this argument more concrete, let us take specific sets of sentences:

    Let us say that in a controlled experiment, the following groups of people are present: Vietnamese, Sri Lankans, Cantonese, Aboriginals, English, French, Spanish, Bengalis.

    They have each been instructed to say anything that they want to you. Now, each sentence that your brain can process will receive a biased response (in the sense that if you hear one of the English speakers say to you: “Are you married?”, then you are likelier to answer “Yes” or “No” than to say: “But the Pacific Ocean is not the smallest ocean”).

    So, how biased would your responses (both speech and actions) be to each group’s conversations? Since your brain would not even be able to process many of the languages, no matter what those groups say to you, neither your speech nor your actions would reveal any bias towards them (think of it this way: you are walking towards the bar when a Bengali says to you: “aapnaar ki peyt kharap?” His question would be unlikely to effect any response in you. You would probably not even look his way: you would continue doing what you are doing, without showing any effect of his questions/sentences on your actions).

    But every time you hear an English sentence being spoken, your brain will react, perhaps influencing your own speech and actions.

    This same statistical behavior will reveal itself in differing degrees, based on how well you can process information presented in each of those languages. And by simply recording the bias in your responses and the time it took you to respond, one can determine how conscious you are of Vietnamese, Sinhalese, Cantonese, Aboriginal languages, English, French, Spanish, and Bengali.

    Thank you for your criticism. It really helped me refine my own thoughts 🙂

  217. David Pearce Says:

    Phenomenal binding would seem classically forbidden for membrane-bound neurons separated by 3nm-40nm-sized electrical gap junctions and chemical synapses.
    David Chalmers identifies this profound structural mismatch between the physical brain and the phenomenal features of our minds as grounds for dualism.
    And Max Tegmark is generally reckoned to have debunked quantum mind with his calculations of ultra-rapid macroscopic decoherence timescales.
    And contra Penrose and Hameroff, there is no evidence the unitary dynamics of QM breaks down in the brain.
    But instead of treating Tegmark’s calculations as a reductio ad absurdum of quantum mind, IMO they are better treated as a falsifiable prediction.
    When we can probe the neocortex at the sub-picosecond timescales around which thermally-induced decoherence occurs, IMO we’ll find not computationally and phenomenally irrelevant “noise”, but instead a perfect structural match between the formal-physical and phenomenal properties of mind.
    (cf. the “persistence of vision” watching a 30-frames-per-second movie versus the c.10^13 quantum-coherent frames-per-second conjectured here.)

    Ridiculous?!
    Good.
    This is precisely what we want: novel and testable predictions.

  218. Jay Says:

    self-correction for #121

    “Chalmers’ point that we have no access to what happens within others’ consciousness. We do! Or we do, except if we bother with solipsism.”

    Actually Chalmers clarified this very point more than 10 years ago, so I stand corrected in thinking that believing the former implies rejecting the latter.

    http://consc.net/papers/scicon.pdf

  219. Kevin S. Van Horn Says:

    “trying to have their brain and eat it in precisely the above way.”

    This post was worth reading just for that amusing if rather disturbing turn of phrase alone.

  220. Kai Teorn Says:

    Philosophical zombies and Searle’s Chinese room are not impossible or incoherent. Their problem is different: they just fail to prove what their inventors are trying to prove. When you’re talking to such a zombie or to the person in the Chinese room, you’re not really talking to them; you’re talking to whoever wrote their programs and instructions. The actual “zombie” is just a communication medium – like a phone. Philosophizing about whether a phone becomes conscious the moment you hear your mom talking through it is rather pointless.

  221. Scott Says:

    Kai #220: Sorry, I don’t think that argument works. What if the p-zombie had evolved by natural selection, like we did? Then there would be no other entity to ascribe the consciousness to.

  222. Kai Teorn Says:

    Scott: Well, perhaps what I said applies more to the Chinese room experiment where, by definition, there are some Chinese instructions prepared by someone knowing Chinese for the ignorant person in the room to use. Such an instruction-following entity can evolve, but that proves nothing: you’re still talking not to “him” but to the author of the instructions.

    As for fully evolved zombies, they are then just indistinguishable from humans, so even more pointless. Chalmers’ whole argument amounts to “we can imagine that physicalism is false, therefore it is false.” Sorry but no, you can’t imagine that, you can only say that.

  223. Hector Zenil Says:

    Hi Scott,

    We were discussing IIT at my lab a couple of weeks ago and we arrived at similar conclusions. In a recent invited talk I gave at AISB50 at Goldsmiths in London, on cognition, information and subjective computation (slides available here: http://www.slideshare.net/hzenilc/aisb50-presentation), I touch upon IIT, exposing two issues together with possible amendments (by the way, my slides have pointers to your nice philosophical paper).

    One is the issue of IIT implying panpsychism. Disregarding whether one finds that unfortunate, this can easily be solved by setting a threshold; in other words, a minimum Φ should be attained before declaring a system to have anything we can identify as human- (or animal-) like consciousness. Otherwise it is absurd to call conscious something with Φ = 1 × 10^-100, which is what Tononi et al. imply. The other main issue is, as you point out, that IIT should have been introduced as a theory of a necessary condition for consciousness, not a sufficient one.

    But there is another, even more basic, side of IIT that has perhaps not been explored beyond the discussion of whether it is a measure of consciousness. Is Φ anything interesting as a graph measure? In other words, is there any combination of graph-theoretic measures that can account for a measure such as Φ (either Φ itself or a similar version, similar in that it captures exactly the intended original idea: how well connected the different parts of a network are)? By a graph-theoretic measure I mean a measure, or set of measures, that does not make use of non-graph-theoretic quantities such as Shannon entropy. Is Φ orthogonal to every combination of graph-theoretic measures? Is it a non-local version of the clustering coefficient? Studying this may also shed light on possible (or not) faster computations of Φ or of a variant of Φ.
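
    For instance, as a toy starting point (my own sketch, nothing from the slides), one can at least watch how cheap graph measures behave on small graphs, e.g. a min-cut “integration” proxy versus the clustering coefficient:

        # toy comparison: a crude "integration" proxy (global minimum cut, via
        # Stoer-Wagner) versus the average clustering coefficient, to see that
        # the two measures can pull apart
        import networkx as nx

        graphs = {
            "ring":      nx.cycle_graph(20),                   # barely integrated
            "circulant": nx.circulant_graph(20, [1, 2, 3]),    # clustered and integrated
            "bipartite": nx.complete_bipartite_graph(10, 10),  # integrated, zero clustering
        }

        for name, G in graphs.items():
            nx.set_edge_attributes(G, 1, "weight")
            cut_value, _ = nx.stoer_wagner(G)   # weight of the weakest bipartition
            print(name, "min cut:", cut_value,
                  "avg clustering:", round(nx.average_clustering(G), 2))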

    By the way, Φ_AR was proposed as a way to analytically speed up the computation of Φ_E (the non-Markovian version of Tononi’s original Φ, version 1.0 or 2.0) under a Gaussian-distribution assumption, in case you wanted to perform some numerical approximations while avoiding the combinatorial explosion of the bipartitions.

    One last thing that strikes me is the amount of publicity that a few, often controversial but always inflated, theories/models are getting in the media, as we can witness from all the copycat articles on multiverses and consciousness over the last year or so. I think other theories and other scientists’ ideas deserve a bit of the media’s attention, even if the people behind them are not as good at selling them as others are. Given that the media’s interest is to reach the largest public, disregarding prudence or common sense (e.g. the recent Turing-test-pass fiasco repeated by science media all over the world), perhaps it is up to scientists to find a middle ground and hold their horses, for the sake of a better and more plural media and a fairer treatment of colleagues and ideas. This is not aimed at you, Scott, who often ends up participating in the media only after suggesting others and finding them unavailable.

  224. mambo_bab_e Says:

    As I wrote in #100 and #144 before, I am interested in IIT.

    I heard recently about Kyoto University research showing that some plants can protect themselves after learning of damage done to others by a harmful insect. The researchers used the word “recognition” naively, but I thought the word was correct.

    Current IIT has the condition that a system is a zombie if it does not meet the complicated condition. However, that seems impolite to the plants. I expect IIT to be brushed up.

    And again I point out that a big Φ (for something nonconscious) is not the problem for current IIT. Even current IIT can show an outline of consciousness with a numerical value. I think someone who points to the big-Φ problem MAY also point to the photodiode and microbe problem (as nonconscious).

    At this point I regard photodiodes and microbes as CONSCIOUS. If there is someone who doesn’t agree, we need another discussion.

  225. Vikram Says:

    I loved the post, Scott! The fundamental problem that you raised with IIT is that your construction can create a higher value of Φ than the human brain can ever achieve, and of course we know that decoding circuits are not conscious or intelligent.

    Do you think the following scenario, if possible at all, can save IIT from your counterexample?

    1. The idea stems from inflationary cosmology: in the slow-roll inflation model, the potential energy of the scalar field, in some sense, goes down after a very short time. At first there is very fast expansion in a bubble, and then the inflation stops as the potential-energy hill becomes steeper.

    2. Based on that, for “calculating” the measure of consciousness in IIT we can propose another variable, phi_2, that augments the phi calculation. So anytime phi is calculated for, say, the decoding circuit that you proposed, we get a very high phi. But just like the scalar field in inflation, the value of phi then drops very drastically.

    3. In that sense, we can say that as we calculate phi, we get a phi_at_origin, and then very shortly afterwards the phi value drops under the influence of phi_2, which is the scalar field reaching the steep hill. So phi drops to a very small but non-zero value.

    4. Maybe in humans a method to “capture” phi exists, so that phi won’t drop, while the circuit you proposed doesn’t have this same capture mechanism. So, in a sense, the circuit might have a very high level of consciousness, but it doesn’t have any mechanism to integrate it into a higher-level experience, so the phi drops. Our brain can integrate it, and so our phi value stays at what it was when calculated.

    5. As we design more and more complicated and integrated machines, maybe this method of capturing phi will also arise, and those machines can start to feel some level of consciousness. A machine possessing a high level of consciousness may, for instance, pass the Turing test. So one can say that a machine designed this way can also capture phi and stop it from dropping, whereas other systems that don’t have this level of integration (and maybe other variables) can’t capture it, and under the effect of phi_2 it drops down to a very small non-zero value.

  226. marek Says:

    Scott, why do you think “stupid” systems, e.g. systems that do nothing but apply a low-density parity-check code, should be denied consciousness because they’re not intelligent? Must consciousness necessarily be “intelligent” consciousness? Why can’t a system that cannot “think” anything be conscious?

  227. Scott Says:

    marek #226: We’ve already been over that in this thread and the other threads, but briefly—once you expand the notion of “consciousness” to encompass obviously unintelligent entities, like expander graphs and 2D grids (as IIT does), I can’t for the life of me understand why you shouldn’t then expand the notion even further, to encompass 1D grids, non-expanding graphs, rocks, atoms, and just about everything else. I no longer understand what the ground rules are, which allow IIT to ascribe consciousness to certain specific unintelligent entities but not others (see my followup post for my reasons why IIT’s “axioms of phenomenology” leave me unmoved). To put it differently, it’s not so much panpsychism itself that I have a problem with, as the arbitrarily selective panpsychism (so, not exactly panpsychism but expanderpsychism) exemplified by IIT.

  228. marek Says:

    Thank you for the quick response. I’m not sure, however, about “why you shouldn’t then expand the notion even further, to encompass 1D grids, non-expanding graphs, rocks, atoms, and just about everything else.” I’d more readily attribute consciousness to your expander graph than to a rock, on the grounds that an expander graph and a brain are more similar, i.e. they both have internal macroscopic structure and can process some information?

    A side question: is calculating Φ with quantum computers more feasible?

  229. Scott Says:

    marek #228: If computing Φ is indeed NP-hard (as I think it is), then one wouldn’t expect even an efficient quantum algorithm in general. (See the tagline below the title of this blog! 🙂 )
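
    (A toy count, just to make the brute-force scaling concrete; it illustrates the blowup of the search over bipartitions, not the NP-hardness claim itself.)

        # number of ways to split n units into two nonempty parts: 2**(n-1) - 1,
        # so exhaustively minimizing over all cuts is hopeless long before
        # brain-sized n
        def num_bipartitions(n: int) -> int:
            return 2 ** (n - 1) - 1

        for n in (10, 20, 40, 302):   # 302 = the C. elegans neuron count
            print(n, num_bipartitions(n))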

    Also, keep in mind that there are entities that can process information in arbitrarily complex ways (e.g., 1D Turing machines), yet that have vanishingly small Φ and hence aren’t conscious according to IIT. If we’re trying to design a consciousness-meter, then why on earth should I accept being organized vaguely like a brain in certain physical respects (e.g., being an expander graph) as more important than acting like a brain behaviorally (i.e., being intelligent)? If we’re going to go this route, then should we also give bonus points for an entity’s being wrinkled, gray, and goopy? 🙂

  230. Dmytry Says:

    It could also be interesting to consider the Φ of an ideal gas at a given temperature, to leverage the thermodynamics.

    After a timeframe roughly equal to the size of the gas volume divided by the speed of sound in said gas (i.e. *faster* than it takes the brain to ‘integrate information’), every single atom’s state is very highly dependent on the initial state of every other atom.

    No information is getting truly lost in the ideal gas, but the information is being mixed up all through the volume, rendering the parts maximally interdependent, much the same as in your counter-example with the Vandermonde matrix.
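
    (Back-of-the-envelope, with my own made-up but plausible numbers, just to make the timescale comparison concrete:)

        # rough numbers only: sound-crossing time of a small box of gas versus
        # the ~100 ms usually quoted for conscious "integration" in the brain
        box_size_m = 0.1            # a roughly brain-sized volume of gas
        speed_of_sound_m_s = 340.0  # air at room temperature
        brain_integration_s = 0.1   # ~100 ms ballpark

        mixing_time_s = box_size_m / speed_of_sound_m_s
        print("gas mixing ~%.2f ms vs brain ~%.0f ms"
              % (mixing_time_s * 1e3, brain_integration_s * 1e3))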

    Making jokes about the hyper-consciousness arising from the cremation process is left as an exercise for the reader.

    Now, it’s clear that consciousness, if it is valid as a question at all, has something to do with ‘information’ being ‘processed’; but how exactly it is ‘processed’ has got to be a determining factor in any hypothetical objective metrics of subjective states (and there have got to be many such metrics, for things like pain, pleasure, itchiness, and so on).

  231. marek Says:

    Maybe one example showing that behavioral intelligence doesn’t automatically produce consciousness could be our own brains in everyday life; neuroscientists commonly point out that most processes, decisions and the “thinking” behind those decisions happen “automatically”, or “in the background”, without the involvement of consciousness. It’s in line with common experience: sometimes we try to find a solution consciously and fail, and the next morning a ready idea suddenly “appears” in the mind. So if consciousness were necessarily linked with intelligence, perhaps we would consciously perceive all reasoning processes – but we do not.

    And similarly, when dreaming, we sometimes experience vivid, qualia-rich dreams, even though our reasoning capabilities and intelligence are strongly reduced – yet consciousness remains strong.

    Going back to IIT, do you think it can explain why certain sensations are more “neutral” (blue and red), and some are less so (a toothache and the taste of wine, or pleasure and pain in general)? Can it be explained by the shape of “qualia space”, e.g. if the “shape” is unbalanced, consciousness perceives it as discomfort, or pain?

  232. doubt Says:

    Only you can experience your own consciousness. Therefore we cannot verify that a system is conscious. Thus any theory of consciousness cannot be verified scientifically. Therefore, the scientific method does not allow us to study consciousness. Thus consciousness is a part of reality that we cannot know. Any theory of consciousness would have to begin in a very creative way to overcome these limits. It must first say how its predictions can be verified. Is there currently such a theory? If not, what are we talking about?

  233. Guy Inchbald Says:

    Harking back to a couple of earlier points:

    How do we ask a system if it is conscious? Much the same way we ask a chimpanzee or a dolphin or an octopus: we interrogate its state of mind with experiments and we build a picture of its mental sophistication – we develop a relationship with it and we test that relationship. Turing never said how complex or sophisticated his test would need to be, nor how long it would take. If you can fool the test, then it isn’t a properly-designed test. None of the cute “it fooled ’em” tests have properly exercised the relationship between human subject and machine.

    Are inner experiences identical to their neural correlates? A 1:1 correlation does not make them identical. There is a 1:1 correlation between the existence of the heads and the tails of every coin, but they are not the same existence.
    Also, the 1:1 correlation of neural activity with experience is an unprovable hypothesis. It may be that a certain pattern in Alice’s mind always corresponds to a certain perception of redness, but what about Bob’s mind? Maybe that pattern creates quite different qualia (elements of conscious experience) for him; maybe he actually experiences something closer to the qualia that Alice experiences when she sees blue. There is no way we can ever know. To philosophers, this has long been the nub of the “hard problem.”

  234. Axel Boldt Says:

    Fascinating post and fascinating discussion!

    Scott, you say your intuition tells you that the Vandermonde example is not conscious and that its large Phi value therefore invalidates IIT. I would like to probe your intuition a bit further. Do you reject the Vandermonde example as unconscious because it is too regular, as it is given by a matrix that’s very simple to describe, or do you reject all functions S^n -> S^n (together with a state x from S^n) as unconscious? Or phrased differently: do you think some of these functions (with S and n held fixed) are more conscious than others, and the Vandermonde example belongs to the less conscious ones?

    It seems to me this latter position is the only one consistent with your statements. Because if you believe all functions S^n -> S^n to be equally unconscious, then you could have dismissed the definition of Phi immediately; no need to construct an elaborate counterexample.

    Now if I’m right and you do recognize some functions S^n -> S^n to be more conscious than others, it would be really interesting to hear a justification for this, because it might direct our search for a better definition of Phi.

    Cheers!

  235. Integration, complexity, recurrence, intuition | Peter's neurophysiology blog Says:

    […] very basic matrix operations that are conscious when quantified using Tononi’s theory (see here and here for nice, although very wordy discussion that also include a reply by […]

  236. Andy Says:

    Scott #136: It seems that you are working with a definition of consciousness that goes something like “similarity to a human” or “similarity to a human brain.” You are equating degree of consciousness with intelligence, as though a stupid entity couldn’t feel its simple thoughts with as much vividness as a human. I don’t feel any more alive and conscious when I’m tackling difficult mental problems than when I share simple moments with my lover, so why should I expect such a consciousness-intelligence correlation?

  237. Cosmin Visan Says:

    I wonder how IIT saves consciousness from being epiphenomenal. A philosophical zombie also has integrated information, so what’s the point of a system having qualia over and above the mere physical structure? I don’t understand why all the people making these theories fail to acknowledge that their theories make consciousness epiphenomenal. When will they ever take free will seriously into account and remove the epiphenomenality of consciousness from their theories?

  238. Joachim Keppler Says:

    Scott,

    A lot of facets have already been covered in this interesting thread. Since many aspects of the discussion have a fairly strong mathematical character, I would like to contribute some additional thoughts from the point of view of a physicist.

    To start with, I totally agree with you that the postulates underlying IIT are not compelling; I am not confident about these postulates either. I also cannot see any rigorous derivation of Φ. Hence, there is no reason to believe that just this quantity should be the universal indicator of the amount of consciousness of a given system.

    Notwithstanding the above, I think that the basic ideas behind IIT are definitely a step in the right direction. This becomes particularly obvious from one of Tononi’s papers from 2012 where he writes: “From the perspective of IIT, one can formulate a tentative scenario that may help to form a tentative model of possible neural substrates of consciousness … A plausible scenario for characterizing such dynamics is in terms of transient attractors … According to IIT, several aspects … of transient attractor dynamics appear well suited to information integration.” Here, Tononi is on the right track (see below).

    As far as your criticism of IIT is concerned, I would like to seize on the remark made by David Chalmers that our intuition is not necessarily a good guide to a correct theory of consciousness. Against this backdrop, I consider it doubtful to use paradigm cases to guide our choice of a formal definition of consciousness. To be clear, I also share your view that rocks, walls and toasters are most probably not conscious. However, these objects must not be declared unconscious by definition as long as we have not fully understood the fundamental principles and mechanisms behind conscious systems. Rather, conclusions such as “a toaster is certainly not conscious” must be an output of a theory of consciousness that incorporates these fundamental principles.

    Taken as a whole, I am convinced that it will be unrewarding to build a theory of consciousness on the basis of axioms and definitions, by analogy with mathematics. To the best of my belief, it will be much more helpful and promising to adopt the good practices of physics. Here, the common strategy is to study systems systematically, take data, find regularities and unveil universal principles. Only after the discovery of the underlying principles are we in a position to make valid statements about the properties of a given system. An appropriate example would be superconductivity: we simply cannot judge in advance whether or not a system is superconducting (under certain conditions) as long as we have only insufficient knowledge about the mechanisms behind superconductivity. But once the mechanisms are known, we should be prepared for surprising results. E.g., it could turn out that some systems can be manipulated such that they exhibit superconductive properties at much higher temperatures than expected.

    The strategy for the development of a theory of consciousness should follow the same rules, with the only difference being that both first-person and third-person data have to be taken into account. We have to start with systems that we undeniably know to be associated with conscious and unconscious states (our brains). These systems must be studied systematically. From the neurophysiological body of evidence we have to extract regularities that finally point to the mechanisms searched for. Since consciousness is a fundamental characteristic of our existence, it is to be expected that the underlying mechanisms are of a universal and fundamental nature. Once we know the fundamental principles behind conscious systems and conscious processes, we can study all kinds of systems with respect to the presence of the mechanisms. On this basis we can judge which of these systems are associated with conscious states and (hopefully, in a second step) learn which states of consciousness are associated with the individual systems.

    My work in the field of consciousness research follows exactly this strategy. The details of my approach can be found in the following article: A new perspective on the functioning of the brain and the mechanisms behind conscious processes.

    The key insight from physics is that the attractors mentioned by Tononi make use of a universal mechanism that may very well be understood as integration of information. The crucial point is that only attractors leave characteristic imprints (information states / correlation patterns) in a ubiquitous energy field that lends itself as an ideal candidate for the substrate of consciousness. So, conscious processes can be distinguished from unconscious processes in that only the former utilize a mechanism that writes information into the substrate of consciousness. In other words, in conscious processes the mechanism is present, while in unconscious processes it is not.

    Now, coming back to intuition, it turns out that every nonlinear quantum system makes use of this fundamental mechanism. As a consequence (at least as long as we do not have any additional exclusion principle) a hydrogen atom may be conscious, which is a surprising and probably counterintuitive conclusion. Nevertheless, it is a clear implication of my approach … and in my understanding of science it would be unscientific to reject my approach purely on the basis of this implication.

    Final remark: Of course, at the moment we cannot tell what it feels like to be in the ground state of the hydrogen atom. Certainly, the conscious existence of the hydrogen atom is extremely rudimentary and monotonic, completely incomparable with the rich phenomenology of our existence.

  239. glen Says:

    The DVD player has failed the Turing Test to the point of not even taking it, or of no one bothering to test it.

  240. willjasen Says:

    Integrated information theory intuitively seems useful in some particular ways. Experienced phenomena like synesthesia are suspected to be caused by neural connections that would not otherwise be there in the average person, the brain thus being more connected and integrated.

  241. Robin Says:

    Hi Scott,

    I notice you say that philosophical zombies can’t exist in the real world. I would point out that Dr Tononi has said that an implication of IIT is that a non-conscious system can be functionally identical to a conscious system.

    So P-Zombies are no longer an incoherent metaphysical concept, they are now a perfectly respectable scientific concept.

    Will you tell Daniel Dennett, or will I?

  242. What are the most informative books or other valid sources on "the hard problem of consciousness"? - Quora Says:

    […] Scott Aaronson’s Pretty Hard Problem of Consciousness (http://www.scottaaronson.com/blo…) and the Phenomenal Binding Problem(s) (Binding problem) are worthy of consideration. The Hard […]

  243. Daniel Says:

    With regard to your list of things that intuitively are and are not conscious, how do you distinguish between intelligence and consciousness? That list makes some sense if you replace “is/isn’t conscious” with “more/less intelligent”. Could it be that a computer applying this matrix computation is conscious but doesn’t seem like it because it isn’t intelligent? I mean, that sounds silly, but we really don’t have much to go on when it comes to determining whether something is conscious. Every individual (presumably) knows that they are conscious, but they don’t know that anyone else is conscious. All they have are other people’s reports of their consciousness.

  244. Kevin Aylward Says:

    It’s mathematical masturbation. Chalmers is correct: consciousness is a new axiom of the universe, unable to be derived from anything. It’s futile to try to “explain” consciousness. It is a fundamental fact that any words used to “explain” consciousness can only be words that refer back to consciousness itself, i.e. what does “aware” mean, what does “understanding” mean, what does “explain” mean. It’s all circular. The loop cannot be broken to insert “electron”, “mass”, “energy”, etc.; hence it’s impossible to introduce physical facts into consciousness. “Pain” is what a kick in the balls gives you. It is subjective, not objective. It can only be “explained” by using other words which can also only be defined within the same loop. There is no way out or in. There is no way for maths to be introduced into that loop. Circular loops mean that anything can be “proved” in the loop; hence, there is no proof at all.

  245. Pedro1 Says:

    The hard problem should be relabeled the impossible problem, not because it’s impossible to understand (it’s easy), but because it (the understanding of our own subjective experience) is impossible to compute.

  246. Consciousness and the Attention Schema Theory - Meaningful? Says:

    […] you seen https://scottaaronson-production.mystagingwebsite.com/?p=1799 and https://scottaaronson-production.mystagingwebsite.com/?p=1823? […]

  247. Shtetl-Optimized » Blog Archive » If I can’t do math, I don’t want to be part of your revolution Says:

    […]  I was the “official skeptic” of the workshop, and gave a talk based on my blog post The Unconscious Expander.  I don’t really agree with what Horgan says about physics and information in general, but […]

  248. Peter Turney Says:

    “Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense. He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas.”

    One problem with this is that the value of Φ depends on the level at which you choose to describe the physical system. At the hardware level of description, a typical computer has a modular design, and thus a relatively low Φ. But imagine that a large but conventional, modular computer is running a simulation of a human brain. At the software level of description, Φ is relatively high. Thus whether a physical system (such as a conventional computer running a brain simulation) has a high or low Φ depends on the level of our description of the system.

  249. Argument that AI cannot be conscious | Lowly Ocean Says:

    […] https://scottaaronson-production.mystagingwebsite.com/?p=1799 […]

  250. A Debate on Animal Consciousness | The Rationalist Conspiracy Says:

    […] IIT at all. I have the same reaction to IIT as Aaronson, and encouraged Aaronson to write his post on that […]

  251. Consciousness | if truth exists Says:

    […] Aaronson argues that the only test for such a theory is “whether it can produce results agreeing with common […]

  252. Consciousness I | if truth exists Says:

    […] Aaronson (MIT) argues that the only test for such a theory is “whether it can produce results agreeing with common […]

  253. Chuck Simmons Says:

    “if n were 10^14 or so—something that wouldn’t be hard to arrange with existing computers”

    The n × n matrix has 10^28 elements; that is 10,000 trillion trillion. So we start out with a 10-terabyte disk drive, a little bigger than what is currently available to us, costing on the order of $100. We would then need a thousand trillion of these disk drives to store the matrix at one byte per cell.

    That “currently existing computer” would require, what, a thousand years of current worldwide GDP?
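
    (A quick check of those numbers; the drive size, price and world-GDP figures are my own rough 2014-era assumptions:)

        # back-of-the-envelope check of the storage estimate above
        n = 10**14
        cells = n * n                 # 1e28 entries in the n-by-n matrix
        drive_bytes = 10 * 10**12     # one 10 TB drive
        drives = cells // drive_bytes # at one byte per cell
        cost_usd = drives * 100       # ~$100 per drive
        world_gdp_usd = 10**14        # ~$100 trillion per year, rounded
        print("%.0e drives, ~$%.0e, ~%d years of world GDP"
              % (drives, cost_usd, cost_usd / world_gdp_usd))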

  254. Stanley Jozwiak Says:

    I fully agree with your sensible position on this matter.

  255. Shaikh Raisuddin Says:

    “What is consciousness?” is the first question.

    Without a precise definition, modelling may be erratic.

  256. Minds & Panpsychism | WG-IT-Services-WordPress 020 8262 0472 Says:

    […] noted that there is currently no way to test this theory, and that integrated information theory fails some common-sense tests when trying to deduce what things are conscious. (A thermostat, for instance, may have some low-level consciousness by this metric.) But Koch said […]

  257. Minds Everywhere: 'Panpsychism' Takes Hold in Science -RocketNews Says:

    […] noted that there is currently no way to test this theory, and that integrated information theory fails some common-sense tests when trying to deduce what things are conscious. (A thermostat, for instance, may have some low-level consciousness by this metric.) But Koch said […]

  258. Effective Altruism Geneva – Qu’est ce que la conscience? Says:

    […] This question arises because we have a clear understanding of the range of possible solutions, despite the complexity of the question itself. So complex that Scott Aaronson calls the distribution question the Pretty Hard Problem of Consciousness. […]

  259. David Roberts Says:

    This is not my area of expertise, so I may be wrong. However, I think much of this discussion, with the notable exception of Joachim Keppler’s post, is based on a fundamental error. If I understand it correctly, the arguments are premised on the assumption that there is a unified underlying causal system. Those who propose an analogy with computing take the error even further, assuming that brain functions are based on logic-gate types and controlled processing.

    It is certainly true that our conscious awareness appears to be largely coherent and consistent, but that says nothing about the other functions of the brain, which are implicit and appear to make up 80-90% of brain functions. For example, recognition of a concept, e.g. a blue colour or a blue book, depends on implicit perception and categorization of that perception before it can be recognised and brought to awareness. From what I can see, there are multiple, disparate and automatic implicit processes that occur. What comes to awareness is merely the tip of the iceberg, emergent from the interaction of a myriad of organic processes. Those multiple implicit processes can be, and often are, inconsistent and contradictory. There is no underlying causal system but a dynamic interplay of conflicting ‘concepts’.

    Indeed it takes a special set of circumstances to trigger conscious awareness. Consciousness is not integral to our brain function (or indeed our experience) but is an epiphenomenon.

  260. Tommy Yip Says:

    I am writing an article of pure speculation on the possibility that consciousness interacts with the physical world via probability (a synergetic set):
    https://issuu.com/tommy39/docs/interactivedualism
    I think that if interactive dualism does exist, then it could be detectable experimentally, by searching for an irreducible synergetic set of random signals occurring in the brain.
    Anyone who is interested may take a look or give advice.
    The idea comes from the observation that consciousness or free will seems to relate to neural ‘background random signals’.
    http://www.livescience.com/46411-free-will-is-background-noise.html

  261. Onne Says:

    Apologies for being a little late to the party. As an information scientist, I find that Φ does not make much sense. The Φ of a computer is low, but it can model any Φ inside. So almost the moment a system has a Turing machine in it, does it have unlimited Φ? No, it depends on the program…

    I have written about how a brain that experiences conscious thought might evolve from a single neuron here: http://leverlabs.io/blog20160924/ but the gist of it is this: the brain persists and predicts sensory input. That mechanism can also start predicting the outcomes of plans. Plans include things not in the real world, like “utility”, “harm”, “plans” and “self”. This is how the self becomes a thing to the brain.

    (I’m considering “conscious” in the dictionary sense, to be aware of being alive; that is how most people understand it. Otherwise we might as well call our DVD player conscious…)