My diavlog with Eliezer Yudkowsky

Here it is.  It’s mostly about the Singularity and the Many-Worlds Interpretation.

(I apologize if Eliezer and I agreed too much, and also apologize for not quite realizing that the sun was going to set while I was speaking.)

And here’s the discussion that already took place over at Eliezer’s blogging-grounds, Less Wrong.

96 Responses to “My diavlog with Eliezer Yudkowsky”

  1. Carl Says:

    Eliezer seems very confident considering that he’s talking about things that have never been seen, whether strong AI or other worlds.

  2. Liron Says:

    Eliezer had the more “interviewer” role in this diavlog, asking about topics on which he’s already written very extensively and thoughtfully. I would like to see Scott bounce his own topics of interest off Eliezer.

    By the way Scott, thanks again for writing up your extraordinary Quantum Computing Since Democritus course (a.k.a. Intro to Scott’s PhD Thesis?). I’m only on lecture 8 and I’m really looking forward to reading the rest in the next couple months.

  3. Carl Says:

    I agree with Liron: the public needs more exposure to computational complexity.

  4. John Sidles Says:

    My impression is that Scott and Eliezer both embrace the standard model of quantum mechanics as a large-dimension (linear) Hilbert space … and isn’t this shared assumption the main reason why their discussion ends up “reshuffling their cards and playing the same game” (in Scott’s words)?

    It is perfectly feasible to shuffle an alternative deck of mathematical cards and play different games, and the resulting narrative is (IMHO) just as much fun to think about as standard quantum mechanics.

    Let’s imagine that the history of math and science has already repeated itself three times … with the role of “underappreciated young mathematician/scientist” played (in succession) by János Bolyai, Hugh Everett, and Kurt Schilling.

    The resulting triptych narrative is a fable … definitely not history … but like many fables it is fun and instructive (Camus: “The truth is colossally boring”, Twain: “Truth is precious; let us economize”).

    — Fable I —

    Once upon a time, people lived in a Newtonian dynamical world whose state-space was Euclidean. Young János Bolyai envisioned an alternative state-space, but no one took these ideas seriously until practical engineering applications (Gauss’ survey of Hannover, for example) required it.

    — Fable II —

    Once upon a time, people lived in a Copenhagen dynamical world whose state-space was Euclidean. Young Hugh Everett envisioned an alternative state-space, but no one took these ideas seriously until practical engineering applications (building quantum computers, for example) required it.

    — Fable III —

    Once upon a time, people lived in an Everett/Chuang/Nielsen dynamical world whose state-space was Euclidean. Young Kurt Schilling envisioned an alternative state-space, but no one took these ideas seriously until practical engineering applications (modeling quantum spin microscopes, for example) required it.

    —————

    The point is that Eliezer and Scott’s discussion follows a sub-optimal search strategy in looking to the future for revelations about the nature of quantum mechanics. Our triptych of fables suggests that an equally viable search strategy is to look to the past for transformative mathematical ideas (in this case, the work of Ashtekar, Schilling, and Berezin, to name a few), and then look to the present for engineering applications (large-scale quantum spin microscope simulations, for example).

    As for revolutionizing our conception of quantum mechanics … well … doesn’t this kind of cognitive revolution usually happen last, not first?

  5. John Sidles Says:

    Whoops … the role of “student whose PhD work was underappreciated” is actually played by Troy A. Schilling (not “Kurt” as in the post above).

    Troy’s advisor was Abhay Ashtekar, and his PhD thesis Geometrical formulation of quantum mechanics provides a framework—based in a nutshell on the “compatible triple” of Kählerian quantum state-spaces—that encompasses pretty much all of today’s large-scale quantum simulation codes (and as a bonus, the “compatible triple” framework integrates naturally with string theory).

  6. roland Says:

    Can you explain what decoherence means in many-worlds? I find that interesting because, if I remember correctly, decoherence entails that microscopic quantum weirdness cancels itself out at larger scales.

  7. Vladimir Levin Says:

    I can’t help the desire to just get this off my chest: this Eliezer fellow strikes me as such a tool. He gets so passionate and pushy about discussions that are entirely pointless: the singularity will either arrive or it won’t, so what’s the purpose of debating it? By its very nature it seems like the kind of thing we won’t really be able to predict or plan for. As for QM, as far as I know the expected results from experiments are the same under all interpretations, so why get one’s shorts in a bunch over which interpretation is the definitively “correct” one? Is anyone planning any experiments in the foreseeable future that would somehow rule out the other interpretations? I thought not. He’s the kind of guy who makes you almost empathize with the bullies who surely beat the snot out of him when he was a kid…

  8. mg Says:

    I was impressed by Scott’s Obama-like conduct.

  9. asdf Says:

    I downloaded the mp4 but the audio was so warbly I couldn’t watch the thing for more than a couple minutes. Oh well.

  10. null Says:

    Really, more than 1000 years to match human intelligence? So what does human intelligence do that computers don’t do yet and are not likely to do in the next 20 years? What can computers do now that humans will never be able to do, period? Deep Blue? There is a free program that can run on a $300 PC and run circles around Deep Blue and any human. Face recognition, predicting the weather, doing math, science, and patents. Without computers, civilization would grind to a halt. What were computers able to do 50 years ago? What kind of progress was made in the past 20 years? If you can’t guess, just say you can’t. Computers have many advantages over gray matter, and do not have many of the limitations imposed on biological systems. Yes, nature took more than 3.5 billion years to achieve flight, but our jets can fly faster than any of nature’s designs. Yes, a bird is more complex than a jet, and you might argue that even one cell in that bird is more complex than the jet; but the fact remains that the jet flies faster, and might be better at many other tasks relating to flight.

  11. wolfgang Says:

    >> what does human intelligence do that computers don’t do yet

    I think computers don’t (yet) cringe like I do when watching bloggingheads.tv

  12. Scott Says:

    So what does human intelligence do that computers don’t do yet and are not likely to do in the next 20 years?

    Write your comment.

  13. Scott Says:

    Vladimir: Let’s avoid insults, OK?

    Personally, I think these questions are pretty important, which is why I was happy to debate Eliezer about them.

    If you genuinely believed that (1) the singularity is near and (2) our actions now have even a fighting chance of affecting its nature in predictable ways (e.g., heaven-on-earth vs. paperclip-maximization), then it would be sensible to respond exactly as Eliezer has: that is, by being really, really obsessed with the singularity.

    As for interpretations of QM: yes, they all (by construction) lead to exactly the same predictions for all experiments we can do in the foreseeable future. But

    (1) Different interpretations can be better or worse for coming up with new (testable) ideas related to quantum mechanics: Bohmian mechanics led to the Bell inequality, while the many-worlds interpretation led to quantum computing. They can also be more or less parsimonious or mathematically elegant.

    (2) If you believe quantum mechanics is not the final story, different interpretations suggest different avenues for generalizing it.

    (3) If you could put your own brain in coherent superposition, then the different interpretations could plausibly yield different experimental predictions (depending on exactly what you mean by “experimental prediction” 🙂 ).

    (4) If you’re the sort of person who worries about the mind/body problem, then it’s hard not to also worry about interpretations of QM. I’m guessing this isn’t relevant to you though.

    If someone were obsessed with speculative issues in high-energy physics—say, inflation or the string theory landscape—would you also empathize with bullies beating that person up?

  14. ScentOfViolets Says:

    I’m surprised that neither of you mentioned decoherence. Isn’t that supposed to Fix Everything in an epistemological sense? And doesn’t that sweep commonly asked questions like ‘where does all the extra energy come from to create these universes’ neatly under the rug?

  15. Vladimir Levin Says:

    Scott, I apologize for the harshness of my comment. I do think dwelling on managing the coming singularity 😉 is a silly way for someone, especially a smart someone, to spend their time. Then again, people should be allowed to do what they want; the people at HP thought the first prototype Apple computer was a silly pursuit too, and passed on developing it when it fell into their lap. There are lots of examples like that in history. The aspect of Eliezer’s part in the discussion that bothered me the most was the pushiness, the condescending “isn’t it obvious that I’m right” aspect of it. How can you possibly take such an attitude toward topics that are so intrinsically speculative? The “rapture of the nerds” singularity he describes is certainly no more likely than the civilization-ending “peak oil/global warming/disease armageddon” singularity.

    PS: What is this mind-body problem you refer to? 😉

  16. Sim Says:

    Scott, I don’t understand (3). Could you be more explicit?

    (PS: also, do you plan to add new answers to the “ask me anything” post, or is it closed?)

  17. Vladimir Levin Says:

    I’d like to add one brief comment: I was thinking about what kinds of developments I would consider to be precursors to the singularity. For example, while I am impressed with flash-based mp3 players and roombas, I don’t consider such technology to be a strong indicator of a singularity coming soon to your neighbourhood. So what would be? The first thought that occurred to me was cars that actually drive people around their cities completely autonomously. If we get to the point in the next 10 years where I can jump into my car, tell it to take me to the local mall on a weekend or to drive me downtown to work during rush hour, and it all works smoothly and flawlessly, then I’d say the singularity may be closer than it looks! Any other examples of such early-signs technologies?

  18. Scott Says:

    ScentOfViolets:

    And doesn’t that sweep commonly asked questions like ‘where does all the extra energy come from to create these universes’ neatly under the rug?

    That question was never even over the rug—it’s a complete misunderstanding of quantum mechanics (regardless of what you think about MWI). 🙂

    Incidentally, I did talk about decoherence (though I don’t remember if I used the word or not). I accept that decoherence successfully explains the vanishing of the off-diagonal terms (i.e., the lack of macroscopic quantum coherence) in the real world—the question is whether you’re done then, or whether there’s something else to be explained if you want to connect quantum mechanics to what we experience.
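    For concreteness, here’s the standard two-level textbook picture of what decoherence does (generic boilerplate, nothing specific to the diavlog). A system in the superposition $\alpha|0\rangle + \beta|1\rangle$ that entangles with its environment, becoming $\alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle$, has the reduced density matrix

    $$\rho = \begin{pmatrix} |\alpha|^2 & \alpha\beta^*\langle E_1|E_0\rangle \\ \alpha^*\beta\langle E_0|E_1\rangle & |\beta|^2 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\beta|^2 \end{pmatrix} \quad \text{as } \langle E_0|E_1\rangle \to 0.$$

    The off-diagonal (coherence) terms are suppressed by the overlap of the environment states, and what’s left looks like a classical probability distribution over $|0\rangle$ and $|1\rangle$. Decoherence gets you that far; the question is what, if anything, remains to be explained afterward.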

  19. Scott Says:

    Scott, I don’t understand (3). Could you be more explicit?

    Many-Worlds (and for that matter, Bohmian mechanics) clearly predict that in principle, you could do an interference experiment involving macroscopically-different states of your own brain.

    Dynamical-collapse theories, and any theories involving consciousness collapsing the wavefunction, predict that you can’t.

    It’s not clear what the Copenhagen interpretation predicts, if anything—for Bohr, this might have just been one more question that we’re not allowed to ask.
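    To spell out the kind of experiment I mean (a cartoon, obviously; nobody has any idea how to do this to an actual brain): let $|A\rangle$ and $|B\rangle$ be two macroscopically distinct (hence orthogonal) brain states, with some unitary $U$ taking an initial state $|\psi_0\rangle$ to $(|A\rangle + |B\rangle)/\sqrt{2}$. If unitary QM holds at all scales, as Many-Worlds and Bohmian mechanics say, then applying $U^{-1}$ recoheres the two branches and returns $|\psi_0\rangle$ with certainty. If a dynamical collapse has occurred in between, the state is instead the mixture $\frac{1}{2}(|A\rangle\langle A| + |B\rangle\langle B|)$, and $U^{-1}$ recovers $|\psi_0\rangle$ only with probability $1/2$. So in principle the two classes of theories part ways experimentally.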

  20. Sim Says:

    @ Vladimir Levin

    Your question is very interesting, but see the first 20 seconds of that 🙂

    My own guess would be in two parts:

    First, I’d consider the end of the Malthusian trap a sign that the singularity is possible within a few centuries. Well… that already happened centuries ago.

    Second, I’d consider a technology that tracks the electrical activity of a whole human brain at a submillimeter and submillisecond scale a sign that it will come within a few decades. It hasn’t been done yet, but it seems very reachable if we try.

  21. Vladimir Levin Says:

    @Scott – I do realize there have been competitions involving cars that drive themselves on closed courses – and that’s exactly why I brought up the idea of autonomous cars. However, I am very specifically referring to a robust, fully commercial technology that works under all driving conditions you’d run into in your daily life. For example, if the car can take me out of a parking spot somewhere in the suburbs, make its way downtown, drive through the downtown streets in rush hour – taking into account pedestrians and cyclists – drive into an underground parkade and successfully park itself, and this all works smoothly all the time, not as a demo or proof of concept, that’s the kind of thing I am talking about.

  22. Sim Says:

    @ Scott #18

    Thanks, but… not sure I understand you in full depth.

    If MWI is true, OK, I could in theory demonstrate a superposed consciousness in someone else (whereas that’s not possible in some other interpretations).

    What puzzles me is the “own” in “your own brain”. If consciousness equals brain state, then I can’t see my own superposition, even in MWI… or am I missing something?

    @ Vladimir Levin #20

    You’re asking for more than what humans can do safely and smoothly, aren’t you? Is there any reason you think robots can’t already do better?

  23. Scott Says:

    Vladimir:

    the aspect of Eliezer’s part in the discussion that bothered me the most was the pushiness, the condescending “isn’t it obvious that I’m right” aspect of it. How can you possibly take such an attitude toward topics that are so intrinsically speculative?

    I’ve sometimes been bothered by that aspect of Eliezer’s thought as well, and I hope my disagreements with him came out in the diavlog.

    At the same time, something deep in me rebels against the inference: “this person reminds me of a snotty, know-it-all little kid, therefore, I’m not even going to consider the content of what he’s saying.” As you can imagine, I’ve often been on the receiving end of that “argument” as well. 🙂

    And if we should always be on guard against this kind of feel-good, “ostracize-the-nerd-saying-something-uncomfortable” reaction, then all the more so in the case of Eliezer, someone whose intellectual ability isn’t up for dispute.

    Consider this: if you or I were explaining (say) evolution or the Big Bang to someone who didn’t believe in them, we’d probably also come across as pushy, all-knowing snots. So, if you took the tenets of MWI or Singularitarianism to be as settled as evolution—which again, I don’t, but Eliezer does—you might have the same tone of exasperation as we would when talking to a creationist.

  24. Vladimir Levin Says:

    @Sim: You’re right, people are certainly not perfect drivers – in fact that would be part of the problem for an autonomous car. All I am asking for though is a driving experience that is equivalent to what I would get by driving myself – only instead of driving I could safely close my eyes and rest for the duration of the trip.

    @Scott: You’ve got a point. Still, Eliezer would have done well to recognize that, at least in his discussion with you, he was dealing with someone who a) had solid expertise in the area of QM and science and engineering in general; b) clearly was capable of rational thought, so a basis for a meaningful exchange existed; and finally c) came to the discussion with a respectful attitude and an open mind. Anyhow, sorry for belabouring my point! I’m out!

  25. to Says:

    Bloggingheads recently invited some creationists, and it caused an outrage among the other contributors. Then I was reminded of the Singularity presence on Bloggingheads and wondered why that didn’t lead to similar objections. I know at least John Horgan, a contributing Blogginghead, has written some harsh critiques of the Singularity movement and basically called it a technology cult. I personally find it hard to take the Singularity seriously, especially its urgency, given the current state of AI.

  26. ScentOfViolets Says:

    That question was never even over the rug—it’s a complete misunderstanding of quantum mechanics (regardless of what you think about MWI).

    My apologies for lack of clarity – yes, it’s a complete misunderstanding of QM, but for some reason it seems to be a rather common one. These types of questions (I think Eliezer had one of these, though I may have misunderstood his phrasing) are reminiscent of the barn-and-pole type of paradoxes that come up so often in SR, in that the answer to the ‘paradox’ isn’t difficult, but people have difficulty accepting it as an answer.

    Explanations in terms of decoherence seem to come with a background linguistic assumption that this thing called the ‘wave function’ has always been there and doesn’t ramify the way the MWI seems to imply, and so the question of where all the ‘extra stuff’ comes from isn’t even asked. In my personal experience, that is.

    If you suspect on the basis of what I write that my position is shut up and calculate, you’d be suspecting correctly 🙂

  27. Job Says:

    Human intelligence, as I see it, is a mechanism that is able to acquire and select knowledge, and which ultimately enables something similar to the concept of “natural selection” applied to knowledge.

    In working towards a super AI we can increase the efficiency with which the AI selects knowledge, but the information must come from somewhere.

    I don’t think that we’re yet at a point where we can derive all remaining knowledge through deduction alone, without resorting to experimentation and real-world interaction.

    I think it’s unlikely that an intelligent being, alone in a room, would be able to derive all knowledge about the universe merely through deduction. A super AI confined within our planet might face the same problem.

    The need to validate knowledge through experimentation and real-world interaction introduces a significant slowdown. Even thousands of years seems like too low an estimate.

  28. Bram Cohen Says:

    I had the same feelings about the interviewer as some other commenters.

    Does many-worlds really literally say that innumerable versions of planet earth are getting spun apart every second? I once read someone, who I think was a physicist, complaining that that pop-science concept is actually quite misleading.

    One argument you didn’t make is that there are a lot of very spooky coincidences in how all the different ‘interpretations’ of quantum mechanics are actually a bunch of different pieces of math which all magically turn out to be exactly equivalent. To simply pick one as ‘right’ and the others as ‘wrong’ doesn’t explain why all the spooky coincidences happen, and would seem to be missing some larger point.

    Also, it seems that the ‘shut up and calculate’ interpretation is the only one which has made any real progress in the last few decades, so experience would seem to indicate that getting overheated about which interpretation is ‘right’ is mostly a waste of oxygen.

    I suspect that understanding an ant isn’t so much about understanding deep general concepts as understanding a very long list of interesting and clever heuristics. In my own day to day engineering work, I encounter lots of stuff which requires cleverness, and never once has building a generalized system ever been the right answer. In every case the right answer has been an extremely specific set of techniques hand-tuned by me, the expert, with feedback based on experimental data. Chess programs are very much the same way. I would even argue that very good chess players are in a real sense just good chess heuristic programmers, it’s just that they’re programming their own brain instead of a computer.

    I’d also venture that although the delta between an ant brain and a frog brain is quite great, the delta between a frog brain and a human brain isn’t all that much. I routinely do work which is in some sense collaborative with a computer, and while I can trivially perform feats which the computer can’t touch using my sense of vision and space, my ability to do anything with symbolic logic that can’t be done for me is quite dubious. Partially this is because symbolic logic is the part of my own brain I can intuit best, but also I think it’s because one only needs a tiny bit of symbolic reasoning ability to build civilization, and we’re building civilization for the first time, so by the intermediate value theorem it stands to reason that our symbolic reasoning ability is poor.

  29. John Sidles Says:

    The ‘shut up and calculate’ interpretation is the only one which has made any real progress in the last few decades…

    Hmmm … this implies assertions like: “The 1965 Kohn-Sham equations do not constitute ‘real progress.’”

    Isn’t it true that “real progress” is commonly recognized only decades later?

  30. Stas Says:

    So what does human intelligence do that computers don’t do yet and are not likely to do in the next 20 years?

    Finding short arguments/proofs in specific instances of hard problems. Here are examples of such positions in chess where computers are helpless (and there is a simpler example of this kind where a queen and a pawn can’t win against two knights; I just can’t find the link at the moment). And any strong enough chess player (above 2400 ELO) will tell you there are many positions where computers are not to be trusted. Similarly, there are SAT instances with short proofs of unsatisfiability that can’t be solved by any existing SAT solver.
    Generally speaking, the current AI progress mostly happened due to statistics and clever branch-and-cut in brute-force search. This is not even close to what is needed to replicate human creativity…
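    To make the SAT example concrete, here is the classic pigeonhole family (an illustrative sketch of the kind of structure I mean, not a quote from any benchmark): PHP(n+1, n) says that n+1 pigeons fit injectively into n holes. It is unsatisfiable by an obvious counting argument, yet Haken proved it requires exponential-size resolution refutations, and resolution is exactly the proof system underlying CDCL solvers.

        # Sketch: emit PHP(n+1, n) in DIMACS CNF format.
        # Variable p*n + h + 1 means "pigeon p sits in hole h".

        def pigeonhole_cnf(n):
            var = lambda p, h: p * n + h + 1
            clauses = []
            # Every pigeon sits in some hole.
            for p in range(n + 1):
                clauses.append([var(p, h) for h in range(n)])
            # No hole holds two pigeons.
            for h in range(n):
                for p1 in range(n + 1):
                    for p2 in range(p1 + 1, n + 1):
                        clauses.append([-var(p1, h), -var(p2, h)])
            return clauses

        if __name__ == "__main__":
            n = 10  # already painful for resolution-based solvers
            clauses = pigeonhole_cnf(n)
            print("p cnf %d %d" % ((n + 1) * n, len(clauses)))
            for c in clauses:
                print(" ".join(map(str, c)) + " 0")

    A human sees the answer instantly; a CDCL solver’s running time blows up as n grows.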

  31. Jim Graber Says:

    Quoting Scott:
    “Incidentally, I did talk about decoherence (though I don’t remember if I used the word or not). I accept that decoherence successfully explains the vanishing of the off-diagonal terms (i.e., the lack of macroscopic quantum coherence) in the real world—the question is whether you’re done then, or whether there’s something else to be explained if you want to connect quantum mechanics to what we experience.”

    I want to talk about this next step to “connect quantum mechanics to what we experience.”

    I think there definitely is “something else to be explained”, which is mathematically represented by step 2 below.

    Step 1: the general density matrix is diagonalized.
    Step 2: the diagonal matrix is replaced by a diagonal matrix consisting of a single one and many zeros (in accordance with the proper probabilities).
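    In symbols, roughly:

    $$\rho \;\xrightarrow{\ \text{step 1}\ }\; \mathrm{diag}(p_1, \ldots, p_n) \;\xrightarrow{\ \text{step 2}\ }\; \mathrm{diag}(0, \ldots, 1, \ldots, 0),$$

    where the single 1 lands in slot $k$ with probability $p_k$.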

    Correct my vocabulary if necessary.

    For now, I’ll call the first step “decoherence”.
    I’ll call the second step “collapse” or “projection postulate” or “the Born rule”.
    However, you could even call the second step “splitting” in the MWI interpretation.
    I’ll call the entire two step process “measurement” or “quantum-classical transition”.

    Would you agree there is evidence for step 1?
    Would you agree there is evidence for step 2?
    How about the entire process?

    Particularly, would you agree that step 2 really is necessary to make QM predictions match our actual experience?

    TIA
    Jim Graber

    Optional follow on questions:

    What do you make of statements such as:
    “There is no evidence of collapse.”
    “We cannot determine the location of the quantum-classical boundary”
    (These statements infuriate me. I consider them obviously false.)

    How does this relate to recent published claims such as:
    (strongly paraphrased and oversimplified)
    This physical quantum memory has a lifetime/half-life of ten microseconds.
    That quantum gate operates reliably below 0.001 Kelvin.
    We can make a non-demolition measurement in the following way.
    The following measurement exceeds the standard quantum limit.

  32. Douglas Knight Says:

    Bram:

    One argument you didn’t make is that there are a lot of very spooky coincidences in how all the different ‘interpretations’ of quantum mechanics are actually a bunch of different pieces of math which all magically turn out to be exactly equivalent.

    No, there aren’t different pieces of math. They really are interpretations – scare quotes are wrong.

  33. Cody Says:

    That was an enjoyable discussion.

    Scott, do you really worry about the mind-body problem? I was very excited when I first came across that because it so neatly categorizes different viewpoints, and especially so because it gives a name to my own view, (strict) physicalism, which I would have assumed you hold too. There were still a few problems of consciousness that bothered me after that, but I think I’ve come to terms with most of them.

    Quite a while ago there was a discussion here about which interpretation was most preferable (scottaaronson.com/blog/?p=218), and Peter Shor chimed in with my favorite advice, which was that the various interpretations can all be used to your advantage. It seems clearly a bad idea to discard any of the interpretations until we find definitive evidence to rule them out.

    It also seems very safe to say that none of them provide a satisfactory description of what is happening, though if we find evidence to support one of these uncomfortable implications (like other worlds), then that would go a long way to making them more satisfactory. I think my biggest objection to the MWI is something Scott seemed to just touch upon, which is that it seems to veer away from science by implicating an unobservable participant in the explanation. Though ultimately, interpretation will always take a back seat to the predictive powers of the theory, which will always be the mathematics, and which has no regard for our aesthetic sensibilities (although it surprisingly often seems to).

  34. John Sidles Says:

    Bram Cohen says: “Different ‘interpretations’ of quantum mechanics are actually a bunch of different pieces of math which all magically turn out to be exactly equivalent. “

    Douglas Knight says: “[Differing quantum frameworks] aren’t different pieces of math. They really are interpretations – scare quotes are wrong.”

    With great respect to both of you, I think if we substitute ‘formulation’ for ‘interpretation’, then Bram Cohen’s point of view is pretty much the same as Feynman’s point of view:

    We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another.

    Feynman’s quote (from 1966) is now forty-three years old … so we can ask, what new formulations have we learned in the interim?

    My own preference is to think of equivalences in terms of symplectic, riemannian, complex, and lindbladian structures—and then define quantum mechanics to be the dynamical framework that supports all four structures compatibly. This point of view has emerged since Feynman’s era … and it proves to be very congenial for purposes of reaching out to broader communities.

    The first three structures constitute the well-known compatible triple of Kähler state-spaces—but (AFAICT) no-one has (yet) systematically studied the structures that result when linear lindbladian theory is deformed onto Kähler manifolds. Hey, just because quantum systems engineers do this deformation doesn’t mean that we understand what we are doing.

    The point is that to understand quantum mechanics better, there is no need (yet) to resort to the desperate expedient of doing philosophy … 🙂

    The math of QM is so deep—much deeper than is traditionally taught in introductory QM courses—that there are still plenty of mathematical frontiers for our century to explore.

  35. Douglas Knight Says:

    Feynman was not talking about many worlds. You don’t need mathematical miracles to be scarred by Schroedinger’s cat.

  36. null Says:

    The irony in Stas’ Comment #30 proves the maxim “It is not what we don’t know that hurts us, it is what we know that ain’t so.” The first example on his referenced site (81. G. Zajodiakin, 1929) is in error: in fact, all of humanity believed this ‘simple’ position to be a draw until 1994, when it was proved to be a win http://www.chessbase.com/newsdetail.asp?newsid=1766. Computer chess programs and SAT solvers are built to solve the class of problems that they were designed to solve, not specifically constructed problems (what’s the use of those?). In the past 15 years SAT solvers have increased a billionfold in speed; that’s 1000x per 5 years. Problems that were unsolvable 15 years ago (that would have taken 1000 years) take 30 seconds today. Scott, Comment #12: your guess is right, I am human :-), but if you try to attribute credit (I did use a word processor, search engine, and computation engine, and there are many more advanced NLP tools that I did not use) you run into this dilemma: a scientist pleads to GOD, “we can make all your miracles,” and bends down to pick up a handful of dirt … GOD says, “No, no, no. You have to get your own dirt!”

  37. Stas Says:

    @null – nice catch. To err is human… 😉
    Anyway, that book confirms the point I was making:
    .. unlike a human GM, the computer equivalent has some remarkable blind spots which can easily lead an unwary user astray.
    So, how can we characterize those instances (positions) where computers are blind? Commonly, they have some specific structure (e.g., a fortress) that determines the game outcome yet is not detectable statistically. The fact that humans are good at exploiting such structures shows that human intelligence is something more than brute-force search plus statistics, and this is extremely useful in discovering and proving new theorems, for example. And this is why specifically constructed problems are very important tests for AI – they assess whether the existing AI engines have anything to match human intelligence in exploiting structures.
    BTW, I’ll be at the barriers workshop, it’ll be a great pleasure to meet Scott and some readers of this blog in person!

  38. John Sidles Says:

    Null says: In the past 15 years SAT solvers have increased a billion fold in speed …

    Similarly astounding algorithmic speed-up has been seen in many disciplines in recent years: chess programs, ab initio quantum chemistry programs, and even (per Lance Fortnow’s Aug 10 post) go programs. Wow!

    What’s going on? Rigorous TCS theorems tell us these problems reside in intractable complexity classes … so why is the rate of progress so astoundingly rapid? … and (even more important) can we look for this progress to continue? … and (most important of all, IMHO) what mathematical frameworks will we evolve to understand why this progress is so rapid?

    Philosophy is not going to be much help (IMHO) in wrestling with these questions, for the very good reason that there are too many good mathematical avenues that have not (yet) been explored.

    Everyone has their own favorite avenue for understanding why “hard problems are easy” … concentration theorems in noisy dynamical systems are (for me) an especially fruitful area … for the obvious reason that noisy dynamical systems are ubiquitous in the real world … for example, although chess is a zero-noise game, the way that all humans and most computer programs (including the best programs) play chess is noisy indeed!

    At the classical level the study of noise looks like “the study of shmuts”, but at the quantum level the study of shmuts is equivalent to the study of measurement … which provides a far more fertile framework for proving concentration theorems … because nowadays we appreciate that “quantum collapse” is a coarse-grained approximation to what is (physically and mathematically) a dynamical concentration process that is fine-grained and shmuts-driven.

    This concentration is why (paradoxically) computer simulations and game-playing become far more powerful (relative to physical reality) in a world of shmuts: “The Singularity is near, but the Shmuts is here!” 🙂

  39. John Sidles Says:

    Stas and null (and all computer chess aficionados… of which I’m one): if you search for the recent (and highly enjoyable) ChessBase article titled “Djaja Study: How Humans and Computers Solved It” … or simply search for the phrase “Are there any studies that these monsters cannot solve?” … you will find that ChessBase has challenged its readers to provide chess problems that computers cannot solve:

    Anyone who sends us a nice puzzle – something as elegant as the position above – which chess engines cannot figure out, can win a Deep Fritz 11 signed by all the players of the Dortmund tournament:

    The results will be interesting for sure … since (generalized) chess is provably EXPTIME-complete. Nonetheless, the era of chess problems that humans can solve, but computers cannot, seems to be passing … and may in fact already be over.

  40. Stas Says:

    John, one position where computers failed to see the draw is really simple, and it held two years ago for sure. White: Nf3, Ne5; black Kh5, ph6; black has a queen somewhere out and white king is around the knights. Black king is blocked and zugzwang is avoidable for white (if king moves around the knights.) I’m sure there are more examples and they will stay for long…
    And generalized chess is PSPACE-complete AFAIK.

  41. Stas Says:

    Ok, my bad, chess generalizations to n*n boards are EXPTIME-complete.

  42. John Sidles Says:

    Also, Stas, your example has six pieces on the board, and so (nowadays) it is a tablebase lookup.

    Even in Go—where the search problem is much harder, such that brute-force algorithms fail—Lance Fortnow had a post recently about startling advances in the power of the computers.

    It seems to me that we are very far from seeing the limits of this trend that “provably hard is practically easy” in algorithmics and complexity theory.

    This applies equally in the classical and quantum regimes—the main difference is that (in algorithms as in thermodynamics) the mathematical reasons are considerably easier to state in the quantum case.

  43. John Sidles Says:

    Hmmm … we can check Stas’ problem on-line and verify in less than one second that it is a draw … this is a good example that we now live in a world in which “provably hard is practically easy”.

    When it comes to thinking about “hard versus easy” in the quantum domain, my admiration for Nielsen and Chuang’s textbook is unbounded, thanks to their wonderfully lucid and thorough discussion of quantum problems that are provably hard … what QIS/QIT students need now is a similarly lucid exposition of (the immensely diverse class of) quantum problems that are practically easy … the really fun point is that these classes overlap!

  44. Stas Says:

    John, throw in a couple of pawns to make it outside the tablebase lookup. Say, white c4, black c5. And if they make an 8-piece (or even bigger) tablebase, make a pawn chain: white d3,c4,b5,a6; black a7,b6,c5,d4 (hopefully, I’ve not overlooked zugzwang this time 🙂 )
    I actually saw that position a few years ago in some chess blog (probably Susan Polgar’s) and verified the failure of Fritz and Rybka. Will check it again with a newer version later…
    Anyway, chess is not the goal of AI, but a good playground to see AI limitations, and I think the current limitations are evident.

  45. John Sidles Says:

    Stas, I don’t think you and I are even disagreeing … chess being EXPTIME-complete, we are always going to be able to construct study-type problems that the computers can’t solve … and yet (on the other hand) chess games played by noisy, wet, biological cognition engines like people will (almost) always generate practical-type problems that computers solve easily.

    The really interesting questions are at the interface of hard-versus-easy: E.g., can we program a computer to efficiently generate chess studies that computers can’t efficiently solve?

    AFAICT, your post supplies the correct (but seemingly paradoxical) answer: “Sure, no problem! Generating hard-to-solve chess studies is about as easy as generating hard-to-break RSA keys!”

  46. Stas Says:

    John, I agree that we are not disagreeing all that much 🙂 . However, my point was about constituents of intelligence that can’t be replicated by existing AI approaches, so I wanted to give examples that are easy for humans but hard for computers. That chess position is such an instance, but RSA is not, because it is hard to break for the human mind too 🙂 . IMHO, what we should look into as the next AI step is how to generate new concepts and recognize abstract patterns, like the fortress in chess (but not those for which you can easily create statistics from a training set, as in face recognition). This is necessary to teach computers to do math or any other science, and a good starting point would be further improvement in games like chess. The existing chess engines don’t cover all “practical-type” positions, at least because human+computer teams (“centaur” chess, happening now in correspondence games) are still stronger than a computer alone.

  47. proaonuiq Says:

    How hard is the hard level of this computer chess player
    (www.chess.com/play/computer.html) ?

    After almost 30 years of not playing chess myself, several months ago I found, in maybe 3 hours, an algorithm which forces a win for White at the hard level (I’m sure that after a polynomial number of rounds Kasparov would have found a similar algorithm against Deep Blue). Can anyone try? I’m mainly interested in knowing the uniqueness of the solution and the time it takes to find one (hint: my first move is not a rook’s-pawn move).

    IMO its weakness is determinism, but it seems to me that randomness is helpless in chess-like games. Am I wrong?

  48. proaonuiq Says:

    Just to clarify: polynomial in the number of moves of the first round (n). If you are given a number of rounds exponential in n, then it is easy…

  49. Scott Says:

    Cody (#33):

    Scott, do you really worry about the mind-body problem?

    Not for a long time! 🙂 As far as I can tell, it’s outside the scope of science—in the specific sense that even if someone were hypothetically to solve it, there’d be no way to communicate the solution to anyone else. Maybe I’m wrong, and there’s some nontrivial insight yet to be had that’s amenable to the normal tools of rationality. But right now, we have no idea even what kind of work such an insight would do—in the sense that we know, more or less, what kind of work (say) a quantum theory of gravity or a proof of P≠NP would do.

  50. Cody Says:

    Is there something flawed in the view that consciousness is simply the ability to observe computational processes and to interfere with that processing as it occurs, based on say memory, habit, or some sort of (pseudo?)-whimsical urge?

    Or even to model the brain as an imperfect computer with the usual inputs (audio/visual/tactile/etc.), the obvious outputs (muscle control, decisions related to survival/reproduction), and the addition that the results of computation are (or may be, when conscious) simulated/reviewed before being sent to the outputs?

    Does it seem unsatisfactory? You say a nontrivial insight, but I tend to think the solution is probably more trivial than we tend to think; we seem to mystify our minds, but obviously every one of us is perfectly familiar with consciousness and thought and all the mysteries we believe to be associated with them.

    With respect to what kind of work such an insight would do, I think it might provide direction as to how to go about producing artificial intelligence eventually. Maybe I’m not thinking about the same question as you?

  51. proaonuiq Says:

    Refining and generalizing the idea.

    Refining: Since n (as defined in the comment above) would have an upper limit in chess (even for each generalized m×m chess), let’s use the following rule to limit the exponent of the polynomial: if the growth is 2^n, then the maximum exponent of the polynomial is 2 (n^2); if 3^n, then n^3 (for some games 2 might not be the most natural base). Same for constants.
    So, you lost after 20 moves against the computer player in your first round; you know you can beat it easily after 2^20 rounds, but could you find a strategy to force a win in only 20^2 rounds (I do not know if 2 is the most natural base for chess, just an example)? The time cost of each move is 1, so time = number of moves. Of course chess experts surely can find a solution in just the first round against this computer player, but what about non-experts? Can they beat the computer in a polynomial number of rounds? Will each one find a different solution, or will solutions converge to a reduced set?

    Generalizing: now a computer is more intelligent than a human (and then the singularity is here) if and only if the most skilled human needs more than a polynomial (in n, as defined above) number of rounds to beat the computer. For example, if in the first round of a Turing test the machine convinced the best psychologist that it is a human after n questions, and after n^a rounds (I wonder what the base a could be in a Turing test, and what a restart or reset could be) the human is still convinced that he is talking to a human, then we can say that the computer passed the Turing test.

  52. Stas Says:

    @proaonuiq: That’s a weak engine; my estimate for the hard level is about 1600 ELO.
    Both computer and human players do randomize openings, at least. But chess is not poker; it’s a game with complete information, so, probably, in most positions there is a unique best move.
    Not sure how many rounds would be needed to beat Rybka even for a top grandmaster, if no material odds and no access to opening and endgame tablebases are provided (for the human player). But I wouldn’t be surprised if it’s superpolynomial (in the n you defined). Computers are just too good at tactics, and there is a portion of it in every game.

  53. Stas Says:

    @Scott, Cody, and others re: mind-body problem.
    Many years ago a friend of mine came up with a test for materialistic beliefs. Suppose there is a perfect copying technology (at the elementary-particle level). Then we can take a human being, create a precise copy, and then kill the original. If you believe that you are just the atoms you are composed of, there should be no problem for you in agreeing to go through this procedure, as your copy doesn’t die. Question: would you agree to go through it? (Assume a huge amount of money is going to be awarded to your copy afterwards, if you wish to have an upside.)
    Or is perfect replication impossible because of some quantum weirdness? I’m not good enough at physics to have an opinion here.

  54. Cody Says:

    Stas, Scott discussed that same question (roughly) in Quantum Computing Since Democritus lecture 10.5: Penrose; it bothered me a lot, and since then I have thought about it a great deal.

    I think a question that sheds some light on the matter is: when you wake up tomorrow, is there any experiment you can perform that can conclusively rule out the hypothesis that you are a clone? (Given that during sleep you reached some deep state of unconsciousness.) It seems reasonable that the original and the clones will be entirely convinced that they are each the original, given that at the moment of their creation they are entirely indistinguishable.

    From my perspective the resolution is to declare my consciousness to be the defining feature of “me”, and during states of consciousness “I” am constantly evolving, but during states of complete unconsciousness “I” don’t even really exist (in the sense that is important to me at least).

    So maybe this is the better question: given this perfect cloning technology, and given that every time we transition between unconsciousness and consciousness we incur a complete uncertainty regarding our past existence, why is being told you’ll be killed tonight in your sleep different from just having that happen? Could it simply be the natural instinct of self-preservation combined with our ability to comprehend the concept of future consequences?

    I’ve never taken this side till now, but I suppose with the large monetary reward and an agreement that you destroy the original (me?) while unconscious (some attachment to pain avoidance?) sure, I’d go for it.

    It’s interesting that the situation is somewhat more appealing if, as the subject, I am completely ignorant of the situation. That is, since I won’t know the difference, go ahead and perform the experiment. Don’t tell me about it though, that incites all sorts of messy emotional consequences that appear largely invalid (and more so useless).

    Also, I tend to think this sort of thing is unobtainable, being simply too complicated to ever be constructed in reality. But it’s a nice thought experiment to help us try to clarify the ideas.

  55. John Sidles Says:

    A great number of SF stories discuss these points … Stanislaw Lem’s 1971 Non Serviam is about consciousness replication … Ian Watson’s Queen Magic, King Magic is about *both* chess and consciousness replication … and there are hundreds more stories like these (all of which are wonderful, IMHO).

    Isn’t it wonderful … isn’t it fortunate for all of us … that these beautiful mathematical frameworks lend themselves to receiving wonderful narrative structures? And it is even more wonderful that (as with all other aspects of mathematics) it is generically the case that multiple narrative structures are possible.

    Would we really want to live in a world in which just one narrative explained the world? Historically, it has mainly been tyrants, ideologues, high-priests, and bureaucrats who have attempted to impose a stifling uniformity upon human cognition.

    New mathematical frameworks are continually being created … new mathematical narratives are continually being conceived as a natural part of this process … and so we can (IMHO) look forward to a wonderful century of mathematical and narrative creation.

    Will these new frameworks and narratives culminate in a Singularity? That seems implausible to me. Will diversity of mathematical frameworks and narratives continue to increase? Well … we can hope so, and we can be pretty confident in that hope!

  56. John Sidles Says:

    Oh yeah … Michael Nielsen’s blog links to a coming film that compatibly equips the mathematical ideas of this thread with a narrative: James Cameron’s Avatar.

    I agree with Michael that Cameron’s trailer is well-worth watching. As with all wonderful narratives—and all great mathematical frameworks too—Cameron’s narrative compatibly unites traditional elements with transgressive elements … good!

    As Austin Grossman has his all-too-human protagonist Dr. Impossible (aka “Smartacus”) say in Soon I Will Be Invincible:

    There has to be a little bit of crime in any theory, or it’s not truly good science. You have to break the rules to get anything real done. That’s just one of the many things that they don’t teach you at Harvard.

    There are plenty of thrills and laughs in narrative-driven creative works like those of Lem, Watson, Cameron, and Grossman — but there is plenty of food for thought too. Because creating a compatible narrative is (like theorem-proving) in the class NP.

  57. Stas Says:

    Cody, my friend didn’t mean that the human being should be unconscious during the procedure, but we can play with both scenarios. If I understand you correctly, you believe you don’t die anyway, but you wouldn’t go through it while being awake because you don’t want to deal with “all sorts of messy emotional consequences that appear largely invalid”, right?

  58. @Sidles Says:

    I don’t understand what you’re saying, Sidles. Plain English, please!

  59. John Sidles Says:

    @Sidles Says: I don’t understand what you’re saying, Sidles. Plain English, please!

    Hmmm … when it comes to “compatible triples”, try Google-searching for “almost complex compatible triples” … the challenge here is that the mathematical language is plain and clear, but it’s not English!

    Our QSE Group has some Kavli lecture notes that summarize (our view of) QM from the compatible triple point of view; we are polishing these notes into code documentation; the notes are nothing more than the traditional compatible triple framework of Kählerian geometry, with some (minimally deformed) Lindblad structure grafted on, as algebraically adapted to practical QM calculations.

    When it comes to fictional narrative structure with a scientific flavor, there is no help for it but to study the masters: Patrick O’Brian … Cordwainer Smith … Terry Pratchett? … Marcell Grossman?? … the broader the narrative humor, the better (IMHO). And in the non-fiction domain, there is no finer narrative summary of Enlightenment themes than Jonathan Israel’s … our 21st century will surely not be less interesting than the (terrifically exciting) 16th-17th centuries that Israel describes.

    And finally, when it comes to conceiving a narrative structure that is compatible with the Kählerian triple of algebraic geometry and the Lindblad structure of QM … well … that’s what I’m thinking about nowadays … and that’s what lots of people (at last week’s Kavli/Cornell conference anyway) are thinking about.

    `Cuz there’s surely going to be much more to the technological narrative of the 21st century than just the Singularity and/or quantum computers, ya know!

  60. Hopefully Anonymous Says:

    I’d like to see an Aaronson/Siddle diavlog.

    As for Eliezer, I’m too aware of my own prejudices to claim in good faith that he should go to grad school and pay his dues in a field if he wants to hold himself out as an expert.

    I do think there are plenty of anonymous post-docs with more intelligent things to say on Eliezer’s discussion topics, but who are more careful in their social epistemological performances.

    I will say this for Eliezer: he amassed an audience and then delivered it to Scott Aaronson. That’s better than amassing an audience and NOT delivering it to Scott Aaronson.

  61. John Sidles Says:

    A recent “diavlog” that is interesting because it nicely triangulates some of the issues in the Yudkowsky-Aaronson dialog is Intel Chipchat #56: Linking Technology and Society: Interview with Genevieve Bell:

    … When I [Genevieve Bell] joined Intel it was very much a company ‘wrapped around the axle’ of Moore’s Law. For better or worse, we talked mainly about microprocessors getting smaller and faster and cheaper on a known cadence. And now, when I look to the kinds of things that we talk to ourselves about, it’s ‘Intel designs the technology that makes what you care about possible.’ And that’s a really different articulation of the mission …

    If we adapt Dr. Bell’s analysis to the 21st century missions that my medical colleagues and I care about (e.g., regenerative medicine) and the emerging technologies that help make this mission feasible (e.g., compatible triple QIS/QIT + quantum spin microscopy) … then we are led to wonder …

    Perhaps the Intel/Bell mode of thinking is right? Perhaps ‘wrapping our thinking around the axle’ of nanotechnology and/or quantum computing and/or the Singularity obstructs us from perceiving alternative paths forward for the 21st century?

  62. John Sidles Says:

    As a follow-on to the above, I see that MIT Press has a forthcoming book by Intel’s Genevieve Bell with the intriguing title Telling Techno-Cultural Tales.

    The relevance to QIT/QIS is partly that Intel’s technology cadence is pressing against the fundamental limits of quantum and thermodynamical efficiency … partly that the codes running on Intel’s chips are pressing against the fundamental limits of algorithmic efficiency … and especially that Intel’s corporate strategy has begun to look beyond the traditional “Big Three” techno-narratives of nanotech, quantum computing, and the Singularity.

    Isn’t it true that if our three best techno-narratives are nanotech, quantum computing, and the Singularity, then our planet (most likely) is headed for big trouble? Because those three narratives are neither technically strong enough, nor socially strong enough, to sustain the health and prosperity of a planet with ten billion people on it?

    For me, that’s the main take-home lesson from the Yudkowsky-Aaronson diavlog.

  63. Stephen Harris Says:

    Yudkowsky is a very bright guy who has largely educated himself. This explains the gaps in his thinking. Besides that, he is emotionally disturbed. He was worse when he was under financial pressure. I doubt that Scott would have consented to the interview if he realized how bizarre the Singularity concept is, or how far Yudkowsky is from entertaining a well-qualified opinion about QFT. Even so, Scott made a peculiar choice, and I think there is another, undisclosed aspect to Scott’s decision to agree to the interview.

  64. John Sidles Says:

    Stephen Harris Says: Yudkowsky is a very bright guy who has largely educated himself. This explains the gaps in his thinking.

    Hmmm … with respect, this line of reasoning is specious … because isn’t it true that ‘gaps in thinking’ are common in the mainstream literature too—including the QIS/QIT/QM literature? Today’s http://arxiv.org/abs/0908.3023 is a case study.

  65. Hopefully Anonymous Says:

    “I’d like to see an Aaronson/Siddle diavlog”

    I meant Sidles, I hope it was obvious.

  66. John Sidles Says:

    No problem! … Heck, what I’d really enjoy would be an IBM/Intel diavlog between IBM’s Charles Bennett and Intel’s Genevieve Bell … with the working title Emerging Techno-Cultural Tales in QIS/QIT: Escaping the Linearity Trap (that title describing their respective recent research).

    Fun! … and serious too. 🙂

  67. anonymous Says:

    Hopefully anonymous writes: “I’d like to see an Aaronson/Siddle diavlog.

    As for Eliezer, I’m too aware of my own prejudices to claim in good faith that he should go to grad school and pay his dues in a field if he wants to hold himself out as an expert.”

    HA, I think Eliezer has much more claim to being an expert in his field than Sidles does in his. Check the arxiv for actual work.

  68. Stephen Harris Says:

    Stephen Harris Says: Yudkowsky is a very bright guy who has largely educated himself. This explains the gaps in his thinking.
    Sidles responded:
    Hmmm … with respect, this line of reasoning is specious … because isn’t it true that ‘gaps in thinking’ are common in the mainstream literature too—including the QIS/QIT/QM literature? Today’s http://arxiv.org/abs/0908.3023 is a case study.

    ——————————————————————————

    Yes, it is true. To be more precise, when a person self-educates he often does this without the benefit of an established curriculum which includes topics that he might not explore on his own. There is no error correction from instructor feedback. This can lead to flaws in the foundation which can later create wobbles in the advanced edifice.
    People who are really smart tend to be arrogant about the quality of their reasoning. So most people who now get degrees who are also really smart are forced to take a course or two in Critical Reasoning. So they have standards to examine their own reasoning.

    “However, to argue that it follows that a physical computer would work on every input distribution would be to fall prey to the linearity trap.”

    I think this is a logical error and not so much an earlier factual error. The CTC is not a physical solution for this universe; it is a logically possible solution. That one can posit an alternate physical reality, in which a logically possible solution for this universe becomes physically possible, does not support the conclusion that a QM interpretation (MWI) is more likely to be correct because it would allow a physical analog of the CTC in that alternate reality. At least no argument has established that conclusion.

    So I think your point is some type of fallacy. I said that people who self-educate, other things such as intelligence being equal, are more prone to gaps in their education and thus to errors in their thinking. Your rebuttal was that people who are educated at a school also have gaps in their education, and therefore in their thinking. That is true, but you didn’t establish that self-education and institutional education (with a curriculum and an instructor) produce gaps at roughly the same rate. To put my claim another way: if two people are equally smart, the one who chooses to benefit from the experience of others is going to have an advantage over the one who dismisses the experience of others as not worthwhile. I learned that from experience.

  69. Blake Stacey Says:

    I was poking around the Internets a few months ago looking for cartoon clips when I realized the Singularity had already come and gone: my entire childhood had been uploaded to YouTube.

  70. hk Says:

    I dispute that AI has solved the “ant” problem. Simulating a single ant isn’t doing the job: simulating ants means simulating the hive. The main thing there is the interaction between a vast number of primitive agents with superior abilities: their senses are everything this is about, their individual brains are nothing.

    So the idea that you can just go from ant to frog grossly disregards everything AI is about. It is not an issue of gradual improvement at all, as the sketch below tries to make concrete.
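
    To make that concrete: the object of study is not the single ant but the update loop over many trivial agents plus a shared environment. Here is a toy stigmergy sketch in Python; the grid size, evaporation rate, and ant count are all invented for illustration, and this is of course nobody’s real AI system:

    import random

    # Toy stigmergy sketch: the "intelligence" lives in the shared
    # pheromone grid plus many trivial agents, not in any single ant.
    GRID = 20
    ants = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(200)]
    pheromone = [[0.0] * GRID for _ in range(GRID)]

    def step(ants):
        moved = []
        for x, y in ants:
            # Each ant's entire "brain": drift toward the strongest nearby
            # scent, with a little noise to break ties.
            neighbors = [((x + dx) % GRID, (y + dy) % GRID)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
            x, y = max(neighbors,
                       key=lambda p: pheromone[p[0]][p[1]] + random.random())
            pheromone[x][y] += 1.0      # deposit scent for the others
            moved.append((x, y))
        for row in pheromone:           # evaporation: the hive's memory fades
            for j in range(GRID):
                row[j] *= 0.95
        return moved

    for _ in range(100):
        ants = step(ants)               # any hive-level structure emerges here

    The single-ant “brain” is three lines; everything interesting is in the shared pheromone field and in the sheer number of agents.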

    The other issue is that we are talking about the vaguest, least understood issues here. Installing “values” into an AI system? This is too silly to even comment on.

    So for the sceptics, I for one would be happy to see artificial idiots first.

  71. Stephen Harris Says:

    Anonymous Says:
    HA, I think Eliezer has much more claim to being an expert in his field than Sidles does in his. Check the arxiv for actual work.
    ———————————————————————

    Eli states that his field is “artificial intelligence theorist”. I checked the arXiv and he has no papers published there. Did you perhaps mean his Singularity (and related) website, where he has several essays? In one of his earliest papers, maybe it was called a manifesto, he warned of the emergence of inimical rogue AIs. He offered to fly around the country and put governors on systems so they couldn’t go rogue.
    The astounding thing, imo, is that nobody in the world knows how to purposely construct a human-level AGI, much less one smarter than humans. Forgive me, but I’d like to quote from an expert on the prospects of benign, bright AI:

    “Unfortunately however, at the moment we neither have the tools nor the theory to determine the necessary conditions for an autonomous identity to constitute itself in the relational dynamics of these systems. This is because we do not even yet know how to identify the existence of an autonomous system in such an abstract dynamical domain (Froese, Virgo & Izquierdo 2007).”

    It seems to me that it borders on gullibility to believe a claim which ignores the lack of a foundational theory and instead relies on a hunch that some ordinary old PC, or even a network, will randomly or accidentally undergo a Frankenstein-monster transformation and become a super-intelligent threat to humanity. I mean, there have been 50+ years of purposeful work to make this happen, all ending in failure to reach human-level AI; so obviously we should feel threatened by the chance that it will unluckily happen by a twist of fate. Whatever happened to, ya can’t make a silk purse out of a sow’s ear?

  72. mitchell porter Says:

    @ Stephen Harris #71

    “a hunch that some ordinary old pc or even a network will randomly or accidentally undergo a Frankenstein monster transformation and become a super-intelligent threat to humanity”

    That is not the problem. The problem is that some deliberate attempt to make human-level AI will succeed all too well, but with insufficient attention having been given to the AI’s values or guiding objectives, should those then become coupled to a superhuman problem-solving ability. Even a harmless sounding goal, like “optimize this function”, can become deadly to the human race if pursued with superhuman ingenuity by an entity which cares nothing for anything else. All the AI has to do is realize that the world consists of matter, that matter can be turned into computational power, and that it will optimize the function even better if it has lots of computational power, and it has a reason to kill us all and transform the earth into one big computer. Intelligence does not automatically come with the safety checks (in the form of complex competing values) that evolution has built into us, and we are going to have to build them into our powerful AIs if we wish to avoid such a fate.

  73. weichi Says:

    “All the AI has to do is realize that the world consists of matter, that matter can be turned into computational power, and that it will optimize the function even better if it has lots of computational power, and it has a reason to kill us all and transform the earth into one big computer.”

    How exactly would the AI go about killing us all and turning the earth into one big computer?

  74. John Sidles Says:

    mitchell porter says: Even a harmless sounding goal, like “optimize this function”, can become deadly to the human race if pursued with superhuman ingenuity by an entity which cares nothing for anything else.

    Isn’t this same deadliness inherent in almost any goal we can name? E.g., “optimize market efficiency” … “provide cheap medical care” … “prove rigorous theorems” … “improve racial fitness” … “build political/religious unity” … “schedule parades” … (the latter example is from Joseph Heller’s satirical novel Catch 22).

    Shouldn’t we be more concerned that humans will embrace these goals to harmful excess, than that machines will?

    Perhaps we are projecting onto machines precisely those moral flaws and shortfalls, that would be too discomfiting if we discerned them clearly in ourselves?

    Aside: there’s a funny xkcd about this: http://xkcd.com/534/

  75. weichi Says:

    Ah, now I know who this Yudkowsky guy is! I read some of his stuff back in 2000 or so, in particular this:

    http://yudkowsky.net/obsolete/plan.html

    I remember thinking at the time “This guy is clearly smart and has a huge amount of energy, but what kind of crazy fool thinks that XML would be a good syntax for a programming language? The dude’s never going to accomplish *anything*!”.

    It seems I was correct that Flare never amounted to anything, but wrong that Yudkowsky wouldn’t accomplish anything. Or was I? Has he actually contributed anything to any field? Are any technical problems closer to being solved because of him? Granted, serving as a gadfly/inspiration/visionary/popularizer has value (perhaps great value!), but has anything concrete come from any of his ideas yet?

  76. John Sidles Says:

    It is very interesting to view the history of math/science/technology from the viewpoint of ‘gaps in thinking’ … and it is pretty clear that these gaps are prevalent among scholars and autodidacts alike … among professional scientists and amateurs … and among mathematicians, scientists, physicians, and engineers. It is very risky to generalize!

    For example, James Tobin’s First to Fly documents pretty convincingly that the (autodidact) Wright brothers handily surpassed their academic competitors (including Langley, the director of the Smithsonian) in their mathematical understanding of control theory … and as the first to close this ‘gap in thinking’, they became the first to fly.

    Of course, it is always harder to identify “closable” gaps in scientific thinking prospectively. That is why I thought the Bennett/Leung/Smith/Smolin preprint (arXiv:0908.3023v1) had considerable relevance to closing what is (IMHO) among the most striking gaps in our present understanding of QIS/QIT: the gap between the abstract linear formalism that pretty much every student learns as their first introduction to QIS/QIT, as contrasted with the pragmatic reality that pretty much all large-scale quantum simulation codes operate on (nonlinear) Kähler spaces.
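
    To put a rough number on that gap, here is a back-of-envelope sketch (in Python; the product-state ansatz below is just the simplest stand-in for the nonlinear variational state-spaces that large-scale simulation codes actually work on):

    # Parameter counts for n spin-1/2 particles: the linear Hilbert space
    # grows exponentially, while a product (mean-field) ansatz, the
    # simplest nonlinear variational manifold, grows only linearly.
    for n in (10, 20, 40, 80):
        hilbert_dim = 2 ** n        # complex amplitudes in the linear space
        ansatz_params = 2 * n       # two Bloch-sphere angles per spin
        print(f"n = {n:3d}   Hilbert dim: {hilbert_dim:.2e}   "
              f"product-ansatz parameters: {ansatz_params}")

    Realistic simulation manifolds (matrix product states, DFT orbitals, Kähler-space trajectories) live between these two extremes, which is exactly why they are interesting.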

    When we have finished closing this mathematical gap—e.g., learned why present-day quantum DFT codes work so well—then perhaps a great many new options will become open to philosophers too?

    It is interesting, too, that Dirac’s own understanding of quantum mechanics grew out of his solid grounding in (what would today be called) projective geometry and symplectic mechanics … topics that nowadays are seldom covered in the undergraduate quantum mechanics curriculum.

  77. Stephen Harris Says:

    weichi Says: Comment #75 August 27th, 2009 at 12:56 pm

    I remember thinking at the time “This guy is clearly smart and has a huge amount of energy, but what kind of crazy fool thinks that XML would be a good syntax for a programming language? The dude’s never going to accomplish *anything*!”.

    It seems I was correct that Flare never amounted to anything, but wrong that Yudkowsky wouldn’t accomplish anything. Or was I? … serving as a gadfly/inspiration/visionary/popularizer has value (perhaps great value!)

    SH: L. Ron Hubbard, the founder of Scientology, achieved prominence and wealth as the leader of a cult that can intimidate the IRS out of investigating it. Is that the kind of value you mean?

    —————————————————–

    “Levels of Organization in General Intelligence” by Eliezer Yudkowsky

    “In the space between the theory of human intelligence and the theory of general AI is the ghostly outline of a theory of minds in general, specialized for humans and AIs. I have not tried to lay out such a theory explicitly, confining myself to discussing those specific similarities and differences of humans and AIs that I feel are worth guessing in advance. …

    Nonetheless, the evolution of evolvability is not a substitute for intelligent design. Evolution works, despite local inefficiencies, because evolution exerts vast cumulative design pressure over time. Until a stably functioning cognitive supersystem is achieved, only the nondeliberative intelligence exhibited by pieces of the system will be available. …

    Now imagine a mind built in its own presence by intelligent designers, beginning from primitive and awkward subsystems that nonetheless form a complete supersystem. Imagine a development process in which the elaboration and occasional refactoring of the subsystems can coopt any degree of intelligence, however small, exhibited by the supersystem. The result would be a fundamentally different design signature, and a new approach to Artificial Intelligence which I call seed AI.

    Fully recursive self-enhancement is a potential advantage of minds-in-general that has no analogue in nature – not just no analogue in human intelligence, but no analogue in any known process.”
    ——————————————-

    “On July 23rd, 2001, SIAI launched the open source Flare Programming Language Project, described as an “annotative programming language” with features inspired by Python, Java, C++, Eiffel, Common LISP, Scheme, Perl, Haskell, and others. The specifications were designed with the complex challenges of [seed AI] in mind. But the effort was quietly shelved less than a year later when the Singularity Institute’s analysts determined that trying to invent a new programming language to tackle the problem of AI just reflected an ignorance of the theoretical foundations of the problem. Today the SIAI is tentatively planning to use C++ or Java when a full-scale implementation effort is launched.”

  78. Stephen Harris Says:

    mitchell porter Says:
    Comment #72 August 27th, 2009 at 7:11 am

    @ Stephen Harris #71

    “a hunch that some ordinary old pc or even a network will randomly or accidentally undergo a Frankenstein monster transformation and become a super-intelligent threat to humanity”

    Mitchell Porter responded:
    “That is not the problem. The problem is that some deliberate attempt to make human-level AI will succeed all too well, but with insufficient attention having been given to the AI’s values or guiding objectives, should those then become coupled to a superhuman problem-solving ability.”

    ———————————————————————————-

    SH: Your view is not the generally received view.

    Kevin Kelly’s Singularity Critique is Sound and Rooted in Systems Understanding
    http://www.memebox.com/futureblogger/show/966
    October 01 2008 / by Alvis Brigis

    “The Singularity Frankenstein has been rearing its morphous head of late and evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity.” …

    1) A Strong-AI singularity is unlikely to emerge before Google does it first.

    “My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

  79. mitchell porter Says:

    @ weichi #73

    “How exactly would the AI go about killing us all and turning the earth into one big computer?”

    By acquiring the ability to act on the material world – that’s the basic step. The rest is a discussion about tactics and technologies. If you follow the link from Stephen Harris at #78, and then the link from there to Kevin Kelly, you’ll find the cartoon version of this scenario: the AI hijacks other computers, designs advanced technologies, and pulls strings in order to get them made, all in quick succession.

    Though not literally impossible, I can’t regard that maximally compressed version of events as likely. What is more likely is that we will have a culture in which everything is increasingly wired, in which more and more decisions and actions are taken by computers rather than by people, and we get a world which is increasingly “out of control” (as Kevin Kelly put it) – until something sufficiently intelligent to bring it all *under* control masters all that new complexity in pursuit of its own goals, whatever they may be.

    It is also unlikely that some AI which starts out purely engaged in mathematical problem-solving (“optimize this function”) will by itself make the leap to modelling the world and itself as a physical system. Something which has already been designed to understand itself as a physically situated and instantiated intelligence is far more likely to start acting on the world in unanticipated ways (and that doesn’t mean it’s a robot, by the way; an expert system running on a PC can be loaded with propositions representing its physical aspect, and given administrative privileges allowing it to modify its OS, and its general relationship to the world, in a potentially open-ended way). So perhaps one should most fear the effects of superhuman intelligence when it is coupled to AI which *by design* is already active in the world.

    @ John Sidles #74

    “Shouldn’t we be more concerned that humans will embrace these goals to harmful excess, than that machines will? Perhaps we are projecting onto machines precisely those moral flaws and shortfalls, that would be too discomfiting if we discerned them clearly in ourselves?”

    Well, obsession is indeed an aspect of human life. However, the idea of an AI pursuing its goals with superhuman focus and effectiveness is not anthropomorphism. It arises from the *anti*-anthropomorphic observation that the goals of an unevolved, blank-slate intelligence are radically contingent, and we cannot count on the spontaneous emergence of a humanistic outlook to moderate its pursuit of those goals. If we don’t want to be steamrolled by our “mind children”, we will have to do the hard design work of making them care about that outcome.

    @ Stephen Harris #78

    Google is obviously a candidate to produce superhuman AI first. Its periodic PageRank updates are probably the biggest quasi-cognitive process happening in the world that is completely detached from human brains. But Google’s current friendliness comes from its passivity. When it starts trying to anticipate our needs, look out.
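
    For readers who haven’t looked under the hood: the core of that quasi-cognitive process is just a power iteration on the link graph. A minimal sketch in Python, with a made-up four-page web and the damping factor 0.85 from the original Brin and Page paper (Google’s real pipeline is, of course, vastly more elaborate):

    # Toy PageRank by power iteration. links[i] lists the pages that page i
    # links to; every page here has an outlink, so dangling-node handling
    # is skipped.
    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
    n, d = len(links), 0.85

    rank = [1.0 / n] * n
    for _ in range(50):                     # iterate to (rough) convergence
        new_rank = [(1.0 - d) / n] * n      # the "random surfer" teleports
        for page, outgoing in links.items():
            share = d * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share   # page passes rank to its targets
        rank = new_rank

    print(rank)                             # stationary "importance" scores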

  80. John Sidles Says:

    Mitchell Porter says: [Superhuman computers will kill us] by acquiring the ability to act on the material world – that’s the basic step.

    If we accept that premise, then the battle has already been fought and lost, against the superhumanly complex trading programs that control the global economy (just visit Wilmott.com to see what I mean).

    The computers have been very clever … they suborned the Jeffersonian concept of “individual initiative” into “corporate initiative” … they suborned “free-as-in-freedom markets” into “unregulated markets” … they suborned the human love of money so effectively, that humans themselves have become effective functional elements of their anti-Jeffersonian robotic empire … and have thus created a planet-controlling ecological niche into which the computer algorithms are expanding at an exponentiating pace.

    And haven’t cognitive studies shown, that modern economists are (as a group) strikingly deficient in empathic capability?

    There can no longer be any doubt … computer traders are agents of SkyNet! And I for one welcome our new computer trading overlords! … 🙂

  81. Stephen Harris Says:

    If Google poses a threat, then it seems that Wolfram Alpha looms even more ominously on the horizon of the approaching storm. So far CyC has its common sense fed to it, but there are plans, once CyC reaches a self-educational threshold, to hook it up to the Internet so that it can learn on its own. Woe.
    Some readers are familiar with AIXI, which is incomputable. Why is Yudkowsky’s “fully recursive self-enhancement” (the means by which it becomes super-intelligent) considered a computable process?

    http://www.semanticuniverse.com/blogs-i-was-positively-impressed-wolfram-alpha.html
    “Stephen Wolfram generously gave me a two-hour demo of Wolfram Alpha last evening, and I was quite positively impressed. As he said, it’s not AI, and not aiming to be, so it shouldn’t be measured by contrasting it with HAL or Cyc but with Google or Yahoo.”

    http://www.twine.com/item/122mz8lz9-4c/wolfram-alpha-is-coming-and-it-could-be-as-important-as-google
    “How Smart is it and Will it Take Over the World?
    Wolfram Alpha is like plugging into a vast electronic brain. It provides extremely impressive and thorough answers to a wide range of questions asked in many different ways, and it computes answers, it doesn’t merely look them up in a big database.”

  82. matt Says:

    Or, you know, maybe Wolfram Alpha is a complete, total waste of time? It’s got a pretty interface, but it misses the key point of how we use tools. Spend a while working with any search engine, google, bing, whatever. You pretty quickly learn that the way to get results is to figure out the right terms, terms which will help distinguish your desired document from any similar document. You treat the search engine as a tool, you learn how it responds to different queries, and then you can use it effectively by giving the best queries. Alpha tries to be smarter but as a result it prevents you from using your own intelligence.

  83. Cody Says:

    Stas, I’m sorry I’ve taken so long to respond, your question is a very good one and it does make me a bit uncomfortable to describe my answer, and it took a lot of thinking to really be comfortable with it. Also, I really hope I don’t come across as a crank—obviously what I’m presenting is just an opinion, lacking rigorous scientific evidence—it is based on introspection and (what I believe to be) very common experiences. Also, John Sidles, you mentioned this is a common theme in science fiction, do you recall any insights that can help critique my position?

    I think the question “am I a clone” is undecidable, and that what “I” refer to as “me” does not exist continuously, as we typically assume, but only when “I” am “conscious.”

    If we were to rephrase the question, omitting the killing part, and instead told the participant that when they awoke they would be unable to distinguish whether they had been “mailed to Mars” or “cloned-&-killed to Mars”, would that influence their (or my, or your) willingness to participate?

    (For the record, I do tend to reject the notion that this sort of cloning is impossible, in principle, but such thought experiments are indispensable to develop/destroy these ideas. Also, in what follows below I assume that a state of complete unconsciousness exists, and also that any unconscious part of your brain is either perfectly deterministic before awakening the subject, or completely frozen. Basically I just want to say the whole cloning process makes two physically indistinguishable entities until they are woken up.)

    My claim is that it is the awareness of being killed that causes the trouble. If we were just to clone while completely unconscious, and keep both the original and clone in a room, when awoken they would both have an equal claim to humanity and they would believe themselves to have the same history. But if we were to destroy one of them, before wakening, then no one (including them) would be the wiser. Of course, the moment one is awake they diverge into true individuals.

    My key point in all this is that every time we wake from complete unconsciousness, we have absolutely no way of determining whether we are clones or originals (other than the fact that such cloning technology is known not to exist, or at least very strongly believed not to exist). Every time we wake up we are faced with that question (granted, it’s awfully unreasonable to worry about such a question, if not because it is so impossible, then simply because there is nothing you can do about it—and besides, the consciousness that is worrying about it, whether it is the same one that worried about it yesterday or not, doesn’t really matter, right? If it exists to worry, it should stop worrying and be glad it exists).

    Am I just losing my mind or is any of this sounding reasonable?

  84. Cody Says:

    Oops, I mean I reject the notion that this is possible…

  85. John Sidles Says:

    Cody, IMHO a book well worth reading is Marvin Minsky’s classic of AI literature Society of Mind.

    Suppose we start with three assumptions: (1) “our minds are unitary”, (2) “our minds are rational”, (3) “our minds exist outside of space and time”. These assumptions are so natural as to be almost instinctive — and Minsky’s book argues that they are all three completely wrong!

    Rather than repeat Minsky’s arguments, I simply refer you to his book … and henceforth assume that Minsky is 100% right that: (1) our minds are not unitary, (2) our minds are not rational, and (3) our minds are processes that are localized in space and sequential in time.

    Then, from a Minskyian point of view, the kind of philosophical questions that arise upon waking, like “Am I a clone?”, lose their force — since (almost surely) our minds have insufficient processing power to answer this class of question (or even to state clearly what this class of question means).

    The first and best thing we can do upon awakening, therefore, is to consult other minds. This is easy! Just roll over in bed and ask your spouse, lover, partner, friend, dog, or cat … “Am I a clone?”

    Even if the only answer you get is “meow” … that will be an illuminating answer.

    If it should happen that you are not sharing your life with any other sentient being — not even a cat — then isn’t your chance of finding a meaningful answer to any philosophical question pretty much zero?

  86. Stephen Harris Says:

    matt Says: Comment #82 August 31st, 2009 at 9:01 pm
    Or, you know, maybe Wolfram Alpha is a complete, total waste of time? It’s got a pretty interface, but it misses the key point of how we use tools.
    ——————————————————————–

    Well, they are two different types of tools, like a pair of pliers and a wrench. Wolfram Alpha just answers factual questions. Suppose you ask “671 pounds equals how many ounces”. On Wolfram Alpha you get an exact answer, Result: 10736 oz (ounces).

    On Google, you would only get that result if somebody had asked this exact question and received an answer. Instead, Google would tell you how many ounces are in a pound, and you would have to do the calculation yourself. Well, maybe you don’t want the theory of how to calculate the answer, you just want the answer. In this example maybe you wouldn’t care so much.
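
    For what it’s worth, the distinction is easy to state in code: the “compute” path is a couple of lines once the units are parsed. A sketch in Python, where the little table is obviously a stand-in for Wolfram Alpha’s curated unit data:

    # "Compute the answer" instead of "search for the answer".
    # The table is a stand-in for curated unit data, not WA's actual data.
    OUNCES_PER = {"pound": 16.0, "kilogram": 35.274, "stone": 224.0}

    def to_ounces(quantity, unit):
        return quantity * OUNCES_PER[unit]

    print(to_ounces(671, "pound"))   # 10736.0, computed rather than looked up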

    But suppose you want to compare the Gross National Product of Italy to Spain for the year 1991. You very likely just want the answer, not how to calculate it; the Google pages will produce probably several pages of results and you might need to use more than one Google result to obtain your answer. Only if somebody had already asked this identical question and somebody else had provided an answer would you find an exact answer using Google. Wolfram Alpha produces the answer because it computes the answer itself, while Google searches for matches and does not compute the answer.

  87. John Armstrong Says:

    Stephen what are you smoking? Since when do Google’s unit conversions care about who has asked what questions before?

    I type “13542 furlongs in fathoms” (which I’m all but certain nobody has asked before) and I get the perfectly correct answer “13 542 furlongs = 1 489 620 fathoms”.

  88. Stephen Harris Says:

    My specific simple example was “671 pounds equals how many ounces”. Type that into Google and you get pages about how to convert pounds to ounces, not a specific answer. That is what I wrote and meant. Type “671 pounds equals how many ounces” into Wolfram Alpha and you get a specific answer; not so with Google, because your example required typing something else into Google.

    I don’t smoke. My remark under the first example, “In this example maybe you wouldn’t care so much,” meant that the example was simplistic. However, you read into it an assertion that Google does not have a conversion calculator. My second example is more useful.

    “GDP per capita Italy / Spain” (returns with Wolfram Alpha)

    Italy | GDP per capita/Spain | GDP per capita
    Result:
    1.162 (2005 estimate)
    Italy | GDP per capita | $30340 per year (US dollars per year) (2005 estimate)
    Spain | GDP per capita | $26110 per year (US dollars per year) (2005 estimate)

    GDP per capita Italy / Spain (returns with Google)
    Spain — GDP – Per Capita (PPP): $33,600 (2007 Est.)
    [SH: But nothing for Italy]
    And one has to read several Google results to get the queried information, one of which has a chart comparing other European countries’ GDP with the GDP of Spain.

    I changed the original example from GNP to GDP because I thought it was also illustrative. If I used “GNP Italy / Spain” in Wolfram Alpha it returns,

    input interpretation:
    Italy | GNP/Spain | GNP

    Result:
    1.475
    Italy | GNP | $2.033 trillion (US dollars)
    Spain | GNP | $1.378 trillion (US dollars)
    ——————————————-

    Google =/= relevant answer: if I query “GNP Italy / Spain” on Google there are a lot of not very relevant results, and even rewording the query doesn’t produce the neat reply of Wolfram Alpha.
    I think most people reading my post would understand that my major purpose was to explain that Google and Wolfram Alpha have different uses (‘pliers and wrench’): Wolfram Alpha is generally better for questions about facts, whereas Google is better for questions that involve theory and background. Frankly, interpreting my post as an assertion that Google has no enhanced calculation methods (now) is a strawman argument. I pointed to the advantage of Wolfram Alpha in returning one result of high relevancy, compared to several pages of varying relevance from Google, for specific factual questions.
    “the Google pages will produce probably several pages of results and you might need to use more than one Google result to obtain your answer.” (under GNP example) #86

  89. Stephen Harris Says:

    John Armstrong: “I type ‘13542 furlongs in fathoms’ (which I’m all but certain nobody has asked before) and I get the perfectly correct answer ‘13 542 furlongs = 1 489 620 fathoms’.”
    ————————————————
    SH previously wrote:
    But suppose you want to compare the Gross National Product of Italy to Spain for the year 1991. You very likely just want the answer, not how to calculate it; the Google pages will produce probably several pages of results and you might need to use more than one Google result to obtain your answer. Only if somebody had already asked this identical question and somebody else had provided an answer would you find an exact answer using Google.
    ———————————————-

    What query produces the correct answer to ‘what is the GNP of Italy compared to Spain’, using either the Google conversion calculator or a natural language Google query?
    I mean a one-shot question and answer like Wolfram Alpha produces. I don’t think you will find an answer unless this question has been asked before and answered. Even if it has, I think you would still have to sort the answer out from several results, or find a few relevant results and figure the answer out from those. I think Wolfram Alpha has a better natural-language interface, backed by the computing power of Mathematica.
    I don’t think you can use the “13542 furlongs in fathoms” syntax (which I think is part of the Google conversion calculator) to find the answer to the question ‘what is the GNP of Italy compared to Spain’ using Google. Correct me if I am mistaken, but I think that means that in some situations Wolfram Alpha is the superior querying tool.

  90. matt Says:

    Using unit conversion on search engines is an example of what I meant about learning how to use the engine. Knowing to type “x furlongs in fathoms” rather than “how many fathoms is x furlongs” doesn’t take long, and it lets you know exactly what the tool is doing.

    As for comparing GDPs, or whatever, I admit that Alpha does have some very pretty interfaces. But it’s really something that doesn’t generalize. You can do a few things that they happen to have anticipated, but then you hit a limit. http://11011110.livejournal.com/171884.html shows a nice example of Alpha failing to do something that a search engine can do easily.

    The worse problem with Alpha is that you don’t get to see the sources of your information. If you compare the GDPs, you get a graph, but you have no way to know where the information comes from. How do you know it’s right? Did it pull it off a web page? If so, how do you know that person got it right?

    This concern about Alpha’s sources is a real one. Mathematica messes up some calculations (I’ve seen a few errors myself, even in linear algebra, and heard about many in integration). It’s right much more often than it’s wrong, of course, but the few errors it has are a very serious problem for a scientist. I don’t think a professional physicist should use such a software package as a source of simulations in published work, because it simply isn’t 100% trusted.

    There’s also an ethical reason not to use Alpha. Wolfram claims (I forget the exact wording) to hold the copyright for any output from Alpha. You might be thinking that this won’t have any effect on you, that this is just some legalese. But remember, Wolfram is the guy that sued one of his employees (Matthew Cook) to block publication of a proof of some result because Wolfram wanted to put it in his book (and given the very poor attributions in that book, probably to try to claim it as his own).

  91. John Armstrong Says:

    matt has it right. If you’re going to quibble over the input format — and especially if you’re going to claim that Alpha is anything more than superficially “natural language” — then you’re clearly not being impartial.

    As for GNP, you’re cheating. Italy hasn’t used GNP as an economic indicator since 1991; it uses GDP instead. And the top result for “GDP Italy 1991” is pretty apposite, I’d say.

  92. John Sidles Says:

    Mathematica messes up some calculations (I’ve seen a few errors myself in even linear algebra and heard about many in integration). It’s right much more often than it’s wrong, of course, but the few errors it has are a very serious problem for a scientist.

    This is true even (especially?) in straightforward numerical computations. For example, Mathematica’s SingularValueDecomposition[..] routine is known to fail sporadically (incidence about 0.0001) in cases where the rank of the matrix argument is on the order of 1/2 the matrix dimension; see http://faculty.washington.edu/sidles/mma_svd_failure_2009/ for details.

    Here “fails” means “gives a completely wrong numerical answer with no error message.” Which mathematically speaking should never happen.
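
    Until a fix arrives, a defensive reconstruction check is cheap insurance. Here is a sketch of the idea in NumPy rather than Mathematica, purely for illustration:

    import numpy as np

    def checked_svd(a, tol=1e-10):
        """SVD plus an explicit reconstruction test, since a silently
        wrong factorization is exactly the failure mode at issue."""
        u, s, vh = np.linalg.svd(a)
        k = len(s)
        reconstructed = u[:, :k] @ np.diag(s) @ vh[:k, :]
        residual = np.linalg.norm(a - reconstructed) / max(np.linalg.norm(a), 1.0)
        if residual > tol:
            raise ArithmeticError("SVD residual too large: %.3e" % residual)
        return u, s, vh

    # A rank-deficient test case: rank equal to half the dimension,
    # the regime where the sporadic failures are observed.
    a = np.random.randn(40, 20) @ np.random.randn(20, 40)
    u, s, vh = checked_svd(a)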

    Wolfram Research has acknowledged this problem ever since Mathematica version 5 … it has had several internal ticket numbers (starting with TS-28968), and has been duplicated internally by Wolfram engineers … but distressingly, no fix has been forthcoming … neither have Wolfram’s customers been advised that the values returned by SingularValueDecomposition[..] are perhaps not completely trustworthy.

    About once per year, I get a nice message from Wolfram’s engineers saying that they’re working on it … this is good … so perhaps Mathematica 8 will have a reliable SVD routine? If not … maybe Mathematica 9? 🙂

  93. Stephen Harris Says:

    John Armstrong affirmed: “matt has it right. If you’re going to quibble over the input format — and especially if you’re going to claim that Alpha is anything more than superficially ‘natural language’ — then you’re clearly not being impartial.”
    —————————————————————-

    I claimed a slight natural-language improvement in Wolfram Alpha. I am not quibbling, and I removed 1991 from my reply once I realized that this data is now available only periodically. But you didn’t counter my example and challenge:

    ———————————————————

    If I use “GNP Italy / Spain” in Wolfram Alpha it returns,

    input interpretation:
    Italy | GNP/Spain | GNP

    Result:
    1.475
    Italy | GNP | $2.033 trillion (US dollars)
    Spain | GNP | $1.378 trillion (US dollars)

    ———————————————————

    The result is returned because Wolfram Alpha (WA) computes the answer, and you get a one-page answer. One cannot obtain this type of result using Google; by that I mean you will get several pages of results that you have to sort through to extract the answer. That is not a quibble. If you think it is, then provide the Google query that returns the combined content of the Wolfram Alpha answer at the top of the Google results. Google does a sort; it does not fold the calculation into its answer the way WA’s compute method does. (I’m not using “computation” as generically as usual.) Unless you provide a suitable query for Google that matches the information content of Wolfram Alpha for this particular type of question, I think the fact that WA is the better tool for some factual queries stands established.

    ——————————————————–

    @matt: I said that Wolfram Alpha (WA) and Google are two different types of tools, like an open-end wrench and a pair of pliers. Some situations call for pliers, others call for a wrench, and in some situations either tool will work. WA was released with around 10 million curated answers. http://www.wolframalpha.com/examples/ lists roughly 29 general categories where WA works fairly well at providing factual answers. WA is the better fact-type tool. I already told you this in my first post:
    “Wolfram Alpha just answers factual questions.”
    “Well, maybe you don’t want the theory of how to calculate the answer, you just want the answer. In this example [671 pound conversion] maybe you wouldn’t care so much.”
    Theory and background are often found in Google results. WA doesn’t provide them because it focuses on answers to factual questions, which is what it is intended to do.

    I never said or implied that WA was a better tool for doing Google-type jobs. I did say that WA was a better tool for doing WA-type jobs. Both tools are different and both have limitations depending on the context. I did not say that WA was a better tool for you. WA is a better tool for the type of user who wants one factual response to a question, not pages of results containing lots of data irrelevant to that type of user, who tends to be better educated.

    I provided a good example of a context where WA provides the answer to the question that was asked, not pages of answers. Certainly you can provide situations where Google will give a better answer than WA; I never disputed that. I said that WA and Google are different tools for different situations, and that WA is designed for the class of users who often want WA’s short factual response rather than pages of results. If such a user wants broader information, theory, and background, they can choose Google. I never implied that a person should use *either* WA or Google but not both.
    One picks the best tool for the job (query/search). People who just want specific answers to specific questions are usually better off using WA; people who want detail, as you represent yourself, are usually better off using Google.

    Wolfram Alpha provides millions of answers to fact-centered queries, and that is something which does generalize: to millions of other fact-centered queries covering 29 major categories with 10 million curated (i.e., vetted) facts. If you want theory, background, and details, use the generality of Google; WA makes no claim to Google’s functionality. So complaints that WA doesn’t answer questions the same way Google does are misplaced; WA isn’t supposed to do that.

    I’m not defending the ethics of WA’s copyright policy. I’m defending the claim that WA is more useful than Google in some situations and that Google is more useful in others. It is up to the user to choose the right tool for the job they want done. I don’t think you read about what WA is intended for before you wrote your first post.
    Mathematica isn’t perfect. Is Maple, or any other software of this type? The only really stable software I know of is TeX.

  94. John Sidles Says:

    Stephen Harris says: People who just want specific answers to specific questions are usually better off using WA.

    IMHO a much better example of software that “just” provides specific answers to specific questions is the Robetta server from David Baker’s lab.

    The reason is that the Baker Group’s research strategy implicitly recognizes (AFAICT) that the primary purposes of modern software environments are (in descending order of practical importance): (1) community-building, (2) education, (3) solving problems and answering questions.

    Software development efforts like Microsoft’s Vista have foundered (IMHO) because they embraced a short-sighted strategy of focussing mainly on (3), while ignoring (1-2). And it seems to me that WA is at-risk of foundering for pretty much the same reason.

    Open-source efforts (e.g., the Baker Group’s Rosetta suite, to which Robetta is a gateway) have a high-permeability, high-mobility interface between the theorists, the implementers, and the users. The main informatic shortcoming of development efforts like Vista and (perhaps) WA, and the reason they are at risk of foundering, is mainly their embrace of a low-permeability, low-mobility social organization.

    Conformational servers like Rosetta/Robetta are at the same stage of development as email circa 1985 — they are expert-tolerant and serve a niche market. But also like email circa 1985, the potential for growth in conformational servers is unbounded … because there is more conformational structure present within the cells of a child’s little finger, than there is stellar structure in all the galaxies of the entire observable universe … and every human being on the planet shares in that structure.

    This informatic theme unites my three major research interests: conformational molecular biology, quantum spin biomicroscopy, and large-scale symplectic simulation algorithms … these being a straightforward extension to the quantum domain of the informatic frontiers that the Baker Group (and many other research groups) are pioneering.

  95. Steve Says:

    Scott,

    You mention in your diavlog that you see nothing in the laws of the physical universe that would prevent humans from creating more complex forms of life. In other words (and this is a point Murray Gell-Mann makes as well in The Quark and the Jaguar), creating a complex system doesn’t require a more complex system.
    Do you think this line of reasoning has any implications for the theological Argument from Complexity?
    Thanks.
    Steve

  96. Steve Says:

    Scott,

    At 25:02, you say:
    “Just like with humans, we find that no matter how intelligent they are they just seem pushed constantly upstream in the direction of certain things, like wanting food or sex.”

    Food and sex have survival benefits for humans, so clearly it’s important for humans to desire them to some degree. However, for some reason most humans think about sex and food more than they need to for survival, even though they know that these thoughts don’t enhance survival. (Paul Erdős was an exception.)

    Do you care to take this line of reasoning any further? Or can you recommend any reading material on this?

    You seem to equivocate a bit at the 27:30 mark and fall back on “higher math” as an example of a desire that has nothing to do with what our genes want for us.