The Singularity Is Far

In this post, I wish to propose for the reader’s favorable consideration a doctrine that will strike many in the nerd community as strange, bizarre, and paradoxical, but that I hope will at least be given a hearing.  The doctrine in question is this: while it is possible that, a century hence, humans will have built molecular nanobots and superintelligent AIs, uploaded their brains to computers, and achieved eternal life, these possibilities are not quite so likely as commonly supposed, nor do they obviate the need to address mundane matters such as war, poverty, disease, climate change, and helping Democrats win elections.

Last week I read Ray Kurzweil’s The Singularity Is Near, which argues that by 2045, or somewhere around then, advances in AI, neuroscience, nanotechnology, and other fields will let us transcend biology, upload our brains to computers, and achieve the dreams of the ancient religions, including eternal life and whatever simulated sex partners we want.  (Kurzweil, famously, takes hundreds of supplements a day to maximize his chance of staying alive till then.)  Perhaps surprisingly, Kurzweil does not come across as a wild-eyed fanatic, but as a humane idealist; the text is thought-provoking and occasionally even wise.  I did have quibbles with his discussions of quantum computing and the possibility of faster-than-light travel, but Kurzweil wisely chose not to base his conclusions on any speculations about these topics.

I find myself in agreement with Kurzweil on three fundamental points.  Firstly, that whatever purifying or ennobling qualities suffering might have, those qualities are outweighed by suffering’s fundamental suckiness.  If I could press a button to free the world from loneliness, disease, and death—the downside being that life might become banal without the grace of tragedy—I’d probably hesitate for about five seconds before lunging for it.  As Tevye said about the ‘curse’ of wealth: “may the Lord strike me with that curse, and may I never recover!”

Secondly, there’s nothing bad about overcoming nature through technology.  Humans have been in that business for at least 10,000 years.  Now, it’s true that fanatical devotion to particular technologies—such as the internal combustion engine—might well cause the collapse of human civilization and the permanent degradation of life on Earth.  But the only plausible solution is better technology, not the Kaczynski/Flintstone route.

Thirdly, were there machines that pressed for recognition of their rights with originality, humor, and wit, we’d have to give it to them.  And if those machines quickly rendered humans obsolete, I for one would salute our new overlords.  In that situation, the denialism of John Searle would cease to be just a philosophical dead-end, and would take on the character of xenophobia, resentment, and cruelty.

Yet while I share Kurzweil’s ethical sense, I don’t share his technological optimism.  Everywhere he looks, Kurzweil sees Moore’s-Law-type exponential trajectories—not just for transistor density, but for bits of information, economic output, the resolution of brain imaging, the number of cell phones and Internet hosts, the cost of DNA sequencing … you name it, he’ll plot it on a log scale.  Kurzweil acknowledges that, even over the brief periods that his exponential curves cover, they have hit occasional snags, like (say) the Great Depression or World War II.  And he’s not so naïve as to extend the curves indefinitely: he knows that every exponential is just a sigmoid (or some other curve) in disguise.  Nevertheless, he fully expects current technological trends to continue pretty much unabated until they hit fundamental physical limits.
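
To see how thoroughly an early sigmoid can impersonate an exponential, here is a minimal numerical sketch (all parameter values invented purely for illustration):

```python
import math

# A logistic curve with ceiling L, growth rate k, and midpoint t0.
# Far below its ceiling it tracks the pure exponential L*exp(k*(t - t0))
# to many digits; only near t0 do the two curves part ways.
L, k, t0 = 1e6, 1.0, 20.0  # invented parameters

for t in (0, 5, 10, 15, 20, 25):
    logistic = L / (1 + math.exp(-k * (t - t0)))
    pure_exp = L * math.exp(k * (t - t0))
    print(f"t={t:2d}  logistic={logistic:12.4g}  exponential={pure_exp:12.4g}")
```

Until the data approach the ceiling, no amount of curve-fitting can tell you which of the two curves you are on.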

I’m much less sanguine.  Where Kurzweil sees a steady march of progress interrupted by occasional hiccups, I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears, as happened to so many of the civilizations of antiquity.  The difference is that this time, human idiocy is playing itself out on a planetary scale; this time we can finally ensure that there are no survivors left to start over.

(Also, if the Singularity ever does arrive, I expect it to be plagued by frequent outages and terrible customer service.)

Obviously, my perceptions are as colored by my emotions and life experiences as Kurzweil’s are by his.  Despite two years of reading Overcoming Bias, I still don’t know how to uncompute myself, to predict the future from some standpoint of Bayesian equanimity.  But just as obviously, it’s our duty to try to minimize bias, to give reasons for our beliefs that are open to refutation and revision.  So in the rest of this post, I’d like to share some of the reasons why I haven’t chosen to spend my life worrying about the Singularity, instead devoting my time to boring, mundane topics like anthropic quantum computing and cosmological Turing machines.

The first, and most important, reason is also the reason why I don’t spend my life thinking about P versus NP: because there are vastly easier prerequisite questions that we already don’t know how to answer.  In a field like CS theory, you very quickly get used to being able to state a problem with perfect clarity, knowing exactly what would constitute a solution, and still not having any clue how to solve it.  (In other words, you get used to P not equaling NP.)  And at least in my experience, being pounded with this situation again and again slowly reorients your worldview.  You learn to terminate trains of thought that might otherwise run forever without halting.  Faced with a question like “How can we stop death?” or “How can we build a human-level AI?” you learn to respond: “What’s another question that’s easier to answer, and that probably has to be answered anyway before we have any chance on the original one?”  And if someone says, “but can’t you at least estimate how long it will take to answer the original question?” you learn to hedge and equivocate.  For, looking backwards, you see that sometimes the highest peaks were scaled—Fermat’s Last Theorem, the Poincaré conjecture—but that not even the greatest climbers could peer through the fog to say anything terribly useful about the distance to the top.  Even Newton and Gauss could only stagger a few hundred yards up; the rest of us are lucky to push forward by an inch.

The second reason is that as a goal recedes to infinity, the probability increases that as we approach it, we’ll discover some completely unanticipated reason why it wasn’t the right goal anyway.  You might ask: what is it that we could possibly learn about neuroscience, biology, or physics, that would make us slap our foreheads and realize that uploading our brains to computers was a harebrained idea from the start, reflecting little more than early-21st-century prejudice?  Unlike (say) Searle or Penrose, I don’t pretend to know.  But I do think that the “argument from absence of counterarguments” loses more and more force, the further into the future we’re talking about.  (One can, of course, say the same about quantum computers, which is one reason why I’ve never taken the possibility of building them as a given.)  Is there any example of a prognostication about the 21st century, written before 1950, most of which doesn’t now seem quaint?

The third reason is simple comparative advantage.  Given our current ignorance, there seems to me to be relatively little worth saying about the Singularity—and what is worth saying is already being said well by others.  Thus, I find nothing wrong with a few people devoting their lives to Singulatarianism, just as others should arguably spend their lives worrying about asteroid collisions.  But precisely because smart people do devote brain-cycles to these possibilities, the rest of us have correspondingly less need to.

The fourth reason is the Doomsday Argument.  Having digested the Bayesian case for a Doomsday conclusion, and the rebuttals to that case, and the rebuttals to the rebuttals, what I find left over is just a certain check on futurian optimism.  Sure, maybe we’re at the very beginning of the human story, a mere awkward adolescence before billions of glorious post-Singularity years ahead.  But whatever intuitions cause us to expect that could easily be leading us astray.  Suppose that all over the universe, civilizations arise and continue growing exponentially until they exhaust their planets’ resources and kill themselves out.  In that case, almost every conscious being brought into existence would find itself extremely close to its civilization’s death throes.  If—as many believe—we’re quickly approaching the earth’s carrying capacity, then we’d have not the slightest reason to be surprised by that apparent coincidence.  To be human would, in the vast majority of cases, mean to be born into a world of air travel and Burger King and imminent global catastrophe.  It would be like some horrific Twilight Zone episode, with all the joys and labors, the triumphs and setbacks of developing civilizations across the universe receding into demographic insignificance next to their final, agonizing howls of pain.  I wish reading the news every morning furnished me with more reasons not to be haunted by this vision of existence.
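
The arithmetic behind that claim is easy to check with a toy model (numbers invented purely for illustration):

```python
# Toy self-sampling model: births double every generation for n
# generations, after which the civilization collapses.
n = 40
births = [2**g for g in range(n)]   # births in generation g
total = sum(births)                 # = 2**n - 1

for last in (1, 2, 5):
    frac = sum(births[-last:]) / total
    print(f"fraction born in the final {last} generation(s): {frac:.3f}")
# ~0.500, ~0.750, ~0.969: a randomly sampled observer almost surely
# finds itself within a few doublings of the collapse.
```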

The fifth reason is my (limited) experience of AI research.  I was actually an AI person long before I became a theorist.  When I was 12, I set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov’s Three Laws of Robotics.  I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user’s question and output an intelligent, Three-Laws-obeying response.  Later, at Cornell, I was lucky to learn from Bart Selman, and worked as an AI programmer for Cornell’s RoboCup team—an experience that taught me little about the nature of intelligence but a great deal about how to make robots pass a ball.  At Berkeley, my initial focus was on machine learning and statistical inference; had it not been for quantum computing, I’d probably still be doing AI today.  For whatever it’s worth, my impression was of a field with plenty of exciting progress, but which has (to put it mildly) some ways to go before recapitulating the last billion years of evolution.  The idea that a field must either be (1) failing or (2) on track to reach its ultimate goal within our lifetimes, seems utterly without support in the history of science (if understandable from the standpoint of both critics and enthusiastic supporters).  If I were forced at gunpoint to guess, I’d say that human-level AI seemed to me like a slog of many more centuries or millennia (with the obvious potential for black swans along the way).

As you may have gathered, I don’t find the Singulatarian religion so silly as not to merit a response.  Not only is the “Rapture of the Nerds” compatible with all known laws of physics; if humans survive long enough it might even come to pass.  The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails).  After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata.  But in the latter case, we know the intuition is mistaken.  There is a ceiling to computational expressive power.  Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster.  Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow.  But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.

Yet while I believe the latter kind of singularity is possible, I’m not at all convinced of Kurzweil’s thesis that it’s “near” (where “near” means before 2045, or even 2300).  I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up; a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans, and supercharging its climate; a world where technology is often powerless to solve the most basic problems, millions continue to die for trivial reasons, and democracy isn’t even clearly winning over despotism; a world that finally has a communications network with a decent search engine but that still hasn’t emerged from the tribalism and ignorance of the Pleistocene.  And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 18th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.

113 Responses to “The Singularity Is Far”

  1. John Armstrong Says:

    His language gives away how bad his math is. He points out exponential growth (or sees it like the Blessed Virgin in a grilled-cheese sandwich), and then claims there will be a singularity — a phenomenon associated with superexponential growth. The exponential function grows without bound, but it takes infinitely long to get to infinity.

    Kurzweil makes some good points, but he drags them off into raging nutjobbery. The same can be said for Fritjof Capra.

  2. Scott Says:

    John: As far as I can tell, the idea is that

    (1) civilization is characterized by exponential explosions (which, of course, eventually hit fundamental limits or peter out), and

    (2) the timescale for those explosions is itself decreasing exponentially.

    Under those assumptions, you might indeed expect a literal singularity. Even if you accept (1) and reject (2), extrapolating various exponentials out by a century would have some truly spectacular consequences, so much so that if you zoom out far enough on the timeline, the term “singularity” might be justified.
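
    A toy calculation (with invented numbers) shows why (2) is the load-bearing assumption:

    ```python
    # Toy model of assumption (2): the k-th capability doubling takes
    # T0 * r**k years.  With r < 1, infinitely many doublings fit into
    # the finite time sum_k T0*r**k = T0/(1 - r): a literal singularity.
    T0, r = 10.0, 0.8   # invented: 10 years at first, each doubling 80% as long
    t = 0.0
    for k in range(200):    # after 200 doublings, capability is up by 2**200
        t += T0 * r**k
    print(t)                # ~50.0 years, i.e., T0/(1 - r)
    ```

    With r = 1 (assumption (1) alone), the sum diverges and the blow-up time recedes to infinity; that is the whole difference between an exponential and a singularity.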

    Thus, I’m willing to accept the terminology; I think the error (extreme technological optimism) is one of substance.

  3. Michael Nielsen Says:

    I found myself unable to complete Kurzweil’s book, but I did enjoy Vinge’s original Singularity paper, which I found much clearer (and way shorter!) than Kurzweil’s book:

    http://mindstalk.net/vinge/vinge-sing.html

  4. Eliezer Yudkowsky Says:

    Scott, if you haven’t already, you might want to consider reading Artificial Intelligence as a Positive and Negative Factor in Global Risk. It might give you some further perspective on what Kurzweil chooses to talk about and not talk about.

    For other readers of this blog, I recommend Three Schools of Singularity. Given the way this term has mutated over time, it is a good idea to never talk about the Singularity without rigorously distinguishing these three concepts.

    Kurzweil is an optimist, and in some ways a normalist, who chooses(?) not to discuss certain issues. He sees no role in the future for self-improving AI (or nanotech, for that matter) except maintaining the current curve he draws for technological progress, just as he sees technology as something that, for all its scariness, is always fundamentally going to be an amazing neat product you purchase at a store.

    The thing about timescales driven by fundamental research breakthroughs is that they are really hard to time – Moore’s Law isn’t going to work for doodlysquat on AI. That doesn’t mean you have positive knowledge that AI is a long way off. It means you have very poor ability to predict the timing one way or the other. But AI will feel very hard, if you don’t know exactly how to do it yourself, and so it will feel like you have positive information that AI will take a long time. We know what it takes to build a star from hydrogen atoms, so we know there isn’t likely to be a clever shortcut. AI is not so well understood as astronomy.

    In short, Scott, I am not willing to rely on your optimistic reinterpretation of Kurzweil. We may have as much time as you think, but it seems to me that you are confusing your ignorance of AI’s key with a positive knowledge of its difficulty and temporal distance. When heavier-than-air flight or atomic energy was a hundred years off, it looked fifty years off or impossible; when it was five years off, it still looked fifty years off or impossible. Poor information.

  5. mitchell porter Says:

    “I haven’t chosen to spend my life worrying about the Singularity”

    You still have time to change!

    “In a field like CS theory, you very quickly get used to being able to state a problem with perfect clarity”

    I have never really read Kurzweil, but I have studied Yudkowsky, and I’d like to point out that he has gone some way towards reducing his goal – a “Friendly” Singularity – to the task of solving a few exactly stated problems. The informal statement of that which is to be achieved is something like this:

    Ensure that the first intelligence to strongly exceed human intelligence is an ideal moral agent, with respect to the moral criteria implicit in human nature.

    Now if human beings and artificial intelligences were just expected utility maximizers, that informal statement could be made a lot more formal right away. We could speak directly of ensuring that the utility function of that first superhuman intelligence stood in a certain relationship to the utility functions of all the human beings alive at the time (for example). Yudkowsky’s informal proposal could perhaps be completely formalized right away, and the task of making a better world would have been reduced to an exercise in applied computer science.

    In real life, human cognitive architecture presumably does not decompose neatly into a utility function and a problem-solving engine, but one supposes that it does belong to some distinct class of decision-making system; in which case a more complex version of the preceding approach may be the necessary and sufficient condition for a Friendly Singularity.

    I’ve only mentioned one half of his strategy, Friendly AI; the other half, seed AI, is all about the “superhuman intelligence” aspect. A seed AI, by definition, is one which starts with less than human intelligence, and self-modifies its way to greater than human intelligence. Using this terminology, we can say the big goal is:

    Ensure that the first successful seed AI is a Friendly AI.

    There are, again, formidable problems in turning these slogans into realities, but I would point especially to the work of Schmidhuber, Hutter, and Legg as introducing exact definitions of intelligence, and also software architectures which would plausibly produce increasing levels of intelligence as thus defined.

    I think the plausibility of seed AI is the main reason, along with the development in computer hardware, to think that this is all a matter of decades rather than centuries.

    “The fourth reason is the Doomsday Argument.”

    I can sympathize with this perspective. But as far as I am concerned, there are funny unanswered questions remaining in anthropic reasoning. For example: if unconscious things are far more common than conscious ones, isn’t it extremely unlikely that I should be aware of my own existence at all? And doesn’t any process of reasoning which would imply that some manifestly real state of affairs is extremely improbable … thereby cast itself into doubt? So it remains possible that there are logical occlusions interfering with our attempts to reason anthropically. And so long as that remains the case, to be disheartened in these efforts by Doomsday arguments is merely an emotional response; understandable, but a matter of morale rather than reason, and capable of being met on that plane.

    “I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 18th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.”

    I do not think we have the luxury of first achieving universal human well-being and then dealing with the challenges of truly advanced technology. Division of labor is both possible and necessary. So it comes back to asking yourself: what is most important, and where am I best equipped to make a contribution?

  6. roland Says:

    > And if those machines quickly rendered humans obsolete, I for one would salute our new overlords.

    Don’t you think the humans would try to keep their position of power, and not care much about the originality, humor, and wit of intelligent machines?

  7. michael vassar Says:

    Scott: I’m with you until

    “The idea that a field must either be (1) failing or (2) on track to reach its ultimate goal within our lifetimes, seems utterly without support in the history of science (if understandable from the standpoint of both critics and enthusiastic supporters). If I were forced at gunpoint to guess, I’d say that human-level AI seemed to me like a slog of many more centuries or millennia.”

    which seems to come fairly close to contradicting

    “as a goal recedes to infinity, the probability increases that as we approach it, we’ll discover some completely unanticipated reason why it wasn’t the right goal anyway. ”

    I can think of a few goals in human history (but only a few) which were accomplished through “a slog of centuries during which actual progress was made decade by decade”, and essentially none that were accomplished through “a slog of millennia” during which actual progress was made century by century. It’s not crazy to claim that AI is one of the former class of problems, a field founded by Pascal long ago and not obviously closer to the end than to the beginning, but nanotechnology simply isn’t. Rather, it’s the culmination of precision machining, a field that may have started in Arabia under the Caliphs, but which accelerated radically over the last century, bringing the end clearly into Feynman’s sight by fifty years ago. Advanced nanotechnology doesn’t give you “Engines of Creation” automatically, but it does make most of the things Kurzweil describes into a series of engineering challenges no more dubious or unpredictable than mid-20th-century technology was when people like Wells discussed it in the first decades of that century. I would say similar things of the convergent fields of molecular biology, synthetic biology, and the like.

    If you are already ambivalent about claims like the above, then it seems to me that the Doomsday argument should tend to move you further towards accepting them. After all, today’s technologies can cause disasters that will kill you and all of your friends and set human civilization back to the Middle Ages or possibly even to the late Neolithic, but if recovery was slowed by the degraded environment, that would only mean more time over which to integrate the future population. A long series of such cycles, with you coincidentally living in the first one, is just what the Doomsday hypothesis rejects as unlikely. By contrast, nanotechnology, advanced biotechnologies, and the like actually put in our hands tools with which small groups of individuals might bring about actual human extinction, or might alternatively change us enough that we no longer belong to our current reference class. The Doomsday argument doesn’t make you wonder why you aren’t a bacterium or a zooplankton, does it?

    The other place where we disagree is with your last sentence, though I’m not sure how serious you are. I doubt that you think that before inventing any more physics we need to make sure that the people we already have already know all the physics we already have. Would you have said the same thing in the 19th century?

    Anyway, given that we seem to agree on almost all the details, I would like to see what you think decision theory or expected egoistic utility maximization says about cryonics, a technology that actually seems much more appealing under your apparent set of assumptions than among those common to transhumanists.

  8. Scott Says:

    Eliezer: I have read your “AI as a factor in global risk” piece with great interest, and recommend it to others. It deserves its own post sometime; I didn’t feel up for reviewing the whole singularity literature in one go.

    I wasn’t claiming to have much positive knowledge about the difficulty of AI, just a probabilistic guess—i.e., the sort of thing that, were I a Bayesian, I couldn’t help but have.

    When heavier-than-air flight or atomic energy was a hundred years off, it looked fifty years off or impossible; when it was five years off, it still looked fifty years off or impossible.

    I’m not sure I agree with you. In the case of atomic energy, I think physicists in the 1930s were pretty quick to understand what was feasible: i.e., that you could build a bomb, but only by converting a whole country into a uranium-enrichment factory (as Bohr put it). The main thing they didn’t know was that the US would enter the war in 1941, and that that would precipitate converting a whole country into a uranium-enrichment factory.

    As for heavier-than-air flight, I don’t know what the scientific consensus was circa 1898, except that many people besides the Wright brothers were working on the problem, and the possibility was at least taken seriously enough for Lord Kelvin to scoff at it. Anyone care to enlighten us?

    I suspect that most of our disagreement is contained in your phrase “AI’s key.” I don’t see AI as a problem with a key; I see it like cancer or climate change, as a problem with a million little keys that open doors beyond which there are more little keys, etc. That’s precisely what makes it so difficult.

    Now, I might be wrong; maybe the key to AI will be discovered next year. But I might also be wrong in assigning a low probability to a message from extraterrestrials, a civilization-ending infectious disease, an extremely-rapid climate shift (as in The Day After Tomorrow), or Russia or China taking over the world next year. There are so many different conceivable world-changing events that I have no idea how to assign probabilities to. In such a situation, what can I do except work on things that seem interesting or important, while making a reasonable attempt to keep my ears to the ground?

  9. Scott Says:

    Roland:

    Don’t you think the humans would try to keep their position of power, and not care much about the originality, humor, and wit of intelligent machines?

    Yes, and I think they’d be wrong to do so.

  10. James D. Miller Says:

    You wrote: “a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.”

    You want to change the ideology and thought processes of 98% of humanity, and I assume you want to do this without using too much force or mind control. Achieving this may well be more difficult than reaching a singularity, especially since many governments and religions don’t want their people to embrace the Enlightenment.

    Just think about how many countries Western nations would have to invade just to get started on this Utopian venture.

  11. ScentOfViolets Says:

    When talking about formulating clear problems and goals, aren’t you also relying on clear definitions, albeit ones that allow for the possibility of being modified in the future? I’m thinking of things like ‘intelligence’. Or is the AI community at the point where that term is just a convenient stand-in for whatever type of information processing is being discussed? Because I don’t see how one can even talk about greater-than-human intelligence without a clear idea of the basic concept.

    Also, there is the fairly mundane idea that the singularity will never arrive, because what constitutes a singularity is a moving target. The use of disk albums as a technology to record sound spans perhaps 1900 to 1990, 90 years. CDs from 1990 to 2005, 15 years. MP3 files on flash drives, 2005 to 2008, three years. Now, your canonical ’50s SF would probably go into some detail about science-fictional audio recording and playback as a token of the gee-whiz factor of the future. My daughter, aged fourteen, and her set? They’re impatient with the slow progress in the field. They demand that their MP3 players be smaller, thinner, slicker each year, and capable of more things. No one ever speaks breathlessly about ‘living in the future’ when discussing their iPod, not even adults who grew up with 33 1/3’s as the recording standard.

    The exact same thing is going to happen in the years ahead. Nanoware that sets and heals bone fractures in hours, or cleans plaque from arteries in days, will be met with a certain impatience within a couple of years of its introduction, I would imagine. The discerning Consumer will demand the same thing be done in minutes, at home. Domestic robots that can carry on simple conversations with a three-year-old will only make people impatient for models that can make witty cocktail banter. And machines capable of learning, human style? “That’s not really intelligence”. 🙂

  12. Scott Says:

    ScentOfViolets: While we may not be able to define what we mean by greater-than-human intelligence, I think we could recognize some reasonably clear correlates of it, e.g. being able to prove the Riemann Hypothesis in 1 second or less.

    I agree that “the singularity” is a moving target, and that it might be futile to try to define it too precisely. But in this post, I was talking specifically about overcoming mortality, misery, and the other tragedies of the human condition by, e.g., uploading our brains to computers.

  13. ScentOfViolets Says:

    Well, Scott, I am of the school of thought that says a lot of the misery of the human condition has been solvable by technology for some time. People just don’t want to apply it for that purpose. I predict that in a time when certain people are able to live healthy, extremely prolonged lives by whatever technological processes you care to invoke, there will still be other people, large percentages of them, who will still lack indoor plumbing and who will have to get by on $2/day. I don’t make this point as a schooling in morals or ethics, but to suggest that there could be non-technological singularities as well. Maybe the day is not too far distant when forcing other people to live in squalor so that others may live in crass splendor will be seen as extremely morally repugnant. Enough so that legislation and political and cultural institutions are set up to enforce this morality in much the same way that, say, proscriptions against robbery and murder are enforced today.

  14. William Newman Says:

    Scott writes “I don’t see AI as a problem with a key.”

    How complicated do you think it is to design a computer system that learns about as well as a human baby’s brain? (Given $20M, or whatever, in custom hardware.) Based on rough guesses about the relevant numbers of genes controlling the layout of an actual baby’s brain, it seems likely to me that such a design is no more complicated than the design of Unix or of the Arpanet. Such a complicated design has more than one “key,” there are multiple elegant insights required, and a lot of nitty gritty screwing around too. But for various historical examples of such systems, the time lag between the prerequisite availability of economical hardware, and the consequent shipping of useful operating systems and packet-switching network systems, was only about a single decade, not the many decades that you think is the likely outcome for AI.

    Occasionally an important theme of analysis does take a very long time to figure out. Arnold’s analysis of the stability of orbital mechanics is the most extreme reasonably-modern example I can think of, where it took centuries between the time the solution became relevant and the time it was produced. Skip quadtrees arrived three decades or so after the time when people would’ve immediately put them to use if they had only known about them; similarly, modern error-correcting codes based on expander graphs. But it seems to me that such cases have been uncommon since the Industrial Revolution. Thus, I think a single decade is a decent general-purpose conservative guess for how long a design solution will be delayed once the required hardware or manufacturing capability is present.

    I think so far this rule of thumb has applied fairly well even within AI. How many times has the design solution to something nontrivial — face recognition, backgammon playing, whatever — been something you could bundle into a time machine and then use effectively on hardware more than ten years old?

    If Moravec’s estimates of the raw computational power of the human brain are correct, somewhat-economical hardware should become available in the next decade. Your analysis has not convinced me that we should expect the design lag after that to be any more than a decade.

    “being able to prove the Riemann Hypothesis in 1 second or less”

    That seems like a much higher bar than necessary. When and if we get computers capable of doing most human tasks significantly better and faster than humans, things will get very weird even if the computers aren’t a billionfold better than us. If they’re merely one or two orders of magnitude better and faster at the tasks of designing and manufacturing more effective AIs, “singularity” could be a fair description.

  15. Robin Hanson Says:

    I’ve responded here.

  16. panefsky Says:

    Great post, Scott!
    I start from your last argument: “And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 18th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.”

    I totally agree, but this could be applied to every research area, right? Why should we care about NLP, for example, when people die from hunger a few miles away?

    To me, your first argument is not put correctly: “The first, and most important, reason is also the reason why I don’t spend my life thinking about P versus NP: because there are vastly easier prerequisite questions that we already don’t know how to answer.”

    It is quite evident that to achieve such a high goal you will have to deal with many more ‘smaller’ problems, but this is not a reason not to engage in it. This is like saying “companies should not make business plans, because predicting the oil price (hence your cost) is very difficult”.

    However, you implicitly reveal a bigger problem. Setting a goal is fine, but is it reasonable? It seems that the fundamental argument against the Singularity concept is not that its goal is super-hard, but that it is rather super-hard to argue about its hardness! (Leaving aside ethical matters, consequences for society, etc.)

    However, I don’t see why the popular “It is not the destination, but the journey that matters” could not apply here as well. I also can’t say if such big ideas are possible or not (and in fact I think no one can). I doubt that concepts such as “love”, “passion”, and “inspiration” can be represented by bits, but I will never say that it is impossible in any way, and it will be really interesting to see what we will find out on the way.

    All in all, I think the goal of this idea is already here. Apart from the film “The Matrix”, don’t you think some ‘mental parts’ of Brin & Page will continue to exist (through the Google search engine) after they are long gone? And what about Mark Zbikowski, whose name is ‘hidden’ in most computers on Earth? (I have written a related article here.)

  17. Scott Says:

    Michael V.: Thanks for the interesting comment. I’ve added the phrase “with the obvious potential for black swans along the way” to the sentence “…I’d say that human-level AI seemed to me like a slog of many more centuries or millennia”—it was in my first draft but I took it out because I thought it might confuse people. My view is that AI seems like a slog of many more centuries or millennia, assuming (which is far from a safe assumption) that there are no black swans that change the whole terms of the question.

    Regarding the Doomsday Argument: as I mentioned, I can accept that post-singularity creatures would operate a million times faster than I do. But because of the Church-Turing Thesis, I have much more trouble regarding them as belonging to a different reference class. So the question “why am I not one of them?” still has a great deal of force for me.

    It’s an interesting point that cryonics might make more sense under my assumptions than under the singulatarians’. I don’t have any strong argument against making yourself a popsicle: even if you never get thawed out, it’s still plausible that future civilizations would be able to extract something useful or interesting from your brain patterns. (It would be like writing your memoirs on steroids.) Personally I haven’t signed up for it (or even given it serious thought), for basically the same reasons I haven’t thought about any of the other practicalities of my own senescence and death. Undoubtedly I would think about them were I an expected utility-maximizer, but at least for now I’d much rather think about achieving my goals in this life.

  18. XiXiDu Says:

    http://www.spectrum.ieee.org/singularity

    Test, for some reason my first post got eaten…

  19. Nejat Says:

    Your last line is the best one – “And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 18th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.”

  20. Travis Says:

    As mitchell porter notes at the end of his post, technological progress is not irrelevant to alleviating human misery; think of how important progress in agriculture has been. That said, I tend to think that the key for solving much of human misery is more economic than technological.

    Functioning markets are the best way we know to productively organize human activity. Setting moral considerations aside for the moment, we know that severe suffering is typically non-productive, and so would tend to be reduced by markets. Now, I know people are going to counter with arguments about sweat shops and so on, so let me put it in these rather brutal terms:

    If you had to choose between peasant lifestyles, would you rather be a sweat-shop factory worker in China, or be a villager in sub-Saharan Africa (with the inherent risks of HIV, war, and nasty water-borne ailments)?

    On a related note, ScentOfViolets implies that people are forced to live in squalor so that others may live in crass splendor. This certainly does occur, but it’s not the main cause of poverty. Rather, poverty is the default human condition; it takes productive work to rise out of poverty, rather than oppression to force someone into it. By and large, I don’t think anyone is forcing the poor in Africa to live in squalor; rather, they have completely broken societies, which profit nobody.

    On a positive note, the percentage of the world’s population that lives on very meager incomes (~$1/day) has substantially decreased over the past few decades–we are making headway against global poverty.

    On a slightly meta note, this argument we’re all having here is a good example of why technological progress is “easier” than social progress. If Scott posted a correct proof of P = NP, we’d find it relatively easy to agree upon. By comparison, I doubt we’re likely to agree on how to fix the world politically, and if we can’t agree on how to start, it’s much harder to get anywhere.

  21. Peter Sheldrick Says:

    This is from the foreword of Multiple View Geometry by Richard Hartley and Andrew Zisserman
    http://www.robots.ox.ac.uk/~vgg/hzbook/

    The foreword was written by Oliver Faugeras.

    “Making a computer see was something that leading experts in the field of Artificial Intelligence thought to be at the level of difficulty of a summer student’s project back in the sixties. Forty years later the task is still unsolved and seems formidable. A whole field, called Computer Vision, has emerged as a discipline in itself with strong connections to mathematics and computer science and looser connections to physics, the psychology of perception and the neurosciences. One of the likely reasons for this half-failure is the fact that researchers had overlooked the fact, perhaps because of this plague called naive introspection, that perception in general and visual perception in particular are far more complex in animals and humans than was initially thought.”

    Just a random example, of course, but the book is great, by the way; I really recommend it.

  22. Scott Says:

    panefsky: Yes, I agree that AI, like P vs. NP, is a problem so hard that we don’t even know how hard it is.

    I see basic science as a central part of the Enlightenment project. Or at least, that’s what I tell myself in moments of doubt.

  23. Scott Says:

    James:

    Achieving this may well be more difficult than reaching a singularity especially since many governments and religions don’t want their people to embrace the enlightenment.

    Obviously, I don’t think getting everyone on Earth fully up to speed with the Enlightenment is a prerequisite to further technological progress. I do think getting more people up to speed would help progress, and even more to the point, would increase the chance of progress being applied to sane ends.

  24. Peter Love Says:

    Hi Scott,

    Great post – really enjoyed it. I do have a bleak feeling that sentences that start with

    “if humans survive long enough”

    have the same character as

    “If P=NP”

    given that, as a species, it doesn’t seem likely we’re going to make it in the long term.

    On a more cheery note, one could ask not when the richest and most technologically advanced nations will reach the singularity, but when everybody will have access to (e.g.) clean water. I recently became aware that one of your colleagues teaches a very interesting course on technology aimed at the poorest people:

    http://web.mit.edu/d-lab/

    Some grounds for optimism, after all.

    (insert d-wave/d-lab joke here)

    Cheers,

    Peter

  25. Barak A. Pearlmutter Says:

    > The Singularity Is Far

    Well good. Don’t Immanentize the Eschaton.

  26. Jay Says:

    Scott,

    While I disagree with your conclusion that the Singularity is far off, I am glad to see you’ve got well-founded criticism of Kurzweil. This keeps him from becoming a prophet figure, which would make the whole transhumanism movement look rather silly.

    Even if you think that the Singularity is far off, I can’t help but feel that it’s not smart to assume that you are right about this.

    The Singularity isn’t necessarily a good thing. We’ve got only one chance to get it right, and if we screw up, it will be the end of our world in a negative way.

    However, if we assume that it’s close… then at least we’ll be in alert-mode. And we *need* to be as prepared as we can possibly be for something *this* big.

    (is it even *possible* to be prepared for the Singularity?)

  27. Aaron Says:

    Travis: yes, poverty is the default human condition. Maintaining markets and property rights is hard, and developing them is even harder. But just because it’s the default doesn’t mean we can’t blame it on oppression. Oppression is also the default human condition. You can’t usefully talk about the problems in Africa without the dictators and their thug squads that take away anything useful that anyone tries to build. You seem to want to avoid laying the fault of that oppression at the feet of the rest of the world, but in doing so, you overreach in denying the role of oppression.

  28. Scott Says:

    is it even *possible* to be prepared for the Singularity?

    Jay: That’s exactly the question. If (like me) you believe it’s many centuries off, then how could we possibly know how to prepare for it, or whether our preparations might actually do more harm than good?

  29. John Armstrong Says:

    Barak: Hail Eris

  30. Vladimir Nesov Says:

    Scott,

    The problem is that we still can’t be sure that it’s many centuries away. It is an existential risk, so it is a very important problem, and we need to be prepared. Even if we don’t understand now how to implement intelligence, we need to have a plan for how to shape it once we do. And in shaping it, we set up the dynamic that will determine our whole future. It is a problem unlike anything we’ve dealt with before: we can’t fix it later, not by default, only if the option of fixing it later follows from the initial dynamic. Once you understand the importance and difficulty of this problem, you can’t justify how little serious consideration it receives. AI might be a problem for many decades, composed incrementally from separate nudges of progress, but unlike any other mathematical problem, the failure to get it exactly right the first time will leave you no chance to reconsider. This is why it urgently needs serious attention, not because there is or isn’t a chance to get to the finish in one breath.

  31. Anonymous Says:

    Later, at Cornell, I was lucky to learn from Bart Selman, and worked as an AI programmer for Cornell’s RoboCup team—an experience that taught me little about the nature of intelligence but a great deal about how to make robots pass a ball.

    I’m more interested in human computation. Why not use many intelligent brains as a starting point for creating something interesting?

  32. Scott Says:

    Thanks, Vladimir! I understand the argument; the question for me is how much thinking about it at such an early stage in the development of AI is going to help us get it right.

  33. Vladimir Nesov Says:

    Scott: the question for me is how much thinking about it at such an early stage in the development of AI is going to help us get it right.

    We don’t know that it’s an early stage. If AI turns out to have a key, and that key is found next year, or in five years, we won’t be as prepared as we could be, given the current state of the art. You can develop an AI without understanding what to do with it, and the world will end up in some silly configuration. Thinking about shaping the Singularity might be useless for solving the technical problem of AI, or it might be fundamental to it, but either way it needs to be done before it’s too late. We don’t even have a research community that adequately understands the problem; there is a long way to go before we can say that we’ve done everything we could, that there is nothing to add for now.

  34. Moodworves Says:

    Scott #17: “But because of the Church-Turing Thesis, I have much more trouble regarding them as belonging to a different reference class. So the question “why am I not one of them?” still has a great deal of force for me.”

    I’m not sure if it makes sense to decrease the probability that there will be many post-singularity creatures because you’re a pre-singularity creature. If you plug into Bayes’ Theorem, it would seem to imply that you’re right, except that you need the probability that you’re pre-singularity, and I don’t know how that would be defined:

    P( pre-sing | singularity )
    = P( singularity | pre-sing ) * P( pre-sing ) / P( singularity )
    = small * (undef?) / P( singularity )

    If you look at it from a frequentist perspective, I don’t see any conclusions you can make without knowing the proportion of pre-singularity creatures who live in worlds that have post-singularity futures.

  35. Moodworves Says:

    Oops, that should be:

    P( singularity | pre-sing )
    = P( pre-sing | singularity ) * P( singularity ) / P( pre-sing )
    = small * P(singularity) / (undef?)
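
    To make the corrected formula concrete, here is how the numbers would flow if the “(undef?)” marginal were filled in by the law of total probability (all priors entirely hypothetical):

    ```python
    # Entirely hypothetical priors, just to exercise the corrected formula.
    p_sing = 0.5               # prior that a singularity ever happens
    p_pre_given_sing = 1e-6    # if it happens, few observers live pre-singularity
    p_pre_given_no_sing = 1.0  # if it never happens, every observer is "pre"

    # Law of total probability supplies the marginal P(pre-sing):
    p_pre = p_pre_given_sing * p_sing + p_pre_given_no_sing * (1 - p_sing)

    p_sing_given_pre = p_pre_given_sing * p_sing / p_pre
    print(p_sing_given_pre)    # ~1e-6: the posterior drops sharply
    ```

    Everything turns, of course, on whether pre- and post-singularity observers belong to the same reference class, which is exactly the point debated above.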

  36. Dennis Says:

    Scott,

    I’ve been following your blog for some months now, and the truth is that a lot of what you talk about goes over my head. But every once in a while, you focus on something I can relate to and for which I have some understanding, and this piece was certainly one of those. Great thoughtful stuff – thanks for writing it. Cheers!

  37. math idiot Says:

    Either Kurzweil has read too much science fiction, or he thought he was writing science fiction. Actually, anyone like me can write this sort of stuff on an extremely weak basis.

    MI

  38. James Spada Says:

    This post is like saying that humans can’t break the four-minute mile, and then within a year it was beaten twice.

  39. Blake Stacey Says:

    I read Kurzweil’s The Age of Spiritual Machines not long after it came out, and his knowledge of anything other than a few specific applications of computing technology read like he’d gotten everything from the inside front cover of a pop-science book. His treatment of evolutionary biology was singularly bad.

  40. Raoul Ohio Says:

    Kurzweil is the prime example of someone brilliant, outspoken, and somewhere on the pleasantly wacky — totally bonkers continuum. Seriously reviewing his theory reminds me of a high school reunion when a seminary student, upon learning that I was a mathematician, challenged me to debate the question of “did they steal Jesus’ body out of the cave?” Being a few drinks to the good, I replied, “HTF should I know? That was 2000 years ago. I don’t even know where Jimmy Hoffa’s body is.”

  41. Andrew Says:

    “Also, if the Singularity ever does arrive, I expect it to be plagued by frequent outages and terrible customer service.”

    This is the best quote I’ve found all year. I love you, Scott.

  42. michael vassar Says:

    Scott:

    As a historical note: according to the Szilard biography that I just read, as late as 1939 most physicists believed that nuclear chain reactions were impossible. Einstein and Fermi were particularly noted as resistant to the idea.

    I would really like to know more about your take on molecular nanotechnology. Unlike AI, it seems like a problem that we can sensibly expect to take decades, not centuries, and it seems hard to justify disbelieving in brain simulations within a few decades of molecular nanotechnology given the hardware and scientific tools which nanotech will make available.

    It seems to me that it is generally considered “conservative” both to claim that the 21st century will see as much technological change as the 20th did, or more, AND to make plans for global warming, overpopulation, Medicare, etc. under the assumption that it will see no substantial changes of any sort, certainly none as significant as the internet. Do you agree that this is the case, and that there is a contradiction in these attitudes? Do you agree that it is presently considered radical to actually expect 2108 to be as different from today as 1908 was? It seems to me that it is VERY difficult to imagine how science and technology could develop as much as they did during the last century without bringing both global destruction and substantial intelligence augmentation within the technological reach of small groups of IQ

  43. Simen Says:

    James (#38): there weren’t any conceptual barriers to be broken in order to run a four minute mile. There are many conceptual problems yet to be solved in order to be anywhere near anything that could be plausibly called Singularity.

    Besides, Scott isn’t saying Singularity is impossible, he’s saying that if it ever comes, it’s probably far off in the future.

  44. michael vassar Says:

    I want to put in a less than sign here but it doesn’t work. I’ll leave it out this time.

    160 people working for a few years. Even a repeat of the last 50 years should have this effect (maybe raise the IQ bar to 170?).

    I don’t ask you to be a Utilitarian or any other consistent form of optimizer. All I ask is that while you take actions that are specifically justified as altruistic you act as an optimizer. In other words, don’t quit your day-job, but when you are trying to save the world, actually focus where there seem to be large benefits, e.g. preventing Republicans from controlling the US and figuring out how to propagandize for rationality and overcome bias rather than fighting global warming.

    I am skeptical as to the claim that the Doomsday relevant reference class is “Church-Turing complete observers” for several reasons. For one, it seems to me that this divides the human species, leaving me in a reference class that excludes many of my fellow humans but which includes brains the size of Jupiter. For another, it is far from clear to me that we should expect large populations after the singularity. Relatively few (nanomoles?) extremely large agents seem like a more likely alternative. Do you think that it is likely that with respect to anthropic arguments most humans are like dogs and you are like a Jupiter Brain because you know some specific math that they don’t?

  45. Scott Says:

    Michael V.: I’m hardly an expert on molecular nanotechnology. But I’ll admit, the fact that so much of the Transhumanist vision rests on that one pillar, and that many of the experts in the area say that (at least for the foreseeable future) it can’t bear the weight, raises a red flag for me. I read with interest Richard Jones’ piece in IEEE Spectrum, which I thought was the best piece in the issue since it actually engages the opposing arguments on a somewhat-technical level. Basically, Jones is saying that to make nanomachines that can work reliably in the hot, wet environment of the body (as opposed to a vacuum at near 0K), nanotechnologists are going to have little choice but to reinvent molecular biology (or to reprogram existing molecular biology for their purposes). And that they’ll therefore encounter, in a different setting, essentially the same problem as AI researchers have: namely, the problem of playing catch-up against three billion years of evolution.

  46. Scott Says:

    Michael, WordPress eats greater-than and less-than signs—use &lt; and &gt;.

  47. Scott Says:

    Do you think that it is likely that with respect to anthropic arguments most humans are like dogs and you are like a Jupiter Brain because you know some specific math that they don’t?

    When I learned the union bound, I could feel my brain expand to the size of Jupiter… 😉

  48. Douglas Knight Says:

    to make nanomachines that can work reliably in the hot, wet environment of the body [is hard]

    What are the logical consequences of this? You seem to be ignoring the difference between “and” and “or.” That problem is relevant to biological immortality or uploading, but not to easier consequences of MNT, like immediately solving global warming or immediately causing much bigger problems.

  49. michael vassar Says:

    Scott: As far as I can tell, the pillar that Transhumanist thought rests on is materialism.

    If molecular nanotech can’t work, other technologies will (though see my comments on Jones here:
    http://www.softmachines.org/wordpress/?p=175
    http://www.acceleratingfuture.com/michael/blog/2008/06/response-to-dr-richard-al-jones-ieet-spectrum-piece-rupturing-the-nanotech-rapture/
    http://crnano.typepad.com/crnblog/2005/04/nano_research_f.html
    http://crnano.typepad.com/crnblog/2004/11/mainstream_acce.html
    ).

    I would also like to mention this:
    http://nextbigfuture.com/2008/06/achieving-mundane-technological.html

    MNT doesn’t need to work in the body to have a VERY large impact on health VERY fast. It just needs to build large (e.g., eukaryotic-cell-sized) machines that can, and that’s just for health and biological research purposes. Elsewhere, cold vacuum is fine.

    Also, can you respond to my other points, e.g. utilitarian altruism, extrapolating progress, etc?

  50. Scott Says:

    Douglas: Yes, Jones was talking specifically about brain scanning and uploading.

    Nanotech has already led to much more efficient solar cells in the lab. I find this tremendously exciting, and hope many of the world’s deserts are blanketed with these cells sooner rather than later. This is a real application of nanotech.

    As for the gray goo scenario, if that’s what you were alluding to: again I’m not an expert, but it seems to me that replicating nanomachines would either have to feed off inorganic matter—in which case I’d wonder where they were getting all their energy from—or else organic matter, in which case they’d again have to survive in extremely hostile environments, and would, in effect, be viruses and bacteria reinvented (“welcome to the club!”).

  51. Bruno Loff Says:

    If I could press a button to free the world from loneliness, disease, and death—the downside being that life might become banal without the grace of tragedy—I’d probably hesitate for about five seconds before lunging for it.

    Well, I personally would not care to live in a world where such a button has been pressed. There are so many interesting things that would not exist without suffering (e.g., most poetry), and I don’t think the richness of life can be reduced to this happiness/unhappiness spectrum.

  52. John Sidles Says:

    If I could press a button to free the world from loneliness, disease, and death—the downside being that life might become banal without the grace of tragedy—I’d probably hesitate for about five seconds before lunging for it.

    You whippersnappers might enjoy reading Cordwainer Smith’s classic series of SF stories on this theme, written in the 1950s, which have since been reissued in one volume as The Instrumentality of Mankind.

    Smith’s writings were so astonishingly prescient about the intersection of cognition, biology, technology, and ethics that it’s not clear whether any writer since has had all that much new to say on this topic.

    Stories like these are called “classics” for good reason. 🙂

  53. Silas Says:

    Scott, about this: I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up;

    Do you know of anyone who has posted “counter-stats” to Kurzweil’s, ones that show exponential worsening? That would be great information to have. (I’ve made similar criticisms when debating inflation.) Here are stats that would be useful for making your point:

    -Adjusted time spent in work-related transit per person-year. (Count transit requiring the traveler’s full attention [i.e. driving] as higher because that’s more opportunity lost.)
    -Energy (of all types) expended per food calorie produced, measured at final point of consumption.
    -Land area required per unit of real GDP. (That is, measure how much land is needed to be in use to get a unit of production. And you can choose to exclude from “production” anything that is not a final output so as to adjust for all the waste.)
    -Fraction of a person’s week spent doing work-related things. (I’ve looked at a typical day for my white-collar parents and it’s not good. Something like getting up at 4:45am and getting home at 6:15 pm.)

    What say you?

  54. Travis Says:

    Aaron: I’m not sure I understand what you mean by “fault” with regard to oppression in Africa, etc. Is the rest of the world responsible for people like Mugabe simply because of our inaction in stopping him, or are you claiming we actively create such dictators?

    If you mean simply that oppression arises naturally and locally, and the free world has an obligation to stop it wherever reasonably possible, I’m inclined to agree with you, although such interventions are very often politically unpopular and have unintended consequences.

    If you mean that oppression always has an outside source (i.e., us, in the Western world), then we disagree.

  55. Kaj Sotala Says:

    Bruno Loff:
    Well, I personally would not care to live in a world where such button has been pressed.

    And there are plenty of people who would not care to live in a world where such a button hasn’t been pressed.

    You do realize that you’re, in effect, saying it’s okay that (among other things) people live such miserable lives that they end up killing themselves (with all the associated misery not only for themselves but for their loved ones and relatives as well), because that gives us nice poems, right?

  56. michael vassar Says:

    Silas: Look to Phil Goetz for that, though not those particular stats.

    Scott: Molecular nanotech has far likelier harmful uses than grey goo, such as the extremely cheap mass production of conventional and nuclear weapons. Cheap separation of uranium isotopes from seawater, or manufacturing cruise missiles from petrochemical feedstocks, appears much easier than building free-living replicators. The Center for Responsible Nanotechnology used to write lots of articles on the more serious dangers of molecular nanotech. Even simple economic disruption might count.

    I have published a paper with Robert Freitas arguing that while defending against grey goo isn’t easy, and may not even be solvable in the general case, it should be possible to build systems that defend against relatively simple forms of grey goo; the engineering skill required to build complex forms is unlikely to be wasted on such futile tasks.

  57. Patrick Says:

    It’s worth noting that bringing the Enlightenment to more people seems to increase consumption of resources and increase the probability of Bayesian tragedy. (See China, India.)

    Maybe we should figure out a way to make our lifestyle sustainable before we try to make it too ubiquitous.

  58. Scott Says:

    Patrick: The whole problem is that people aren’t consistent in their embrace of the Enlightenment. They take the fruits of a self-questioning scientific attitude (like cars and electric power) but reject the attitude itself. I was talking about spreading the latter, not the former.

  59. StCredZero Says:

    How about, “The Singularity does a Grover Impression?”

    “Near!…Far!…Near!…Far!…”

  60. Bram Cohen Says:

    Estimates of when the singularity will happen generally are set earlier than my estimate of when we’ll have ubiquitous HD videoconferencing, which would seem to be a significant discrepancy.

    Funny coincidence about Bart Selman – I studied under him for two summers. Did you hear mention of Walksat? That’s what I came up with, and it was the best known technique for a while until survey propagation totally cleaned its clock.
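
    (For readers who haven’t seen it, a minimal sketch of the WalkSAT idea — a paraphrase of the published algorithm, not Selman, Kautz, and Cohen’s actual code; the clause encoding here is this sketch’s own convention:)

    import random

    def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
        # Clauses are lists of nonzero ints: +v means variable v, -v means its negation.
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
        sat = lambda c: any((lit > 0) == assign[abs(lit)] for lit in c)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return assign  # satisfying assignment found
            clause = random.choice(unsat)
            if random.random() < p:
                var = abs(random.choice(clause))  # noise step: flip a random variable
            else:
                # greedy step: flip whichever variable leaves the most clauses satisfied
                def score(v):
                    assign[v] = not assign[v]
                    s = sum(sat(c) for c in clauses)
                    assign[v] = not assign[v]
                    return s
                var = max({abs(lit) for lit in clause}, key=score)
            assign[var] = not assign[var]
        return None  # gave up; the formula may still be satisfiable

    # (x1 OR NOT x2) AND (NOT x1 OR x2) AND (x1 OR x2)
    print(walksat([[1, -2], [-1, 2], [1, 2]], 2))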

  61. Scott Says:

    Michael:

    As far as I can tell, the pillar that Transhumanist thought rests on is materialism. If molecular nanotech can’t work … other technologies will

    I thought we were talking about what’s plausible within a century or two. I said the Singularity is far, not physically impossible.

    Also, can you respond to my other points, e.g. utilitarian altruism, extrapolating progress, etc?

    There’s something about your presumptuousness that I find endearing. 🙂 I do try to focus on things that seem important, though I know I often fall short, both in my judgments and in living up to them. I’m glad you agree that keeping the Republicans out of power is important. I think the climate crisis is also pretty important, though we seem to disagree there.

    It seems to me that it is generally considered “conservative” both to claim that the 21st century will see as much or more technological change as the 20th saw, AND to make plans for global warming, overpopulation, Medicare, etc., under the assumption that it will see no substantial changes of any sort, certainly none as significant as the internet. Do you agree that this is the case and that there is a contradiction in these attitudes?

    Yes and yes. My own prediction is that by most measures, there will not be nearly as much technological change in the 21st century as there was in the 20th. I see many aspects of civilization as approaching the right-hand-side of a sigmoid (in the best case) or bell curve (in the worst case).

    My reasons? Firstly, I think that there was already more technological change in the first half of the 20th century than in the second half. Certainly the delta from 1900 to 1950 in how (say) the average American lives seems far greater than the delta from 1950 to 2000. Transportation, construction, power generation, and medicine all seem to have improved depressingly little in the last half-century. And even with some of the obvious exceptions—computers, the Internet, space travel—we already see signs of slowing down. Moore’s Law has essentially ended, if defined in terms of MIPS per core: future improvements will require more and more parallelism, which itself yields diminishing returns for many problems. The moon landings were followed not by the waves of colonization people at the time imagined but by stagnation and retreat; it’s not even clear that we still have the ability to put humans on the moon.

    What else? We’re at the global Hubbert peak right about now. Transcontinental travel might soon revert to being an extremely-expensive luxury. We’re probably approaching a leveling-off of human population. There’s no longer much desirable unused land on which to found new societies, cultures, or industries.

    Maybe none of this matters, since advances in AI, nanotech, and other singulatarian pursuits will render everything else irrelevant. And if humans survive long enough, maybe there will be only a temporary technological slowdown, followed by a renaissance in fields that don’t even exist today.

    But looking at the world as it is, what I see is almost the opposite of what Kurzweil sees.
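
    (To put numbers on the exponential-versus-sigmoid point: a minimal sketch, with an arbitrary made-up ceiling, showing that a logistic curve is indistinguishable from a pure exponential until the ceiling approaches:)

    import numpy as np

    t = np.linspace(0, 10, 11)
    exponential = np.exp(t)
    K = np.exp(20)  # logistic ceiling ("carrying capacity"), placed far away
    logistic = K / (1 + (K - 1) * np.exp(-t))  # same starting value and essentially the same initial growth rate

    # Maximum relative gap over the early range: about 5e-05
    print(np.max(np.abs(logistic - exponential) / exponential))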

  62. Scott Says:

    Bram: Yes, of course I heard about Walksat! I didn’t put together that you were involved with it. Cool.

  63. Koray Says:

    Nice post, Scott. I especially agree with you that such a long-term, big-idea goal will be revised so many times that it’s meaningless to even give dates.

  64. David Says:

    Scott:
    Is there any example of a prognostication about the 21st century written before 1950, most of which doesn’t now seem quaint?
    The 1889 novel Anno Domini 2000, or, Woman’s Destiny by Sir Julius Vogel (former Prime Minister of New Zealand) is a remarkable futuristic piece of writing:

    women hold many of the highest posts of office,
    reverse migration to a prosperous Ireland,
    Europe becomes fully federated,
    British royalty is strengthened by marriage between the ‘Emperor’ and a commoner woman of great charisma,
    air travel is universal, in lightweight aluminium ‘air-cruisers’ powered by ‘quickly revolving fans’ (16 years before the Wright brothers),
    instant communication technology in the form of ‘hand telegraph’ or ‘noiseless telegraph’, which politicians have fitted to their desks and journalists use to transmit copy directly to their newspapers,
    social welfare system provides living comforts even for the poor,
    electricity is the prime source of domestic light and heat and most houses in hot climates have air conditioning.

    You can read the full thing online, or get a nice summary from the 2000 edition’s introduction on Google Books.

  65. michael vassar Says:

    I’m very happy to see Scott take a position that I find highly intellectually defensible. While I don’t quite agree that progress in the last 50 years was slower than in the previous 50, it doesn’t seem much faster to me either. Globally, life expectancy rose faster recently, and in the West life expectancy at age 5, and especially at age 10 or 20, rose much faster. GDP growth rates were higher, especially globally but also in the developed world, in per capita terms, but in the developed world GDP growth was slower in absolute terms. Television takes 4 hours from the average person’s day, and the set of jobs is very substantially changed as well, so the typical American of working age spends 3/4 of their waking hours very differently from 50 years ago. Then there are computers… and the changes associated with “the 60s”. All in all, it seems to me that each of the last three 50-year periods saw comparable though slightly increasing change in the developed world, but rapidly increasing change globally.

    As for land, how can you say we are running out of land? For starters there’s the new Northwest Passage and all that permafrost in Canada, Siberia, and Greenland! Seriously though, there has rarely been much desirable uninhabited land, unless by “uninhabited” one means “inhabited by poorly armed people of a different religion or ethnicity”. What there was plenty of 50 years ago is undesirable, sparsely inhabited land which could rapidly be made desirable by building infrastructure. Exurbs are still an example of this, but a poor one, and we have largely lost the culture that enables us to build new infrastructure.

    Anyway, these are all quibbles, and I accept a slightly modified version of the thesis. We do seem to have seen constant returns in progress from exponentially increasing investments in science, so a non-naive extrapolation might reasonably predict a serious global scientific and technological slowdown as soon as we run out of new populations to absorb into the global scientific enterprise, e.g. within 30–40 years.

    The difficulty of the “other technologies” I alluded to as alternatives to molecular nanotech may be a place where we actually disagree in a practical sense. I see nanotech as a way of accomplishing in the 2030s or 2040s technological goals which would otherwise be achieved in the 2060s or 2070s, given the continuation of technological progress at a rate similar to the norm for the past 150 years. I don’t think that tech progress can slow by half without seriously endangering the stability of our society, creating negative feedback loops, and choking itself off much further; so under an ultra-conservative but still progressive model, MNT-level capabilities are achieved in the mid-22nd century, after 30 years of current-rate progress and 100 years of progress at half the current rate. However, I would consider this model much less likely than the bell-curve-shaped decline you allude to.

    I will be in Boston in 2 weeks to meet with Nick Bostrom. I’d be happy to continue this discussion with you in person then.

  66. Silas Says:

    How about, “The Singularity does a Grover Impression?”

    “Near!…Far!…Near!…Far!…”

    OMG! My friends and I still love that Grover routine! YouTube link

  67. Scott Says:

    StCredZero and Silas: This is hilarious.

    Before clicking the YouTube link, I wasn’t sure what you were talking about, and assumed that you meant Grover’s algorithm approaching the marked item, then moving away from it, then going back to the marked item, then away from it again, etc. (which is indeed what it does).

    Is this the first time in history that Grover’s algorithm has been honestly confused with Sesame Street Grover?

  68. Scott Says:

    I’d be happy to continue this discussion with you in person then.

    Michael: Sure! Send me email.

  69. Silas Says:

    Scott: lol! Awesome. You really need to start telling people that Grover’s Algorithm is named after the Sesame Street routine, and see how long it takes for that misconception to die 😛

    By the way:

    >>I’d be happy to continue this discussion with you in person then.

    >Michael: Sure! Send me email.

    Very cold, bro.

  70. Cody Says:

    I originally heard it was Bohr, but the true source appears to be unknown:

    Prediction is difficult, especially about the future.

    I wish I could refrain from all that.
    Assuming we did make AI, and it was a threat to us, and we weren’t prepared, wouldn’t it be relatively easy to destroy it in its infancy? Especially considering we could always store the plans and rebuild it later, when we understood what we were doing better (so even if its status as a threat were in dispute, we could always just postpone its widespread adoption at our whim).
    It seems like a far more minor threat than, say, unregulated lead smelting, destroying the ozone layer, chemical warfare, certain contagious diseases, widespread environmental damage, water supply contamination, and (potentially) global warming—since none of those problems can be bombed into oblivion.
    Scott, I really like what you said about bringing the Enlightenment to the masses, and your further explanation. Even in America, where the greatest number of people have reaped the greatest benefits of technology, the very existence of the Republican party in its current state seems indicative of a shortfall in the integration of the Enlightenment into society. Prevalent religious fanaticism even more so.

  71. AlainR Says:

    Am I really the first to notice that the Enlightenment took place in the 18th century, not the 17th?

  72. Scott Says:

    Alain: Wikipedia says many scholars believe the Enlightenment started in the 17th century. But just for you, I changed it to 18th.

  73. John Sidles Says:

    A Google search for “Judaic Treasures of the Library of Congress: Spinoza’s Opera Posthuma” will find a facsimile of Thomas Jefferson’s personal copy … which was already almost a century old when Jefferson acquired it.

    Being interested in both Spinoza and Jefferson, I arranged with the Librarian of Congress to examine these volumes … which proved to be an unsurpassable thrill on multiple levels simultaneously. 🙂

    So although for us, the Enlightenment started in the 18th century, for Jefferson (at least) it clearly started in the 17th.

  74. Brian Wang Says:

    Many of the goals of transhumanism and the expected results of a technological singularity can be achieved without AI or diamondoid molecular nanotechnology. It just requires more organization and effort, but there are other technologies being worked on now to make it happen. (Up several comments is the reference to my article on a mundane singularity, which is how to make it happen without diamondoid mechanosynthesis, without much nanotech beyond what is already working, and without AI.)

    Note: very little effort and resources have been put into molecular nanotechnology. The billions spent on nanotechnology have primarily been for regular chemistry relabeled.

    Philip Moriarty has been funded to run experiments to validate the computational chemistry work of Robert Freitas and Ralph Merkle, which would show how to make diamondoid mechanosynthesis viable.

    Richard Jones is not an expert on all nanotech. He is biased towards his softmachines view, which he wrote about.

    Nothing wrong with soft machines, and there is a lot of working DNA nanotechnology (using DNA as a structural material and for manipulation of other matter). But that does not mean diamondoid mechanosynthesis can’t work.

    Since you have worked in AI before, you should know that a lot of the AI research done decades ago has been turned into a multi-billion-dollar industry, where over half of all financial transactions are controlled by what was once considered AI. Advanced nanotech and AI share the problem that whatever works gets relabeled and moved out, so what is left is what is still not achieved.

    Transportation, construction, power generation, and medicine all seem to have improved depressingly little in the last half-century. And even with some of the obvious exceptions—computers, the Internet, space travel—we already see signs of slowing down. Moore’s Law has essentially ended, if defined in terms of MIPS per core: future improvements will require more and more parallelism, which itself yields diminishing returns for many problems. The moon landings were followed not by the waves of colonization people at the time imagined but by stagnation and retreat; it’s not even clear that we still have the ability to put humans on the moon.

    The moon landings were political camping trips. NASA is mostly pork-barrel spending that is labeled a space program. Of course, if there is no overall plan to industrialize space or colonize it, or to really attack the problem of cost of access, then space will have a tough time producing any worthwhile achievements. Fortunately there are now wealthy people with more ambitious plans that could lead somewhere in space (SpaceX, Bigelow, etc.), and better tech strategy (fuel depots in space to reduce costs, instead of meaningless space stations).

    Caterpillar just funded contour crafting: building with a big inkjet-style printer that extrudes cement. It will build 200 times faster than current methods.

    There are plans for an exaflop supercomputer using Tensilica processors.

    Zettaflop computers have had technical conferences and look achievable with on-chip photonic communication plus other architectural improvements.

    Power generation had funding issues in the ’70s, when high interest rates helped kill nuclear power. The Three Mile Island and Chernobyl accidents were misunderstood: no one died at Three Mile Island, and Chernobyl was a Russian reactor with no containment dome. People complain about power generation and then do not take the time to understand the energy issues. There are now new reactors being built that can perform a deep burn of nuclear fuel, going up to 50–99% burn of the fuel instead of 3–6%. That means almost no waste with a half-life of more than 30 years. Waste is unburned fuel.

    With the right choices, strategy, and technology it will be easy to boost economic growth to 10–15% per year worldwide, and to support population levels on Earth many times higher than now, with all those people much wealthier than people today.

    Tell me what technological thing you think cannot be done, and I can tell you why it has not happened and the several better ways to attack the problem.
    Peak oil – easy: genetically modified seaweed (Japan), jatropha in waste areas (India), algae biofuel (many companies), biofuel from garbage (several companies), mass-produced nuclear power (Hyperion Power Generation, China’s high-temp reactor).

    Climate change – easy: Calera cement will absorb 1 ton of CO2 per ton of cement instead of releasing it (2.5 billion tons of cement are used per year now), plus the mass-produced nuclear reactors.

    Enough food – genetically modified plants and animals, plus more fish farms (over half of the world’s fish comes from fish farms, mostly in China; we just need to make fish farming cleaner and double or triple its size). Also vertical farming, for more food grown in cities.

    Many problems are not intractable. People either did not approach them the right way or the right people did not care to solve them.

  75. KaoriBlue Says:

    Brian,

    “The billions spent on nanotechnology have primarily been for regular chemistry relabeled.”

    I certainly hope you’re onboard for plenty of regular chemistry… an impossible number of breakthroughs will be needed to realize even a small piece of Drexler’s ludicrously complex vision for nanotechnology. I actually couldn’t be happier about this.

    Also, I can almost promise you that ‘DNA nanotechnology’ is not going to help with anything you’re envisioning.

  76. Brian Wang Says:

    I doubt that KaoriBlue is basing this on any evidence related to the advancing work of the past few years; it sounds like incorrect intuition.

    Take a look at the recently funded DNA-origami-based nanomachine with a gold nanosphere antenna, aimed at direct communication with cells (including neurons) and future microsurgery. There is a nice graphic.

    There are DNA sewing machines.

    IBM is using DNA to assemble carbon nanotubes into grids for new computer processors.

    There are DNA pistons for powering nanodevices.

    DNA was used to build a three-dimensional structure out of 15-nanometer gold nanoparticles (millions of nanoparticles).

    They have created synthetic and novel bases, extending and replacing the DNA alphabet. DNA can be mixed with other nanomaterials (gold nanoparticles, carbon nanotubes, fullerenes, etc.). The chemistry of DNA is extended.

    Microbubbles and nanobubbles are enabling faster labs on chip to go beyond the microchannels arrays currently used. (MIT work).

  77. Brian Wang Says:

    That is another point: many problems are well on their way to being solved, but people do not know what work is happening, or do not understand the problems, what has been done (energy, oil, nuclear, etc.), and what happened in the past.

    Why were the current nuclear reactors adopted? Because the US was working on nuclear reactors for submarines when the UK and Russia made commercial nuclear reactors, and this is the type of reactor the US was working on. There were other reactor designs known and in development that would have been better for commercial nukes, like molten salt reactors.

    People do not realize that there are many different reactors possible, or how to compare them. They also do not realize that solar power, for all of its press, is 0.1% of the power generation in the world, versus 6% of all power for nuclear (which is 16% of all electrical power).

    As for diamondoid mechanosynthesis and those who claim it is impossible because decades have passed without it happening: a lot of time can pass with almost zero funding. One will get no closer to completing a marathon by staying on the couch. It does not mean a marathon cannot be done.

    As for Drexler, his early work started by talking about protein engineering and DNA nanotechnology; it was not exclusively diamondoid nanotech. Richard Jones has tried to ignore all the research and work done at the Foresight Institute and the Institute for Molecular Manufacturing (both initiated by Drexler) and to pretend that the only proposal put forward was diamondoid nanobots.

  78. Bram Cohen Says:

    Forgive my cynicism, but it’s pretty clear that the only reason ‘nanotech’ is a bigger buzzword now than ‘microtech’ was in the ’90s is that ‘nanotech’ sounds a lot cooler.

  79. John Sidles Says:

    To bounce back Bram Cohen’s post: it’s pretty clear that the reason ‘nanotech’ is a bigger buzzword now than ‘microtech’ is implicit in the Fermi question: “Which is larger, the number of stars in the causally connected universe, or the number of atoms in a child’s little finger?”

  80. Anna Salamon Says:

    Scott,

    Great post. I followed the link from Overcoming Bias, disagree on various points, but found you well worth reading.

    How strong do you think the analogy is between intelligence levels and computational complexity classes?

    I agree that the Church-Turing thesis is suggestive as an analogy and that it lends plausibility to the claim that intelligence has some qualitative plateau beyond which all one can do is increase the speed. Would you claim more than that? In particular, would you claim there is some natural sense in which humans can be said to be Turing-complete?

    The best cashing out I can find of claims like “humans are Turing-complete” is that humans, when combined with some fairly simple external artifacts such as pen and paper and particular sorts of training, can emulate Turing machines (or C programs, lambda calculi, etc.). However, the same is probably true of mice: one can probably condition mice (or engineer micro-organisms) to emulate a particular Turing-complete formalism (e.g., Wolfram’s rule #110) in combination with some sort of physical state-recording device. If this sort of emulation does not distinguish between humans, mice, engineered micro-organisms, or engineered computer chips, this reading of “Turing-complete” does not sound like the sort of criterion that should put humans and Jupiter brains in a single reference class for observer-selection experiments while excluding mice etc. Your intuition about the numbers involved in the Doomsday argument would therefore need to rest on either a merely qualitative analogy (i.e., “intelligence and computation seem similar, so maybe both have a plateau”) or on some cashing out of “Turing-complete” that I haven’t been able to find. (There may be such a cashing out; my researches here have been far from exhaustive.)
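
    (For concreteness, a minimal sketch of the rule-110 formalism mentioned above, the one-dimensional cellular automaton Matthew Cook proved Turing-complete; the ring size and starting pattern are arbitrary choices of this sketch:)

    RULE = 110  # next-state lookup table, encoded in the bits of the number 110

    def step(cells):
        # Each cell's next state is the RULE bit indexed by its (left, self, right) neighborhood.
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31 + [1]  # a single live cell on a ring of 32
    for _ in range(16):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)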

    My guess is that “computational complexity levels” is not a maximally good cashing out of “intelligence levels”. I don’t know a good way of cashing out “intelligence”, but if I had to grab something I’d go more for “computational speed and ease with which the system can be re-programmed”, “speed and accuracy in approximating Solomonoff induction”, or “speed and accuracy with which the system can achieve its goals in a range of real-world environments”.

    I’ve been banging my head on the problem of how to define intelligence (if there is a good definition; it might be that enlightened AI theorists would replace our term “intelligence” with zero or several crisp concepts). I’d love to hear any thoughts, references, etc.

  81. KaoriBlue Says:

    Brian,

    “I doubt that Kaoriblue is basing this on any evidence related to the advancing work over the past few years but on incorrect intuition.”

    Perhaps my intuition is incorrect. But the literature certainly seems to back up the assertion that most application-based ‘DNA nanotechnology’ involves building toys that don’t work very well, aren’t really useful for anything, and can easily be replaced with better/cheaper/simpler approaches. For example, there are plenty of other ways to activate neurons using light (non-toxic ligands with photocleavable protecting groups, for example). You don’t need a crazy Rube Goldberg-esque plasmonic laser that… I don’t even understand how the receptor-activation scheme makes sense. I have to admit, though – the hype around the field is astounding.

    Can you name/point me to a killer app for DNA nanotechnology?

  82. Geordie Says:

    KaoriBlue : Killer app for DNA nanotechnology: Molecular diagnostics http://www.adnavance.com/ .

  83. Jonathan Vos Post Says:

    Dear Scott,

    I ask you again to please email the post that did not get through your filter.

    Now that I am again being paid for teaching Chemistry, Biology, Anatomy, and Physiology (after several years of teaching Math and Astronomy), what I carefully composed and submitted to this thread before my PC crashed about a week ago would be useful in my classroom.

    I mean no disrespect for you when I say that MOST people in AI, and CS in general, are undereducated in Chem and Bio and Brain Science. And I AM a published Mathematical Biologist with an MS in Computer Science. I AM an expert in molecular nanotechnology. And, though a coauthor with Feynman, I am NOT in the Cult of Feynman. And though I helped Drexler with early support and publicity (in Omni and Analog, for instance), I am NOT in the Cult of Drexler.

    So, though you may have moderated-out my (to me) lost but carefully written comment (perhaps because I am willing to call a cult a cult), please send it to me for my paid classroom use. You and I are professional researchers and professional teachers. To me, that trumps popularizers such as Ray Kurzweil, or Bayesian bloggers.

    The biblical command “Go forth and multiply” (Genesis 1:28), poetically phrased, implies by ecclesiastical consensus a hidden clause after the last word. It is commonly interpreted to read “Go forth and multiply” (as do rabbits, else be condemned to hell).

    Which leads to Fibonacci’s rabbits (about as much Bio as most CS folks know), and Malthus. Thomas Robert Malthus FRS (13 February 1766 – 23 December 1834) wrote that societies through history had experienced (as you started addressing with “doomsday”) epidemics, famines, wars: events that related to the inherent problem of populations exceeding their resource limitations:

    “The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation. They are the precursors in the great army of destruction, and often finish the dreadful work themselves. But should they fail in this war of extermination, sickly seasons, epidemics, pestilence, and plague advance in terrific array, and sweep off their thousands and tens of thousands. Should success be still incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow levels the population with the food of the world.”

    To give a mathematical perspective to his observations, Malthus proposed the idea that population, if unchecked, increases at a geometric rate, while the food-supply grows at an arithmetic rate.
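
    In modern notation (a standard formalization, not Malthus’s own): unchecked population grows geometrically while the food supply grows arithmetically,

    \[
    P(t) = P_0\, r^{t} \quad (r > 1), \qquad F(t) = F_0 + k\, t \quad (k > 0),
    \]

    so the food available per person, $F(t)/P(t)$, tends to zero no matter how large $k$ is.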

    Which leads to discussion of exponential growth, and the Cult of The Singularity (tied to the Cult of Transhumanism and the Cult of Nanotechnology), and the bad math of The Singularity Is Near by Ray Kurzweil, who literally misunderstands the mathematical term “singularity” (a point at which an equation, surface, or the like blows up or becomes degenerate) and claims to see one in any growth curve, without ever suspecting sigmoidal curves (i.e., the logistic function). Which reminds me of another of my mentors, Herman Kahn, pointing out around 1960 that the so-called exponential population increase would hit an inflection point in the 1970s. Experts scoffed. He seems to have been right. Regarding sigmoids, see also Peak Oil.

    Scott, there are great Chem, Bio, and Brain Science experts at MIT. You might hang out and chat with them now and then, and share your insights with us.

  84. KaoriBlue Says:

    Geordie,

    Ooops, I should have made clear that I was talking about structural ‘DNA nanotechnology’: think nanoactuators, nanorobots, scaffolds for computer chips, etc. Just look at the link Brian provided for a DNA chassis… with an onboard plasmonic laser for zapping individual ion channel receptors. Even if it somehow worked perfectly – you’d need a ridiculous number of them to cleanly trigger even a single neuron, it would almost certainly set off a massive interferon response, and (aside from my suggestion) light-activated ion channels have already been directly engineered in rat hippocampal neurons. Avoiding more cynical interpretations, it strikes me that most of this stuff is complex just for the sake of being complex.

    I certainly did not mean to knock or diminish the important work people have been doing on, say: electrical detection of hybridization events on gene/protein-chips, ‘hardening’ nucleic acids with fluoro/methyl/etc. modifications for more potent siRNA (and avoiding immune responses), or using block copolymers coupled with aptamers for cell-specific transfection of nucleic acids.

  85. null Says:

    A poorly defined singularity leads to a poorly defined time scale:

    The time when machines would be better players in chess than the best humans.
    The time when all of human knowledge can be accessed from any location on Earth.
    The time when we can look into the living human body with a resolution of less than 1 mm.
    The time when machines would be better at translating written work from one language to another.
    The time when machines would do better than the average HS student on the SAT test.
    The time when 50% of the GDP of the US would be attributed to capital (robotics and IT).
    The time when machines would prove P=NP or otherwise.
    The time when machines would be able to write a blog like yours (call center equivalent).

    “Upload our brains to computers” does not seem like something that would be useful once the technology allowed it. Therefore I predict that it will never happen.

  86. Cody Says:

    null: well said! (Not that I previously thought that way, but I certainly endorse it.)

  87. KaoriBlue Says:

    Null, Cody: I actually strongly disagree with you folks on this point. While I don’t entirely understand what it means to ‘upload your brain’ (or what the point of that would be), I certainly think we’re getting close to being able to cryo-image brain slices with sufficient resolution to roughly reconstitute the neural circuitry of, say, the human primary visual cortex (you wouldn’t need to get the topology exactly right). With existing Petaflop machines, we can even begin to simulate it (a few projects like this are underway).

    I think this will represent a real step forward in our attempt to make – though not necessarily understand – intelligent machines. They’ll probably be better at things like identifying/classifying objects in images, stopping spam, translating text, and perhaps securities trading. However, I can’t really imagine this ever generating anything worthy of a Fields Medal (at least on its own) or leading to any sort of rapture/singularity/whatever.

  88. Cody Says:

    I think what null meant (if they don’t mind me speaking on their behalf) was that uploading an individual’s specific brain is mostly meaningless, and that the way people like Ray Kurzweil seem to be anticipating that day is a bit misguided. (In my opinion, a modern dream of self-preservation through a mechanism equivalent to immortality.)

    I think null‘s words are equivalent to yours:
    “ ‘upload our brains to computers’ does not seem like something that would be useful…”
    =
    “While I don’t entirely understand what it means to ‘upload your brain’ (or what the point of that would be)…”

    But I do think you both (as well as I) agree that certain human-equivalent capabilities would be both desirable and reasonably attainable; I think null set out a number of fun checkpoints in A.I. evolution, many of which I would imagine are not that far off. While I can’t say whether it will be achieved through “cryo-imaged brain slices”, I do think our recent construction of petaflop computing machines is a positive indicator.

    I also agree with your second paragraph, except for the Fields Medal part, and possibly the singularity part (though, as null pointed out, that is currently ill defined). Have you read about John Koza and his invention machine? Although I cannot locate the article I first read about it in, he applied genetic programming to some astronomical data about the planets, and within a few hours his computer cluster discovered Kepler’s third law of planetary motion. It was that single fact that convinced me that even physicists and mathematicians may eventually take a back seat to computers’ discoveries… If a computer could produce a proof of the Riemann hypothesis, with no more input from the ‘programmer’ than a definition of the hypothesis and the axioms of mathematics, would the Fields Medal belong to the programmer or the computer?

  89. Julius Thyssen Says:

    I completely disagree with null as well. And I do know some really nice applications for downloading what’s in my brain. In fact, I wish I had that option already; it would be very useful indeed. Presumably those who don’t think this is useful enough to explore don’t really like their brain’s contents. I do like what’s inside my head, and I would love to be able to back up some info before it gets deleted biologically. Especially memory-wise there is a lot at stake, not to mention acquired and very specialized knowledge about not-too-common subjects. It’s a lot easier if information in some format could be transferred and stored somewhere (instead of dumped when alcohol ruins some cells by accident) than it is to record it using our hands and mouths. It’s faster, for one!

  90. Cody Says:

    Aww, my comment was eaten by WordPress!

    KaoriBlue, I feel like you and null are saying something very similar, in particular:
    “ ‘upload our brains to computers’ does not seem like something that would be useful once the technology would allow it.”
    and
    “While I don’t entirely understand what it means to ‘upload your brain’ (or what the point of that would be)…”

    I do believe that we are making a lot of progress, and I agree that our construction of petaflop+ machines is a large part of it, but I can’t say what role ‘cryo-imaged brain slices’ might play.

    And I agree with your entire second paragraph, except perhaps the Fields Medal point, and in some ways the singularity point (though, as null pointed out, it is an ill-defined concept). John Koza used genetic programming to ‘discover’ Kepler’s third law in the late ’80s (if I remember correctly). If a computer were to discover a proof of the Riemann hypothesis with no more input than the hypothesis and certain axioms/assumptions, would the programmer still deserve the prize?
    And Julius, do you really think your head contains a lot of information that would be useful and interesting to other people, and cannot be found elsewhere (Google, Wikipedia, the Library of Congress)? I really do love the contents of my head, but I don’t see why anyone else would want to waste their time going through my memory when they probably have their own rough equivalent. (And what I have that most don’t can easily be found in science textbooks the world over.)

    I sort of interpret Ray Kurzweil’s dream of transcending biology as nothing more than a modern day, scientifically justified dream of immortality. Which, although it appeals to my desire to continue existing, seems to gloss over some of the hairier details.

  91. Scott Says:

    Anna: Thanks for your interesting comment!

    How strong do you think the analogy is between intelligence levels and computational complexity classes?

    Not all that strong, but maybe stronger than your comment would suggest. In particular, it’s not clear to me whether rats can do universal computation in any nontrivial sense—has anyone tried to make them? 🙂 For that matter, can rats even be taught how to compute the XOR function reliably? (I’m not a ratologist; I don’t know.)

    All I’m saying is, I could believe that humans are universal Turing machines in a sense that rats and mice are not. (And maybe chimpanzees are primitive recursive functions… 🙂 ) To me, what separates humans from other animals seems to be precisely the ability to “feed in an arbitrary program.” I’m not saying circus bears aren’t impressive, but if you want to train an animal to do something really absurd, like spending its whole life hunched over a monitor, your best bet is probably a human.
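
    (As an aside on XOR: it is the classic function that no single linear threshold unit can compute, by Minsky and Papert’s perceptron argument, though two layers suffice. A minimal sketch with hand-picked weights:)

    step = lambda x: 1 if x > 0 else 0  # threshold unit

    def xor(a, b):
        h1 = step(a + b - 0.5)      # hidden unit: fires on OR
        h2 = step(a + b - 1.5)      # hidden unit: fires on AND
        return step(h1 - h2 - 0.5)  # output: OR and not AND

    print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]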

  92. KaoriBlue Says:

    Cody,

    “John Koza used genetic programming to ‘discover’ Kepler’s third law in the late 80’s (if I remember correctly).”

    Yeah, he was actually one of Holland’s students. If genetic programming/neural networks/etc. ever rediscover something like special or general relativity, I’ll be more than happy to eat my words!

    “I sort of interpret Ray Kurzweil’s dream of transcending biology as nothing more than a modern day, scientifically justified dream of immortality.”

    Let me clarify my comment a bit… at the serious risk of spouting metaphysical nonsense. I’m a strict anti-dualist/functionalist, and I don’t really believe in ‘consciousness’ or the existence of ‘qualia’ (as neuro-philosophy people like Christof Koch define it). I believe an experience can be made indistinguishable from an immediate and vivid memory of that experience (i.e., you don’t actually feel a burning stove, you remember feeling a burning stove and act accordingly). That said, I’m sympathetic to the idea that ‘you’ are equivalent to a perfect simulation of yourself, and I agree that ‘uploading your brain’ to achieve immortality is scientifically justified.

    I just don’t understand (1) – why you’d want to do this, or for that matter clone yourself/etc., and (2) – what people like Kurzweil mean by it.

  93. Cody Says:

    Hmm, KaoriBlue, I think I agree with you. I would describe myself as a ‘strict physicalist’ and, like you, would entirely reject dualism. (Now that I read about functionalism, I would probably wholly agree with that as well. I would also reject any claim that consciousness or qualia are anything more than emergent phenomena.) But I have yet to come to terms with the idea of transferring my consciousness to another medium, mostly because it seems that the destruction of this medium would somehow take with it something integral to ‘me’. It seems likely that my discomfort is emotional rather than logical, but the discomfort remains.

    It’s a real problem; I am not sure what seems to be lacking. Although I can conceive of the cloning/destroying method of teleportation outlined in Scott’s 18th lecture of Quantum Computing Since Democritus, I cannot resolve that ‘I’ will still exist. Like I say, it is probably an emotional problem, not a scientific one, but I have yet to resolve it.

    As for what Kurzweil means by it (2) and why people would want to do it (1), I reiterate my statement that they conceive of it as a method of achieving ‘immortality’, though I may very well be missing their point (I haven’t read any of Kurzweil’s books). I think it is a very attractive notion, ignoring the complications implied by destroying ‘the original’, which seem to hang me up.

    As for genetic programming (or approaches found thereafter), do you think there is some sort of fundamental inhibitor that will prevent A.I. algorithms from achieving human-equivalent scientific/mathematical discovery (comparable to SR/GR)?

  94. null Says:

    I’ll clarify my point about uploading by analogy, although I think Cody got it straight on. Imagine one of our ancestors 7 million years ago thinking: wouldn’t it be great when humans come around; they’ll invent agriculture, and maybe make a banana plantation for my descendants (the chimps), or a vending machine to dispense exotic food with artificial banana flavor. KaoriBlue — could you imagine something more intelligent than a human being? Even today computers are indispensable in math, including theorem proving. Computers are used to find counterexamples. Also see the Robbins conjecture. No, I do not think a computer would get a Fields Medal, which is awarded to a person (there is even an age requirement 🙂 ). But I would venture to say that the days when a person of any profession could be the best at anything without the help of a computer are numbered. Also, unfortunately for Scott and other readers of this blog, “identifying/classifying objects in images, stopping spam, translating text, and perhaps securities trading” is still more profitable than deep mathematical theorem proving. BTW, the points in my previous post are random.

  95. John Sidles Says:

    Null says: “I would venture to say that the days that a person of any profession would be able to be the best at anything without the help of a computer are numbered.”

    To be sure, many people agree with what Null has posted. Three intellectual professions in which this has become true in the last decade are (1) grandmaster chess, (2) financial trading, and (3) organic chemistry.

    In each case, computer-assisted reasoning has risen to dominance not by better-than-human heuristics, but rather by merely adequate heuristics and a larger-than-human search.
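
    (The “merely adequate heuristics plus larger-than-human search” recipe is easy to caricature in code. A minimal game-tree sketch; all names here are illustrative, not taken from any real engine:)

    def negamax(state, depth, moves, apply_move, heuristic):
        # Value of `state` for the player to move, searching `depth` plies ahead
        # and falling back on a crude heuristic at the leaves.
        legal = moves(state)
        if depth == 0 or not legal:
            return heuristic(state)
        return max(-negamax(apply_move(state, m), depth - 1, moves, apply_move, heuristic)
                   for m in legal)

    # Toy usage: one-heap Nim, take 1-3 stones, taking the last stone wins.
    moves = lambda n: [m for m in (1, 2, 3) if m <= n]
    apply_move = lambda n, m: n - m
    heuristic = lambda n: -1 if n == 0 else 0  # no stones left: the player to move has lost
    print(negamax(10, 10, moves, apply_move, heuristic))  # 1: the player to move wins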

    That is why, in all three fields, increasing attention is being paid to the geometry of the searched state-space. A well-designed state-space geometry codes information implicitly rather than explicitly, and this proves to be a class of encodings that computers can exploit very efficiently … far more efficiently than humans can (at least, at the conscious level).

    One unanticipated consequence is that state-spaces are becoming trade secrets—this certainly is true in chess, finance, and quantum chemistry.

    This points to a key question: “How do traditions of academic openness survive in a world where large-scale computer simulation increasingly outperforms expert knowledge?”

    One answer, of course, is simply to define the problem away, by asserting that academia consists of all disciplines whose core expertise cannot be simulated via heuristics+search. Problem solved!

    But this answer puts academia at risk (in the long run) of extinction or (worse) irrelevance. And even if we don’t care whether academia becomes extinct, we should care about whether academic traditions of openness and inclusion are at risk of becoming extinct.

    Commitment to this extended notion of academic freedom is why our QSE Group is releasing its large-scale quantum simulation software (now in beta testing) under the GPL.

    Interesting discussions relating to this general topic can be found by Googling the topics “Chess Match Rybka versus GM Vadim Milov” and (more controversially) “Banned by Gaussian”.

  96. Jonathan Vos Post Says:

    Okay, suppose P ≠ NP, or suppose that quantum computers are hopelessly hard to build at useful scale. Big deal. The universe allows for arbitrarily lengthy classical computation.

    arXiv:gr-qc/0302076
    Title: Indefinite Information Processing in Ever-expanding Universes
    Authors: John D. Barrow, Sigbjørn Hervik (DAMTP, Centre for Mathematical Sciences, Cambridge University)
    Comments: 6 pages
    Journal-ref: Phys. Lett. B566 (2003) 1-7
    Subjects: General Relativity and Quantum Cosmology (gr-qc); Astrophysics (astro-ph)

    Abstract: We show that generic anisotropic universes arbitrarily close to the open Friedmann universe allow information processing to continue into the infinite future if there is no cosmological constant or stable gravitationally repulsive stress, and the spatial topology is non-compact. An infinite amount of information can be processed by “civilisations” who harness the temperature gradients created by gravitational tidal energy. These gradients are driven by the gravitational waves that sustain the expansion shear and three-curvature anisotropy.

  97. null Says:

    Summary of posts:
    85: The day the three pounds between our ears relinquishes its title (in the sense of…) “is NOT far.”
    94: News flash for Kurzweil and the upload cult: anything that is powerful/smart enough to emulate the human brain is not going to waste its time doing so.

  98. KaoriBlue Says:

    Null,

    “Any thing that is powerful/smart enough to emulate the human brain is not going to waste it’s time doing so.”

    Like us?

    I fully expect that we’ll soon have enough processing power (and high-quality structural data) to emulate a significant fraction of the human brain. Understanding what the heck is going on… will be much harder!

    Reminds me of one of my favorite Feynman quotes: “What I cannot create, I do not understand.”

  99. null Says:

    KaoriBlue — OK, you got me; erratum: that should read “a human brain.” See 94 for elaboration.

  100. ifatree Says:

    @Scott,

    >>if you zoom out far enough on the timeline, the term “singularity” might be justified.

    I agree. I feel about the “tech singularity” the same way I feel about the “heat death of the universe”. Scale out far enough to see it and you have to realize that we’re in the middle of it right now… otherwise, we’re not hitting any “singularity point” within my lifetime.

  101. Pierre Says:

    Maybe the ultimate computer, one that could come close to resolving P=NP and could also simulate a human brain, would be a parallel computer with a growing number of processors, like cells multiplying in the human body. If enough processors could use a Monte Carlo, Las Vegas, or genetic algorithm to approximate a solution to SAT or something else in NP, it could be a good candidate for an intelligent being that could solve at least a subset of NP. Maybe we humans use randomness, plus a lot of computing power, to make discoveries in science, for example.

  102. michael vassar Says:

    Scott, you may want to note the following about Vos Post. I suggest looking at Tom McCabe’s comments

    http://www.overcomingbias.com/2007/08/is-molecular-na.html

  103. Kasey Kite Says:

    I’ve responded here.

    WARNING: Pokemon

  104. Yuri Orlova Says:

    Hello, I would like to know whether problems that belong to any complexity class must have verifiability. Here is an example: if you think of a number and I think of a number, there is a product of those two numbers, even if neither of us knows what it is. So it cannot be verified. My question is: does this problem not belong to any complexity class? Is that how you say it?

  105. Cody Says:

    Yuri, I have no formal training in complexity theory, only superficial knowledge, but as I understand it, complexity theory studies problems by examining the resources of space and time (think of the number of steps) required to solve a specific type of problem. To do this you need a way to solve the given type of problem in general (think of an algorithm). The problem you described doesn’t really fit that criterion, since it lacks any formal way to be solved in general (and probably even in any specific instance).

    As for verifiability, I’d imagine complexity theory also requires the answer to come in one of two types: deterministic (meaning exactly correct every time, assuming the algorithm is followed and no errors are made) or probabilistic (in which the answer is right with some probability). An actual expert, of whom there are many here, may chime in to clarify, expand, or correct what I’ve said.
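
    (The textbook example of verifiability: even if nobody knows the factors of a number N, a claimed factorization is trivial to check, which is exactly the pattern the class NP captures. A minimal sketch:)

    def verify_factorization(N, p, q):
        # Checks a claimed certificate (p, q) for "N is composite" with one multiplication.
        return 1 < p < N and 1 < q < N and p * q == N

    print(verify_factorization(15, 3, 5))  # True
    print(verify_factorization(15, 2, 7))  # False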

  106. Cesium Says:

    I’m with you right up until you say:

    “””
    Where Kurzweil sees a steady march of progress interrupted by occasional hiccups, I see a few fragile and improbable victories
    “””

    The acquisition of fire and maintaining knowledge of that technology across 100,000 years was “fragile”? The domestication of cattle, pigs, dogs, and chicken and maintenance of that technology across 10,000 years was “fragile”? And doing it four or more times makes each occurrence “improbable”?

    Although individual societies stumble, we have seen consistent forward progress in the acquisition and retention of knowledge across the millennia. Your key assertion and your pessimism are simply unjustified.

  107. John Sidles Says:

    “We have seen consistent forward progress in the acquisition and retention of knowledge across the millennia … pessimism is simply unjustified.”

    Excerpt from the Executive Summary of the Neanderthal Risk-of-Extinction Commission.

  108. ces Says:

    Sidles —

    What’s your point? The technologies developed by the Neanderthals survived their extinction. One branch of evolving monkey society stumbled; another continued on, propagating the innovation.

    Sure, you can have an earth-wiping event — the sun passes through a dusty arm of the galaxy; a planet thrown out of orbit in some other solar system finally whips through our system and blasts our planet to smithereens; a super-volcano covers the earth in 3 feet of ash; we all blow each other up in an exchange of nuclear weapons.

    Up until such a point, the march of progress is inexorable. The steps along that path are neither fragile nor improbable.

  109. John Sidles Says:

    CES asserts: The march of progress is inexorable. The steps along that path are neither fragile nor improbable.

    Hmmmm … I take it you are not a fan of Jared Diamond’s books … nor of Brandon Carter’s celebrated “Doomsday Arguments.” 🙂

    Obviously, the jury is still out on whether homo sapiens can sustain a planetary civilization … and it is sobering that no previous hominid species has succeeded at this.

  110. Laurie Pettine Says:

    Well, thanks to Aaron for the initial blog post – I came into this late, but I can’t tell you how many times I threw The Singularity Is Near across the room, yelling, “What about plagues, floods, genocide?! What about poverty, class divisions, and the stupidity our leaders can’t quite seem to shake – that perpetuates the sloppy means by which they continue to attempt to control the masses?!” Anyway, I’m forwarding your well-formulated argument to everyone I know who is already planning for the singularity (wait a minute, kids). As the mother of two small children I have to maintain hope for the better aspects of the species and live well here and now. Thanks for this.

  111. Americo Says:

    Enjoyed your post. Thoughtful. But I disagree. I think Mr. Kurzweil’s observation of accelerating evolutionary change stands, from both a theoretical and an empirical perspective.

    The theory basically states that a useful technological innovation will make the next technological innovation more likely, because it frees up resources and creates the necessary preconditions. Consider building a combustion engine without knowing how to write. Or consider us discussing such a thing as the singularity before the advent of the internet: We most probably wouldn’t have known each other, much less would we be able to communicate so easily.

    The empirical evidence supports this notion. For most of human history, man lived his life essentially as the generation before him did. Paradigm shifts were few and far between, but the interval between these shifts kept getting shorter and shorter. Suddenly transformational new technologies started popping up once a generation, then twice a generation. Now they are shooting up twice a decade.

    Personally, I’m much more concerned about Mr. Kurzweil’s starry-eyed optimism. Big changes are in the wind; let’s hope we survive them. If you like, you can read my review of The Singularity Is Near here.

  112. James Croft Says:

    I believe that, in a sense, Kurzweil never realized the limitations.

    1. He says that the base of the exponential growth he is talking about has also grown – 2 to the power of n – but in fact that 2 has become 1.5 or so in recent years. (The difference compounds quickly: after ten periods, 2^10 ≈ 1024 while 1.5^10 ≈ 58.)

    2. All of us forget the limitations of AI and computing. No matter what kind of algorithm is used, computers just compute; they can only be as smart as they are programmed to be. If one programs them for 1,000,000 cases, there is sure to be a 1,000,001st case that makes the algorithm break down (and I am talking about a full-blown AI that is supposed to think like a human – see the toy sketch after this list). AI cannot make computers perform even a single bit of creative work. Computers do repetitive work, and that is the basic limitation: there is no algorithm for creative thinking. AI is for searching through different states and cases, for making a reaction to an action.
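
    Here is that toy sketch (a hypothetical example in Python, not any real system): a purely case-based program answers only the inputs it was explicitly given, and has no defined behavior on case 1,000,001.

        # Toy sketch (hypothetical example): a purely case-based "AI".
        # It answers only the inputs it was explicitly programmed for.
        responses = {
            "hello": "Hi there!",
            "what is 2+2?": "4",
            # ... imagine 999,998 more hand-coded cases here ...
        }

        def reply(prompt: str) -> str:
            # Behavior is defined only for the programmed cases;
            # anything else falls through with no answer at all.
            return responses[prompt]

        print(reply("hello"))                # works: a programmed case
        try:
            print(reply("compose a poem"))   # case 1,000,001
        except KeyError:
            print("no programmed response: the algorithm breaks")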

    But if we are to have machines that can overcome that limitation, there needs to be a huge revolution in which these fundamentals change. That revolution may be a long time away.

  113. Abul Hassan Says:

    I find it difficult to believe in this Singularity stuff when we are still using basic amalgam for dental fillings, as opposed to lasers for regenerative tooth growth. If we are still so primitive in a field as “basic” as dentistry, I find it hard to envision a fundamental overhaul in nearly all aspects of modern life in such a short time period.

    I mean, Moore’s Law is basically dead:
    http://www.extremetech.com/extreme/203490-moores-law-is-dead-long-live-moores-law

    It’s really our own unique method of goal-post shifting that keeps it alive. If we cannot kick-start this form of computing-power growth from the list of possibilities existing today [e.g. graphene] – and all evidence and critical thinking suggest that we cannot, at least in the next 20 years – then I don’t see how you can envision such a radical change in AI research that will allow for this Singularity.

    Personally, I would be looking forward to things like automated transport [especially personal motor vehicles], microsurgery that allows the patient to recover the same day from even complicated procedures, dental regeneration, complete regeneration of limbs and organs, new modes of computing technology [perhaps wearable computers are the next step], etc., coming onto the market.

    So my goals would be more modest, and even then I wouldn’t guarantee the specifics here.

    One thing I agree with is that we are becoming increasingly accustomed to rapid change. Being a kid of the 80s/90s, I remember CD technology being the big thing. And yet toward the beginning of the new millennium, the technology had all but faded into dust with the advent of MP3s. In fact, teenagers today are rather bemused by the notion of playing music in such a “physical” format.

    I also remember the pre-internet era very distinctly, and the transition to being ultra-networked. It used to be so damn hard to find practically ANYTHING beyond meagre library resources, and even then the information was so limited. Doing a basic homework assignment used to be quite an undertaking, whereas now it is considerably easier. Finding information on an individual, company, or organization used to be a mammoth task — now it is as simple as a couple of mouse clicks. Networking with others or learning something new could likewise be a gargantuan task.

    I think people underestimate these things because, for people older than 40, it all seems like such a blur, whereas folks younger than 25 don’t recall any of it whatsoever. It’s those of us from a particular generation who have felt the change most acutely.