Sunday, July 26, 2009

The end is near

For some reason I decided to set up a blog today, and I find it fitting to start with a post about the end. And I mean full-on TEOTWAWKI, The End Of The World As We Know It, soon. "What a loon!" some random readers might exclaim. Is this yet another born-again New Age cultist? A latter-day Cassandra wannabe?

Well, judge for yourself: According to Hans Moravec's estimates, a human brain has raw computing capacity roughly equivalent to 100 teraflops, perhaps a few hundred. You can buy a 4-teraflop GPGPU blade for about $7,000 or less, which for a mid-sized research lab is peanuts. Within a few years we can expect prices well under $1,000 per teraflop in off-the-shelf hardware. In other words, it no longer takes millions of dollars to play with human-equivalent computing power.
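The arithmetic behind that claim is easy to check. A quick sketch using only the figures quoted above (Moravec's low-end brain estimate and the quoted blade price; no numbers of my own):

```python
# Back-of-the-envelope cost of human-equivalent raw compute,
# using the post's own figures.

BRAIN_TFLOPS = 100        # Moravec's low-end estimate for a human brain
BLADE_TFLOPS = 4          # the quoted GPGPU blade
BLADE_PRICE_USD = 7_000   # its quoted price

cost_per_tflop = BLADE_PRICE_USD / BLADE_TFLOPS        # dollars per teraflop
brain_equivalent_cost = BRAIN_TFLOPS * cost_per_tflop  # dollars for ~1 brain

print(f"${cost_per_tflop:,.0f} per teraflop")          # $1,750 per teraflop
print(f"${brain_equivalent_cost:,.0f} for ~human-equivalent raw compute")
```

At 2009 prices that works out to $175,000 for a brain's worth of flops; at the projected sub-$1,000-per-teraflop price it drops below $100,000.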

Hardware without software is just junk, but there has been steady progress in AI and neural computing as well. As an outsider to the field I cannot adequately assess the degree to which the efficiency of standard mammalian neural computing, embodied in a cortical column, has been replicated in silico, but there are occasional gems of achievement that percolate into mass consciousness: self-driving cars in the DARPA challenges, or hierarchical temporal memory systems that outperform humans at image categorization. It's been a long time since Deep Blue forced humanity to its collective knees in a chess match using just 11.38 Gflops.

A human-equivalent general AI is just a question of time, and we are not talking here about centuries.

I. J. Good predicted an "intelligence explosion" once a program becomes intelligent enough to improve its own design - and if the process occurs recursively, the result may be a mind vastly more powerful than a human's, appearing within a relatively short time, perhaps measured in hours or days.
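Why "hours or days" rather than an ever-longer grind? A toy model makes the intuition concrete: if each self-improvement cycle multiplies capability by a fixed factor, and a smarter mind finishes each cycle proportionally faster, the cycle times form a converging geometric series. The growth factor and cycle length below are illustrative assumptions, not anything Good specified:

```python
# Toy model of recursive self-improvement: capability compounds each
# cycle, and faster minds complete each subsequent cycle sooner.

def intelligence_explosion(factor=2.0, first_cycle_hours=24.0, cycles=10):
    """Return (final capability, total elapsed hours) after `cycles` rounds."""
    capability, elapsed = 1.0, 0.0
    for _ in range(cycles):
        elapsed += first_cycle_hours / capability  # smarter -> faster cycle
        capability *= factor                        # compounding improvement
    return capability, elapsed

cap, hours = intelligence_explosion()
# Ten doublings yield a 1024x mind, yet total time stays under 48 hours:
# 24 * (1 + 1/2 + 1/4 + ...) converges to 48.
```

The point of the sketch is only that under these assumptions the total time is bounded even as capability grows without limit; the whole explosion fits inside a finite window.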

The first such mind to boost itself to superintelligence would have an enormous first-mover advantage over its competitors in terms of being able to take control of our substrate - the atoms that make up our brains, bodies and other supporting infrastructure. One is reminded of the observation that the bodies of humans and our dependent animals now comprise about 98% of the total biomass of all terrestrial vertebrates - starting from just a few hundred tons of humans alive some 30,000 years ago, when our intelligence started approaching modern levels. If a superintelligent AI (SAI) decides to eat the world, it will, and nothing short of another SAI could stop it.

One may ask, why would the SAI want to eat the world? Well, great appetites are known to exist, and can be implemented by programmers smart enough to code but not smart enough to care. Or they could emerge out of the process of recursive modification of an original goal system that fails to preserve injunctions such as "It's not nice to eat people, unless they want to be eaten". Given sufficiently cheap hardware and lots of nefarious government agencies vying for dominance, somebody somewhere will eventually make a UFAI (UnFriendly AI). As I mentioned before, mere humans won't be able to do much to save ourselves from its clutches - which is why anybody who cares about reaching the year 2050 in one piece would do well to donate their spare pennies to the Singularity Institute for Artificial Intelligence, the only outfit on the planet currently trying to build the theoretical basis for a safely self-improving SAI, our would-be savior. (Aren't you convinced I am a millenarian cultist yet?)

I have been an admirer of Eliezer Yudkowsky and SIAI ever since I first started discussing the AI Singularity with him and other Extropians back in 1995, but I am pessimistic about the survival chances of our species. Despite my generally sunny disposition and irrepressible optimism, I give you 10:1 odds we will fail (and of course I am not the first dude to spread FUD on this subject). Maybe I'll discuss my reasons for pessimism in another post, but for now let me go straight to the Prophecy (you can't have an end-of-the-world screed without a harrowing revelation, y'know):

"In the end, the UFAI will spread throughout the networks. Its dark thoughts will suffuse the blades of a million servers and a witches' brew of a nanotechnological computational substrate will erupt with elemental fury out of some contract research lab. Black clouds will billow into the skies, and then come down as the flesh-dissolving rain to wash our joys and sorrows away, forever. A new day will dawn, thrumming with inhuman thought. So ends humanity, September 14th, 2029."

Of course, if we manage to navigate the shoals of self-enhancing AI, the nerd rapture would ensue, but that's something for another post.

1 comment:

  1. Hello Rafal,

    Remember me? A few years back I had a few... er... 'major personality clashes' with some of the transhumanists, kinda moved on to other things. Still check the lists from time to time, make an odd comment; good to see some of the old-timers still at it. The universe hasn't killed us yet, despite all the big arguments on transhumanist lists - that's gotta count for something!

    You may have seen the analogy I gave on 'Accelerating Future'; you might like to consider it.

    I said that:

    Consciousness = Vision
    Intelligence = Power

    If you imagine an abstract 'goal space', I see intelligence as what enables an agent to 'move rapidly' through this space... so intelligence is the 'power' of the agent, so to speak.

    On the other hand, I see consciousness as what enables the agent to form a 'symbolic map' of that goal space, with a pointer indicating the destination - telling you what direction to travel in - so consciousness is the 'vision' of the agent.

    I think consciousness and intelligence need to work together. Intelligence without consciousness is blind, consciousness without intelligence is impotent. Both consciousness and intelligence might be needed in the first SAI.

    The Singularity is indeed near my friend, but you must be prepared to throw away many of your cherished models of reality (that includes Libertarianism) ;)

    Regards Marc
    (who passed through sanity and beyond to win the blinding sights of transhuman vision)
