December 6th, 2013

Oh boy, now we have to worry about…

…the destruction of the human race by artificial intelligence:

In 267 brisk pages, Barrat lays out just how the artificial intelligence (AI) that companies like Google and governments like our own are racing to perfect could — indeed, likely will — advance to the point where it will literally destroy all human life on Earth. Not put it out of work. Not meld with it in a utopian fusion. Destroy it…

ASI is unlikely to exterminate us in a bout of Terminator-esque malevolence, but simply as a byproduct of its very existence. Computers, like humans, need energy, and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant’s next meal will come from. We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to. If we do achieve ASI, we will be in completely unknown territory.

The theme of countless science fiction plots, come to life (or death)? Frankenstein, or the Modern Prometheus? Those of you with more scientific acumen than I can read the article and decide for yourselves whether such a scenario is likely. Quite a few of the article’s commenters are skeptical.

10 Responses to “Oh boy, now we have to worry about…”

  1. physicsguy Says:

    I thought the more appropriate expansion of ASI would be Artificial Sentient Intelligence, rather than Artificial Superintelligence as used in the article. The author clearly is implying sentience in the initial AGI stage he describes.

    This is an old question: when is something sentient? Hofstadter wrote a large book about it, and even Schroedinger (father of Quantum Mechanics) weighed in. The answer probably is the old “you’ll know it when you see it”.

    His reason for our demise being the competition for energy is interesting. Now if the ASI is in charge of a bunch of robots that can do the mining, drilling, etc, to extract the energy, then I guess it’s possible. And if it is so superintelligent I guess it could solve the energy problem anyway.

    Other questions: Is the ASI a singularity, or does it produce copies of itself? Do the copies compete? i.e. ASI warfare?

    If ASI is the end result of biological evolution, then we are faced with the same question now facing SETI: where are they? If every intelligent biological species ends up creating ASI’s, and they grow exponentially, as the author suggests, then where are they?

  2. blert Says:

    The Eloi scenario is the single most likely outcome of AI.

    Morlocks could never be.

    Just remember that the crowd that is coding is a gazillion years away from coding AI — a fantasy that will never come off.

    For, at the heart of it, to construct AI one must truly KNOW man.

    We’ve been working on that project for about 50,000 years.

    One must leave unto God His own work.

  3. parker Says:

    “For, at the heart of it, to construct AI one must truly KNOW man. We’ve been working on that project for about 50,000 years. One must leave unto God His own work.”

    I’m guessing that it has been at least twice 50,000 years; and we will never ‘know’ who we are because we, as individuals, are not gods. We are mere mortals fumbling along as we gaze at the stars.

  4. delete.the.alternative Says:

    We’re at the stage where science fiction is catching up with science.

  5. Mousebert Says:

    On the other hand, since AI will be created in our own image, and with Asimov’s Laws of Robotics, it is just as likely we’ll see an “I, Mudd” scenario where they become our “parents” or make us their pets.

    Sort of like the Nanny State but run by intelligent beings and not bureaucrats.

  6. Ymarsakar Says:

    If our enemies create AI, they will be weapons used to enslave humanity.

    If AI is created with free will, various other things will happen instead.

    Parker, I’ve heard that Obama offers ascension to godhood, all you have to do is to sign up.

  7. delete.the.alternative Says:

    Reading Mark Steyn’s latest (the loss of work, the loss of purpose), I see no reason why the useless should not be eliminated.

    I for one welcome our new overlords. At least if they’ll give us work. Even make us slaves. Something instead of this grotesque being provided for!

  8. sergey Says:

    It is always dangerous to depend too much on something too unpredictable, and this is true even for very simple automatons, like an autopilot or a computer program running a nuclear station. AI is no different in this respect from the clockwork controller of a washing machine. The real problem here is that many complex devices become inherently unreliable when they become too complex. The most dangerous of these is not electronics or software, but government itself. This is our Frankenstein.

  9. IGotBupkis, "'Faeces Evenio', Mr. Holder?" Says:

    This problem has been dealt with in SF a considerable number of times.

    When H.A.R.L.I.E. Was One, by David Gerrold
    The Two Faces Of Tomorrow, by James P. Hogan

    are probably two of the most seminal works.

    Classic works involving AI also are
    The Moon Is A Harsh Mistress, by Robert Heinlein
    Colossus, by D.F. Jones (there are two little-known sequels, The Fall of Colossus, and Colossus and the Crab, which are very relevant to the whole storyline, mind you)
    The Revolution From Rosinante by Alexis Gilliland (which also has two sequels, Long Shot For Rosinante and Pirates of Rosinante)

    The Colossus trilogy is the only one of those in which the AI is central to the events of the plot, as opposed to an additional character of significance.

    There are many others but those come directly to mind.

  10. IGotBupkis, "'Faeces Evenio', Mr. Holder?" Says:

    By the way, the first two of those are specifically regarding an emergent intelligence, that is, one developing and being dealt with as it becomes able to affect its environment.

    One of the most relevant things to understand is that we still haven’t mastered the problem of “common sense”. This is described fairly well in Hogan’s book. How do you define “common sense”? Not even in the more complex form of “why do liberals fail so badly at it?”, but the kind that even THEY generally don’t fail at.

    Hogan’s example:
    The cat has fleas. We want to get rid of the fleas. Well, heat kills fleas. Throw the cat in the furnace!

    It is boneheaded obvious that that will kill the cat, too… but we didn’t specify that. It was “understood”. So we tell the AI that we don’t want the cat to die, providing a “common sense constraint”.

    OK, now the dog has fleas. How do we get rid of them? Well… heat kills fleas….

    We didn’t TELL the AI that we wouldn’t want a solution to kill a DOG, only a CAT.

    And there you can see the other aspect of the problem, the one that makes simple rule-based answers fail: common sense means not only handling a specific rule, but being able to effectively GENERALIZE from it, to realize that when we say “don’t kill the cat”, we also almost always mean “don’t kill the dog, the cow, the horse, the elephant, the giraffe…”
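    The flea anecdote maps directly onto code. Here is a minimal sketch (purely illustrative; the function name and plan strings are invented here, not taken from Hogan's book) of a literal rule-based planner: a ban written for the cat does nothing to protect the dog.

```python
# A toy rule-based planner for the flea example (hypothetical illustration).
# Its only "common sense" is a list of literally banned plans, so a
# constraint phrased for the cat never generalizes to the dog.

def plan_flea_removal(animal, banned_plans):
    """Return the first candidate plan that is not literally banned."""
    candidates = [
        f"throw the {animal} in the furnace",  # heat kills fleas...
        f"give the {animal} a flea bath",
    ]
    for plan in candidates:
        if plan not in banned_plans:
            return plan

# We only thought to ban the cat-in-furnace plan.
banned = ["throw the cat in the furnace"]

print(plan_flea_removal("cat", banned))  # the cat gets a flea bath
print(plan_flea_removal("dog", banned))  # the dog goes in the furnace
```

    The same literalism is why patching rules one at a time never ends: every new animal would need its own ban.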

    The “generalization” problem is very much outside the scope of anything programmers can currently come close to, and it’s one of the key things preventing us from CREATING an AI intentionally.

    Whether or not something like that can ARISE from a disorganized system is unknown to us. It may well be that, when the internet gets 100 billion CPUs dedicated to its own control, something will “magically” (and yes, it would BE magic, because we have no idea how it could occur) happen and some form of self-sustaining AI will be enabled. And that AI would be utterly alien to us, because we’d have no understanding of it, and whatever principles created it would be a mystery. As in Hogan’s book, things could get pretty chaotic until we managed to actually recognize each other’s existence… yes, it would be an ALIEN life form in every sense.

