As if we didn’t have enough to worry about…

As if we didn’t have enough to worry about… — 25 Comments

  1. Doesn’t worry me. We’ve had the majority of the human race artificially intelligent for centuries. We’re still here.

  2. Nope, not something to worry about. The whole notion that machines will necessarily become conscious (in the human sense), possessed of independent will, and so forth, rests on a giant leap of materialist faith. There is zero evidence for the belief that computational complexity somehow produces intelligence in the human sense. Having a different set of axioms, I don’t think it’s possible.

    What we *do* have to worry about is the continuing displacement of human workers by machines that are “intelligent” in the limited sense of being able to perform ever more complex but strictly programmable tasks. For instance, the possible coming automation of fast-food production. More and more cashiering-type jobs are already being replaced by specialized self-service computers.

  3. All this hype about AI is just a ruse. Nothing of serious importance will ever come from it, beyond some useful programs like Google’s search engine and smarter robots. The best examples of AI are automated translation programs, chess-playing programs, and pattern recognition, used in medical diagnostics, oil-field prospecting, and similar problem solving. The mathematical theory of complexity makes some “hard” problems unsolvable in principle. The very existence of consciousness in humans is really a miracle, inexplicable by any scientific means, so do not worry. This miracle is not reproducible by humans – never.

  4. Fear of a rebellion of the machines is popular-culture mythology, originating with the Golem and Frankenstein, and has nothing in common with reality. Consciousness just does not compute – it is not a computational problem.

  5. This has been of interest to me for many years.

    April 2009:
    “Adam becomes first robot to make a scientific discovery after conducting its OWN experiments”

    August 2014:
    “IBM develops a computer chip with one million ‘neurons’ that ‘functions like a human brain’”

    September 2014:
    ‘Boris’ the robot can load up dishwasher
    “‘Boris’ is one of the first robots in the world capable of intelligently manipulating unfamiliar objects with a humanlike grasp.”

    January 2015:
    “Robots can now learn to cook just like you do: by watching YouTube videos”

    November 2015:
    “Uh oh! Robots are learning to DISOBEY humans: Humanoid machine says no to instructions if it thinks it might be hurt”

    Over at PJ Media, Richard Fernandez has some interesting observations and speculations on this subject:

    “The Crisis of the Blue Model”

    “Michael Belfiore of the Guardian believes that in the end, the small minority of humans who are creative enough to remain employed may become the new 1%. They will in any case be comparatively few in proportion to the population. For reasons of social peace these must be taxed at tremendously high rates to provide a basic income for the other 99%, whose labor will no longer be economically in demand. ‘Basic income isn’t a conditional welfare program – rather, it’s a check that’s paid to individual adults instead of households, regardless of other sources of income and with no requirement for work.’”

    Never say never, Sergey…

  6. The machines would, of course, recognize me as superior to the rest of you and wonder why you would have me live in a small apartment, without a shiny red sports car and a brand-spanking-new Fender Telecaster. I think they would become angry in ways we could not predict.

  7. Well, I know we already have artificial reality — the SJW narrative.

    And it’s destroying Western civilization even as I post.

  8. Another day, another existential threat. All this time I thought it was AGW. Who knew?

    Didn’t Hawking predict dire consequences in the past? Or am I confusing him with another intellectual? Alleged intellectuals spouting alarm all run together after a while.

    Musk recently came to my attention for pulling PayPal out of a planned facility in North Carolina in solidarity with the “Pottie Freedom” movement. I cancelled my PayPal account. That will show them.

    The time may come when the ability to walk behind a mule and plow, and to swing a hoe, will once again be valued skills. Robots and computers will have the rest of the jobs. I just had a vision of Attorney Robot standing in a courtroom arguing before a jury of robots, presided over by Judge Robot. They naturally convicted the human defendant. Then I saw Harry Reid Robot and Nancy Pelosi Robot leading a joint session of Congress. Might be a step forward.

  9. As someone with decades of experience in software development, I can tell you that we can’t even create basic common sense in code, much less AI.

    There is some risk of an AI developing spontaneously and emergently (i.e., one just springs forth in the background, unnoticed by humanity at the time, rather than by design), but there is absolutely nothing we can do to prevent that short of shutting down the internet AND all large-scale communication networks.

  10. Oldflyer: While Elon Musk co-founded PayPal, he sold it to eBay in 2002, so I’m not sure what he had to do with any decision of theirs this year.

  11. Climate change, robot rebellion, zombie apocalypse, yada yada. Anything but the one actual greatest threat to humanity: Islam. Now that’s beyond the pale.

  12. Geoffrey Britain: Science is defined in good measure by categorical “no go” laws, like the impossibility of constructing a machine that extracts energy from nothing. In mathematics, the closest analogues of such absolute laws are the no-go theorems of the combinatorial theory of complexity. One of the first of these was Alan Turing’s famous paper “On Computable Numbers,” which showed that some well-defined functions cannot be computed in principle, and not merely for lack of computational power. For a decade I was the scientific editor of the periodical “Cybernetics,” which published in Russian translation the most important articles on AI, theoretical programming, and the theory of complexity – a natural development of Turing’s work on the complexity of computations. That was 25 years ago, but it is the nature of mathematics that once established, mathematical results can never be nullified.
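Turing’s diagonal argument, mentioned in the comment above, can be sketched in a few lines of Python. This is an illustrative sketch only: `halts` stands for the hypothetical oracle the proof rules out, stubbed here with a fixed answer purely so the snippet runs.

```python
def halts(program, argument):
    # Hypothetical oracle that decides whether program(argument) halts.
    # No total, always-correct version of this function can exist; we
    # stub it with a fixed True so the sketch is executable.
    return True

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        return "pretend to loop forever"  # stand-in for an infinite loop
    return "halt"

# If the oracle says contrarian(contrarian) halts, contrarian loops;
# if it says it loops, contrarian halts. Either answer is wrong, so a
# general halting decider cannot be written, regardless of hardware.
print(contrarian(contrarian))
```

Whatever fixed behavior you give `halts`, feeding `contrarian` to itself contradicts the oracle’s prediction, which is the whole point of the proof.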

  13. Sergey,

    I imagine your faith in a mathematically precise universe is comforting. Unfortunately, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”

  14. London Trader: I owe Musk an apology for that, I suppose, although I am not sure why he is weighing in on this latest madness.

    In any event, PayPal will do without my business. I just read their sanctimonious statement on the NC law once again. We have truly passed through the looking glass.

    WRT the matter under discussion, my introduction to the computer age was the laborious process of cutting 1s and 0s into a paper tape, then pressing switch-lights on a register to initiate a loading routine. Now I just use the apps developed by others, so I am behind the times. Still, the computers I have seen just sit there waiting for a human to tell them what to do. I do not doubt they can now be programmed to store and recall feedback routines that lead them to follow a programmed alternative. I doubt that they reason out an alternative independently. Have I missed something? Have they really progressed to the point that they can ignore human instructions and take independent action?

  15. Hamlet was right, of course, and I absolutely do not believe in a mathematically precise universe, but mathematics itself is precise indeed, and the things it forbids simply cannot be done – by humans at least. Only if the Creator Himself intervenes are all bets off.

  16. Mac Says:
    April 16th, 2016 at 4:06 pm
    Nope, not something to worry about. The whole notion that machines will necessarily become conscious ….

    “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
    — Western Union internal memo, 1876.
    —–
    “The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?”
    — David Sarnoff’s associates in response to his urgings for investment in the radio in the 1920s.
    —–
    “The concept is interesting and well-formed, but in order to earn better than a ‘C,’ the idea must be feasible.”
    — A Yale University management professor in response to Fred Smith’s paper proposing reliable overnight delivery service. Smith went on to found Federal Express Corp.
    —–
    “Who wants to hear actors talk?”
    — H.M. Warner, Warner Brothers, 1927.
    —–

    “A cookie store is a bad idea. Besides, the market research reports say America likes crispy cookies, not soft and chewy cookies like you make.”
    — Response to Debbi Fields’ idea of starting Mrs. Fields’ Cookies.
    —–
    “We don’t like their sound, and guitar music is on the way out.”
    — Decca Recording Co. rejecting the Beatles, 1962.
    —–
    “Heavier-than-air flying machines are impossible.”
    — Lord Kelvin, president, Royal Society, 1895.
    —–
    “If I had thought about it, I wouldn’t have done the experiment. The literature was full of examples that said you can’t do this.”
    — Spencer Silver on the work that led to the unique adhesives for 3-M “Post-It” Notepads.
    —–
    “So we went to Atari and said, ‘Hey, we’ve got this amazing thing, even built with some of your parts, and what do you think about funding us? Or we’ll give it to you. We just want to do it. Pay our salary, we’ll come work for you.’ And they said, ‘No.’ So then we went to Hewlett-Packard, and they said, ‘Hey, we don’t need you. You haven’t got through college yet.'”
    — Apple Computer Inc. co-founder Steve Jobs
    —–
    “Professor Goddard does not know the relation between action and reaction and the need to have something better than a vacuum against which to react. He seems to lack the basic knowledge ladled out daily in high schools.”
    — 1921 New York Times editorial about Robert Goddard’s revolutionary rocket work.

    —–
    “Drill for oil? You mean drill into the ground to try and find oil? You’re crazy.”
    — Drillers who Edwin L. Drake tried to enlist to his project to drill for oil in 1859.
    —–
    “I think there’s a world market for about five computers.”
    — Thomas J. Watson, Chairman of the Board, IBM.
    —–
    “The bomb will never go off. I speak as an expert in explosives.”
    — Admiral William Leahy, US Atomic Bomb Project.
    —–
    “This fellow Charles Lindbergh will never make it. He’s doomed.”
    — Harry Guggenheim, millionaire aviation enthusiast.
    —–
    “Stocks have reached what looks like a permanently high plateau.”
    — Irving Fisher, Professor of Economics, Yale University, 1929.
    —–
    “Airplanes are interesting toys but of no military value.”
    — Marechal Ferdinand Foch, Professor of Strategy, Ecole Superieure de Guerre.
    —–
    “Man will never reach the moon regardless of all future scientific advances.”
    — Dr. Lee De Forest, inventor of the vacuum tube and father of television.
    —–
    “Everything that can be invented has been invented.”
    — Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.

  17. I remember when science was all about the miracle of reality and all that kind of stuff. Now it’s a constant stream of run, hide, it’s going to kill you… oh the futility of it all… waaaa!

  18. One day far in the future, AI in machines will lead to an existentialist elevator that is afraid of heights (and, for those who know the story, a very depressed and despondent robot named Marvin).
    -=-=-=-=-

    “Yeah, yeah,” said Zaphod as the elevator doors opened.

    “Hello,” said the elevator sweetly, “I am to be your elevator for this trip to the floor of your choice. I have been designed by the Sirius Cybernetics Corporation to take you, the visitor to the Hitch Hiker’s Guide to the Galaxy, into these their offices. If you enjoy your ride, which will be swift and pleasurable, then you may care to experience some of the other elevators which have recently been installed in the offices of the Galactic tax department, Boobiloo Baby Foods and the Sirian State Mental Hospital, where many ex-Sirius Cybernetics Corporation executives will be delighted to welcome your visits, sympathy, and happy tales of the outside world.”

    “Yeah,” said Zaphod, stepping into it, “what else do you do besides talk?”

    “I go up,” said the elevator, “or down.”

    “Good,” said Zaphod, “We’re going up.”

    “Or down,” the elevator reminded him.

    “Yeah, OK, up please.”

    There was a moment of silence.

    “Down’s very nice,” suggested the elevator hopefully.

    “Oh yeah?”

    “Super.”

    “Good,” said Zaphod, “Now will you take us up?”

    “May I ask you,” inquired the elevator in its sweetest, most reasonable voice, “if you’ve considered all the possibilities that down might offer you?”

    Zaphod knocked one of his heads against the inside wall. He didn’t need this, he thought to himself, this of all things he had no need of. He hadn’t asked to be here. If he was asked at this moment where he would like to be he would probably have said he would like to be lying on the beach with at least fifty beautiful women and a small team of experts working out new ways they could be nice to him, which was his usual reply. To this he would probably have added something passionate on the subject of food.

    One thing he didn’t want to be doing was chasing after the man who ruled the Universe, who was only doing a job which he might as well keep at, because if it wasn’t him it would only be someone else. Most of all he didn’t want to be standing in an office block arguing with an elevator.

    “Like what other possibilities?” he asked wearily.

    “Well,” the voice trickled on like honey on biscuits, “there’s the basement, the microfiles, the heating system … er …”

    It paused.

    “Nothing particularly exciting,” it admitted, “but they are alternatives.”

    “Holy Zarquon,” muttered Zaphod, “did I ask for an existentialist elevator?” he beat his fists against the wall.

    “What’s the matter with the thing?” he spat.

    “It doesn’t want to go up,” said Marvin simply, “I think it’s afraid.”

    “Afraid?” cried Zaphod, “Of what? Heights? An elevator that’s afraid of heights?”

    “No,” said the elevator miserably, “of the future …”

    “The future?” exclaimed Zaphod, “What does the wretched thing want, a pension scheme?”

    At that moment a commotion broke out in the reception hall behind them. From the walls around them came the sound of suddenly active machinery.

    “We can all see into the future,” whispered the elevator in what sounded like terror, “it’s part of our programming.”

    Zaphod looked out of the elevator – an agitated crowd had gathered round the elevator area, pointing and shouting.

    Every elevator in the building was coming down, very fast.

    He ducked back in.

  19. “I won’t submit”: my point is that the matter of truly conscious AI is not a technological question. In order for it to be a technological question, the assumption that consciousness is material in origin has to be true. I believe it is false, hence that the “Skynet becomes self-aware” and similar scenarios are not going to happen. Not because the technology is difficult, but because technology can only operate via matter and material causation, and consciousness is not a material phenomenon. The assumption that it is so is a purely philosophical one, and there’s no evidence for it. It’s essentially a matter of faith for materialists.

  20. Until the limits of AI have been clearly defined and demonstrated, the question is still unresolved.

    I don’t think anyone can claim the limits have been reached.

  21. When I worked on Wall Street I built a number of AI and expert systems, and none of them were anywhere NEAR functional enough to be afraid of…

    What would be scarier is AI-assisted systems that improve human behavior.

    Augmented humans are much scarier than free-range AI.

    AI isn’t anything mysterious, either… it’s just a way of coding that allows systems to figure out, through distributed computation in a neural net, what is important by learning rather than by knowing. You train a net, but they are not that easy to get right, and certainly nothing is near human-based thought except in tiny areas, or in non-thinking simulations intended to trick people rather than actually think.

    But my other statement stands: without religion, science went from the miracle of existence and an amazing, expansive reality to “everything is going to kill us, be afraid.” Afraid of what? Everything. Guns, people, diseases, zombie apocalypse, nuclear weapons, non-nuclear weapons, pandemics, financial collapse, warming weather, cooling weather, rising tides, famines, asteroids, solar flares, exploding stars too nearby, invasive species. We are all gonna die… hide, hide, hide… no, not there, there you die too…

    Sheesh.
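The “learning, not knowing” point in the comment above can be made concrete with a toy sketch: a single perceptron that is shown examples of the AND function and infers the rule from error feedback alone, never being told the rule itself. The names here are illustrative, not from any particular library.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # Classic perceptron learning rule: nudge the weights toward the
    # target whenever the current weights misclassify an example.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "training data": input/output examples of AND, nothing more.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for (x1, x2), _ in AND:
    print((x1, x2), "->", predict(x1, x2))
```

The net ends up behaving like AND without ever containing a line of code that says “AND”, which is the sense in which such systems learn rather than know. It is also why getting a large net right is hard: the behavior lives in numbers, not in inspectable rules.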

  22. Joel Garreau wrote an excellent book ten years ago called Radical Evolution, which looked at the three AI outcomes considered most likely in their categories. The categories were beneficent, catastrophic, and mixed. Great fun, good writing. My takeaway from the catastrophic scenario was that it was so miserable and so immovable that it was pointless to worry about it now, as we could do nothing and would have plenty to worry about then. If we had any warning before sudden destruction, that is.

  23. groundhog: These limits were discovered and understood decades ago. See NP-completeness on Wikipedia or elsewhere.
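For readers wanting a feel for what NP-completeness means in practice, here is an illustrative sketch (not from the comment itself): a brute-force solver for subset sum, a classic NP-complete problem. Its search space doubles with every element added, which is why such “hard” problems resist brute force at scale.

```python
from itertools import combinations

def subset_sum(numbers, target):
    # Exhaustive search: try every subset. For n numbers there are
    # 2**n subsets, so adding one element doubles the worst-case work.
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

# Verifying a proposed subset is easy (just add it up); *finding* one
# is what no known algorithm does in time polynomial in n.
print(subset_sum([3, 34, 4, 12, 5, 2], 9))
```

The asymmetry in the comments of the snippet, easy to check but apparently hard to solve, is the heart of the P versus NP question the commenter is pointing at.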

  24. Assistant Village Idiot: The most probable outcome was completely overlooked. It is that nothing serious will ever emerge beyond what has already been done. This belongs in the same category as flying cars, quantum computers, or nuclear fusion energy: billions spent over many decades, with little to show in return.
