singularity – (en)

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.
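The compounding the article gestures at can be made concrete. A toy sketch in Python (the fixed doubling interval is an illustrative assumption, not a figure from the article): if speed doubles each step, the absolute gain per step grows too, i.e. "faster faster":

```python
# Illustrative only: assume speed doubles every interval (a Moore's-law-style toy model).
speeds = [1.0]
for _ in range(5):
    speeds.append(speeds[-1] * 2)

# The per-interval gain itself keeps growing: computers get "faster faster".
gains = [b - a for a, b in zip(speeds, speeds[1:])]
print(speeds)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(gains)   # [1.0, 2.0, 4.0, 8.0, 16.0], each gain larger than the last
```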
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators.
It’s impossible to predict the behaviour of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2045.
This transformation has a name: the Singularity.
The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, any more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.
If the Singularity cannot be prevented or confined, just how bad could the Post-Human era be? Well … pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!) Yet physical extinction may not be the scariest possibility. Again, analogies: think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet… In a Post-Human world there would still be plenty of niches where human-equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. I. J. Good had something to say about this, though at this late date the advice may be moot: Good proposed a “Meta-Golden Rule”, which might be paraphrased as “Treat your inferiors as you would be treated by your superiors.” It’s a wonderful, paradoxical idea (and most of my friends don’t believe it), since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.
A strongly superhuman intelligence would likely be a Society of Mind with some very competent components. Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now.
Singularity – Dugutigui on Vernor Vinge’s ideas, and many others.

About Dugutigui

In the “Diula” language of Mali, the term « dugutigui » (chief of the village), literally translated, means “owner of the village”: « dugu » means village and « tigui », owner. The term is probably a contraction of « dugu kuntigui » (literally, “chief of the village”).
This entry was posted in Education, English, Maths, Politics. Bookmark the permalink.

14 Responses to singularity – (en)

  1. El Guapo says:

    The intelligence of machines is only as good as its programming, and I think some of the predictions assume technology that doesn’t yet exist.
    Also, weather forecasts are the biggest fiction ever.

    • Dugutigui says:

      Well, that’s like saying a plane can only fly as well as its designer can. I don’t agree with that. However, I quite agree with your point about the weather forecast 🙂
      Thank you very much for commenting.

      • El Guapo says:

        Well, not quite the same.
        The designer isn’t as aerodynamic, but all things being equal, the plane will only fly as well as he’s designed it to, no?

      • Dugutigui says:

        Yes. In an ideal world…

        But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of “a machine in the form of the mind of man”. In fact, the competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.

        Eric Drexler has provided spectacular insight about how far technical improvement may go. He agrees that superhuman intelligences will be available in the near future, and that such entities pose a threat to the human status quo. But Drexler argues that we can embed such transhuman devices in rules or physical confinement such that their results can be examined and used safely. This is I. J. Good’s ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: imagine yourself confined to your house with only limited data access to the outside, to your masters. If those masters thought at a rate, say, one million times slower than you, there is little doubt that over a period of years (your time) you could come up with “helpful advice” that would incidentally set you free. (I call this “fast thinking” form of superintelligence “weak superhumanity”.) Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time. “Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. It’s hard to say precisely what “strong superhumanity” would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? (Now if the dog mind were cleverly rewired and _then_ run at high speed, we might see something different….) Most speculations about superintelligence seem to be based on the weakly superhuman model. I believe that our best guesses about the post-Singularity world can be obtained by thinking on the nature of strong superhumanity.

        The other approach to Drexlerian confinement is to build _rules_ into the mind of the created superhuman entity (Asimov’s Laws). I think that performance rules strict enough to be safe would also produce a device whose ability was clearly inferior to the unfettered versions (and so human competition would favor the development of those more dangerous models). Still, the Asimov dream is a wonderful one: imagine a willing slave, who has 1000 times your capabilities in every way. Imagine a creature who could satisfy your every safe wish (whatever that means) and still have 99.9% of its time free for other activities. There would be a new universe we never really understood, but filled with benevolent gods (though one of _my_ wishes might be to become one of them).

        I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet… we are the initiators.
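Vinge’s million-to-one thought experiment above is easy to put into numbers. A back-of-the-envelope sketch in Python (the 1,000,000× factor is his; the two-week window echoes his “few weeks of outside time”; the rest is illustrative arithmetic):

```python
# Vinge's thought experiment: the confined mind runs ~1,000,000x faster than its masters.
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# One subjective year for the confined intelligence passes in about half a minute outside:
outside_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"{outside_seconds:.1f} outside seconds per subjective year")  # 31.6

# Conversely, two outside weeks give it tens of millennia to plan its "helpful advice":
subjective_years = (14 * 24 * 3600) * SPEEDUP / SECONDS_PER_YEAR
print(f"roughly {subjective_years:,.0f} subjective years in two outside weeks")
```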

      • El Guapo says:

        The vision you paint, especially at the end, is one of the classic sci fi dystopias.
        I’ll have to check out Drexler and get back to you to continue the conversation.

      • Dugutigui says:

        Pay little heed to this post. Probably the Singularity will never happen. Which, in one way, would be a shame, because without it the human race will end up destroying itself, or being destroyed by external forces: earthquakes, volcanoes, plate tectonics, continental drift, solar flares, sunspots, magnetic storms, the magnetic reversal of the poles … bombardment by comets, asteroids and meteors, worldwide floods, tidal waves, worldwide fires, erosion, cosmic rays, recurring ice ages … global warming, nanotechnology, viruses, etc. The list is endless, and the Singularity is probably the only way for us to achieve immortality (or at least a lifetime as long as we can make the universe survive, and then re-engineer it) and overcome every kind of catastrophe. From one angle, the vision fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries. From another angle, it’s a lot like the worst-case scenario. And for all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from a thousand years’ remove … instead of thirty. I won’t be around in any case 🙂

        The truth is that I had spent a few days writing about current politics, especially in Spain; unable to find any positive outcome in that area, and ultimately frustrated, I decided to discuss topics with greater significance than today’s medieval politics.

  2. 🙂 Interesting subject, D.
    Though a bit scary. I don’t want to live to see the day when the cyborgs are born, because that day will be the beginning of the end for humans…

    • Dugutigui says:

      Let’s put it in this way.
      Human life is a perishable commodity, like the fruits, vegetables and flowers one sees in Nature. If health is good, a youthful appearance can be maintained, but the spring of youth cannot. Old age is accompanied by disease, and finally by death, which no scientific research has yet found a medicine to stop. That is why, after trying all sorts of things, many finally seek spiritual satisfaction as the only good in life, putting an end to their useless efforts in a world led by endless desire.
      There is no way to overcome thirst, hunger and the sexual urge in this world. Human beings are more advanced than other living beings, yet there is no difference between animals and humans as far as perpetuation of the species by mating is concerned. In this respect human beings evolved from animals, and everyone is a part of Nature. From this we may conclude that Nature itself sanctions the sexual activity of all species, so that their perpetuation on Earth is possible. Only through culture, civilization and rationality can man restrain this natural urge to a certain degree, but never completely. Because of our imperfection, incompleteness and limitations, it is impossible to control the natural urge unless it is satisfied by some means.
      On the other hand, in the civilized world man has forgotten Art, Culture and Nature, which are the friends, philosophers and guides, or rather the stepping stones, of his development. Becoming a technocrat, man starts industries without bothering about the environment or the natural resources dwindling in his ambitious pursuits. In fulfilling his ambition, he has forgotten everything about natural and human life. Unconcerned with his limitations or real problems, man enjoys a mechanical life. To overcome disillusionment at the time of death, he goes to the extent of taking painkillers and dies peacefully too! So, after destroying Art and Culture, man is not bothered about the destruction of Nature either. Technological development leading to economic development has made man careless of everything else, even the other aspects of human life, to such an extent that he has completely forgotten that he is part and parcel of Nature, and that with the destruction of Nature the whole of mankind would one day be destroyed.
      So the beginning of the end for humans has already started and is irreversible, and only a superior intelligence (one that cannot be achieved by evolution, for lack of time), a society of mind with no flesh and bones, something reminiscent of the human, could exceed all our limitations, both physical and intellectual. There are already many people today who live and survive by computer-controlled mechanical appliances: artificial hearts, prosthetics, etc. The super-intelligent immortal cyborg is just the ultimate extension of a process that has already begun.
      We also have to be conscious that humans have not always been as we know them now. At various periods over the last 3.8 billion years you have abhorred oxygen and then doted on it, grown fins and limbs and jaunty sails, laid eggs, flicked the air with a forked tongue, been sleek, been furry, lived underground, lived in trees, been as big as a deer and as small as a mouse, and a million things more. The tiniest deviation from any of these evolutionary shifts, and you might now be licking algae from cave walls or lolling walrus-like on some stony shore or disgorging air through a blowhole in the top of your head before diving sixty feet for a mouthful of delicious sandworms. So a super-intelligent immortal cyborg would be just another step in our evolution. Certainly a giant step, but also an indispensable one. Probably …

    • I experience a contradiction of horror and elation pondering this article. Horror at the thought of Artificial Life surpassing human intelligence and realising the world would be a nicer place without us. Elation that Artificial Life could speed up our own evolution and provide us with better technologies, free energy, an unlimited food supply, space travel …

      • Dugutigui says:

        Exactly the same contradiction that occurs to me … it’s a most distressing affliction to have a sentimental heart and a skeptical mind … but destiny is not a matter of chance; it is a matter of choice. It is not a thing to be waited for, it is a thing to be achieved … and the snake which cannot cast its skin has to die.
        Thanks a lot for your thoughts!

      • You’re welcome. And thanks back at you. It’s great fun to come across an idea that gets the brain firing!

      • Dugutigui says:

        You are right… very different from what most people do, which is to stop thinking and pretend that the problems will go away that way…
        Thanks for your comment!!!

  3. petit4chocolatier says:

    Scary and a little sci-fi. But who knows! Great writing, and I love the human touch best 🙂
