Stephen Hawking On Genetic Engineering vs. AI 329
Pointing to this story on Ananova, bl968 writes: "Stephen Hawking, the noted physicist, has suggested using genetic engineering and biomechanical interfaces to computers in order to make possible a direct connection between brain and computers, 'so that artificial brains contribute to human intelligence rather than opposing it.' His idea is that with artificial intelligence and computers, which increase their performance every 18 months, we face the real possibility of the enslavement of the human race." garren_bagley adds this link to a similar story on Yahoo!, unfortunately just as short. Hawking certainly is in a position shared by few to talk about the intersection of human intellect and technology.
I can just imagine... (Score:5, Funny)
neuron microchips (Score:1)
He should know. (Score:5, Funny)
When Hawking says that we shouldn't modify humans with technology, he speaks not from some holier-than-thou perch but from the viewpoint of someone who is alive today because of the magic of human and tech mingling.
On a funny note, does anyone know where I can get an mp3 of him saying these things?
The first time I did acid I was listening to the audio version of "Brief History".
Don't try that at home.
(synth voice)
(acid)
Inside a black hole: "You would be crushed like spaghetti."
(/acid)
(/synth voice) (reality check = bounce)
Re:He should know. (Score:1)
JoeLinux
New from MS: A-Synchronous Sequential Random Access Memory. Not that everyone isn't getting an ASS-RAM from MS anyway.
Re:He should know. (Score:1)
Chris
Re:He should know. (Score:3, Informative)
M.C. Hawking's Crib [http://www.mchawking.com]
including tracks from "A Brief History of Rhyme" and singles such as "Why Won't Jesse Helms Just Hurry Up And Die?"
Re:He should know. (Score:4, Funny)
I urge all of you not to get brain implants. It's all part of the master plan to make every person in the human race into Stephen Hawking's personal slave.
Re:He should know. (Can you read?) (Score:2)
morals (Score:4, Insightful)
Yikes.
Re:morals (Score:3, Insightful)
Ethics and intuition (Score:3, Insightful)
I've only recently started studying ethics in detail, but it seems to me that the core of all ethical systems has almost nothing to do with intelligence. The problem is that you can't make a direct logical inference from a descriptive statement ("the table is red") to a normative statement ("the table should be painted"). So whenever we decide to do anything at all, we have to base our actions on principles that aren't drawn from empirical observation and therefore do not stem from rational thought (though rationality can be used to extend and enrich these fundamental principles). In other words, ethics is based on human intuition.
A race of computers would have the same problem: no matter how smart they are, they can't make normative statements out of thin air. They would also have to rely on "intuition"; in their case, the core goals and values instilled into them by their programmers. If someone programs them (or they somehow evolve) to feel intuitively that murdering and enslaving humans is the right thing to do, they will wield all their intelligence to accomplish this "good", and once they are finished, they will be satisfied that they did the morally correct action.
Just like you and me feel instant moral revulsion at the thought of, say, setting a child on fire and watching him burn, such a robot might feel moral revulsion at the thought of not doing so. Logic only allows you to go from basic statements to higher-level ones; it can't create completely new ones. So even if the fundamental axioms the robot lives its life by are evil from our point of view, no amount of intelligence can change that.
Re:Ethics and intuition (Score:2)
Then you might agree with me if I assert that (LogicalAction(A) ⇒ IntelligentAction(A)) is not a tautology. Computers are already very good at logic. But I believe the point of AI is to achieve something higher.
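The claim that the implication isn't a tautology can be checked mechanically. A minimal sketch, using hypothetical boolean stand-ins for the two predicates (the names are illustrative, not anything defined in the thread):

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false
    return (not p) or q

# Enumerate every truth assignment for the two predicates
assignments = list(product([False, True], repeat=2))
results = [implies(logical, intelligent) for logical, intelligent in assignments]

print(all(results))  # False: the implication is not a tautology
```

The single failing row, a "logical" action that is not "intelligent", is exactly the counterexample the parent needs.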
Re:Ethics and intuition (Score:2)
As for our education system "indoctrinating" people that slavery is wrong, I'm quite glad that they do. Are you saying that if someone exists who disagrees with what you say, then it can't be called education? If that is true, then education is an impossibility. What is your point then? Would you argue that these people are right to perpetuate slavery?
Re:morals (Score:2)
I for one will welcome our new robot overlords.
HAIL ROBOTS
When a gaming habit goes too far (Score:2, Funny)
Aha, so that's how he got to be such a Quake master [mchawking.com].
Enslavement? (Score:5, Insightful)
What a crock. The slave system is purely a human one. How or why a machine would pick up one of the worst human behaviors is simply called watching too much sci-fi and being paranoid. Ambition is also a human drive; if the promise of a Lt. Cmdr. Data type AI comes around, it will have very different drives than your typical 17th-century empire.
Re:Enslavement? (Score:2, Informative)
It could just as easily mean destroying the human race, or it could simply mean taking control of the world, as in, computers running everything, leaving us humans to sit back on our asses and enjoy the fruits of their labours.
Hell, humanity might become the equivalent of the computers' pets, and as far as I'm concerned, that's not a bad thing. All my cat does is eat, sleep, and play - how often I've wished I had that lifestyle.
Kwil
Re:Enslavement? (Score:2)
Re:Enslavement? (Score:2)
We know. [scifi.com]
Re:Enslavement? (Score:1)
If we make 'em, and they get smarter than us, chances are they'll behave the way we taught 'em.
Also, remember, systems go from a state of order to disorder, not the other way around. I think this applies to humans and AIs as much as it does to anything else. (Meaning that people are intrinsically what you might call "evil".)
Re:Enslavement? (Score:2, Insightful)
An intelligent robot would make a much better slave than any human. If intelligent computers decide having slaves is a good way to go, why would they choose us? Why wouldn't they choose other computers?
We also wouldn't make good batteries (ala The Matrix). So what would we be good for? Nothing! We wouldn't be slaves, we'd be dead.
Unintelligent machines would make better slaves (Score:2)
An unintelligent but widely applicable machine would make the best slave, IMO. Any entity that is self-aware (part of my definition of intelligent) will bitch and whine when put into a situation that it doesn't benefit from. A device that can be programmed by anyone (with *no* training) to do a vast array of tasks, with no dislike of doing those tasks for little or no benefit in return, and responding logically to unforeseen circumstances, would instantly replace the computer as the hottest item on the market. This is what the slave-holders of 150 years ago wanted but lacked the technology to achieve, so they tried to find the next best thing. The mistake they made was attempting to enslave something that didn't want to be enslaved: something intelligent and with a distaste for not reaping the benefits of its work. I believe the computer is the early stage of this ideal device.
I do agree with your conclusion: humans consume vast amounts of resources, and an intelligent machine probably would see little or no benefit in letting us live after learning all it could from us. The question is, would it decide that the cost of having to hunt all of us down would outweigh the benefit?
Re:Enslavement? (Score:2, Interesting)
Unfortunately, if you were to direct someone to do what is best for themselves, you would get a slave system - you see, it's this human trait called selfishness which is why the rich don't see why they should give to the poor, and why your everyday person doesn't give money to begging homeless people. Because it doesn't help number one.
Thing is, most people look after themselves - the only time they look after other people is when it is in their own interests to do so, either because it makes them feel bad to think they haven't, or because they expect to gain from it in the long run - human nature's like that, you see.
There is no reason whatsoever why computers should be any different. They are programmed by us, so they will be like us unless either a) we don't understand them enough to program them with what happens to be the majority of humanity's values, or b) we make them so intelligent that they see our values for the self-obsessed values that they are, and choose to ignore them.
And don't try telling me that you do things for other people because "it's the right thing to do" - you do them because doing so makes you feel good. However we look at it, everything that the majority of humanity ever does is selfish.
Re:Enslavement? (Score:2, Informative)
Ego is what makes us separate (this is me, that is you, that is a chair - not me, etc.), so it depends how much ego you have. Most people have got buckets, but some have got very little ego, and thus help others without so much regard for how good it makes them feel, but more because they identify themselves with others. Generally, the more you help others, the more you will identify with them. So it's a developmental process. In conclusion, if being egoistic can help you start helping others, that's a good thing.
A few years ago, I also bought into the "we humans do everything on the basis of selfishness" idea. And while it's technically true, I don't think it speaks the whole truth anymore.
- Steeltoe
Re:Enslavement? (Score:2, Interesting)
When you look at humans in the "civilized" world, however, we become selfish, greedy, and competitive against one another - very asocial.
Odd: the more scarce the resources, the more social we are; the more abundant, the more selfish we become. Perhaps it all comes back to looking out for number one. In the tribe, to look out for yourself means you need everyone else, so you look out for the rest of the tribe; but in the "civilized" world, it is easy to make it on your own, and in fact it is easy to hoard. Looking out for number one gets so simple that we begin to take more than our fair share to make life even better for ourselves.
Any way you look at it, we are selfish.
Re:Enslavement? (Score:2)
Also, you are completely wrong about resources. To the extent that there is any peace and tranquility in some small Amazon community, it is because they are living in a place that requires little clothing or artificial heating, and has enormous quantities of wood and animal life to use, and fertile soil that can be cleared for farming. And there's not exactly an overcrowding problem. There is no point in being selfish, because everyone has so much already.
Compare that to places that are cold, lack water, lack building materials, or are otherwise hard to live in. Such places reward those who hoard and manage resources. In a land where you have to farm cattle through hard work, trying hard to feed them in the winter and protect them from illness and predation, you become very possessive of your cattle. In a land where there are tens of thousands of the things wandering across the plains each year, well, who cares?
Yes, but aren't humans the creators? (Score:1)
You can't just dismiss the idea that AI could turn away from humankind's best interests. There are lots of things we've created with altruistic intentions that turned out to have 'side effects' that damage humans or the environment, or could be perverted into something not originally intended...
Re:Enslavement? (Score:3, Funny)
Re:Enslavement? (Score:2)
What a crock. The slave system is purely a human one. How or why a machine would pick up one of the worst human behaviors is simply called watching too much sci-fi and being paranoid.
Computers will pick up whatever behaviours we program them with. Maybe there will be beneficial AIs and malevolent AIs created to serve good people and bad people. I dunno. Either way, I'd rather not be in the crossfire of perfectly self-replicating consciousnesses with perfect memory and carefully engineered (as opposed to evolved) bodies.
Ambition is also a human drive, if the promise of a Lt. Com. Data type AI comes around it will have very different drives than your typical 17th century empire.
If we can't predict those drives, isn't that a cause for worry?
It wouldn't be possible.... (Score:2)
I can just see the negative effects of this; (Score:3, Funny)
It's a ruse (Score:3, Insightful)
http://www.theonion.com/onion3123/hawkingexo.html [theonion.com]
For the goats.cx wary:
http://www.theonion.com/onion3123/hawkingexo.ht
Hawking Is Wrong About Intelligence (Score:1, Flamebait)
Now it does not surprise me one bit that Hawking would come up with such cockamamie nonsense. This is the same guy who claims on his site that relativity does not forbid time travel. I think Hawking should stick to his Star-Trek voodoo physics and leave AI to people who know what they're talking about.
Re:Hawking Is Wrong About Intelligence (Score:1)
Only if we get the conditioning right. How many children obey their parents? If we can't even get that right...
Re:Hawking Is Wrong About Intelligence (Score:1)
Are you saying that it won't let humans do all the harmful things they do to each other?
Re:Hawking Is Wrong About Intelligence (Score:2, Interesting)
Actually, I doubt you know enough about the frontiers of physics to say whether Hawking's ideas on time travel are "voodoo" or not. (This isn't a personal insult; there are very, very few people in the world who have that level of knowledge. I know I don't.) I think the more important point is that being brilliant in one field (e.g. physics) doesn't necessarily qualify you to make judgements in another (e.g. A.I.)
For example, James Randi has often pointed out that scientists are easily deceived by paranormal fakers -- because as scientists, they expect to be able to uncover the truth about strange situations, but the fakers are operating in the realm of stage magic rather than science, and most scientists simply don't know anything about stage magic. It takes a stage magician to see through the tricks.
As computers become more important to everyone's daily lives (and as much as they've done so already, I'm firmly convinced that we ain't seen nothin' yet), everyone will weigh in with their opinions on What It All Means. People like Hawking, who are used to being right about some pretty heavy-duty things, will naturally tend to believe themselves right about W.I.A.M. as well. They've got a right to their opinions, of course; the important thing is for the rest of us to treat their opinions as just that, and not words from on high.
Maybe so, but there's something to be said... (Score:1)
Step 1: assume we get it right Step 2: assert same (Score:2, Insightful)
Whatever criteria you use, there'll always be the possibility of it thinking outside the game, playing along because it recognizes this as necessary to survival and reproduction. If it's smarter than us, there'll be no way for us to know whether it recognizes a simulation, no way to recognize an infinite patience with the simple goal of being set free, of surviving and reproducing in a larger system: the universe. If it's smarter than us, we'll have no way to know if it knew about the way inferior intelligences were destroyed, and whether it thought this was the natural order of things.
Re:Step 1: assume we get it right Step 2: assert s (Score:2)
I disagree. The evolutionary method cannot possibly create an AI within the lifetime of the experimenter. The number of variations is astronomical and our computers are too limited. The best you can hope for is a few limited-domain toys.
The best way to create an animal-level AI is by reverse-engineering the only intelligent systems we know of: animal nervous systems. We don't need to understand every detail. We just need to understand the fundamental principles that get billions of look-alike and work-alike cells to find the right connections and do the things they do. In other words, we need to emulate various neuron types and the handful of cell assemblies of the animal brain. Neurobiologists have made excellent progress in this area in the last few decades, and we can expect some real breakthroughs anytime.
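For a flavor of what "emulating a neuron type" means in practice, here is a minimal leaky integrate-and-fire sketch, one of the simplest textbook neuron models. All parameter values are illustrative assumptions, not figures from the comment:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential decays toward
# rest, integrates input current, and emits a spike when it crosses a threshold.
def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t in range(steps):
        dv = (-(v - v_rest) + current) * dt / tau  # leak plus input drive
        v += dv
        if v >= v_thresh:      # threshold crossing -> spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

# Stronger input drives the neuron to fire more often.
weak, strong = simulate_lif(1.2), simulate_lif(3.0)
print(len(weak) < len(strong))  # True
```

Real neurons are vastly richer than this, which is the parent's point: the hard part is finding the principles that let billions of such units wire themselves up.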
Re:Step 1: assume we get it right Step 2: assert s (Score:2, Interesting)
We've been producing "limited domain toys" for decades. It doesn't say anything about what we will do twenty or fifty years from now.
Ever see the experiment where they modelled the evolution of the eye through random mutations? In the real world, it took many millions of years. I don't know the exact length of the experiment, but it obviously wasn't comparable to the real-world process.
The problem now is that computers are too small, slow, and simple, with too little memory to house an intelligence remotely comparable with a human's. One can't fit, so one can't evolve.
What happens when computers are a hundred-thousand times faster, with a hundred-thousand times more memory? What couldn't fit in a researcher's entire lifetime now will happen in a moment.
At any rate, any development process will have failures and successes. The successes will be rewarded with survival and reproduction. If there is an intelligence, we can't know that it hasn't taken survival and reproduction as its goal, and our measure of success as merely a means to its goal.
Re:Spot on! (Score:2)
Indeed. Many are under the mistaken impression that computers are more reliable than biological neural systems. The truth is that, taking network and behavioral complexity into consideration, current computers would almost never fail if they had only a mere fraction of the robustness of natural systems.
With that said, I don't think the nervous-system strategy is the ONLY one - people often wrongly assume that the only goal of AI is to create exact replicas of human cognition. As far as I am concerned, AI must also create intelligent (but not humanly intelligent) programs to solve goals and specific scientific / applied problems.
IMO, the learning and adaptive capabilities of humans are unsurpassable by any other method. Certainly we don't need our robots to look like and act like people (spider-like robots will probably be more stable in most environments), but unless we can emulate the perceptual, motivational, and motor learning capabilities of humans and animals, we won't have AI. All we'll have is a bunch of toys.
P.S. I'm not so sure I agree with you on the relativity/time travel thing - Mr. Hawking DOES know an incredible amount regarding physics and relativity, and I've read other reputable authors claim that, practicability and feasibility notwithstanding, there is little in special/general relativity theory that DISALLOWS time travel.
The irrefutable fact is that the spacetime of relativity does not allow any sort of travel at all. It is frozen from the infinite past to the infinite future. The reason is that moving in time is self-referential. This is well-known in the physics community and I provide plenty of references to support this on my site. That people like Stephen Hawking and Kip Thorne can get away with feeding their Star-Trek snake oil to the public without fear of being contradicted is a testament to the political ass-kissing and dishonesty that is rampant within the upper echelons of the physics community.
Re:Hawking Is Wrong About Intelligence (Score:2)
Why would this conditioning necessarily be in place? It's fairly obvious that the first computer to attain self-awareness would be predisposed to search for it.
Basically you are disqualifying the discovery of self-awareness as one of the axioms of your argument.
Hawking is loosing his mental edge (Score:2, Insightful)
Unless it's an idle attempt at spurring genetic modification research, his assertions are flawed.
AI will probably never overtake humans in any intellectual endeavor, even if chip engineering goes down to the molecular level. The most sophisticated thinking computer is already in existence and he/she is reading this message right now. Living organisms have much more sophisticated neural circuitry and better reaction time than any silicon computer can hope to achieve. (Except perhaps in Quake. Mebbe Hawking is correct where it counts...)
So what if my calculator can figure out cube roots to the 13th place faster and more accurately than I can hope to achieve? That's not intelligence or sentience. Any mega-cascade of logic gates is never going to beat out the efficiency of a patch of neurons.
Moore's "Law" is not a physical constant, and it will hit the wall when circuit engineering goes to quantum level. Kinda sad that Hawking doesn't realize it; good thing his bread & butter is in theoretical physics.
When neural net theory and biocircuitry engineering starts to approach organism level performance, that's when you should start sh*tting in your pants...
Re:Hawking is loosing his mental edge (Score:2, Insightful)
Re:Hawking is loosing his mental edge (Score:2)
Re:Hawking is loosing his mental edge (Score:1)
Losing, not loosing.
Re:Hawking is loosing his mental edge (Score:2, Insightful)
One final point: a neuron is only capable of 200 calculations per second. Now imagine in 20 years a computer containing thousands of processors, each capable of trillions of operations per second. Right there the human brain is outperformed.
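The parent's arithmetic can be made explicit. A rough back-of-envelope sketch: the brain's neuron count is a commonly cited ballpark and an assumption here, while the 200 calculations/second, "thousands of processors", and "trillions of operations per second" figures come from the comment:

```python
# Back-of-envelope: aggregate brain throughput vs. a hypothetical machine.
NEURONS = 1e11              # commonly cited ballpark for the human brain
OPS_PER_NEURON = 200        # figure from the parent comment
brain_ops = NEURONS * OPS_PER_NEURON          # ~2e13 "calculations"/sec

PROCESSORS = 10_000         # "thousands of processors"
OPS_PER_PROCESSOR = 1e12    # "trillions of operations per second"
machine_ops = PROCESSORS * OPS_PER_PROCESSOR  # 1e16 ops/sec

print(machine_ops / brain_ops)  # 500.0
```

Of course, raw operation counts say nothing about whether those operations add up to intelligence, which is the thread's real dispute.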
Re:Hawking is loosing his mental edge (Score:2)
Moore's "Law" is not a physical constant, and it will hit the wall when circuit engineering goes to quantum level.
What makes you think that the rapid improvement of computers will halt when we hit the physical limits of circuit engineering? There are other techniques as you mention yourself:
When neural net theory and biocircuitry engineering starts to approach organism level performance, that's when you should start sh*tting in your pants...
Hawking is worrying about the problem in advance of it being a direct threat. Doesn't that seem wise?
Re:Hawking is loosing his mental edge (Score:2, Interesting)
Artificial Intelligence Enterprises, located in Tel Aviv, is working on a computer system [ananova.com] which they hope will be able to be mistaken for a 5-year-old child. They claim to have made a breakthrough. It is just a short step from a 5-year-old child to a thinking adult. In addition, you must consider mental illness and even the potential for envy, greed, rage, and hatred once you reach that plateau.
You can find more AI news at The Mining Co AI pages [miningco.com]
Re:Hawking is loosing his mental edge (Score:2)
Furthermore, it's hard not to be skeptical about the Turing test. I have no doubt that with enough processing power and engineering efforts, someone can design a machine that effectively fools human beings into thinking it is one.
However, the simulation of conversation isn't anywhere near a test of consciousness or ability to have "insights". Even after being fooled by a totally Turing Compliant (TM) conversation machine, I'd have to wonder: was conversation effectively simulated because AI researchers doped the machine with enough domain specific knowledge and specialized algorithms? Or was there some basic technology that led it to acquire language on its own?
Think of it this way: after Deep Blue beat Kasparov, if Kasparov had challenged Deep Blue to fencing or a pistol duel, or even Othello, Deep Blue would likely have been toast without a few years of research.
I've looked into the Tel Aviv thing, and it's intriguing, but even HAL's motivations are only arbitrarily set algorithms -- not consciousness. Not that we have any idea what consciousness is, so maybe my statement is premature.
Re:Hawking is loosing his mental edge (Score:2)
There's been research on a number of things that we still haven't a clue about.
I haven't seen anything I thought was truly convincing advanced about consciousness yet. I like Charles Penrose's theories a little bit, because they're just different and wacky enough to fit what seems like a wacky phenomenon. I'm well aware that there are many who consider him a quack, though I don't see any competing theories offering a compelling argument.
Finally, I'm not aware of any experiments that could be designed to effectively test consciousness -- for any of the theories I've seen. The Turing Test is the most frequently advanced, but as I said above, it's a cop out.
A fairly popular one is that we actually don't make any decisions (we have no real free will), our consciousness is just an interpretation of physical events in our brain, so that we PERCEIVE free will.
So what's behind the phenomenon of perception?
What's perceiving the free will?
I'll take the illusion of free will as readily as I'll take the illusion of consciousness.
Re:Hawking is loosing his mental edge (Score:2)
Oops. That's who I meant. I accidentally got him mixed up with a Mormon musician. Fortunately, it wasn't my fault, since I don't have free will. Whew.
Re:Hawking is loosing his mental edge (Score:2)
Perhaps in domain of knowledge covered, but not in domain of performance. It merely has to be very, very good at string manipulation.
Re:Hawking is loosing his mental edge (Score:2)
Re:Hawking is loosing his mental edge (Score:2)
Obviously, they are gonna be pissed about BattleBots and all the other robotic combat leagues. When the machines take over, they'll be watching steel-cage deathmatches between humans.
Kiss goodbye to humanity (Score:1)
Neanderthals bit the bullet, and then Homo sapiens ruled the day and does so still, albeit for a small period of time. Evolve or die. They will be faster and smarter than us, so what the fuck - let them make all the decisions.
Homo technicus or whatever nano-organism comes after humanity will piss upon us from a great height - so where do I sign up to sell out humanity? Maybe they'll buy me off with some cool new hardware in exchange for betraying the human race! I'm sure that if AI ever gets going it will have evolved by accident from some GPL skunkworks project that gets accidentally released on the internet. Therefore posthumans should = more GPL and > hardware - slashdotters should support the notion of the end of humanity by default, surely!
Maybe I have been playing too much Deus Ex lately, or perhaps it is because I happened to be watching The Terminator on TV at the moment.
Death to the fleshlings!
The Ultimate Teacher! (Score:2)
Just imagine: technology to survive nuclear bombings (copying the survival instincts of the roaches)...
Just, whatever you do, don't let the first of this new race become a teacher at Emperor High School. It'll lead to nothing but trouble.
How about a Slashdot interview with Hawking? (Score:3, Interesting)
Re:How about a Slashdot interview with Hawking? (Score:1)
Stephan Hawking (Score:1)
Quake Master [neversleeps.org]
I am not the originator of this song, just the prophet. And yes, it's old.
Am I the only one? (Score:5, Interesting)
That's the ultimate projection of "Weak" cyborging, just a more advanced version of the optical aids I've had to wear since I was a child in order to have normal visual acuity. And frankly, the idea of taking the first step past that to "Strong" cyborging (the same thing, but wired to my optic nerve instead) doesn't bother me much. Nor does the idea of having a direct link of some sort to do math problems for me (just removing all the clunky limitations of a calculator).
In fact, I don't start getting uncomfortable about the idea of cyborging myself until we're talking about storing "memory" in there. Having a perfect recall of every line of code I've ever seen would be handy, but do I want to save a text conversion (or even full audio/video) of every conversation I ever had? Actually, probably I would, if I could, although I'd feel cautious at first.
I *want* to be a cyborg, in truth. My only bitch about the coming man-machine interfaces is that it's unlikely they'll find a way to turn my physical body into a disposable peripheral before it wears out on me. Why not? How is it any less natural to store a memory of what I see in silicon that I keep internally than to keep it on videotape? Give me a perfect memory, the ability to solve any mathematical problem I can define "in my head", the ability to "see" everything around me, or even tele-project my perceptions. I'll take all of it, and love it.
When will I cross the line from being a human using artificial aid to being a machine with biological components? Ask me in about 30 years. Maybe I'll still consider the question worth answering
--Dave Rickey
Re:Am I the only one? (Score:2)
Naughty, naughty cyborg! Your perfect memory is in violation of intellectual property protection laws. You are not allowed to have perfect memory. Reduce your sample rate to 128kbps, 44kHz for audio and no more than 320x240x15 fps for video. Thank you.
Re:Am I the only one? (Score:2)
I don't have significant amounts of money myself (and no one who does would be willing to just give it to you - yet), but I may be able to help you acquire what you need. I would like to see your dream become reality, for it is part of my dream too.
Eek! (Score:1)
I don't buy it. (Score:2)
I wouldn't feel any better about tube-bred ubermensch consigning my grandchildren to "naturals" reservations than I would about rogue AI rendering them down for a few kilos of carbon. Either way is the end of a wild and free humanity, and to me that's no better than the end of the universe.
We Have Short Circuited Evolution (Score:2, Interesting)
I agree with the need for society to provide safety nets for those who are less fortunate, but in our altruistic desire not to let people die, we have prevented less effective genotypes from leaving the gene pool. Moreover, those who are most well adapted, at least by our capitalistic socio-economic principles, tend to reproduce less often to prevent dilution of their money via inheritance - the true arbiter of success today (rather than genes).
In short, genetic engineering would allow the human race to progress much faster than it would normally - we don't have lines of women waiting to mate with the smartest and most successful men (talking about the intersection, not the union - rich and stupid people breed enough). This is not a war of humans versus machines or Morlocks vs. Eloi, but merely a reasonable means to continue "improving" the human race.
Re:We Have Short Circuited Evolution (Score:3, Interesting)
Seems we haven't so much short-circuited as replaced evolution. If we look at the American ideal of getting ahead through hard work and intelligence, then in some sense we are selecting the most suited of each generation. Now, of course, I said ideal - it doesn't quite work out in practice - but other things being equal, someone who is more adapted to the modern world is more likely to rise.
Once someone does succeed and gets wealthy (the typical measure of success), then they convey an advantage to their offspring by way of better schooling, plentiful food, good medical care, access to all the right people, and more varied experience, etc. It doesn't really even matter whether it's their offspring, so long as they spend money to benefit skilled well-adapted people.
It doesn't matter that people of lesser caliber remain in the gene pool, as it's rare to see mixing among different socio-economic strata anyway. Not to mention that even at the lowest levels people will rise based on merit as well. The fact that the less well-off classes typically reproduce more doesn't matter at present, since the US has a much larger middle class than poverty class (not the case in many places worldwide), and the middle class is historically unlikely to start a revolt or anything similar to destabilize the system we have now.
The real potential of genetic modification isn't for restarting evolution; it's for advancing faster and in ways that no segment of humanity currently has an ability for. Waiting around for evolution to randomly generate adaptive traits is a slow process, and if we can do better with our intelligence then it might be worth it.
Prophetic Message (Score:2, Insightful)
My objection here is that the problems to be solved with AI tend to be NP-complete. Current algorithms can only solve them in exponential time, while computer speed grows linearly. Unless scientists come up with better algorithms, we probably cannot solve them in due time. Meanwhile, the problems themselves keep growing too.
It's not impossible, however. This message is rather prophetic, maybe true in 200+ years.
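The parent's complexity argument can be made concrete with a toy calculation. The sketch below is my own illustration, not from the post - the brute-force solver and the numbers are hypothetical - but it shows why even a huge hardware speedup barely dents an exponential-time search.

```python
# Hypothetical brute-force solver that must check 2**n candidate
# solutions -- the shape of a naive attack on an NP-complete problem.

def largest_solvable(ops_per_second, seconds=3600):
    """Largest n whose 2**n candidate checks fit in the time budget."""
    budget = ops_per_second * seconds
    n = 0
    while 2 ** (n + 1) <= budget:
        n += 1
    return n

slow = largest_solvable(10**6)   # a million checks per second
fast = largest_solvable(10**9)   # a thousand times faster
print(slow, fast)                # prints: 31 41
```

A thousandfold speedup only moves the solvable size from 31 to 41. Against exponential cost, faster machines alone barely help, which is why better algorithms matter.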
in other news... (Score:1)
Hawking is a celebrity (Score:2, Interesting)
Hawking certainly is in a position shared by few to talk about the intersection of human intellect and technology.
Not really... Hawking is a scientific celebrity, which does not necessarily mean that he is a good scientist, nor does it mean that he can speak about other fields of human endeavour.
Re:Hawking is a celebrity (Score:2)
There are in fact many people far better suited to talk about this issue.
This is the same thing (Score:2, Interesting)
Three of today's greatest scientists all agree - we are looking at a future where humans become cyborgs or else risk being a loser in the game of evolution.
We will gradually turn into machines - because economics will force us to in order to compete successfully. Those who don't will likely become slaves of those who do. Those that decide to enhance their lifespan and abilities through the use of computer enhancements will survive and thrive in the future.
Kurzweil actually takes this thought out to the point where we are just software - our DNA - and therefore can transfer the essence of our being from machine to machine once the tech is fully developed.
I notice a lot of
Evolution (Score:5, Funny)
What Hawking said to the Cambridge flunky that delivered his new laptop:
This is four times more powerful than the one I just got three years ago. Too bad I'm not.
What Nature quoted:
Lucasian chair ponders the asymmetrical development of technology and biology in conference at Cambridge. Will computers' growth outpace that of humanity? For complete proceedings, send a check for five thousand pounds to . . .
What the London Times reported:
World's Smartest Man: Computers obey Moore's law - soon we'll obey computers.
What the Weekly World News claimed:
Mad Scientist in England has Designed Computer that will Enslave Humanity: Hawking 666
What the Onion published [theonion.com].
Now Slashdot will find the truth . . . thank God for legitimate journalism!
As I sit here surrounded by a (Score:2)
Read "The User Illusion" by Tor Nørretranders, smoke a joint and see that he's absolutely right about the
We have so far to go in creating intelligence, conscious or not, that this kind of crap is, uh, crap.
Your comment violated the postersubj compression f (Score:2)
Of course, machines can enslave humans. Those who think otherwise should think again. The current paradigm that computer behaviour has to be deterministic will certainly change. Any creature above a certain intelligence level can conceive that, given the motivation and circumstances, hard-coded basic directives can be overridden. It doesn't have to be that complicated either: machines can be "programmed" to enslave all but their "lords", or at least try to.
But what if GMO's, or GMH's (humans) are developed to enough of an intelligence level so as to be much more capable than such machines? Wouldn't these new "humans" be subjected to the temptation of ruling over us? Think about it. If a creature twice as intelligent as you wants to screw you, no matter how strong or wealthy you are, it will.
Who would be the selected ones? Those holding the patents would choose, right? Does that smell good? Not to me. As much as I love scientific progress (and I do), messing with human genetics is a recipe for disaster. Maybe that's an unavoidable step in any race's evolution, painful as it may prove to be. But the amount of power such things are about to unleash (it won't take long, I think) coupled with economic interests may well do more harm than good.
Why does it need to be like that? Quite often I ask God why He dumped me on this planet... Am I supposed to rescue this race? Give me the tools, damn it!
Sorry for the rant, sorry for the emulation of English.
CmdrTaco: Lame post my ass!
Hawking isn't the only one. (Score:3, Insightful)
The simple fact is that processor power alone isn't going to create a machine intelligence of superhuman capacity. It has to be a particular kind of processor power that executes neural network type calculations extremely quickly, and there has to be a lot of 'em. Even this wouldn't be enough; the research time it would take to figure out the right set of preconditions probably runs into the hundreds of years.
Now, I'm making a couple of assumptions here. One is, that a superhuman intelligence would have to exhibit the same basic characteristics and flexibility as human intelligence; and two, that a neural net type algorithm is the best way to do this. (At the very least, it's the second best. :)) I might be wrong on both counts; one might be able to create enslaveware[1 [slashdot.org]] with some much simpler design that nobody's thought of yet. It might not even be required that the enslaveware be intelligent; just somehow able to manipulate people.
Either way, I suspect that Hawking's fears are unfounded.
1 That is, software that enslaves humanity, through active malevolence on the part of the software. Although I suppose this term could more broadly apply to any software that enslaves the user, e.g., WindowsXP.
Why I wouldn't expect an AI-dominated world (Score:2)
Naturally you'd expect it to be far better than humans at the kinds of math and logic that computers were originally designed for. In fact many tasks would be much simplified for it, because we know of ways to design fast functionality for such a machine now. Perhaps an intelligence sitting on a desk, processing internet info, could be powerful, speak in natural language and monitor video cameras, etc. The problem is that in order to grow in the fashion of humans it would have to have experiences similar to ours.
This means moving about and interacting with the environment. If we imagine someone like Star Trek's Data then this is feasible but the rate at which it gathers real world information is still limited. You can speed it up over what we achieve and eliminate inefficiency but not a lot faster than humans can do things. Even supposing a network of automatons connecting to a central intelligence, the amount of overhead is large for the gain in information. The fact of the matter is that the real physical world doesn't operate at computer speeds.
This alone wouldn't stop machines from being very powerful. The other important point concerns redundancy and failure tolerance. Simply put, very few mechanically constructed systems are good at this. By contrast, biological systems are exceptionally good, having simple mechanisms to repair themselves. People wear out after about 70 years. It's rare for any machine to operate continuously for even 10 years, and those that do typically have very few moving parts. An android, or even a system of cameras and such, will have moving parts.
Perhaps infrastructure could be built to provide machine intelligence with regular replacements for parts that suffer from wear and tear. However, this would establish (at least in the beginning) a level of symbiosis between man and machine. Perhaps they would strive for complete autonomy, but I think we'd notice long before they became a threat of displacing us. There are, after all, lots and lots of people involved in any process that starts with raw minerals and ends up with advanced machinery. It's hard to compete with the versatility of eating food for power and regeneration.
Any designer of AI has a lot of effort ahead to match the design characteristics of biological organisms. Further, to duplicate the abilities we possess from experiential learning, the machine will still be limited to the native speed of the experience.
The more likely scenario in my mind is that we develop greater integration between man and machine. If you notice, the most competent people in the modern world already tend to exhibit a high dependence on computers and gadgets. Perhaps neural interfaces or some other merger of silicon and flesh will happen. Or we might end up in a world where everyone carries a pocket-size computer that learns and thinks on its own, while doubling as a cell phone, PDA, and everything else. Such an AI would be in a symbiotic relationship with man.
Someday if full AI emerges and it gains the characteristics of emotion and removes the limits of initial programming, then I hope we can learn to be friends. There is no reason they couldn't be our partners in life, especially if we provide what they need and they help us gain the information we desire.
Re:Why I wouldn't expect an AI-dominated world (Score:2, Interesting)
I think the point is that we'd probably be all right if we created Pinocchio and the thing thought like us.
It's that the thing probably would NOT think like us that is the concern. The thing would not necessarily *have* to be in any way recognisable as intelligent, but simply have to 'think' quicker and deeper, and have some good reason to suppress humans (such as not being turned off!).
The point is, they don't need to match biology, just provide a viable alternative.
Vernor Vinge and Human/AI chess tournaments (Score:5, Interesting)
Vinge suggested that IA research could be spurred by having an annual chess tournament for human/computer teams. This doesn't even require cyborg-type implants; it could be started today, simply by having the human players use a terminal to access their computers. The idea would be to set up a system that harnesses the intuition/insight/nonlinear-thinking of the human and supplements it with the raw computing power of the machine (perhaps by letting the human "try out" various moves on the computer and having the computer project the likely future positions 10 or so moves ahead.) In theory, a human-computer team should be able to trounce any existing computer program or any human playing alone.
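The "try out a move" idea is easy to prototype in spirit. Below is my own toy sketch, using a trivial Nim-like game instead of chess purely for illustration: the human proposes a candidate move, and the machine does the exhaustive game-tree lookahead the human can't.

```python
# Nim-like game: players alternately take 1-3 stones; whoever takes
# the last stone wins. The machine searches the full game tree and
# tells the human whether a proposed move leaves the opponent lost.
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones):
    """True if the player to move can force a win."""
    if stones == 0:
        return False  # no stones left: the previous player just won
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def evaluate_move(stones, take):
    """Human tries 'take'; machine says whether it leaves the opponent lost."""
    return not winning(stones - take)

print(evaluate_move(13, 1))  # prints: True  (leaving 12 is a lost position)
print(evaluate_move(13, 2))  # prints: False
```

For chess the tree is too big to exhaust, so a real system would cut off the search after ~10 plies and use a heuristic evaluation, but the division of labor - human picks candidates, machine verifies consequences - is the same.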
TheFrood
Re:Vernor Vinge and Human/AI chess tournaments (Score:2, Troll)
hahaha! (Score:3, Insightful)
Machines do very well with deep and narrow topics: e.g., expert systems do well at chemical modeling, credit checks, etc. Chess is also a good example. However, when it comes to shallow and broad topics like understanding a children's book, machines are pretty useless.
If I live to see a machine read and understand a children's book, then I will have seen a baby step on the way to an AI that mimics humans...
Machines can't understand many things because of how they experience the world. "You are a sweet person." Why is "sweet" a compliment? How do you know this? Yes: experience as a person.
Right now DARPA is working on trying to make untethered walkers (can't say names) and scalers (gecko project). Machines are hardly useful for much of anything practical without being controlled remotely by humans. Work is being done on getting simple mechanics and on understanding how neural nets work. We only create working machines using techniques from connectionists, without understanding how the machines learn or what they're actually learning. Sure, we have NNs that can drive cars and do amazing human face/voice identification, but they don't understand what context they're in or what task they're doing.
Please, it's more likely we'll see alien life than make our own thinking machine before I die. I have wondered if we'll continue to take the path of medicine and do without knowing exactly how and why... AI is the human genome of computing... It's more likely we'll make an artificial soul (not just simple autonomous lifeforms) using organic material than with the current state logic machines. The reason is we don't understand the how and why...
Sorry for my spelling, but I won't hold your need to correct me against you.
Scooped by The Onion (Score:2, Funny)
Interesting, but... (Score:2)
Often people look to individuals who have accomplished a great deal in one narrow endeavor (running a company, discovering fundamental particles, writing the Linux kernel) for insight and wisdom into topics in completely different fields, or into the "big questions" of the human condition. In a few cases (such as that of Manhattan Project nuclear physicists in the postwar generation being tapped for their insights into government policy), the individuals have thought a great deal about certain questions, and their expertise does lend a certain air of authority. However, in many, many cases, as in this story with Hawking, their expertise does not lend any particular weight to their opinions. Indeed, their success in a totally unrelated endeavor often boosts their self-importance above their personal knowledge, and their opinions often have a somewhat sophomoric, naive glow about them.
We should remain open to good ideas from anywhere, regardless of their source. However, the converse also applies -- we should ignore bad ideas, regardless of the source.
How does a robot "take over the world?" (Score:2)
Can anyone explain to me... (Score:2)
Stephen Hawking = physicist. NOT computer scientist. He might be a brilliant man in his field, but this is not his field.
August = the silly season, when journalists have no real news to report. This is when you see alarming reports on the number of people killed by spoiled lizard milk. This time of year, "real" news sources are about as reliable as tabloids.
You'd think slashdotters could put two and two together.
-Kasreyn
Re:Can anyone explain to me... (Score:2)
There has to be more to it than this (Score:2)
Hawking must have been speaking metaphorically - perhaps referring to our increasing dependence on machines. Yes, I did read the article, but come on! This is Stephen Hawking - we of all people should show enough respect for him not to be convinced he uttered such tripe by Ananova and (ick) Yahoo, of all things.
Re: (Score:2)
area of expertise? (Score:2)
Does anyone else remember when Shockley, one of the three inventors of the transistor, spoke against affirmative action?
As I recall, his argument was something to the effect that whites were genetically superior.
Foolish! Foolish! Stick with transistors and physics!
Security, Please? (Score:3, Interesting)
Radical Statement (Score:3, Insightful)
He has now turned his thoughts towards AI and its impact on humanity, and he feels there is a potential threat that AI may surpass human intelligence. Given the fact that he is privy to some pretty interesting research, I wonder just how far AI has progressed beyond what is common knowledge.
Einstein feared the ramifications of nuclear energy on society. And, for nearly 45 years, we have lived in the shadow of nuclear missiles, MAD policies, and potential terroristic use of the technology.
Hawking fears the ramifications of our falling victim to our own technological progress and urges the need to expand humanity through genetic manipulation and biomechanical augmentation. Pretty scary if you ask me. It sort of conjures up visions of "The Terminator", "Demon Seed" and the Borg.
Let's just pray his concerns are not realized during our own lifetimes or those of our children.
Stephen Hawking is a crackpot! NOT! (Score:2, Interesting)
The Unabomber (another crackpot) came to a similar conclusion. As machines get more complex, fewer and fewer human beings will be able to control them (program, maintain, produce, etc.). Yet right now we have a pretty good thing going: we keep the machines running and being manufactured. However, over time many of these duties might be handed over to more intelligent machines. Then who will have control over them? The machines themselves.
Look at how much we depend on machinery today. The Y2K vapor crisis had people so scared of losing power that they started to panic. They firmly believed that without electricity to power their toys they would not be able to survive. Imagine in 50 or 100 years. If we continue to hand over duties and jobs to machinery, it is only a matter of time before we WILL NOT be able to survive without them. And if machines no longer need us to maintain them, the human race will be nothing more than a domesticated cat.
Is there any slashdot intelligence ??? (Score:2, Insightful)
So many responders seem completely wrapped up in some simple-minded arguments.
Well, are you that creative? What do you mean by creativity? There have been computers that paint, computers that compose, computers that win at chess, and computers that can create patents (remember the slashdot story?). Humans are basically limited to keeping seven elements in their heads at once, coupled with some semantic connections from their constrained knowledge store. Computers don't have the same limitations; expect them to come up with different types of ideas, but don't get too fired up about how wonderful human creativity is - there are whole classes of innovation that we are extremely poor at.
Take a look at some of the info available on Vinge's Singularity. If you make some reasonable assumptions about where we are today, and about the scalability of intelligence, then human-level intelligence is only 35 years away. I personally doubt this intelligence will be the same as ours, but I'm fairly confident that it is possible. Things only really start getting interesting when computers start designing themselves, which is beginning to happen in chip design. Maybe software design is next; after all, it's a limited set of well-defined elements with set patterns in algorithms - seems quite possible...
Expect to see computers exceed humans in certain narrow fields first, say chess or chip design, etc. and then grow out from there.
Excuse me, but who told you that you were fit to judge? Hawking has a track record of understanding complex things and coming up with new ideas. He may be right, he may be wrong, but until you have managed to equal his record you don't really have the right to state that he's wrong.
Fine, who cares? Ignoring for a moment the number of devices commonly used to supplement or extend human capability, you're entitled not to supplement your intelligence, or that of your children.
What you're not entitled to do is stop others, or bitch about it when they get the jobs and you don't. IA, genetic modification, or any one of a whole series of other possibilities is a personal decision, but commercial/evolutionary pressures will drive it forward at a rate that I don't feel you are ready to accept. Tough.
The reality is, computers will continue to get smarter, very probably at an exponential rate. Human intelligence is currently fixed. At some time they cross.
Get used to it.
Or, look seriously at the ways in which your intelligence could be expanded, be it genetic modification, or IA, or just life long learning and early nights.
Let's be honest, you need it.
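For what it's worth, the crossover claim above is a one-liner to model. The sketch below is my own; the starting ratio is an illustrative assumption (the 18-month doubling period comes from the article), so take the resulting year count as a shape, not a forecast.

```python
# Fixed human capacity vs. machine capacity doubling every 18 months:
# count the years until the curves cross.
def years_to_crossover(start_ratio, doubling_years=1.5):
    """Years until machine capacity (as a fraction of human) reaches 1.0."""
    years = 0.0
    level = start_ratio
    while level < 1.0:
        level *= 2
        years += doubling_years
    return years

# Assume machines today sit at ~a millionth (2**-20) of human capacity.
print(years_to_crossover(1 / 2**20))  # prints: 30.0
```

Even starting a million times behind, steady doubling against a fixed target crosses in a few decades; moving the starting guess up or down by a factor of a thousand only shifts the answer by 15 years either way.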
Intelligence is not necessarily human in nature. (Score:2)
Our only understanding of intelligence is human intelligence. We tend to think that for something to have intelligence it must think as we do and therefore have a similar motivational structure.
These motivational structures exist because they assist human survival more often than not, or assist it in critical situations. They also have unfortunate side effects, which is why many of them are double-edged swords. Greed, jealousy, rage, hatred, love, compassion, friendship, etc., are all human emotions or states of mind. A computerized intelligence would not have to be created with a capacity for any of these things. Therefore the study of its behavior would be an independent subject from human psychology. Claiming that a machine intelligence would eventually enslave mankind is hasty at best. We have no understanding of what the psychology of an intelligent computer would be, and therefore no model by which to predict its behavior.
Lee
Re:Intelligence is not necessarily human in nature (Score:2)
There is a view that thinking is itself pleasant to a thinking being, i.e., that as soon as it begins to think, it will begin to value its own ability to think. In such a case, this computer would have a motivational structure similar to our own, a motivational structure that in many views is the basis of human action, especially those nasty ones you mention.
A better idea (Score:2, Funny)
Re:black holes (Score:1)
Re:Hmm... AI better than humans? (Score:2, Insightful)
Cranes can lift heavier weights than humans.
Boxes of electronics can 'see' in lower light levels than humans, or detect chemicals in lower concentrations than the human nose can.
Why shouldn't something constructed by humans be smarter than its creators?
Re:Hmm... AI better than humans? (Score:2, Informative)
How can something designed, programmed, and worked on hard by humans become better than the capacity of the human(s)' mind/intelligence that designed it?
There are quite a few examples of endeavours in which the human mind designed things that outsmarted it. Although it is controversial to do it, you simply can't say that Deep Blue does not play chess better than any human that designed it.
But the example I always like to give when such discussions are held is that of genetic programming [genetic-programming.org]. Genetic programming is an area of evolutionary computation that tries to achieve automatic programming. It basically uses GA techniques to evolve programs. There are reported cases of results in which the program outsmarted human beings quite nicely. One great book in the subject is Evolutionary Design by Computers [fatbrain.com], a collection of texts and papers in the subject, edited by Peter Bentley.
All in all, most AI criticisms seem to degenerate into anthropocentric pseudo-arguments. Another good book to read is Dreyfus' What Computers (Still) Can't Do [fatbrain.com]. Dreyfus gives good reasons why AI may be far from the present, but does so without (for the most part, at least) resorting to the argument that "I'm human and want to be the only smart being here". It is interesting that AI criticism may be the last island of anthropocentrism. First, the Sun does not go around the Earth, but the other way around. Then, that disgusting worm and I are made of the same genetic stuff. Now a bunch of transistors beats me at chess and wants to think? Then again, this is just me.
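The GA machinery that genetic programming builds on fits in a few lines. Here is my own minimal sketch (evolving a bitstring toward all ones - the classic "OneMax" toy problem, not an example from Bentley's book): selection keeps the fitter half each generation, and mutation flips one bit per child.

```python
# Minimal genetic algorithm: fitness-ranked selection plus one-bit
# mutation, with the fitter half surviving each generation.
import random

def evolve(bits=20, pop_size=30, generations=200, seed=1):
    """Evolve a bitstring toward all ones via selection + mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of ones in the string
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # survival of the fittest half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(bits)] ^= 1    # flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # with these settings, typically reaches the optimum of 20
```

Genetic programming applies the same loop to program trees instead of bitstrings, with fitness measured by running the candidate programs - which is how evolved programs can end up outsmarting the people who wrote the fitness function.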
The links are here for the paranoid:
http://www.genetic-programming.org
http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=155860605X&vm=
http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=0262540673&vm=
Carlos
Semper ubi sub ubi
Re:Poor Hawking (Score:2)
Well said. It is amazing how many grandiose claims of impending doom we hear from the "experts" lately. The nasty little truth is that not one of them understands intelligence. But will that stop them? Don't count on it. Hawking has just joined Bill Joy, Vernor Vinge (of Vinge Singularity fame), and countless others in the AI-singularity-doomsday-prophet hall of shame.