
Ray Kurzweil Responds To PZ Myers

Soulskill posted more than 4 years ago | from the dem's-fightin'-woids dept.

News 238

On Tuesday we discussed a scathing critique of Ray Kurzweil's understanding of the brain written by PZ Myers. Reader Amara notes that Kurzweil has now responded on his blog. Quoting: "Myers, who apparently based his second-hand comments on erroneous press reports (he wasn't at my talk), [claims] that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain."


The best resolution... (4, Funny)

Tenek (738297) | more than 4 years ago | (#33314462)

Clearly, this dispute should be resolved by a poll.

Re:The best resolution... (3, Funny)

Kilrah_il (1692978) | more than 4 years ago | (#33314538)

Clearly, Myers did not RTFA (or watch the featured talk, whatever)! Shame on him. He must be old here.

Re:The best resolution... (1)

Wyatt Earp (1029) | more than 4 years ago | (#33314672)

PZ Myers can come across as something of a self-righteous asshat sometimes, so I'm not surprised he posted without RTFA.

But still, his rebuttal to something he didn't know all the details of was interesting.

After decades of "we'll understand the brain in 5 to 10 years", I don't believe Kurzweil is going to accomplish much.

Re:The best resolution... (4, Interesting)

edw (10555) | more than 4 years ago | (#33315528)

I believe you may be falling prey to what Kurzweil warns about in his response to Myers: linear thinking. Things go from impossible to inevitable without us much noticing. The bottom of a parabola looks a lot like a horizontal line.

Let's say Kurzweil has been too optimistic about the rate of growth of our understanding of the way the brain works. Assuming the exponent on the rate of growth of our knowledge and technology is greater than one, and assuming that Penrose and Searle are full of it (which IMO they are) and there isn't some mystical quantum-mechanical woo-woo that is just as irrational as the Silicon Valley Deepak Chopra mumbo-jumbo that Myers's crew accuses the Singularity crowd of peddling, Kurzweil will ultimately be vindicated, even if he (or his cyborg replacement body) is not around to say, "I told you so."

Re:The best resolution... (1)

Wyatt Earp (1029) | more than 4 years ago | (#33315634)

Good response and you are right.

Kurzweil is too optimistic and Myers is too pessimistic.

Re:The best resolution... (2, Insightful)

ccarson (562931) | more than 4 years ago | (#33316420)

These old fuddy-duddies have lost all perspective on engineering. They're both right and wrong. Understanding a system can be gained from different perspectives, INCLUDING the genome. Dismissing the genome outright, as Myers did, misses the point, and so does Kurzweil's redefinition of his own comments. Both need to realize that in the end the brain will be mastered by many researchers from many disciplines in many labs. Knowledge will accumulate from different angles through different experiments. To suggest that our understanding of the mind will only come from angles A, B and Z is like saying the only way to wrap your mind around an application is by studying the database alone.

Re:The best resolution... (1)

postbigbang (761081) | more than 4 years ago | (#33316148)

I don't know that vindication is the right word. Kurzweil's approach is one of several that may have merit and add to the body of knowledge about the lifecycle of the physical and experiential states of the mind. We already know that brains come in numerous varieties, depending on hormonal dosing from gestation through adulthood, as well as predispositions that are genetically influenced. Kurzweil's thinking casts a wide net, and there are huge chasms remaining to be explored.

Re:The best resolution... (5, Funny)

Abstrackt (609015) | more than 4 years ago | (#33314600)

I say we resolve it with a deathmatch. Then Kurzweil can attempt to reverse-engineer his opponent's brain with his bare hands!

Re:The best resolution... (5, Insightful)

Lord Ender (156273) | more than 4 years ago | (#33314722)

It isn't really a dispute.

Kurzweil is obviously optimistic about his time tables. But his theory of technology growth accelerating calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

Re:The best resolution... (4, Insightful)

GreatAntibob (1549139) | more than 4 years ago | (#33315236)

Kurzweil is more than optimistic; he's just plain guessing. His predictions for the near term are accurate because they don't require big leaps in imagination or technology. His predictions for further out tend to be wrong or loony (many, if not most, of the predictions he made back in the '90s for technology achieved by 2010 were wrong in whole or in part).

His "theory" of technology growth is ridiculous in the face of prima facie evidence. It's true that experts historically underestimate the rate of technology advancement. It's also true they almost always underestimate the field in which explosive exponential growth takes place. In the 1950s, we were dreaming about flying cars and meals in pill form. Who actually predicted the full extent of the internet in our lives back in 1960? Or ubiquitous celluar communication? Or that we wouldn't have just 3 broadcast television stations? Technological progress is a given and the more limited of Kurzweil's predictions are correct because they typically require modest improvements in current technology - but epiphenomenalism, i.e. the singularity, is far from a given.


Kurzweil does a fine job making the simple types of predictions (the type that led to predicting flying cars in the 50s). The problem is that, like everybody, he can't predict the "next big thing". Exponential growth in technology always relies on discovering and exploiting as-yet-undiscovered technologies, and Kurzweil mostly relies on existing tech. That's fine for 10 or 20 years out but gets progressively worse in predictive power past that (see the predictions for 2010 and beyond he made in the '90s, as opposed to the predictions he made in the last 10 years). And, to be honest, most scientists could have made (and did make) the same short-term predictions Kurzweil made. It's not a stretch to think that Moore's Law will keep chugging along for at least 5 years and that people in different fields will exploit that.

Re:The best resolution... (2, Informative)

Cruciform (42896) | more than 4 years ago | (#33315822)

Heck, even people in the fields of science related to some advancements don't see those advancements coming.
In one of the Futures in Biotech podcasts (a 2007 episode, if I recall) the guest was talking about gene sequencing, and said that as little as four years before they managed to sequence an earthworm genome, it was thought to be impossible because of the work and technology involved. And then they did it. Shortly afterward the Human Genome Project began.

Whether Kurzweil is in crazyland or not, if he's just making optimistic forecasts of the future he's at least getting people to think about it. And if people are thinking about it skeptically, at least we're encouraging critical thinking.

Re:The best resolution... (4, Insightful)

Lord Ender (156273) | more than 4 years ago | (#33316044)

He isn't being loony. If he were loony, he would predict things known to be impossible based on our understanding of physics. He is very specifically predicting developments which (a) people want, and (b) the universe (seems to) allow. This is necessarily murky business, but he at least attempts to set his timetables based on quantifiable, empirical observations as best he can.

So accepting that predicting the longer-term future is inherently difficult, he at least makes an attempt. You are the sort to just throw up your hands and sling mud at those who try. It's a good thing we have a few people like him. It would be tragic if everyone thought like you.

Re:The best resolution... (4, Insightful)

popsicle67 (929681) | more than 4 years ago | (#33315252)

P.Z. Myers is not some headline-grabbing putz like half the Republican party. He would have an interested following regardless of whether he even bothered to talk about Kurzweil or not. Kurzweil has a vested interest in trying to shout down dissenting opinion, while Myers has no dog in the fight save illustrating the scientific fallacies and fantasies foisted upon a credulous public by pompous windbags such as Kurzweil.

Re:The best resolution... (1)

Requiem18th (742389) | more than 4 years ago | (#33315810)

Fanboyish maybe but flamebait?

Someone mod up/fix this.parent(); IMO Myers has a good enough standing that one slip-up can't suddenly reclassify him as a clueless, irate blowhard.

That is assuming he actually slipped up; I haven't read that fine RTFA article.

Re:The best resolution... (0)

blair1q (305137) | more than 4 years ago | (#33316004)

What do you mean, "half?"

At this point, headline grabbing is their platform.

On-topic: regardless of the fact that Kurzweil is wrong no matter what he said, Myers may need to apologize to Kurzweil for attacking him for saying something he didn't actually say. Then he can go on to attacking him for what he says he said.

Re:The best resolution... (3, Insightful)

Chris Burke (6130) | more than 4 years ago | (#33315674)

Kurzweil is obviously optimistic about his time tables. But his theory of technology growth accelerating calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

Hey, optimism regarding the exponential growth of (some) technology, and the unpredictable and amazing consequences of such is fantastic. I try to be optimistic that it will continue myself (being in a field that has been the poster child for exponential improvement and not liking the idea of this ending).

"Exponential growth in technology, ergo artificial brains" isn't optimism; it's a (specific) leap of faith.

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

I guess, but what I considered to be the biggest failing that Myers tore into in the previous article still remains. Kurzweil says Myers is mischaracterizing his thesis, and sure maybe he was at some point. But then he goes right on to emphasize that "the genome constrains the amount of information in the brain prior to the brain's interaction with its environment."

Aside from the fact that you can't separate the brain's development from its interaction with the environment even in the womb, and that it's doubtful a brain that somehow developed completely without stimulus would look very much like a functioning human brain at all, that's still just not true. It's like saying that the tiny binary produced by compiling "Hello World" constrains the amount of information needed to actually run the program (especially since it's supposed to tell you how to make the computer it's running on, too). Or that the amount of information on a web page is constrained by the size of the .html file. An img tag is not sufficient information to reconstruct the image it references.

The genome contains instructions for constructing the human body/brain within the context of another human body. The genome itself is not sufficient information to create that body. It's exploiting a huge amount of external information to allow itself to be as compact as it is.
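To make the img-tag point concrete, here is a tiny Python sketch (my own illustration, not anything from Kurzweil or Myers; the file name and sizes are made up): the size of a reference says almost nothing about the size of the data it depends on.

    # Hypothetical numbers: a reference vs. the data it points to.
    img_tag = '<img src="brain_scan.png">'                  # a few dozen bytes
    image_bytes = b"\x89PNG fake pixel data " * 200_000     # stand-in for the actual pixels

    print("reference size:      ", len(img_tag), "bytes")
    print("referenced data size:", len(image_bytes), "bytes")  # lives entirely outside the tag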

Re:The best resolution... (3, Insightful)

snowgirl (978879) | more than 4 years ago | (#33315768)

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

You missed something. The media will always inaccurately propagate scientific... hell, just about ANY view. They necessarily must summarize, simplify, and downplay. Typically, their own personal interests will cause a bias towards one particularly interesting feature of the advancement or article, and they will focus on that. (Remember the recent "chicken or egg" article whose scientific findings had NOTHING to do with that question?)

PZ Myers made a bit of a mistake in responding so vehemently to a strawman of the media's construction.

Re:The best resolution... (1, Interesting)

oh_my_080980980 (773867) | more than 4 years ago | (#33315772)

Uh, no... because Kurzweil does not refute Myers' claims. Kurzweil's response underscores Myers' points.

Kurzweil is an idiot.

Re:The best resolution... (1)

Fernando Jones (1566403) | more than 4 years ago | (#33316400)

Uh, no... because Kurzweil does not refute Myers' claims. Kurzweil's response underscores Myers' points. Kurzweil is an idiot.

"For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports." There you go. You only had a read a few lines and you would find a pretty clear refutation.

Re:The best resolution... (0)

Anonymous Coward | more than 4 years ago | (#33316216)

We don't resolve science in polls... That is what the courtrooms are for, right? ;-)

Nuke him from orbit (1)

Pojut (1027544) | more than 4 years ago | (#33314472)

It's the only way to be sure.

Re:Nuke him from orbit (1)

truthsearch (249536) | more than 4 years ago | (#33314578)

Or send a robot back in time to kill his mom.

Re:Nuke him from orbit (1)

yoZan (1880862) | more than 4 years ago | (#33314752)

So he can end up becoming John Connor? Let's not.

Re:Nuke him from orbit (1)

dziban303 (540095) | more than 4 years ago | (#33314778)

Or send a robot back in time to rape his mom.

FTFY.

Re:Nuke him from orbit (1)

jameskojiro (705701) | more than 4 years ago | (#33314980)

Or send a robot back in time with a canister of his sperm to rape and impregnate his mom with his seed.

"Who is your Daddy? I am, Literally!"

Re:Nuke him from orbit (0)

Anonymous Coward | more than 4 years ago | (#33315454)

lamest thread ever

Seriously, you could repost this entire bit over and over again on every article and it would be exactly as offtopic as it is now.

Re:Nuke him from orbit (1)

smooth wombat (796938) | more than 4 years ago | (#33316480)

That is actually... nasty. Even worse than Fry being his own grandfather.

However, what does that say about the Time Paradox? How can you go back in time and impregnate your mom before you were born?

Re:Nuke him from orbit (0)

Anonymous Coward | more than 4 years ago | (#33314692)

I disagree with Kurzweil on most issues that have to do with AI, but I wouldn't nuke him from orbit. It's not because he wouldn't deserve it or anything, it's just that I feel it would be much more painful to let him live to see most of his ideas proved wrong.

Re:Nuke him from orbit (1)

Idiomatick (976696) | more than 4 years ago | (#33315062)

Compared to Myers, who apparently writes stories slamming third-hand information? Seriously, that should completely invalidate almost all of Myers' arguments in general. If he doesn't bother checking sources, uses poor sources, and proceeds without any caution, his points are going to be widely invalidated.

Kurzweil might come to the wrong conclusions but so what? That is wishful thinking at worst. At least he seems to do lots of research and is very well read.

Re:Nuke him from orbit (1)

Gravitron 5000 (1621683) | more than 4 years ago | (#33315302)

His opinions were not so much invalidated as not applicable to the original subject matter. Standing on their own, the ideas had merit, just not as a rebuttal to Kurzweil's talk.

Re:Nuke him from orbit (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#33315208)

What in your deluded, pathetic piece of shit brain can you possibly cite as a reason why Kurzweil would be deserving of any such thing? Because you saw his name in an article talking about sciency-sounding things and you feel threatened? Feel free to agree or disagree with his predictions, but his predictions are just that, nothing more. He's not a terrorist or a former warmongering president. He's just a guy with a lot of education and inventions who is predicting an optimistic view of human technological advancement. And if he did live forever, he would never see his ideas proved wrong. He endlessly adjusts his views and re-predicts. That's what these people do. That threatens you in some way?

Re:Nuke him from orbit (0)

Anonymous Coward | more than 4 years ago | (#33315686)

I am the God here. Don't get high and mighty with me!

Re:Nuke him from orbit (1)

Requiem18th (742389) | more than 4 years ago | (#33315896)

How should Avatar have ended?

What is this, a pundit slap fight? (4, Insightful)

jeffmeden (135043) | more than 4 years ago | (#33314560)

This whole discussion reminds me way too much of the million partisan pundit sissy fights that rage endlessly on the internet. If I wanted to see two guys argue about what the other did or didn't say, I would gladly head over to DailyKos or BigJournalism and drown myself in their pedantry. This is Slashdot; please save the inanity for the comments and at least give us stories that have meaning!

Re:What is this, a pundit slap fight? (4, Insightful)

truthsearch (249536) | more than 4 years ago | (#33314606)

I agree, but the original story was interesting (800+ comments). This followup is almost required.

Having "editors" /. should have only quality posts. I'm disappointed almost daily but it's still better than many other sites.

Re:What is this, a pundit slap fight? (4, Interesting)

Stargoat (658863) | more than 4 years ago | (#33314666)

I'm actually glad to see that Slashdot is participating in such a debate. As a longtime Slashdot resident, I'm happy that Slashdot is attempting to find a niche in the Internet that involves scientific (or semi-scientific) and computer related matters.

The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

Re:What is this, a pundit slap fight? (1)

drinkypoo (153816) | more than 4 years ago | (#33314932)

The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

And indeed, they are. Or anyway, a small subset of them, which is all that you can hope for. Slashdot is one of a subset of websites on which [various] people who know about many different things share useful information. It's rare indeed that I encounter any truly significant news item (to me, anyway) that isn't discussed here. Timeliness varies but I have only myself and all the rest of you to blame for that.

Re:What is this, a pundit slap fight? (1)

Ohrion (814105) | more than 4 years ago | (#33314960)

Partisan pundit sissy fight? No, this is somebody defending his research after somebody essentially lied to make him look bad and got press from it.

Not really the main issue is it? (3, Interesting)

Zarf (5735) | more than 4 years ago | (#33314586)

Myers may have been focused on the "reverse engineer from the genome" argument but really the main issue is whether Kurzweil is within a few orders of magnitude of guessing the right level of complexity necessary to simulate a brain. The gist of the Myers argument isn't so much about genomics and ontogeny as it is about the emergent complexity of inter-related systems and I think the real nugget there might be something like: "We could model a brain but that wouldn't mean we modeled a mind. To model a mind you need to model a great deal of the environment the mind lives in... and that is many many orders of magnitude more complex."

For the record: I hope Kurzweil is right but I rather doubt he is. I don't think he's wrong about how powerful machines will be in 2050; I think he may be wrong about whether those machines can simulate a mind well enough, because I really wonder if the complexity of a mind is actually a superpolynomial problem due to the hyper-connectedness of a mind and its environment.

Re:Not really the main issue is it? (4, Funny)

Zarf (5735) | more than 4 years ago | (#33314742)

In retrospect, maybe I should have read both articles and thought about what I was writing first instead of just spouting off.

Re:Not really the main issue is it? (1)

david_thornley (598059) | more than 4 years ago | (#33316570)

Read both articles? You must be new here!

Re:Not really the main issue is it? (1)

ElectricTurtle (1171201) | more than 4 years ago | (#33315068)

The fundamental assumption is that there is some kind of mystical brain/mind dualism. From where I sit, modeling environments really isn't a hard thing to do. Our brains develop minds by not much more than sensory feedback. Experiments with rat brain cells in petri dishes attached to electrodes that control robots have shown that brain cells respond to sensory feedback even in ad hoc configurations. If we can truly model what the brain is physically, then development will be a simple trial and error experience, not unlike training any other brain.

Mind is nothing more than categorized recollected experience extrapolated to understand unfolding or future/potential events.

To paraphrase something somebody wisely said in the previous thread about this topic, you don't need to model the electrons in the circuit of a machine to emulate an NES.

Re:Not really the main issue is it? (-1, Flamebait)

UnknownSoldier (67820) | more than 4 years ago | (#33315334)

> The fundamental assumption is that there is some kind of mystical brain/mind dualism.
> Mind is nothing more than categorized recollected experience extrapolated to understand unfolding or future/potential events.

I don't know whether to laugh or cry at your total ignorance. Materialism has long been disproved / shown to be incomplete. Brain != Mind.

For starters I would recommend:

  - Peter Russell, The Primacy of Consciousness
http://video.google.com/videoplay?docid=7799171063626430789# [google.com]

- Lynne McTaggart, The Intention Experiment
http://www.amazon.com/Intention-Experiment-Using-Thoughts-Change/dp/0743276965/ref=ntt_at_ep_dpi_2 [amazon.com]

Are you that blissfully unaware of what the founder of the quantum theory, Max Plank, said back in 1931 ??
    "I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness."

Finally, as a mystic, you are completely blind to the levels of consciousness. To use a computer analogy, the brain is the hardware, the mind the software, the spirit the electricity.

Re:Not really the main issue is it? (0)

Anonymous Coward | more than 4 years ago | (#33315532)

You should change your screen name to unwanted soldier. Humans are no different from very sophisticated robots. While we are far more than 10 years out from simulating a human brain, that probably isn't far off for accurately simulating an insect brain (though not in real time). And Max Planck wasn't an expert on brains or consciousness, but math. When people talk about things outside their fields of expertise they are often completely wrong; get used to it. After they make their one big discovery, they are often wrong even within their own field, a la Einstein.

Re:Not really the main issue is it? (3, Insightful)

ElectricTurtle (1171201) | more than 4 years ago | (#33315726)

I like how within your perspective a difference of opinion is 'total ignorance'. In that context, I will treat you with equivalent respect. Materialism has been so long disproved that leading biologists like Dr. Richard Dawkins still subscribe to it. Yes, I see what poor company I keep.

Really, you're going to peddle Peter Russell? A guy who makes his living selling pseudopsychological snake oil to businesses? Lynne McTaggart is even worse; she spreads FUD about modern medicine to suit some whackjob personal political agenda. I recognize that I am not assailing their arguments because they are not worth my time, nor are you; as I said, I'm only going to give you as much respect as you've given me, which has been none.

Oh and Max Plan[c]k's [SIC] opinion of consciousness is about as meaningful as Jung's opinion of quantum electrodynamics. Planck did not have the background in the field of neuroscience or psychology to have an educated opinion about consciousness. He simply had an opinion, and that opinion gains no more automatic credence because he happened to have a Nobel prize in an unrelated field. Even if all of that were different, a lot can change in nearly a century.

I don't deny there are levels of consciousness; they're just all physical. Just as the levels in a computer are all physical. Software is nothing more than differential physical states on magnetic media and within circuits. The mind is the same, and below that level is electricity again, not "spirit", just like in a computer, coincidentally.

Re:Not really the main issue is it? (1, Funny)

Anonymous Coward | more than 4 years ago | (#33315914)

I don't know whether to laugh or cry at your total ignorance. I have yet to see a cogent argument that shows materialism to be false.

[Insert quackery here]

Finally, as a scientist, you are completely blind to the scientific method. To use a computer analogy, the brain is the hardware, the mind the software, the spirit the story grandma tells herself when her computer breaks.

Re:Not really the main issue is it? (1)

ElectricTurtle (1171201) | more than 4 years ago | (#33315944)

Ha ha, well put. Almost wish I had used that format as a comeback, except that I don't want to be 'that guy'.

Re:Not really the main issue is it? (0)

Anonymous Coward | more than 4 years ago | (#33316566)

o_O You are clearly insane. To assume we have a spirit, and that the transformation of the brain by the constant input of stimuli will always be impossible to model, is no less ignorant than what you claim the OP is ignorant of.

And I'm sorry, but a BS self-help book and a single quote don't disprove anything. Simply put, it's not scientific in the least. Materialism is absolutely the best answer to the BS notion of mind dualism. There's nothing magical in your body; get over yourself.

Re:Not really the main issue is it? (1)

medv4380 (1604309) | more than 4 years ago | (#33315130)

I'd be more worried about concurrency issues. If you have to treat each neuron as its own processor in order to simulate it correctly and get a mind, then even if computers are fast enough to do it, they might not be able to without deadlocking.
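For what it's worth, here is a minimal Python sketch (my own toy, not a real simulator) of how that is usually sidestepped: synchronous, double-buffered updates, where every neuron reads only the frozen previous state, so there is nothing to lock and nothing to deadlock on. The wiring and update rule are made up.

    import math
    import random

    random.seed(0)
    N = 100
    # Hypothetical wiring: each neuron listens to 5 random others.
    inputs = {i: random.sample(range(N), 5) for i in range(N)}
    state = [random.random() for _ in range(N)]

    for step in range(10):
        prev = state  # frozen snapshot of the previous tick
        # Each neuron depends only on prev, so all N updates could run in parallel.
        state = [math.tanh(sum(prev[j] for j in inputs[i])) for i in range(N)]

    print(state[:5])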

Re:Not really the main issue is it? (1)

Artifakt (700173) | more than 4 years ago | (#33315620)

This starts turning into a definition problem. A matter of semantics.
        A mind anything like a human being's runs on a hardware substrate that's built to interact with a physical environment in ways that promote organic survival. A mind that isn't anything like a human mind could run on very differently designed hardware, but then, if it's that different, how do you determine if it's equivalently complex, and ultimately, what justifies calling it a mind at all? People such as Vernor Vinge have speculated about software as sophisticated as a human mind, or more so, yet without self-awareness (cf. A Fire Upon the Deep). Others have speculated about whether such a mind need have self-preservation instincts or drives, and whether it would be possible to incorporate self-preservation at a higher level as a conscious instruction set (ultimately an argument that goes back to Asimov's three laws or further, in its simplest forms).
        Rigorously speaking, the whole formulation is logically meaningless. There's no such thing as "on the level of a human mind" but without self awareness, free will (or its illusion if you prefer), or an ego. It's a real stretch to claim there's meaning in "on the level of a human mind" but without a subconscious, or emotions. It's even way too ambiguous for real science to speak of "on the level of a human mind", but without reproductive drives or tiered social modeling.
          I'm not claiming that strong AI, all the way up to a Vingean galactic-scale super parasite thought virus for one example, isn't possible. What I am claiming is that you could objectively say such a thing was more powerful than a human mind, in that it could destroy a tremendous number of human minds by destroying their related bodies, but you couldn't claim that it was more powerful than a human mind in the mental sense, except in that same trivial sense as claiming a calculator is more powerful than a human mind because it can do a rote calculation faster. Dr. Kurzweil's hypothetical AI mind simulated on 2050's technology is really the same situation - we may well be able to run a simulation of certain parts of the brain that don't entail interacting with objective external reality in the sort of complexity space he is describing, in a mere 10 to 20 years, but how is that like a human mind? Manifesting environmental awareness and self-awareness are two of the things that make a human mind count as powerful or complex, and claiming something else is equivalent and deserves to be called the same thing because it can't do the same things simply doesn't make sense.
        Simulation of a human level mind will probably come, but it will run on either specialized hardware of about the same complexity as a brain and nervous/sensory system, or on more generalized hardware built at well above that capacity, and it will probably be further delayed by software evolution until well after such hardware is possible.

Re:Not really the main issue is it? (1)

maxume (22995) | more than 4 years ago | (#33316100)

It depends on whether things like working memory are more limited by biological convenience or more limited by architecture. If biological convenience is the problem, an artificial brain using the same architecture as a human brain could likely have a much larger working memory (humans can hold about 7 items of information in their working memory, plus or minus), which would probably give it better-than-human processing capabilities.

And if the first version of the artificial substrate creates a brain that operates at essentially human processing speed, wouldn't you expect a second version with enhanced underlying processing to be able to operate at some multiple of that speed?

Here We Go Again (5, Insightful)

eldavojohn (898314) | more than 4 years ago | (#33314628)

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome.

So put your speech up on your site; all I can find are videos from previous summits [magnify.net]. TED seemingly posted videos as they happened, and therefore we could openly debate them. Summits are great but not everyone has the time or resources to attend them. I would suggest you move towards a more open format of disseminating your ideas and the very specific and lengthy details about them. I'm not going to buy a book on futurism and wade through it for the details you provide about neurobiology, and I don't think PZ Myers would do that either.

I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.

Well, frankly, I don't understand it either. You're applying information theory to lines of code ... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

So first it was information theory on the genome and now you're on about compression of the genome. Great, you've applied theoretical limits to lines of code in order to describe a complex biological system and then argued that due to redundancy we can reduce it to 50 million bytes. And what did that buy us exactly? Look at how many lines of code we've devoted to simulating a single neuron or synapse ... and it's not even a complete and accurate simulation. Your theoretical limits are amusing but pointless ... to further apply your 'exponential growth' of the lines of code we can program is further amusing.
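As an aside, the kind of move Kurzweil is making can at least be illustrated in a few lines of Python (this is my own toy, with made-up sequences; the 800-million / 50-million-byte figures are his, not reproduced here): redundancy is what lets lossless compression shrink a sequence, while a random sequence of the same length barely compresses at all.

    import random
    import zlib

    random.seed(0)
    # Made-up stand-ins: a highly repetitive "sequence" vs. a random one of equal length.
    repetitive = ("ACGT" * 250 + "TTAGGG" * 100) * 1000
    random_seq = "".join(random.choice("ACGT") for _ in range(len(repetitive)))

    for name, seq in (("repetitive", repetitive), ("random", random_seq)):
        raw = seq.encode()
        print(name, len(raw), "->", len(zlib.compress(raw, 9)), "bytes")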

Kurzweil is a futurist with just enough knowledge to sell people. His exponential growth to a singularity, and his proof of it, doesn't do him much good when he doesn't understand the complexity of the brain and then applies theoretical limits to it from other disciplines. He's free to keep preaching; I just question at what point people will give up on him. If he dies soon and pulls an L. Ron Hubbard, what sort of cult will we then have on our hands?

Re:Here We Go Again (0)

Anonymous Coward | more than 4 years ago | (#33315162)

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome).

Great, you've applied theoretical limits to lines of code in order to describe a complex biological system and then argued that due to redundancy we can reduce it to 50 million bytes.

It's not that Kurzweil doesn't understand the brain. He doesn't understand reverse engineering.

Here's a 600-megabyte compact disc in big fat uncompressed WAV format. And over here, we have a 200-megabyte .flac (or a 200-megabyte .zip, .rar, or other compressed version of the .WAV).

Now, Ray, using clean-room methods and knowing nothing about the file format, which file would you prefer to work with in order to reconstruct the original audio?

Even after the Singularity, it's still going to be easier to reverse-engineer a program from the object code than from a compressed .zip of the source tarball.

Re:Here We Go Again (2, Insightful)

jcampbelly (885881) | more than 4 years ago | (#33315190)

http://www.vimeo.com/siai/videos/sort:oldest [vimeo.com]
http://singinst.org/media/interviews [singinst.org]
http://www.youtube.com/user/singularityu [youtube.com]

Well, a lack of searching is not a lack of material; you can find several hours of Ray's talks on video from Singularity Summit 2007, 2008, 2009, TED.com, Singularity University, and just plain independent YouTube videos. He also has two movies out (I haven't seen either): Transcendent Man, criticising his esoteric side, and The Singularity Is Near (based on his book), supporting his ideas.

All of this talk about his figures being wrong is quite far from the point. Whether we'll have conversations with virtual humans in 2030, or have to cope with an AI superintelligence by 2050, matters less than the fact that either of these situations is entirely possible extrapolating from trends, and the discussion should be had.

As a computer scientist, I can say that it will be hard to do. As a scientist, I can say it's pretty foolish to claim that because something is hard it will never happen (we exist, and building a human is pretty hard).

Re:Here We Go Again (1, Funny)

Anonymous Coward | more than 4 years ago | (#33315296)

If he dies soon and pulls a L. Ron Hubbard what sort of cult then will we have on our hands?

A cult focused on forwarding scientific research, intelligent technology, human longevity and the like? We could do worse.

Re:Here We Go Again (0)

Anonymous Coward | more than 4 years ago | (#33315314)

... I'm not going to buy a book on futurism and wade through it for the details you provide about neurobiology and I don't think PZ Meyers would do that either.

Then why waste my time commenting on something you don't understand? This goes for Myers also.

Re:Here We Go Again (0)

Anonymous Coward | more than 4 years ago | (#33315326)

I would suggest you move towards a more open format of disseminating your ideas and the very specific and lengthy details about them

Speaking freely is the most open format. It's not his responsibility to ensure any specific person hears or understands. What kind of idiot are you to say "a more open format"??? A special one indeed.

Re:Here We Go Again (1)

BigSlowTarget (325940) | more than 4 years ago | (#33315378)

It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

So the implication here is that a genome can create a brain without input from the environment (at least any input that carries information). I have some news: every human ever born has come from a womb. That womb has supplied raw materials and information in the form of the mix and timing of resources. There are no exceptions at all. Would you get a blank brain or a malformed brain if the resources were not supplied in the correct mix? Almost certainly, and that means you need to include at least some of the environment in the modeling of the brain whether you want to or not.

Until you rule out these factors (artificial womb experiment with twins anyone?) you can't say all the necessary information to build an operational brain is stored in the genome.

Re:Here We Go Again (3, Insightful)

FelxH (1416581) | more than 4 years ago | (#33315470)

Well, frankly, I don't understand it either. You're applying information theory to lines of code ... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

Kurzweil doesn't advocate using the genome's information for understanding or modeling the brain. He only used it, in combination with other methods, to get an estimate of how complex the brain actually is (whether his methods and estimates are correct, I can't tell). That was, IMO, the whole point of the paragraph you quoted ...

Re:Here We Go Again (1)

gtall (79522) | more than 4 years ago | (#33315538)

Actually, there is something called Kolmogorov complexity, where information theory is cashed out in terms of algorithmic complexity: http://en.wikipedia.org/wiki/Kolmogorov_complexity [wikipedia.org] .

Personally, I think Kurzweil is still full of shit. Systems are usually way more complex than most "futurists" would like to admit. They are finding that with the human genome. The promise was that once it was decoded, we'd find cures for everything. Errr... yeah, well, it sort of depends on how it gets expressed in proteins, which is an order of complexity much higher than the genome alone.
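A hedged Python sketch of how that connection is usually made in practice (my own illustration, not from the linked article): true Kolmogorov complexity is uncomputable, but the size of a losslessly compressed description gives an upper bound, which is roughly the move behind "the genome is ~50 MB of information" style arguments.

    import os
    import zlib

    def complexity_upper_bound(data: bytes) -> int:
        """Compressed size as a crude upper bound on Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    print(complexity_upper_bound(b"A" * 10_000))           # very regular -> tiny
    print(complexity_upper_bound(bytes(range(256)) * 40))  # simple pattern -> small
    print(complexity_upper_bound(os.urandom(10_000)))      # random -> barely shrinks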

Re:Here We Go Again (3, Insightful)

Attila Dimedici (1036002) | more than 4 years ago | (#33315888)

You point out what I thought was the failure of Kurzweil's defense against Myers' argument. Kurzweil repeats the claim that Myers said was a wrong assumption on Kurzweil's part: that the genome contains all of the information necessary to create the brain. Myers' argument with Kurzweil boils down to this: the genome does not contain all of the information necessary to reconstruct the brain. There is an awful lot of information about building a living creature contained in various ways in the structure of each cell. For example, if you were to take the nucleus of a fertilized monkey ovum and place it in a fertilized shark ovum (after removing the nucleus of the shark ovum), you would not end up with a monkey, although it would be closer than if you just swapped the genome between the two. There is a lot of information about how to interpret the genome in the cell structure. The same sequence of DNA has been shown to code for significantly different proteins in different creatures.

Re:Here We Go Again (0)

Anonymous Coward | more than 4 years ago | (#33316564)

Longwinded troll is longwinded =(

Just because our models of neurons or synapses are flawed and we need to write lots of code to model them more accurately does not mean that you can't draw a parallel between the number of bytes encoded by the genome and the set of resulting 'code'. Sure, his theoretical limit more or less assumes we can use a neuron to simulate a neuron and a synapse to simulate a synapse. That's why it's a THEORETICAL LIMIT. I'm glad that his theory is amusing. Maybe if you tried to understand his argument it might be more than just amusing.

Two decades? (2, Insightful)

mcgrew (92797) | more than 4 years ago | (#33314650)

I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

We don't have more than a rudimentary understanding of how the brain works, or even what Consciousness [wikipedia.org] is.

Although humans realize what everyday experiences are, consciousness refuses to be defined, philosophers note (e.g. John Searle in The Oxford Companion to Philosophy):[3]

"Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."
--Schneider and Velmans, 2007[4]

Re:Two decades? (2, Insightful)

Zarf (5735) | more than 4 years ago | (#33314770)

A good point. I think Kurzweil is one of those that would say "consciousness is computing" so all you need is enough of the right computations. This is definitely something brain simulations would have to explore. We simply have no idea yet.

Re:Two decades? (1)

Improv (2467) | more than 4 years ago | (#33315040)

Still, it's very reasonable to believe that it is - what else could it be that fits with modern science?

Re:Two decades? (1)

mcgrew (92797) | more than 4 years ago | (#33316330)

Why does it have to fit contemporary science? What we know about the universe is almost nothing whatever, compared to what there is to know. We don't know what consciousness is because biochemistry hasn't advanced far enough to understand it. Remember, all thought and feeling and sense is nothing more than complex chemical reactions.

We have a lot more to learn before we can even ask the question, let alone answer it. If thought is simply computation, why can't a house cat do trigonometry? Trig is easy for a computer to compute, yet there are things a house cat can do (e.g., catch a bird) that a computer can't.

An abacus and a slide rule can both do arithmetic, but they are nothing alike.

Re:Two decades? (1)

rash (83406) | more than 4 years ago | (#33316448)

>"what else could it be that fits with modern science?"

The normal function of the matter of the brain plus the form of the brain of course.

Equating it to a certain kind of computation opens up the can of worms that contains the problems of epiphenomenalism, eliminativism, intentionality and multiple realizability among others.

Re:Two decades? (1)

Arlet (29997) | more than 4 years ago | (#33316462)

The thing is, you won't find consciousness by looking at the signals in the brain. The brain is composed of parts that have no consciousness themselves, and the patterns are too complicated to understand anyway. Even if you manage to see all the patterns at once, you still won't see consciousness.

The only solution is to look at the behavior. If the simulated brain can have a discussion about consciousness, it has everything you can possibly want.

Re:Two decades? (3, Funny)

Anonymous Coward | more than 4 years ago | (#33314772)

We can easily do it within two decades: 2010-2019, and 3560-3569.

Re:Two decades? (1)

drinkypoo (153816) | more than 4 years ago | (#33314836)

It is not inconceivable that we could create a thing like a brain which would give rise to consciousness, and yet still not understand what it really is. If we somehow manage to write a computer program which can be (again somehow) qualitatively defined as conscious, then we will need to have first understood consciousness. But if we only assemble a collection of technologies which somehow surprises us with consciousness, then we will have a new direction for research, but not an understanding of the thing except a strong indication that we are somehow more than the sum of our parts. Even that, however, could be largely a matter of definition.

Re:Two decades? (1, Funny)

Anonymous Coward | more than 4 years ago | (#33314894)

This is just standard "20 years away" rhetoric from futurists: Fusion power within 20 years, Flying cars within 20 years, Duke Nukem Forever within 20 years...

Re:Two decades? (0)

Anonymous Coward | more than 4 years ago | (#33314922)

You mean *you* don't.

Re:Two decades? (5, Insightful)

Angst Badger (8636) | more than 4 years ago | (#33314970)

We don't have more than a rudimentary understanding of how the brain works, or even what consciousness is.

People say this a lot, and I don't understand why. Our understanding of how the brain works is a good deal more than rudimentary. The advances we've made in understanding the brain on both the large and small scales in just the last five years are breathtaking. Our understanding is a long way from complete, but Kurzweil is correct at least to the extent that our understanding is significant and appears to be growing at an accelerating rate. It may not be accelerating as fast as he expects, but keeping up with new developments in neurology at even a cursory level is quite challenging. The main difficulty we face at present in implementing the structures we do understand in silicon is the lack of adequate parallelism in current computing hardware, not our understanding of the relevant neural structures.

As for consciousness, unless you believe in some kind of pre-scientific vitalism, a reasonable working assumption is that it is an emergent property of brain-like structures. Unless and until we discover otherwise, there is no reason to wait for an understanding of consciousness to begin working on replicating the functionality of the brain. Quite likely, the attempt to replicate the brain will reveal more about consciousness than idle philosophical inquiries. Those so inclined might want to settle on a definition of consciousness before trying to figure out how it works.
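A rough Python/NumPy sketch of that parallelism point (my own illustration; the weights and update rule are made up, and the timings are only indicative): the same layer update done one unit at a time versus as a single vectorized operation that parallel hardware can chew through.

    import time
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    w = rng.normal(0.0, 0.05, (n, n))   # hypothetical random connectivity
    x = rng.random(n)

    t0 = time.perf_counter()
    serial = np.array([np.tanh(w[i] @ x) for i in range(n)])  # one "neuron" at a time
    t1 = time.perf_counter()
    vectorized = np.tanh(w @ x)                               # whole layer at once
    t2 = time.perf_counter()

    print("serial:", round(t1 - t0, 4), "s  vectorized:", round(t2 - t1, 4), "s")
    print("same result:", np.allclose(serial, vectorized))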

Re:Two decades? (1)

rash (83406) | more than 4 years ago | (#33316492)

What is an emergent property? Do these emergent properties have the ability of influencing back the properties that they emerged out of?

Re:Two decades? (0)

Anonymous Coward | more than 4 years ago | (#33315032)

its been 3 since the notion of AI has been anything more than a laughable memory of the 70-80's when your apple // could "communicate" with you in a "natural language"

syntax error

Re:Two decades? (2, Interesting)

Arlet (29997) | more than 4 years ago | (#33316406)

Dennett has already provided some insights. The problem is that people find that it doesn't match their intuition, so they keep looking for something else. The biggest hurdle you have to take is to realize that you can't know your own consciousness. Once you get beyond that, the problem becomes a lot easier.

http://www.youtube.com/watch?v=kOxqM21qBzw [youtube.com]

This gives me an idea: (1, Funny)

Anonymous Coward | more than 4 years ago | (#33314798)

Ray Kurzweil / PZ Myers slash fiction.

Go on, try to tell me that's not brilliant.

Re:This gives me an idea: (1)

jameskojiro (705701) | more than 4 years ago | (#33315002)

One doesn't understand how the brain works, the other is a cranky old guy...

Both, Madly in Love with each other.

Re:This gives me an idea: (1)

ElectricTurtle (1171201) | more than 4 years ago | (#33316060)

I think it would be more awesome to have each of them train for a time under a group of east coast and west coast rappers respectively. Then after, say, a year of training, they meet for an ultimate rap battle at an arena and give their best attempts to 'serve' and/or 'school' each other from their given perspectives in mad rhymez.


Basic assumption about brain development flawed (4, Interesting)

timepilot (116247) | more than 4 years ago | (#33315036)

The major flaw I can see in his response (which I think was addressed by Myers) is

but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

He even underlined it. The problem is that the brain doesn't just spring into existence fully formed and THEN get exposed to the environment. The brain starts out as a few cells and is constantly exposed to the environment as it develops. I think this was a major point in Myers' response and RK just blew right past it.

But it is a hard part to grasp (2, Interesting)

SmallFurryCreature (593017) | more than 4 years ago | (#33315422)

When are we human? Abortion hinges on this: WHEN is the foetus a human being with a human brain? Is there some magic moment the brain switches on, OR are we a bacterium that evolves rapidly into a complex life form?

Can it be that the brain "knows" the human body and how to operate it because it "grew up" with it? We imagine a robot being built, typically, on a long assembly line, with the head connected only at the last moment, and then the robot switches on. Could a brain instead function as a very small, simple "cpu" that gets more and more peripherals (but small ones), learns about them while they are still simple, and then grows familiar with them as they and it grow? Are WE created from a single egg, not just the body but the WE, the spirit, the bit that makes us, makes any animal, able to think? It would explain low-level functions far more. A full-grown heart is hard to control, but if you can get to grips with it when it is still just a few cells, that makes a lot more sense. It even fits what we know of brain cells being able to learn how to fly. Start simple, then add more complexity, rather than just plopping some brain cells into a 747 and asking it to fly to Hong Kong and boink the stewardess.

But building something like this? Fat chance. We use rat brain cells for a reason. Even building an AI that could teach itself to fly is beyond us. We can build AI that can fly but NOT AI that can teach itself to fly. Not even in very simple environments. That says something.

I think the old "And the egg starts to divide" bit is a bit more complex than we think.

Re:Basic assumption about brain development flawed (0)

Anonymous Coward | more than 4 years ago | (#33315502)

And you blew past the part where he said he wasn't talking about building a brain from the genome. I said this in response to the original article. The point is that the genomic argument isn't relevant beyond addressing the objection that the brain is a system too complex to describe in any amount of code. Kurzweil might be wrong, but Myers's points (and yours) don't enter into it. That should be obvious from reading the article that Myers linked to.

Re:Basic assumption about brain development flawed (4, Interesting)

timepilot (116247) | more than 4 years ago | (#33315880)

My point is that the genomic argument isn't relevant for addressing the objection that the brain is a system too complex to describe in any amount of code.

Even referencing the genome weakens the argument if you're using it to describe complexity. The genome is more of a bootstrap code than it is a descriptor of the system itself.

My understanding is that Kurzweil is looking at the brain as an existing system to be simulated, and Myers is saying that it is actually a long process that begins at the formation of a few cells and proceeds through exposure to its environment and its own chemistry. That the meaning of the system is actually bound up as much in that growth process as it is in the chemistry. That even the things that we see as redundancies may (or may not) be significant.

Both of these people are way smarter than I am. So like any good slashdotter, I feel compelled to criticize one of them to make myself feel better.

Re:Basic assumption about brain development flawed (3, Insightful)

Rakishi (759894) | more than 4 years ago | (#33316232)

Using the genome does not address the code issue; that's the whole bloody point, which anyone who knows molecular biology (including Myers) sees.

The genome is a SUBSET of the code used to describe a human brain. The real code is in the universe: physics, biology and so on. The computer the genome is run on. It's like using a 10-million-line library to create a JPEG and then saying that making a JPEG is only a single line of code because the call to the library was one line. Utter idiocy.
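A small Python illustration of that library analogy (mine, not the poster's; the json module merely stands in for "physics and chemistry"): the visible program can be one short line while the machinery it calls, which does the actual work, is vastly larger.

    import inspect
    import json

    one_liner = 'json.dumps({"neuron": 1})'       # the "one line of code"
    entry_point_source = inspect.getsource(json)  # only json/__init__.py, not even the whole package

    print(len(one_liner), "bytes of visible code")
    print(len(entry_point_source), "bytes in just the library's entry point")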

Re:Basic assumption about brain development flawed (1)

Attila Dimedici (1036002) | more than 4 years ago | (#33316190)

It is even more basic than that. What a particular piece of the genome codes for depends on what structures are in the cell it is in, and this starts with the very first cell of the organism. Additionally, what a particular piece of the genome codes for also depends on what cells are surrounding the cell it is in.

Really basic assumption flawed (0)

Anonymous Coward | more than 4 years ago | (#33315120)

There's a much more basic assumption that is wrong.

Data needed to create item != lines of code needed to simulate item

I can specify the production of a ceramic rod in a 20kb text file: "get mud, roll into cylinder 10cm long, diameter 1cm, bake to 1500 degrees".
This does not mean that I can write a program to simulate its behaviour in 20kb. At what stress will it fail? If I twist it, how many pieces will it shatter into?

The data required to build a brain is only weakly correlated with the size of a program that might simulate its operation.
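
A toy version of the same gap, with made-up numbers (this is not real ceramics engineering, just a sketch): the spec fits in a sentence, but even a crude failure-stress estimate drags in a material model and parameters the spec never mentions.

    import random

    # The entire "build spec": a few dozen bytes.
    spec = "get mud, roll into cylinder 10cm long, diameter 1cm, bake to 1500 degrees"

    # A crude Monte Carlo guess at mean failure stress using a Weibull strength
    # model. Every parameter below is an assumption the spec says nothing about.
    CHAR_STRENGTH_MPA = 300.0   # assumed characteristic strength
    WEIBULL_MODULUS = 10.0      # assumed scatter of ceramic strength

    samples = [random.weibullvariate(CHAR_STRENGTH_MPA, WEIBULL_MODULUS)
               for _ in range(10000)]

    print(len(spec), "bytes of build spec")
    print(round(sum(samples) / len(samples), 1), "MPa mean failure stress (toy model)")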

I called it (-1, Flamebait)

hessian (467078) | more than 4 years ago | (#33315124)

http://slashdot.org/comments.pl?sid=1757102&cid=33280252 [slashdot.org]

The point isn't the mechanics of the genome, but understanding the process of thought itself and emulating it.

P.Z. Myers has always been an idiot, but now he's really stepped into the zone of being one of those pricks from high school who was always dense but thought he was the bee's knees because he had political opinions the teachers liked to hear.

Re:I called it (1)

jcr (53032) | more than 4 years ago | (#33315546)

PZ is no idiot, but he does have a tendency to lash out at those who are, and occasionally also at those who aren't. I used to post on his blog fairly often, until it degenerated into a far-left echo chamber when he couldn't be bothered to require civility from his commenters. I also wouldn't describe him as being one of those high school pricks that you mention, although he certainly does tolerate and cheer on the ones who frequent his blog.

Kurzweil may or may not be right about the feasibility of AI, but it's a field he's worked in for decades, and one in which PZ has only tangential familiarity.

-jcr

Re:I called it (1)

oh_my_080980980 (773867) | more than 4 years ago | (#33315890)

You're a moron.

If you read both blogs, you'd understand why Kurzweil doesn't know what he is talking about.

Here's the crux of his argument:

"We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. "

Kurzweil just ignores this.

So, to summarise.... (1, Funny)

Anonymous Coward | more than 4 years ago | (#33315416)

If you're going to stand up and tell people you think someone else is wrong and hasn't properly understood the problem, make sure you're not basing your opinion on a second-hand re-telling of what the guy might have said. I'm off to tell PZ Myers that Alan Cox claims to have documented a cold fusion powered time-travel device in the comments of the -ac branch kernels.

Re:So, to summarise.... (1)

Assmasher (456699) | more than 4 years ago | (#33315788)

Ion drive powered by a fusion reactor resulting in time travel to the future via time dilation? Cool, I always knew he was hiding something in that great big bushy beard of his...

Just because Myers is an ass doesn't mean Kurzweil (3, Interesting)

divisionbyzero (300681) | more than 4 years ago | (#33315848)

is right. Myers's criticism may be off the mark, but Kurzweil's speculation about brain design, like so much of his other speculation, is bullshit. His basic argument in the blog post is that the amount of information in the human genome constrains the amount of information (and the complexity) required to design the brain. This thesis is wrong on a bunch of levels, but let's take the most obvious. The amount of information in the genome is the amount of information that the "body" (to simplify) requires to replicate or create parts of itself. The amount of information required is relative to the machinery which is going to interpret it. There is no reason to believe we are dealing with a Turing machine here, where the number of bits required for a program to perform a function is going to be more or less consistent across languages and platforms (assuming similar complexity of the code). The machine interpreting the bits matters. So while the body may only need "50 million bytes" to create itself, we may need many, many times more to specify how to build it. Just consider the complexity of protein folding.
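
The "relative to the machinery" point has a concrete analogue in compression. A sketch using Python's zlib: the same message needs far fewer bytes when the decoder is already primed with a shared dictionary, i.e. when the interpreting machine does part of the describing.

    import zlib

    message = b"the quick brown fox jumps over the lazy dog"
    shared_knowledge = message    # a "machine" that already contains the phrase

    generic = zlib.compress(message, 9)

    co = zlib.compressobj(9, zlib.DEFLATED, 15, 9, zlib.Z_DEFAULT_STRATEGY, shared_knowledge)
    primed = co.compress(message) + co.flush()

    print(len(generic), "bytes for a decoder that knows nothing")
    print(len(primed), "bytes for a decoder primed with the phrase")
    # Same message, very different size; the difference lives in the interpreting
    # machinery (decode with zlib.decompressobj(zdict=shared_knowledge)).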

More dubious statements follow:

"The goal of reverse-engineering the brain is the same as for any other biological or nonbiological system – to understand its principles of operation. We can then implement these methods using other substrates other than a biochemical system that sends messages at speeds that are a million times slower than contemporary electronics. The goal of engineering is to leverage and focus the powers of principles of operation that are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation."

This completely begs the question of whether it can be replicated in another substrate. He just assumes that it can be done, and by doing so he already assumes a model of the brain that could be (and most likely is) wrong. The brain is clearly not a Turing machine. That's not to say it isn't another kind of "computer" (for some expanded definition of computer), or that it doesn't follow mechanistic principles. But assuming the brain is like a Turing machine (which Kurzweil implicitly does) is one of the biggest obstacles to developing real AI.

Speculation of the Kurzweil kind does not belong in the "Science" category; maybe in "Idle".

Re:Just because Myers is an ass doesn't mean Kurzw (1)

Colonel Korn (1258968) | more than 4 years ago | (#33316276)

Quoting the parent: "The amount of information required is relative to the machinery which is going to interpret it. [...] The machine interpreting the bits matters."

Exactly so. The genome information assumption is absurd and arbitrary. It's like assuming that because I can buy a book on Amazon by transferring 1500 bytes of information to Amazon's website, I can recreate that book inside a simulation using only 1500 bytes of code. In both this case and the issue of brain complexity, the mechanism for transforming the initial information into the finished product is far more complex than the "input data."
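
Or, as a throwaway sketch of the same asymmetry (hypothetical identifier and sizes, just for illustration):

    # A short "order" versus the machinery that actually produces the book.
    order = b"ISBN:0123456789"

    def fulfil(order_id):
        # Stand-in for the author, the publisher and the warehouse: the product
        # comes out of machinery enormously larger than the reference sent in.
        return ("lorem ipsum " * 50000).encode()

    book = fulfil(order)
    print(len(order), "bytes of order,", len(book), "bytes of book")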

in one sentence? (2, Funny)

KingAlanI (1270538) | more than 4 years ago | (#33315976)

Kurzweil ridiculously optimistic, Myers ridiculously cynical?

may actually be SLOWING DOWN, not accelerating... (1)

neurocutie (677249) | more than 4 years ago | (#33316002)

Our progress towards "reverse engineering" the brain may actually be SLOWING DOWN, not accelerating. Despite the wishes and dreams of computer scientists, animal rights advocates and folks like Kurzweil, the real nitty-gritty of "figuring out the brain" comes primarily from painstaking experiments on the anatomy and physiology of the brain. The primary funder of this research in the US is the NIH, and funding has been stagnant if not decreasing in real dollars. Consequently, fewer smart students are entering the field and fewer labs are conducting the necessary studies. So even if the difficulty stays the same as we go deeper and deeper into the problem, our progress is only barely maintaining its current rate. But it is likely that the difficulty has been and will keep increasing, which means that with the same or fewer labs and the same or less research money, our progress will DECELERATE, not ramp up. The rapid advances in computing only help a little in these studies.

If we want to figure out the brain, we must re-invest in science education AND increase funding for basic neuroscience research.

Who's at fault then? (1)

jonxor (1841382) | more than 4 years ago | (#33316056)

Maybe if Slashdot editors weren't trolling around the internet LOOKING for just such scathing material, we wouldn't have this problem. It's turning into Digg, in a bad way. I hate seeing articles titled "Bill Gates kills 1,000,000 cute puppies!" only to have the actual article be about some random 10-year-old workstation that blue-screened at a stuffed animal factory. I find that CmdrTaco is usually the one with the most torch-and-pitchfork attitude in writing, usually trying to paint something in a bad light. My impression of him is that if Gandhi were discovered to have used a Sony product, we'd see an article the next day titled "Gandhi supported evil capitalist empire" (regardless of how evil Sony is, the articles always seem to have a slant to them).

Kurzweil is right (3, Informative)

ShooterNeo (555040) | more than 4 years ago | (#33316188)

Kurzweil is absolutely correct. His best argument is not the complexity of the genome but the focus on the actual functional structures in the brain. A cortex composed of a billion repeating units is something we CAN feasibly simulate. Already, we have massive systems that run an algorithm spread across billions of separate instances (google.com is one).

An "algorithm" could also model the behavior of a few neurons working in circuit.
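
To be fair to that claim, the core of such an algorithm really is tiny. Here is a sketch of two leaky integrate-and-fire neurons, one driving the other, with made-up constants; the point is how small the update rule is, not that this is a faithful cortical model.

    # Two leaky integrate-and-fire neurons, the first driving the second.
    DT, TAU = 1.0, 20.0           # time step and membrane time constant (made up)
    V_THRESH, V_RESET = 1.0, 0.0  # spike threshold and post-spike reset
    W = 0.8                       # synaptic kick from neuron 1 onto neuron 2

    def leak(v, drive):
        # standard leaky-integrator update toward the drive level
        return v + DT * (drive - v) / TAU

    v1 = v2 = 0.0
    for t in range(200):
        v1 = leak(v1, 1.5)                     # constant external input to neuron 1
        spiked = v1 >= V_THRESH
        if spiked:
            v1 = V_RESET
        v2 = leak(v2, 0.0) + (W if spiked else 0.0)   # neuron 1 excites neuron 2
        if v2 >= V_THRESH:
            v2 = V_RESET
            print("neuron 2 fired at t =", t)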

Also, keep in mind that most of the complexity of the brain and body is completely unrelated to the task of thinking. Much of the genome codes for molecular machine parts needed to maintain and grow the hardware. There are all kinds of defense, circulatory and support systems that we won't have to worry about when designing artificial minds.

And finally, the changes made to the brain by the environment don't make the problem harder. Once you have a self-organizing neural system that works like the human brain but a million times faster, you expose that system to our environment and train it up just like we do with humans. Sure, it might take a few years for such a system to reach super-intelligence, but if your fundamental design was right, then this would eventually happen.

He DID draw the compression genome conjecture (0)

Anonymous Coward | more than 4 years ago | (#33316376)

Kurzweil DID say it could be stored in a compressed format based on the size of DNA. That is still bullshit.

Kurzweil's data estimate based on wrong premise? (1)

Espressor (1476671) | more than 4 years ago | (#33316394)

After having avidly read the previous Slashdot article and TFA, I was struck by this in Kurzweil's response (emphasis mine):

The question we are trying to address is: what is the complexity of this system (that we call the brain) [...]? The original source of that design is the genome (plus a small amount of information from the epigenetic machinery), so we can gain an estimate of the amount of information in this way.

Didn't PZ Myers say first that, on the contrary, DNA merely gives a hint as to what the result of the ontogeny will be? That the real work is done by stochastic processes during development, and that DNA therefore doesn't tell us much at all about the final product? In which case, how can Kurzweil reduce the complexity of the brain to what is only starter data?
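
A toy way to see the "starter data" worry (obviously not biology, just a sketch): run the same tiny "genome" through a development process that also depends on accumulated random history and you get a different "organism" every time, so the starter data alone doesn't pin down the outcome.

    import random

    GENOME = "grow-toward-light"          # the same tiny starter data every run

    def develop(genome, seed):
        rng = random.Random(seed)         # stand-in for environmental/chemical history
        organism = []
        for stage in range(10):
            # each step depends on the genome AND on the accumulated noise
            organism.append((genome[stage % len(genome)], rng.choice("abcd")))
        return organism

    print(develop(GENOME, seed=1))
    print(develop(GENOME, seed=2))        # same genome, different "organism"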
