Science

Our Brains Don't Work Like Computers (737 comments)

Roland Piquepaille writes "We've been using computers for so long now that I guess many of you think our brains work like clusters of computers. Like them, we can do several things 'simultaneously' with our 'processors.' But each of these processors, whether in our brain or in a cluster of computers, is supposed to act sequentially. Not so fast! According to a new study from Cornell University, this is not true: our mental processing is continuous. By tracking the mouse movements of students working at their computers, the researchers found that our learning process is similar to that of other biological organisms: we don't learn through a series of 0's and 1's. Instead, our brain cascades through shades of grey."
This discussion has been archived. No new comments can be posted.


  • comparisons (Score:5, Insightful)

    by sound+vision ( 884283 ) on Wednesday June 29, 2005 @08:30PM (#12946663) Journal
    And it is for this reason that I loathe comparisons of computing power to brain power. "By 2015, we'll have computers as smart as humans." What kind of bullshit comparison is that? They're two completely different processes.
    • Re:comparisons (Score:5, Insightful)

      by NoImNotNineVolt ( 832851 ) on Wednesday June 29, 2005 @08:39PM (#12946760) Homepage
      "By 2015, we'll have computers sufficiently powerful to simulate a full working model of a human brain in enough detail to be functionally equivalent" would be what is actually being predicted. Because we have no convenient way of quantifying human smarts, like you said we cannot effectively compare how "smart" a computer is with respect to a human. That doesn't mean that computers will not be able to be functionally equivalent to biological intelligences, and there's no logical reason to suspect that they won't be in due time.
      • By 2015... (Score:3, Funny)

        by uberdave ( 526529 )
        By 2015, we'll have computers sufficiently powerful to simulate a full working model of a human brain...

        of course, it will be as large as a four-storey building, take all the power of Niagara Falls to run it, and all of the water of Niagara Falls to cool it.
      • Re:comparisons (Score:5, Insightful)

        by RhettLivingston ( 544140 ) on Wednesday June 29, 2005 @11:30PM (#12947713) Journal

        Unlikely. First, what they are saying here is that there is no clock. The brain is fundamentally analog in both state and TIME. To "simulate" it using computer algorithms would likely require finely stepped integrators for every connection of every neuron and every chemical pathway. Even the modeling of the blood flow and its nutrients is likely critical to a successful simulation of the thought process in some way. It's not at all like a normal computing problem. It's more like computing physics. We'd need processors like the new PhysX chip, though vastly more sophisticated. I'm thinking that a high-fidelity simulation of all of the connections of a single neuron in real time would likely take a full chip.
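
        To make "finely stepped integrators" concrete, here is a minimal Python sketch of one standard simplification, a leaky integrate-and-fire neuron stepped with Euler integration; the model choice and every constant are illustrative assumptions, nothing from the study:

          # One neuron's membrane potential, finely stepped Euler integration.
          # All constants are illustrative, not biologically calibrated.
          dt = 0.0001                  # 0.1 ms step -- "finely stepped"
          tau = 0.020                  # membrane time constant (20 ms)
          v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # millivolts

          v = v_rest
          spikes = []
          for step in range(10000):    # one second of simulated time
              i_in = 20.0              # constant input drive (arbitrary units)
              v += (-(v - v_rest) + i_in) / tau * dt    # one Euler step of dV/dt
              if v >= v_thresh:        # threshold crossing -> spike
                  spikes.append(step * dt)
                  v = v_reset          # reset after firing
          print(len(spikes), "spikes in 1 s")

        And that is a single neuron with one lumped input; multiply by every connection of every neuron and the one-chip-per-neuron guess stops sounding extravagant.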

        Furthermore, there is no evidence that we'll even be close to understanding how to teach the simulation if we created it. I'd put better odds on the creation of some sensing technology that could fully map the physical connections and the electrochemical state of every neuron and other component involved in thought (does anyone really think we know all of the components?). And I'd still place those odds very low.

        And what if we could simulate it... should we? It is likely that we'd create many insane intelligences in the process, either because we didn't duplicate the processes closely enough, didn't put in all of the instinct portions of the brain that actually have much more to do with true intelligence than the thinking portions, didn't provide the inputs that they were designed to have, or tried to improve on an analog machine with a complexity level far beyond modern math's ability to balance. And, whether or not it's true, many would call them life. Turning them off would likely be considered the same as killing them. The ethical dilemmas that would come about are tremendous.

    • And by 2007, we'll have computers as smart as your average Slashdotter. What's that? You say I'm typing on one already? Well I never!
    • by suitepotato ( 863945 ) on Wednesday June 29, 2005 @09:05PM (#12946951)
      "By 2015, we'll have computers as smart as humans."

      And given the people I deal with as customers in tech support, this is not an improvement. Quite the opposite really.

      "I don't know what the IP address is Dave and I don't care. I just want you to make me work or I'll e-mail your supervisor with a nasty complaint."
      • Heh. You're funny (Score:3, Insightful)

        by Moraelin ( 679338 )
        What cracks me up is the nerd infatuation with, basically, "only the trivia _I_ know are the essential things. And you're an idiot if you don't know them, no matter how utterly useless or irrelevant they are to _your_ job or interests."

        No, sorry. The world doesn't revolve around you or your hobbies. There _are_ plenty of jobs for which the computer isn't the important part. It's not what makes them money.

        E.g., for a lawyer it's a better investment of their time to study the laws and precedents than to le
      • by Moraelin ( 679338 ) on Thursday June 30, 2005 @04:57AM (#12948736) Journal
        You know, we're all nerds, and we're all arrogant.

        But what cracks me up is that the most arrogant assholes are the ones with the least skill or achievement. When you see someone harping the most about how he's uber-L33T because he knows what an IP address is, and how everyone else is an idiot... chances are it's someone who actually knows the _least_ about those. Chances are it's not a programmer who actually writes socket code, it's not a hardware engineer who's designed a network card, etc. No siree, it's a script-reader from the hell-desk that does the "I'm so l33t and everyone else is an idiot" fuss.

        So you want to call people idiots if they don't know some computer trivia you know (off a list of canned answers)? Well, then being an EE and having some 20+ years of programming experience, I'll call _you_ an idiot, because you're below _my_ skill level.

        Sure, you know what an IP or port number is or how to find it out in Windows. (Or can find it out on your list of canned answers.) But can you actually _use_ a socket on that port? Can you for example write a game server that listens on that port? If I gave you an old network card, can you find the right Linux kernel driver and change it to make it work with that card? Or what?

        Or, ok, you do know what an IP address is. Congrats. Do you also know what a B-Tree is, how it works, and how to implement one in your code? Do you also know the difference between, say, MergeSort and QuickSort, and the influence of external (e.g., DB file on a disk) vs internal (in RAM) sorting on their performance? Can you implement either purely as, say, a state-machine driven by exceptions to signal state changes, just to prove that you actually understand the algorithm, as opposed to copying someone else's code off the net? Do you know the difference between bitmap indexes and b-tree indexes in Oracle, and can you discuss when you might need one instead of the other?

        Hey, it's computer stuff too. Very basic stuff too, nothing esoteric. We established already that computer stuff matters, and that you're an idiot if there's something about it you don't know.
        • I think the reason is that the smarter you are, the more you realise you know nothing.

          It's like any complex problem where it seems easy until you look into it. The more you understand about it, the more you realise how little you understand.

          Me? I know that I know nothing at all - so I must be the wisest guy alive *grin*.
    • Re:comparisons (Score:3, Insightful)

      by Andronoid ( 816502 )
      I hate these comparisons too. AND they're not even useful for predicting when we can simulate a "fully functioning brain." All of these predictions are based on equating neurons (which for one aren't the only "computations" going on in the brain) with simple transistor-like units (e.g. Perceptrons [wikipedia.org] ). The truth is that when a neuron fires this leads to many possible different chemical cascades resulting in the production or destruction of neurotransmitters, neuroreceptors, and who knows what else. Talk to
    • No, they are not (Score:3, Informative)

      by autopr0n ( 534291 )
      Yeah, they are totally different. For example, a computer would probably never try to base philosophical arguments on a Slashdot blurb.

      Seriously, computers can work with things more complex than 'ones and zeros'. They can be programmed to deal with shades of grey just as easily (well, maybe not 'easily', but it definitely can be done).

      The fundamental part of the human brain is the neuron, and it's either firing or not. 1 or 0, just like a computer. What triggers it is a bit more complicated, but the process can be e
  • Fuzzy Networks (Score:3, Insightful)

    by Sir Pallas ( 696783 ) on Wednesday June 29, 2005 @08:30PM (#12946664) Homepage
    That's what I heard. Even if they don't work like sequential or even parallel digital computers, I'm pretty sure that brains still compute. Mine tries, at least.
  • by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday June 29, 2005 @08:30PM (#12946670) Homepage Journal
    Fuck off.
  • by justforaday ( 560408 ) on Wednesday June 29, 2005 @08:30PM (#12946681)
    Looks like the submitter forgot something. Lemme see if I can help him out a little:

    How will this study affect your next thought? Go here [primidi.com] to discuss it further.

    There, that feels more complete.
    • by Anonymous Coward

      S***** Network Administration

      site: primidi.com
      classification: spam/advertising
      access: denied

      If you think this is an error please contact ***@**

    • Thanks. I was wondering what the hell was going on (and still am). When I could only find the link directly to the document the blurb was talking about, I scanned over the blurb a couple more times looking for the primidi.com link.

      It wasn't until later that I actually noticed that Roland's name itself was the only link to primidi.com in the entire blurb. Weird, eh? Maybe he's trying to get on our good sides.

    • ...It wasn't posted by Timmothy. What's with that?
  • by Doc Ruby ( 173196 ) on Wednesday June 29, 2005 @08:31PM (#12946689) Homepage Journal
    Each neuron is like a tiny, slow analog DSP, feeding back FM around a base frequency (e.g. about 40 Hz in the brain's neural tract). The neurons have feedback among themselves locally, and send out some larger feedback in fiber bundles, signalling other clusters along the way. It's like a teeming kazoo symphony, without a conductor.
    • by SilentChris ( 452960 ) on Wednesday June 29, 2005 @08:55PM (#12946884) Homepage
      Well, actually, from the article it sort of sounds like a multibranch computing article I read a while back. I'm not sure if Intel actually went through with this, but the idea was to have a CPU process multiple "paths" ahead of time.

      So, for example, for a simple if statement waiting on user input, part of the CPU would process the "true" result of the statement and part would process the "false" one. When the user made a decision, one would be used and one would be thrown out. In theory, computing these branches ahead of time was supposed to be faster than doing things linearly.

      Again, though, I'm not sure Intel went through with this. They were the subject of the article.
      • I think you're talking about speculative multithreading, and I'm pretty sure this was part of the original Pentium architecture - but I'm no John Siracusa.
        • predictive branching (Score:5, Informative)

          by rebelcool ( 247749 ) on Wednesday June 29, 2005 @09:48PM (#12947202)
          Modern processors do, in fact, do this. They maintain statistics on the branches and go forward on the branch deemed most likely to be taken. It's based on a simple principle: if you've taken the same branch a few times before, you're likely to keep taking it from now on. Think of how loops work.

          Granted, if the processor is wrong, it has to clear the pipeline and start anew (which is costly), but the benefits outweigh the negatives.
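
          A hedged sketch of that bookkeeping in Python - a two-bit saturating counter per branch, the classic textbook scheme (real predictors are far fancier, and the branch address and loop below are invented for illustration):

            # Two-bit saturating counter per branch address.
            # States 0-1 predict "not taken", states 2-3 predict "taken".
            counters = {}                        # branch address -> state

            def predict(addr):
                return counters.get(addr, 1) >= 2     # True = predict taken

            def update(addr, taken):
                c = counters.get(addr, 1)
                counters[addr] = min(c + 1, 3) if taken else max(c - 1, 0)

            # A loop-style branch: taken nine times, then falls through once.
            correct = 0
            for taken in [True] * 9 + [False]:
                correct += predict(0x400) == taken
                update(0x400, taken)
            print(correct, "of 10 predicted correctly")

          It gets 8 of 10: it mispredicts the first encounter and the loop exit, and nails everything in between - exactly the "costly but worth it" trade-off described above.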

    • It's like a teeming kazoo symphony, without a conductor.

      Actually there are many conductors. What makes cognitive control an especially interesting problem is determining which conductors are in control at any point in time.

      This is a fantastically intricate and difficult problem: how to prioritize limited resources to interact with a rapidly changing environment in real time. i.e. What do you ignore at any given point in time?

      Our conductors trade off control constantly, over the time course of seconds,
      • Well, actually, it's more like "no conductor", because lots of signal loops and metaloops have stable attractors. The "conductors" are virtual, composed (hah) of the stable feedback states in the system, either biases or artifacts of the signaling transfer functions, or attenuations from repetition (learning). What fascinates me is the self-organization of the signal paths (engrams) that creates predictive models (mind) of the signal generators outside the brains (reality), including models of the models, an
  • IBM has been working on this [slashdot.org] for a while.
  • Obvious (Score:2, Funny)

    by Bloater ( 12932 )
    In other news, the sky is blue...

    Come on, it's not like this is neuroscience... Oh.
  • by Anonymous Coward on Wednesday June 29, 2005 @08:32PM (#12946693)
    ...with floating point arithmetic. A "double" can represent a number between 0 and 1 with 15 decimals of precision, way more precise than any biological phenomenon. Computers can think like us; it's just a matter of writing the right floating-point code.
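
    The "15 decimals" figure is easy to check in Python, along with the catch that comes with it:

      import sys
      print(sys.float_info.dig)        # 15 -- guaranteed significant decimal digits
      print(sys.float_info.epsilon)    # ~2.22e-16 -- smallest gap above 1.0
      print(0.1 + 0.2 == 0.3)          # False -- finite precision bites anyway
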
  • Fascinating (Score:5, Funny)

    by Vengeance ( 46019 ) on Wednesday June 29, 2005 @08:32PM (#12946694)
    The idea that our brains might work like biological organisms is a real breakthrough.

    Next week's research topic: Do farts stink?
    • Re:Fascinating (Score:3, Insightful)

      by QuantumG ( 50515 )
      It's Roland Piquepaille, what did you expect, he's a fucktard and the only reason he's on Slashdot so much is that he has a business relationship with them.
      • Re:Fascinating (Score:5, Informative)

        by nmoog ( 701216 ) on Wednesday June 29, 2005 @09:44PM (#12947168) Homepage Journal
        That's why everyone needs to install this super dooper greasemonkey script: De-Piquepaille Slashdot [daishar.com]

        It blocks stories submitted by Roland. Of course, you'd have to have installed Greasemonkey. Which I forgot to do on re-install and hence saw this fucking stupid article.
  • by Doc Ruby ( 173196 ) on Wednesday June 29, 2005 @08:34PM (#12946708) Homepage Journal
    More like:

    Our Brains Don't Work, Like Computers
  • by windows ( 452268 ) on Wednesday June 29, 2005 @08:35PM (#12946712)
    ...is that our brains (like TVs) are inferior analog devices and human brains need to be replaced with new digital versions. :-)
  • Yep (Score:2, Insightful)

    by jrivar59 ( 146428 )
    "... Instead, our brain is cascading through shades of grey."


    I guess some brains just have more contrast than others...


  • Gee, that's a surprise.

  • Wow (Score:5, Funny)

    by CardiganKiller ( 854899 ) on Wednesday June 29, 2005 @08:35PM (#12946717)
    I've been waiting for a scientist to tell me that I'm capable of thinking in abstract and fuzzy terms for years. Things I can now forget thanks to the brilliant scientist:

    1.) The GPS coordinates of each key on my keyboard.
    2.) The streaming audio of my name and all of my friends' and family's names.
    3.) The bio-mechanical force sequences for the hundreds of muscles used in picking up a glass every morning.

    Beer will no longer render my circuits useless!
  • Newsflash (Score:5, Informative)

    by tupshin ( 5777 ) <tupshin@tupshin.com> on Wednesday June 29, 2005 @08:35PM (#12946721) Homepage
    Headline: Brains More Like Neural Nets Than Traditional Programs

    Who woulda thunk it.

    ftp://ftp.sas.com/pub/neural/FAQ.html%23A2 [sas.com]

    'Most NNs have some sort of "training" rule whereby the weights of connections are adjusted on the basis of data.'

    Insert joke about the 1980s (or '60s/'50s/'40s) calling. Somehow I don't think Norbert Wiener would be the slightest bit surprised.

    -Tupshin
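
    That FAQ line about weights "adjusted on the basis of data" fits in a dozen lines of Python. Here is a toy perceptron learning AND; the learning rate and epoch count are arbitrary choices, not anything from the FAQ:

      # Perceptron: nudge the weights toward the data whenever the output is wrong.
      data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w, bias, rate = [0.0, 0.0], 0.0, 0.1

      def output(x1, x2):
          return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

      for _ in range(20):                       # a few passes over the data
          for (x1, x2), target in data:
              err = target - output(x1, x2)     # the perceptron update rule
              w[0] += rate * err * x1
              w[1] += rate * err * x2
              bias += rate * err

      print([(x, output(*x)) for x, _ in data])   # now matches AND
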
  • Maybe one day I will have an AMD cluster in my skull. Until then, I will accept my alcohol-cooled brain.
  • Sounds like the elusive "analog computer".

    "Shades of grey" sounds like working with analog values (i.e. 0-255) instead of binary levels (on/off) or even trianary values (on/maybe/off).

    • by Sir Pallas ( 696783 ) on Wednesday June 29, 2005 @08:56PM (#12946889) Homepage
      Analog computers still exist in some places, but you list discrete values. An analog computer works with an essentially continuous range of charges instead of discrete values; and it works continuously in time, instead of in discrete steps. They're very good at integrating, which is the application I used them in.
  • I don't think it's surprising to anyone that the mind works in an analogue fashion, weighing up the choices available to it as the decision is made, but I think this experiment is interesting in measuring the effect through physical reaction to verbal triggers. By using that many core subsystems of the brain, I think it's possible that effects could have been drawn into the experiment that are not wholly connected to the input/output streaming methods within the brain, and more to do with physical operation of
  • .. that not only do we think in 0's and 1's, but we have 2's and 3's as well!
  • Misleading (Score:5, Insightful)

    by rjh ( 40933 ) <rjh@sixdemonbag.org> on Wednesday June 29, 2005 @08:37PM (#12946737)
    The article's summation is far more accurate than Slashdot's. In TFA, a researcher says our minds don't work like digital computers.

    The Slashdot headline says our minds don't work like computers, end of sentence.

    Had TFSH (The Fine Slashdot Headline) been accurate, this would've been a mind-blowing result and in need of some extraordinarily strong evidence to support such an extraordinary claim. The question of whether the human mind--sentience, consciousness, and all that goes with it--is a computable process is one of the most wide-open questions in AI research right now. It's so wide-open that nobody wants to approach it directly; it's seen as too difficult a problem.

    But no, that's not what these guys discovered at all. They just discovered the brain doesn't discretize data. Significant result. Impressive. I'd like to see significant evidence. But it's very, very wrong to summarize it as "our brains don't work like computers". That's not what they proved at all.

    Just once, I'd like to see a Slashdot editor read an article critically, along with the submitter's blurb, before posting it.
    • What came to mind after I read the article was that their results looked like the behavior you'd expect from a standard tree search in a digital computer program, if you moved the mouse according to each branch decision...
    • Re:Misleading (Score:4, Insightful)

      by jafac ( 1449 ) on Wednesday June 29, 2005 @09:15PM (#12947014) Homepage
      They just discovered the brain doesn't discretize data.

      I don't see how that's at all possible given the underlying physical process. As the voltage, or frequency, or whatever the carrier for the "signal" is traverses a synapse, at some level nature itself quantizes it. There has to be a point where the level of the signal is distinguished as discrete from another level. One electron more or less, one Hz more or less... The question is, how consistent is the hardware at distinguishing the signal differences as discrete? I'm guessing that neurons probably aren't as sensitive as a purpose-designed piece of silicon could be. But maybe that inconsistency is a crucial part of the characteristics of data processing of biological nervous systems - those characteristics being what distinguishes them from technological systems...?
      • Re:Misleading (Score:3, Insightful)

        by Wolfier ( 94144 )
        The discretization most likely exists.

        However, their experiment did not look close enough to pick out the jaggies.

        Someone can write a computer program that behaves the same way as the experiment subjects. Now what can they conclude?

        Looks like another example of Cargo Cult science.
      • Re:Misleading (Score:3, Informative)

        by mikael ( 484 )
        From research carried out on retinal cells, it's the time between pulses (depolarization/repolarization of the synapse) that conveys the most information - stronger stimulation => more frequent pulses.

        And there is a minimum time between such pulses. For a higher response rate/precision, more cells are used.

        A single neuron will take in inputs from as many as 10,000 other neurons, with a threshold that has to be exceeded before it will fire itself. And each input can have the effect of increasing or
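
        The rate-coding story above sketches out in a few lines of Python (the saturating curve is my own invention - the only point is its shape: rate rises with stimulus up to the ceiling the refractory period imposes):

          # Toy rate code: stimulus strength -> pulse frequency, capped by
          # the minimum interval between pulses (the refractory period).
          refractory = 0.002                    # 2 ms minimum between pulses

          def firing_rate(stimulus):
              max_rate = 1.0 / refractory       # 500 Hz hard ceiling
              return max_rate * stimulus / (stimulus + 1.0)

          for s in (0.1, 1.0, 10.0, 100.0):
              print(f"stimulus {s:6.1f} -> {firing_rate(s):5.1f} Hz")
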
  • by ugen ( 93902 ) on Wednesday June 29, 2005 @08:38PM (#12946748)
    Birds do not fly like airplanes, they continuously wave their wings - and do not have turbines or propellers.

    Sure hope my taxes don't pay for that "research".
  • by Space cowboy ( 13680 ) * on Wednesday June 29, 2005 @08:41PM (#12946769) Journal

    Does anyone *really* think that computers and the brain work in the same way ? Or even in a significantly similar fashion ?
    Like them, we can do several things 'simultaneously' with our 'processors.'

    Well, by 'processors', I assume you mean neurons. These are activated to perform a firing sequence on output connections dependent on their input connections and current state, heavily modified by chemistry, propagation time (it's an electrical flow through ion channels, not a copper wire), and (for lack of a better word) weights on the output connections. To compare the processing capacity of one of these to a CPU is ludicrous. On the other hand, the 'several' in the quote above is also ludicrous... "Several" does not generally correspond to circa 100 billion...

    No-one has a clear idea of how the brain really processes and stores information. We have models (neural networks), and they're piss-poor ones at that...
    • There's evidence that the noise level in the brain is critical - that less noise would make it work worse, and the same for more noise - and that the brain uses superposition of signals in time (with constructive interference) as a messaging facility. (See the sketch after this list.)
    • There's evidence that temporal behaviour is again critical, that the timing of pulses from neuron to neuron may be the information storage for short-term memory, and that the information is not 'stored' anywhere apart from in the pulse-train.
    • There's evidence that the transfer functions of neurons can radically change between a number of fixed states over short (millisecond) periods of time. And for other neurons, this doesn't happen. Not all neurons are equal or even close.
    • Neurons and their connections can enter resonant states, behaving more like waves than anything else - relatively long transmission lines can be set up between 2 neurons in the brain once, and then never again during the observation.
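
    A sketch of the first bullet's point, often called stochastic resonance: a signal too weak to cross threshold on its own gets through better with a moderate dose of noise than with none or too much. Every number here is made up for illustration:

      import math, random
      random.seed(1)

      threshold, amp = 1.0, 0.8          # the signal alone never crosses

      def run(noise, steps=20000):
          aligned = total = 0
          for i in range(steps):
              s = amp * math.sin(2 * math.pi * i / 100)
              if s + random.gauss(0, noise) > threshold:
                  total += 1
                  aligned += s > 0.5 * amp     # crossing near a signal peak?
          return total, (aligned / total if total else 0.0)

      for noise in (0.0, 0.3, 3.0):
          total, frac = run(noise)
          print(f"noise {noise}: {total:5d} crossings, {frac:.0%} peak-aligned")

    With no noise the signal never gets through; with moderate noise the crossings line up with the signal's peaks; with heavy noise there are plenty of crossings but they say almost nothing about the signal.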

    The brain behaves less like a computer and more like a chaotic system of nodes the more you look at it, and yet there is enormous and significant order within the chaos. The book by Kauffman ("The origins of order", I've recommended it before, although it's very mathematical) posits evolution pushing any organism towards the boundary of order and chaos as the best place to be for survival, and the brain itself is the best example of these ideas that I can think of.

    Brain : computer is akin to Warp Drive : Internal combustion engine in that they both perform fundamentally the same job, but one is light years ahead of the other.

    Simon.
    • by Lemuridae ( 741647 ) on Wednesday June 29, 2005 @10:51PM (#12947527)
      Finally, a few good comments.

      The point under discussion in this article is summed in this quote:

      "More recently, however, a growing number of studies, such as ours, support dynamical-systems approaches to the mind. In this model, perception and cognition are mathematically described as a continuous trajectory through a high-dimensional mental space; the neural activation patterns flow back and forth to produce nonlinear, self-organized, emergent properties -- like a biological organism."

      The goal is to forcefully point out (using an experiment) that one way we think about mental processing, the digital computational model, is not very useful even at the trivial level of mental signal processing.

      It's interesting how all the sarcastic comments about the "biological organism" reference completely miss the point. The point is that the signal is being processed in a way that could be modeled by the way a biological organism moves through space. It sniffs here, then there, then jumps to the solution. The signal processing itself exhibits emergent properties.

      The reference to the dynamical system (http://en.wikipedia.org/wiki/Dynamical_system [wikipedia.org]) is key. (I think people frequently fail to notice the additional "al" and take this to refer to some sort of generic "dynamic system".) Dynamical systems, although deterministic, are a foundational tool for developing chaos theory.

      For me the interesting idea is that the default state of thought is in-betweenness. We stay jittering back and forth in an unresolved state until, suddenly, we aren't.
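
      That "jittering until suddenly we aren't" has a standard dynamical-systems toy: a noisy race between two accumulating interpretations, first one to the bound wins. This sketch is my own illustration, not the authors' model, and the drift and noise figures are invented:

        import random
        random.seed(7)

        a = b = 0.0                         # evidence for readings A and B
        drift_a, drift_b = 0.012, 0.008     # A is slightly better supported
        bound = 1.0

        t = 0
        while a < bound and b < bound:      # the unresolved, in-between state
            a = max(a + drift_a + random.gauss(0, 0.05), 0.0)
            b = max(b + drift_b + random.gauss(0, 0.05), 0.0)
            t += 1
        print("settled on", "A" if a >= bound else "B", "after", t, "steps")

      The state wanders between the two attractors for a while and then commits - roughly the shape the mouse trajectories traced.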
  • The W3C rejected my idea for a "sarcasm" HTML tag, when it would have been so useful at a time like this. Well, I can still fake it:

    Our brains don't work like computers? <sarcasm>Noooo, you're kidding!</sarcasm>

  • People aren't born with an innate foundation in predicate calculus?

    I suppose it can be a useful line of research in robotic "muscle" coordination and world interaction.
  • I still believe in the Church-Turing Thesis... Our brains might not work LIKE computers, but they don't do work DIFFERENTLY from them either.
  • "the researchers found that our learning process was similar to other biological organisms"

    Why was this a surprising result? Prior to this they thought what, that people were human-made binary computers in disguise? We have developed computer systems using binary math not because a binary system of logic is necessarily the best, but because binary components can be made easily and cheaply.

    Also, figuring out a system of low-level operations such as NAND and XOR is more difficult for other number systems li

  • They talk in the article of a "1's and 0's" concept of brain function, but they fail (at least through what is in the PR release about their experiment) to disprove that the brain operates on binary data.

    Even computer software, which is known to operate on a strict binary system at the lowest layers, can have the appearance of smooth, curving outputs as the data fed to it changes. This smoothness breaks down at some granularity if you look closely enough at the output and see it jumping from one value to t
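
    That granularity argument takes only a few lines of Python to demonstrate (8-bit quantization chosen arbitrarily):

      # A smooth function snapped to a 256-level grid looks continuous from
      # afar; probe below the grid spacing and the jumps appear.
      def quantize(x, levels=256):
          return round(x * (levels - 1)) / (levels - 1)

      coarse = [quantize(0.1 * i) for i in range(11)]       # looks smooth
      zoomed = {quantize(0.5 + i * 1e-5) for i in range(100)}
      print(len(zoomed), "distinct output value(s) across 100 tiny inputs")
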
  • The article seems to assume that the only type of computer is a _binary_ computer. This is simply not true! There are all sorts of models for computing based on quantum states, fluid-controlled logic systems and who knows what else. To confine computing to binary systems is like confining mathematics to the set of integers!

    I believe that the mind is (simply?) a quantum computer, and the article seems to support that idea. The human brain utilizes a sort of general interconnectedness of things to process thoughts as dynamic probabilities of state, with conclusions only being properly arrived at after a certain amount of calculation has occurred, but with all probabilities existing well before the completion of the thought.

    Anyhow, I should probably stop rambling and go outside or something.

  • Evolution (Score:5, Insightful)

    by __aaijsn7246 ( 86192 ) on Wednesday June 29, 2005 @09:03PM (#12946936)
    ...the researchers found that our learning process was similar to other biological organisms....

    That makes perfect sense, seeing as our brains evolved [talkorigins.org] from other biological organisms.

    Check out evolutionary psychology [wikipedia.org] for some information. You'll view the world differently afterwards.

    Evolutionary psychology (or EP) proposes that human and primate cognition and behavior could be better understood by examining them in light of human and primate evolutionary history... The idea that organisms are machines that are designed to function in particular environments was argued by William Paley (who, in turn, drew upon the work of many others).
  • by G4from128k ( 686170 ) on Wednesday June 29, 2005 @09:03PM (#12946939)
    Just because brains aren't binary or synchronously clocked doesn't mean much. One can create analog computers [wikipedia.org] to represent shades of gray or create clockless computers [computer.org] that don't operate in lock-step synchronization. Furthermore, any digital, synchronous computer can simulate both shades of gray (with floating-point numbers) and continuous processes (with sufficiently small time slices). Moreover, given the messiness of neuro-electrochemical systems, one can argue that it doesn't take a very precise float or a particularly dense time slicing to simulate neurons.

    Some people ascribe the seeming magic of consciousness to some ineffable property of the brain, e.g., a quantum mechanical effect, while others insist that it's just what happens when you connect enough simple elements in a self-adaptive network.

    The question is, are there neural input-output functions that are fundamentally not computable? If not, then a digital computer will, someday, reach human brain power (assuming Moore's law continues).
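
    The "sufficiently small time slices" point above can be made concrete by Euler-stepping a continuous process, dy/dt = -y, with ever finer slices; the equation is just a stand-in, nothing neural about it:

      import math

      def euler_decay(dt, t_end=1.0):
          y = 1.0
          for _ in range(round(t_end / dt)):
              y += -y * dt                  # one discrete slice of the process
          return y

      exact = math.exp(-1.0)                # the truly continuous answer
      for dt in (0.1, 0.01, 0.001):
          print(f"dt={dt}: error={abs(euler_decay(dt) - exact):.6f}")

    The error shrinks roughly in proportion to the slice width, which is the whole argument for simulating analog processes digitally.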
  • also worthy of note (Score:5, Informative)

    by twiggy ( 104320 ) on Wednesday June 29, 2005 @09:17PM (#12947025) Homepage
    The book "On Intelligence" [amazon.com] by Jeff Hawkins (of Palm fame) and Sandra Blakeslee is all about how the brain works, and why people's approach to AI is not going to come anywhere near emulating the brain...

    Figured it was worth mentioning given the subject matter of the thread... I liked it... good read, if a bit dry at times...
  • Brain vs. Mind (Score:5, Insightful)

    by Kaenneth ( 82978 ) on Wednesday June 29, 2005 @09:19PM (#12947040) Journal
    I don't think the chunk of meat in my head works using digital logic; but I'd like to think my Mind does a reasonable job of it.

    Natural numbers (1,2,3...), true/false, up/down...

    It's not unnatural to divide everything in half; heck, our bodies are mostly symmetrical. The distinction comes in where the dividing line is.

    We can weight our decisions in endless ways: if someone makes a statement, our belief in that statement depends on how many times we have heard it, our trust in the person stating it, and whether it meshes with known facts in the current context.

    What I wonder is how far a human mind can be pushed in terms of the concepts it can grasp and the control it has: can a human visualise a 5-dimensional virtual object? Control emotional responses without suppressing them? Hold multiple contradictory world models? Accelerate long-term memory access?

    Even if you think of an electronic computer, it's just hordes of electrons rushing down pathways, only reliable because the voltage levels are continually refreshed at each step; a few electrons might wander off the path, but they are replaced at the next junction. Quantum Mob Rule.
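
    That "refreshed at each step" property is the whole trick of digital logic, and it's easy to caricature in Python: relay a bit through 40 noisy stages, with and without re-thresholding at each junction (the noise level and stage count are invented):

      import random
      random.seed(3)

      def relay(bit, stages, restore):
          level = 1.0 if bit else 0.0
          for _ in range(stages):
              level += random.gauss(0, 0.15)         # electrons wander off
              if restore:                            # refreshed at the junction
                  level = 1.0 if level > 0.5 else 0.0
          return level > 0.5

      trials = 1000
      for restore in (False, True):
          ok = sum(relay(1, 40, restore) for _ in range(trials))
          print(f"restored={restore}: {ok}/{trials} bits survive 40 stages")
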
  • by Shimmer ( 3036 ) on Wednesday June 29, 2005 @09:24PM (#12947068) Journal
    We have no clue how the brain actually works. Sure, we know how individual neurons work, but no one can explain how a bunch of neurons creates a mind.

    We look around our world and notice that computers are superficially similar to brains (e.g. they can both do math), so we hypothesize that they work similarly.

    However, there's very little hard evidence supporting this hypothesis in the first place, so there's no "news" in this story.

    Bottom line: The brain is not just a super-powerful computer.
  • Schema Theory (Score:3, Interesting)

    by bedouin ( 248624 ) on Wednesday June 29, 2005 @09:26PM (#12947078)
    How is this different from a schema? [wikipedia.org] Haven't we known this since the 70's?
  • Pretty Please (Score:4, Insightful)

    by pete-classic ( 75983 ) <hutnick@gmail.com> on Wednesday June 29, 2005 @09:31PM (#12947101) Homepage Journal
    Dear Slashdot Editors,

    Could we pretty, pretty please have a Roland Piquepaille section, so we can opt-out? I've been good all year, and it's almost my birthday, and I won't ask for anything for Christmas.

    -Peter
  • by DynaSoar ( 714234 ) * on Wednesday June 29, 2005 @09:39PM (#12947128) Journal
    "In this model, perception and cognition are mathematically described as a continuous trajectory through a high-dimensional mental space; the neural activation patterns flow back and forth to produce nonlinear, self-organized, emergent properties -- like a biological organism."

    Fine, let's see the math. Let's see the trajectory calculations. How about those calculating the space? Calculating the number of dimensions the space has, and how fast that number changes over time?

    40 years ago brain scientists realized that computer architecture made a good metaphor for how the brain works. (They did NOT assume there was no feedback, contrary to the article). It made a handy and productive way to look at things so they could figure out more about what was really going on.

    10 years ago brain scientists realized that they could use the way cool chaos stuff to describe the way the brain works. Believe me, I know; I've been to the Santa Fe Institute twice. It worked particularly well for me because I'm essentially a signal analyst -- I HAVE to define a set of variables, estimate how well they work, and decide how many of my arbitrary variables to keep or throw out.

    It's still only a metaphor. And unlike the specific processes described by cognitive science, the dynamical-systems stuff remains nebulous. It claims a mathematical legitimacy which it can really claim only in concept, because the actual math of the actual operations is beyond the abilities of anyone making the claims. The fact that it *can* be described this way is no less trivial than the fact that processes can be grouped according to the traditional cognitive science concepts.

    Trajectories in phase space are soooooooo sexy. But if it's any good, it'll result in something more concrete than more people picking up this flag and waving it while shouting the new slogans and buzzwords. Until that happens I peg this with the study that "calculated" the "fractal dimension" of the cortex just because it has folds and folds in the folds... so fsking what.

  • by Animats ( 122034 ) on Wednesday June 29, 2005 @10:09PM (#12947303) Homepage
    Where does he find this stuff?

    The path planner goes slower and generates paths that are initially ambiguous when faced with multiple alternatives. That's no surprise. I'm working on the steering control program for our DARPA Grand Challenge vehicle, and it does that, too. Doesn't mean it's not "digital".

  • So essentially... (Score:3, Insightful)

    by Ninwa ( 583633 ) <jbleau@gmail.com> on Wednesday June 29, 2005 @10:24PM (#12947394) Homepage Journal
    Essentially what I got out of this article is that our thought process is much like Google's auto-search, which guesses the word you want to search for as you're typing it but won't know for sure until the entire word is finished.

    Hm, duh?

    In all seriousness though, I wonder how the curvature of the mouse path shows gravitation to one side versus the other; maybe they're just Quake 2 players who enjoy circle-strafing.
  • by gordona ( 121157 ) on Wednesday June 29, 2005 @10:42PM (#12947487) Homepage
    There's a saying by neurophysiologists: "If the brain were simple enough to be understood, it would be too simple to understand itself"
  • same old story (Score:3, Insightful)

    by dario_moreno ( 263767 ) on Thursday June 30, 2005 @08:41AM (#12949299) Journal
    For centuries, people have compared the human brain with the most advanced technology of the era : clocks in the 17th century, automatons in the 18th, Jacquard weaving machines or steam engines during the 19th, automated telephone exchanges in the 1920's, and digital computers from the 1950's on. Now it's (neural) networks, quantum computers or fuzzy logic, but the idea is the same.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...