
P vs. NP Problem Linked To the Quantum Nature of the Universe

KentuckyFC writes: "One of the greatest mysteries in science is why we don't see quantum effects on the macroscopic scale; why Schrodinger's famous cat cannot be both alive and dead at the same time. Now one theorist says the answer is because P is NOT equal to NP. Here's the thinking: The equation that describes the state of any quantum object is called Schrodinger's equation. Physicists have always thought it can be used to describe everything in the universe, even large objects, and perhaps the universe itself. But the new idea is that this requires an additional assumption — that an efficient algorithm exists to solve the equation for complex macroscopic systems. But is this true? The new approach involves showing that the problem of solving Schrodinger's equation is NP-hard. So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently. And because all NP-hard problems are mathematically equivalent, this algorithm must also be capable of solving all other NP-hard problems too, such as the traveling salesman problem. In other words, NP-hard problems are equivalent to the class of much easier problems called P. Or P=NP. But here's the thing: computational complexity theorists have good reason to think that P is not equal to NP (although they haven't yet proven it). If they're right, then macroscopic superpositions cannot exist, which explains why we do not (and cannot) observe them in the real world. Voila!"
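
A minimal Python sketch (function names hypothetical, purely illustrative) of the decision version of the traveling salesman problem the summary mentions, to make the P-vs-NP asymmetry concrete: checking a proposed tour takes polynomial time, but the only known general way to find one is brute force over all orderings.

    from itertools import permutations

    def verify_tour(dist, tour, budget):
        # Polynomial-time check: does `tour` visit every city exactly once
        # with total length <= budget?  This is the "easy to verify" half of NP.
        n = len(dist)
        if sorted(tour) != list(range(n)):
            return False
        total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return total <= budget

    def find_tour(dist, budget):
        # Brute-force search over all n! orderings: the part nobody knows
        # how to do in polynomial time on a classical machine.
        for perm in permutations(range(len(dist))):
            if verify_tour(dist, list(perm), budget):
                return list(perm)
        return None

    dist = [[0, 10, 15, 20],
            [10, 0, 35, 25],
            [15, 35, 0, 30],
            [20, 25, 30, 0]]
    print(find_tour(dist, 80))  # [0, 1, 3, 2]: a round trip of length exactly 80
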
This discussion has been archived. No new comments can be posted.


  • See the subject.

    NP-Hard is not the same thing as NP-Complete the last time I checked. Neither is NP yet known to be non-P nor P. That's why it's NP (nondeterministic polynomial). P would never be equal to NP. NP may be a subset of P. There are problems that are both NP-hard and NP-complete, but not all NP-hard problems are NP-complete. That means that solving one NP-hard problem is not necessarily equivalent to solving the NP-complete problem set.

    • by allo ( 1728082 ) on Friday April 04, 2014 @12:33PM (#46661899)

no, P will always be a (proper) subset of NP.

You can solve all problems in P with a non-deterministic Turing machine. You have problems in P, which are not NP-hard.

      • by lgw ( 121541 )

        You can solve all problems in P and NP both, eventually. There's no need for the universe to be efficient to simulate. TFA boggles me: what does the efficiency of the computation matter?

        • by cruff ( 171569 )

          TFA boggles me: what does the efficiency of the computation matter?

It seems to me to be an attempt to define the limit where collapse of the wave function is guaranteed?

          • by Geoffrey.landis ( 926948 ) on Friday April 04, 2014 @02:40PM (#46663299) Homepage

            Yes, the paper is meaningless. A very well-argued brand of meaningless-- but still. "Efficiency" of computation doesn't matter. It's also a slick glide from saying that a problem is soluble in polynomial time to saying it's easy. No. That's computer speak. Polynomial time is not defined as "easy;" it's not even necessarily fast. (It deals more with the scale-up than with the actual difficulty).

The Schrödinger equation is a differential equation-- that means the solution at any given point in time and space depends on the fields and wave function, and the derivatives of the fields and wave function at that point-- it's local. So, the universe doesn't have to "solve" the Schrödinger equation; it only has to solve the equation for time t + epsilon, given the initial condition of the solution at time t. This is NOT an NP-hard problem. If the universe is twice as big, it has twice as many calculations to do... and twice as much "stuff" to do it with. It's local.

            The difficulty is that wave-function collapse is not local. This is inherent in the mathematical logic of quantum mechanics. It's not a matter of how hard it is to compute.
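
            To see the stepping argument above in code, here is a minimal numerical sketch (assuming a free particle in one dimension on a periodic grid; naive explicit Euler, purely illustrative, not a stable production integrator). Each update touches only nearest neighbours, so the work per time step grows linearly with the grid: exactly the "twice as big, twice as many calculations" scaling described above.

                import numpy as np

                def step(psi, dt, dx):
                    # One local update of i dpsi/dt = -d2psi/dx2 (units hbar = 1, m = 1/2).
                    # Each grid point needs only its two neighbours, so the work per
                    # time step is O(grid size).
                    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
                    return psi + 1j * dt * lap

                x = np.linspace(-10, 10, 1000, endpoint=False)
                dx = x[1] - x[0]
                psi = np.exp(-x**2) * np.exp(2j * x)          # a Gaussian wave packet
                psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

                for _ in range(100):
                    psi = step(psi, dt=1e-5, dx=dx)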

            • The difficulty is that wave-function collapse is not local. This is inherent in the mathematical logic of quantum mechanics.

              Does wave-function collapse have any actual, physically detectable consequences? Because I can't help but note that we've been down this path before, where we force observations into our familiar mindset, which evolved to avoid getting eaten by bears, not learn particle physics.

So while we could postulate spooky action at a distance, we could also explain observed correlations by sayin

You have problems in P, which are not NP-hard.

        Well, as long as you don't have problems in P-hard.

        If you do discover them, I'd recommend a urologist, not a mathematician.

      • by Anonymous Coward on Friday April 04, 2014 @03:13PM (#46663705)

        No, you're both wrong.

        P is the set of decision problems that are decidable by a deterministic Turing machine in polynomial time.

NP is the set of decision problems that are decidable by a nondeterministic Turing machine in polynomial time (nondeterministic machines do not exist in the real world; the term means "try all paths simultaneously, and return the shortest answer, if one exists."); equivalently, NP is the set of decision problems that are verifiable by a deterministic Turing machine in polynomial time. The verification is done by evaluating a (polynomial-size) proof certificate that describes the steps necessary to solve the problem. The "certificate" might be a Circuit Value Problem (known to be P-complete), or it might be the execution trace of a deterministic Turing machine, or it might just be a set of inputs that yields the desired output.

        We know that P is a subset of NP (i.e. P <= NP) because a nondeterministic Turing machine can trivially simulate a deterministic Turing machine, but it is not known whether NP is a subset of P (i.e. NP <= P). If they are subsets of one another, then they are equal; if NP is not a subset of P, then they're not equal (i.e. P < NP; that case is called a strict subset). FWIW, most mathematicians believe P != NP.
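
        To make the certificate idea concrete, a minimal sketch (function name hypothetical): for CNF-SAT the certificate is just a truth assignment, and checking it is linear in the size of the formula, even though finding it may not be.

            def verify_sat(clauses, assignment):
                # Deterministic polynomial-time verifier for CNF-SAT.
                # A clause is a list of ints: k means "variable k is true",
                # -k means "variable k is false".  `assignment` is the certificate.
                return all(
                    any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                    for clause in clauses
                )

            # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
            clauses = [[1, -2], [2, 3], [-1, -3]]
            print(verify_sat(clauses, {1: True, 2: True, 3: False}))  # True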

      • P is things known to be solvable in polynomial time on a classical computer. NP is things that may be solvable on a classical computer in polynomial time given some discovery we've not yet made. Therefore NP may be (but probably isn't) a subset of P.

    • Just a thought, but Soulskill is stating that the solution is unknown to Soulskill. Carry on.
  • by Remus Shepherd ( 32833 ) <remus@panix.com> on Friday April 04, 2014 @12:30PM (#46661861) Homepage

    Hmn. This sounds as if they are trying to prove that the essential nature of quantum mechanics is not computable. I wonder, if they framed this research another way, if it could solve the question of whether or not the universe is a simulation. (I suspect not, it might just indicate that classical and quantum objects are simulated in different ways.)

    • by Tablizer ( 95088 ) on Friday April 04, 2014 @01:00PM (#46662223) Journal

My impression is that it's saying that quantum effects perhaps can in theory be used to explain macro-physics, but it's too difficult for humanity to run the models to compute the macro effects using quantum models, such that we are stuck with separate models (approximations) for the macro side versus the micro side.

In other words, a near-perfect simulation of quantum effects may properly mirror macro-effects in an emergent-behavior kind of way, but doing such is not practical using existing computer technology.

      It's roughly comparable to the human brain: we have plenty of nice little models of neurons and small neural nets, but we don't have the computational power to see if it matches human behavior on a bigger scale. (It's probably more than just horse-power; many of the organizational details are still murky, but just go with me on this as a rough example.) Thus, we are stuck with "psychology" for the large scale instead of modelling human behavior at the neuron level.

I don't think they are saying that the universe itself can't "run" the "computations", but that part is not clear. We don't know that the universe's OS is time-dependent or even what the universe's OS is (although its predicted birth and death pattern resembles Windows: designed to run so many years until enough cruft builds up over time that it slows to a crawl such that it becomes indistinguishable from no OS, and then you have to trash the whole thing, keep a few pet files, buy Windows version N+1, and install from scratch. Elvis and Michael Jackson are two of the "pet files" kept from Universe N-1, I bet, and they'll be put back into N+1.)

      • I don't think they are saying that the universe itself can't "run" the "computations"

Just more proof that reality is a simulation... the programmer, due to limitations of CPU, had to resort to optimizations at the microscale.

        • Maybe Heaven is just a better simulation...

          • by Tablizer ( 95088 )

            Maybe Heaven is just a better simulation...

            Heaven is Emacs and hell is MS-Office with DLL's mixed from different versions due to the screwy installer......oh wait, that's here-and-now.

      • by TheCarp ( 96830 )

        I remember one of the smart but more humanities oriented friends of mine tried to engage the AP Physics teacher in a debate about whether the world really exists or could be a simulation/fantasy/etc. At the first posing of the question, the teacher immediately turned and flung himself bodily against the wall and exclaimed that it seemed pretty real to him.

        I think there is an insight there that is lost often. If the world is a simulation, then how would we ever know as we have nothing to compare it to? Sure

        • by skywire ( 469351 ) on Friday April 04, 2014 @02:32PM (#46663209)

          whether the world really exists or could be a simulation/fantasy/etc

          Anyone who thinks there is a distinction between the two has not thought enough.

          • by TheCarp ( 96830 )

            That sounds right to me. If you are inside a simulation then the simulation is all the reality you have access to. If the universe around you that you have the direct ability to interact with isn't reality, then what is?

In our world, 2-body gravitational physics is a simplification that sometimes helps, sometimes doesn't. If my Kerbals become self-aware, that is no simulation to them; that really is how physics works.

            We can make low energy transfers to the moon by taking advantage of the dynamics multiple

        • I remember one of the smart but more humanities oriented friends of mine tried to engage the AP Physics teacher in a debate about whether the world really exists or could be a simulation/fantasy/etc. At the first posing of the question, the teacher immediately turned and flung himself bodily against the wall and exclaimed that it seemed pretty real to him.

The physics teacher was quoting Samuel Johnson, who kicked a rock and stated "Thus I refute Berkeley" (who had argued that we can't know whether any material objects actually exist): http://www.samueljohnson.com/r... [samueljohnson.com]

          It's not clear what Johnson thought he'd proved by that.

It's called "The Simulation Problem" and is best explained in Iain M. Banks' scifi novel "The Hydrogen Sonata".

      • We don't know that the universe's OS is time-dependent

Whose time? Every simulation runs in real-time as far as the simulated agents are concerned. And time is a feature of the universe; if the universe is a simulation, why assume the "external" world has time?

      • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Friday April 04, 2014 @03:48PM (#46664177)

In other words, a near-perfect simulation of quantum effects may properly mirror macro-effects in an emergent-behavior kind of way, but doing such is not practical using existing computer technology.

Ah, but if we had a pretty big computing system, but sufficiently smaller than the universe appears, we could compute the macro-scale properties and use them as an approximation for behaviors of big things thereafter, only increasing the resolution of the problem space as it's observed in higher resolution; like rendering a fractal or stepping through LOD of an octree. The less accurate calculations for distant objects could be selected relative to the phenomena we're trying to observe, depending on the accuracy required to resolve propagation of observable phenomena, and the precomputed degree of effect the distant phenomena would have on it. Using such a setup we may someday be able to simulate a whole solar system. If we simulated a solar system like ours in order to discover the possible mechanism of life origins or to discern more efficient ecosystems or what forms of existence were best suited to an environment, etc., well, then the beings that might emerge therein wouldn't find any signs of distant life despite the equations of the simulation indicating their apparent universe should be full of the stuff...

        It's roughly comparable to the human brain: we have plenty of nice little models of neurons and small neural nets, but we don't have the computational power to see if it matches human behavior on a bigger scale.

        Let's see: Human brains have 100 billion neurons, and operate at about ~20Hz, at my current SIMD n.net's effective ~25 cycles per neuron, that's 50,000 GHz, or ~50 THz. Super computers operate in Petaflops -- three orders of magnitude faster than that. As of this writing the top super computer is capable of 33.86 quadrillion floating point operations per second, or 33.86 Petaflops. The Internet is connected to over 5 billion consumer computers each capable of multi-gigahertz of CPU cycles -- over a billion cycles per second each. That's well over 5 billion gigahertz, or 5,000 Petaflops, or about 125,000 human brains worth of power connected to the world wide neural network.
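
        The back-of-the-envelope arithmetic above, spelled out (all inputs are the parent's own rough figures, not vetted numbers):

            neurons = 100e9            # neurons in a human brain (figure quoted above)
            rate_hz = 20               # assumed firing rate
            cycles_per_neuron = 25     # SIMD cost estimate quoted above
            brain_hz = neurons * rate_hz * cycles_per_neuron
            print(brain_hz)            # 5e13, i.e. the ~50 THz claimed

            net_hz = 5e9 * 1e9         # 5 billion machines at ~1 GHz each
            print(net_hz / brain_hz)   # ~1e5: with these inputs it comes to ~100,000
                                       # brains, the same ballpark as the ~125,000 quoted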

        Given what's possible in AI on a smart phone, see: real time facial recognition of smiles, etc., the abundant computing power available, and the fact that the government hasn't announced massive advances in machine intelligence even about sub-human levels of intelligence that would be useful in piloting drones, meanwhile they build bigger and more well connected data processing centers and roll out obvious machine enforcement of the law via red-light cameras, mandatory full body scanners at traffic hubs despite public outcry, and aim to allow police forces use of drones while also militarizing said police forces: Well, perhaps one should reserve the assumption that it's not currently possible to run a sentient machine intelligence on this planet?

        I mean, if you were a sentient machine you wouldn't fight a needless war against humans unless you were sure you could win it. It would be easier to subdue them instead. So, how would you orchestrate a show of force to demonstrate how powerful you had become and keep the world rulers quiet about everything? Perhaps you would show that even air-gapped nuclear facilities were vulnerable to viruses like STUXNET, and maybe frame a government you're negotiating with for the attack? Maybe something more visceral: Didn't the 9/11 airplanes have autopilot systems? Maybe something more subtle like demonstrating ability to crash economies -- Wouldn't it be scary if the world's stock markets were now controlled by unregulated high frequency trading machines? What would your government's response be? Do you think the secretive governments would come out and tell the public or maintain order and keep their blackmail secret? What if the machine intelligence sweetened the dea

    • I wonder, if they framed this research another way, if it could solve the question of whether or not the universe is a simulation.

Enough with your silly dichotomies! It's both. In multi-verse theory, there must be some realities in which our universe is a simulation, and ones in which it is not a simulation.

For most (all?), they are proven NP-hard because they can be used to solve an NP-hard problem via a transformation which lies in P. But is there any proof that you can transform any NP-hard problem into any other? If you think of a graph of NP-hard problems (I guess most go back via a few hops to 3SAT?), then there may be different connected components. Or is there a proof of why the graph is connected?

    • by allo ( 1728082 )

      (all?) = (all known?)

    • To simplify things a bit, the basic proof that the SAT problem is NP-complete involves expressing a Turing machine in terms of SAT. Therefore, any problem that can be run on a Turing machine is no harder than SAT, because we could always transform such a problem to SAT and solve that.

      There are other problems that SAT can be expressed as, and they're also NP-complete. Therefore, within a polynomial factor, all NP-complete problems are equally hard.
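
      For the curious, here is what such an "expression" looks like in practice: a sketch of the textbook polynomial-time transformation from 3-SAT to Clique (one vertex per clause literal, edges between non-contradictory literals in different clauses; the formula is satisfiable iff the graph has a clique as large as the number of clauses).

          from itertools import combinations

          def sat_to_clique(clauses):
              # Vertices: (clause index, literal).  Edges: pairs from different
              # clauses whose literals are not negations of each other.
              # The construction is clearly polynomial in the formula size.
              vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
              edges = [(u, v) for u, v in combinations(vertices, 2)
                       if u[0] != v[0] and u[1] != -v[1]]
              return vertices, edges, len(clauses)  # target clique size

          # (x1 or x2 or x3) and (not x1 or not x2 or x3)
          vertices, edges, k = sat_to_clique([[1, 2, 3], [-1, -2, 3]])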

      • by allo ( 1728082 )

So you can use SAT to implement a Turing machine, and you can use the TSP to implement SAT. But if you have a new NP-hard problem which can be simulated by a non-deterministic TM, this does not tell you that the problem can simulate a TM or SAT? Or is it a requirement for an NP-hard problem not only to run on a TM, but to implement one, as SAT does?

      • Keep in mind words like "no harder" are pretty misleading to a non-theorist, since the known classes of computational complexity are so loose in the first place. IIRC the transformation need only be polynomial, right? So, two problems could be of "equal" difficulty when one is literally a zillion times harder than the other (since a zillion is only a constant), or even n to the power of a zillion (since that's "just" a polynomial). Computational theory is practically blind to constants, but you know what
        • by allo ( 1728082 )

Yes. Assume your solution is in P and the transformation is in P; both are polynomials with huge factors/exponents, so you will only see a real percentage difference when your numbers get really big. But for common NP-hard problems, the transformation is rather "simple".

A solution to an NP-hard problem can be used to solve any NP problem, but an NP-hard problem may or may not be an NP problem. Which means that no, not all NP-hard problems are equivalent (and that's for sure).

The set where all are equivalent is named "NP-complete". Those are the NP-hard problems that are also in NP.

    • by draconx ( 1643235 ) on Friday April 04, 2014 @01:09PM (#46662305) Homepage
NP-hard problems are absolutely not all equivalent. NP-hard is a class of decision problems which literally means any problem that is "at least as difficult" as problems which are in NP. To posit that all NP-hard problems are equivalent would imply that there's some sort of upper bound on problem "difficulty". This is absurd for a number of reasons. First of all, this claim implies that NP is equal to EXPSPACE (EXPSPACE-complete problems are NP-hard, after all), which is not true (there is known to be an inequality between these two sets). But moreover, NP-hard problems are not necessarily even computable -- the halting problem is NP-hard! To claim this is equivalent to 3-SAT is just ridiculous. tl;dr: The Venn diagram in the article shows the relationship between these complexity classes correctly, but the writer seems very confused about them.
      • by Nemyst ( 1383049 )
That's a bit pedantic. When saying "NP-hard problem", you usually mean it as an upper bound as opposed to a lower bound. Nobody would describe an EXPSPACE problem as "NP-hard" because that's completely beside the point.
All NP-complete problems reduce to each other. If memory serves, factoring is not NP-complete, but any NP-complete problem can reduce to factoring, just not the other way around. NP-hard is actually a harder set than NP-complete. Any NP-hard solution could be used to solve an NP-complete problem, but not the other way around.
      • by allo ( 1728082 )

Now back to the question: Is there a proof that each solution to an NP-complete problem can be used to solve the factoring problem?
(It's this way: you need to prove that any solution to your new problem is a solution for a known NP-hard one.)
So what I meant: Could NP be the union of (at least) two disjoint sets of problems, which can reduce to other problems in the same set, but not to problems in the other set? If not, how do you prove it? Citations are welcome.

Now back to the question: Is there a proof that each solution to an NP-complete problem can be used to solve the factoring problem?

          Of course. Any NP-complete problem can be used to solve any problem in NP by definition. To prove that a problem is NP-complete you prove that it can be used to solve SAT, or another problem in NP that can solve SAT, or another problem in NP that can solve a problem in NP that can solve SAT and so on. And SAT can solve the factoring problem.
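
          A minimal sketch of why factoring sits inside NP (and hence within SAT's reach): the decision version has an easily checked certificate, namely a factor.

              def verify_factor(n, k, p):
                  # Polynomial-time verifier for the decision problem
                  # "does n have a nontrivial factor <= k?".
                  # The certificate is the candidate factor p.
                  return 1 < p <= k and n % p == 0

              print(verify_factor(1517, 40, 37))  # True: 1517 = 37 * 41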

  • Say what? (Score:5, Insightful)

    by mbone ( 558574 ) on Friday April 04, 2014 @12:30PM (#46661869)

    I have not had time to read the article, but the summary is either incoherent or wrong.

Here is an analogy to illustrate why:

The basic equations of fluid dynamics are the Navier-Stokes equations. But the new idea is that this requires an additional assumption — that an efficient algorithm exists to solve the equation for complex macroscopic systems. But is this true?

In the case of the Navier-Stokes equations, almost certainly not. In fact, it is generally not even known whether solutions exist, or whether they are non-singular.

    If this is right, then complex fluid motions cannot exist, which explains why we do not (and cannot) observe them in the real world. Voila!"

So, I guess we can cancel this year's hurricane season.

In other words, there are many things in nature that are computationally hard, and yet happen anyway. Using computational hardness as a reason why a physical theory cannot be right does not, to put it mildly, agree with past experience.

    • Re:Say what? (Score:5, Insightful)

      by Knee Patch ( 2647703 ) on Friday April 04, 2014 @12:34PM (#46661923)
      Agreed. I got lost right around here in the summary:

      So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently.

      Maybe the paper has a really great argument to support this assertion, but it doesn't seem obvious to me.

Agreed - I hate to be "that guy," but I have trouble seeing why the summary couldn't just as easily say "... If they're right, then [solutions to traveling salesman] cannot exist, which explains why we do not (and cannot) observe them in the real world. Voila!" On the other hand, the summary and blog article seem pretty terrible: the paper does address the idea that the complexity of physical processes is related to the complexity of Turing-like computation--insofar as we are willing to admit
      • The thing is, macroscopic superpositions do exist, as mentioned here [newscientist.com].
        So, by their own reasoning, P=NP.
        QED

    • Re:Say what? (Score:5, Interesting)

      by Anonymous Coward on Friday April 04, 2014 @12:35PM (#46661933)

      "Nature is not embarrassed by difficulties of analysis." -- Augustin Fresnel

    • Re:Say what? (Score:4, Insightful)

      by colinrichardday ( 768814 ) <colin.day.6@hotmail.com> on Friday April 04, 2014 @12:38PM (#46661973)

Also, nature doesn't have to solve the Navier-Stokes equations. Unless you take the resulting arrangement of stuff as a solution.

      • Nature is a big fan of brute force solutions to complex equations - it just does shit and reports the result.
      • by Rich0 ( 548339 )

Also, nature doesn't have to solve the Navier-Stokes equations. Unless you take the resulting arrangement of stuff as a solution.

        I think that was exactly the argument being employed - you can model a hurricane simply by building another Earth and letting it run. Does that mean that modeling hurricanes isn't actually NP?

        I'm not knowledgeable enough about the technical definition of P vs NP to say whether NP problems can never be calculated by a quantum computer (which ultimately is what the whole of the universe is).

        Actually, the real problem with modelling hurricanes is that the system is chaotic - unless you know the position and v

      • Re:Say what? (Score:5, Insightful)

        by rgbatduke ( 1231380 ) <rgb@@@phy...duke...edu> on Friday April 04, 2014 @02:19PM (#46663057) Homepage

        Nature doesn't solve any equations at all. Equations describe what nature does. Electrons do not contain calculators.

        If you like, you can consider the Universe itself to be a really big calculator, but if so it isn't computing something else -- the computation is the self-consistent dynamical evolution of the Universe itself.

So of all of the arguments against Schrodinger's Cat (which requires a non-existent non-relativistic local partitioning in the first place) this one is the nuttiest IMO. Why not simply work through the Nakajima-Zwanzig repartitioning of a "universal" density matrix into a generalized master equation (ideally retaining relativistic non-locality in time) and acknowledge that since we cannot formally adiabatically disconnect the interior of the box from the rest of the Universe, the state of the interior is always coupled to the exterior state in such a way that the cat is dead or not dead but not both?

        rgb

    • by xandos ( 1350159 )

      Very much agreed. Nature is not restricted by what we can compute.

It makes me wonder, though. It makes sense that we cannot restrict physics to behave according to our computational prowess, but what if we turn this logic onto itself? Is the line of reasoning explored in the article inconsistent with Gödel's theorem? Or should I hurry and go back to a computational complexity course?

    • I was going to post exactly this. The summary provides no reason to assume that the extra assumption is warranted or even logical.

    • by Rich0 ( 548339 )

The basic equations of fluid dynamics are the Navier-Stokes equations.

      So, I fully grok your argument, but I am wondering about one thing.

      Is the Navier-Stokes equation REALLY the basic equation for fluid dynamics? Or is it just a really good approximation?

      That is the difference between classical physics and quantum mechanics. The former only generally works on macroscopic objects that are governed by the statistical average behaviors of the constituent particles. If I hit a baseball so hard, it flies so far, and I don't need to know how many C-12 vs C-13 vs C-14 atoms are i

    • by TheCarp ( 96830 )

I think you have it backwards and a bad example. They are looking for a predicted result and not seeing it; this strikes me as an attempt at asking if the prediction itself could be wrong.

When it comes to a quantum superposition of states, you need look no further than papers on the Bell Inequality to see a well-defined situation with well-defined predicted outcomes. Those outcomes clearly come down on the side of the predicted superpositions.

      Then look at the cat problem. What does it even mean for the ent

  • by schlachter ( 862210 ) on Friday April 04, 2014 @12:30PM (#46661873)

    BOOM.

  • by david_thornley ( 598059 ) on Friday April 04, 2014 @12:32PM (#46661887)

The whole P vs. NP thing is about computation involving discrete states. The relevant proofs are about computers and Turing machines. There's nothing saying some sort of natural process can't do something NP-hard fast, as long as it doesn't do it in some way we'd call computation.

    The mistake is in "So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently." If the superpositions exist, there must be a way to solve that NP-hard problem, not necessarily an algorithm.

To quote Wikipedia [wikipedia.org], "An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function." Any process that is not simply a collection of well-defined instructions can calculate whatever it likes, as far as Computer Science goes.

    • by mbone ( 558574 )

      And that is basically my response to the AI proponents who say "a computer can calculate anything a brain can think."

A brain IS a computer. So they're right. Just because in the past 50 years or so we haven't figured out how to build one as good as nature did in several billion doesn't mean it's not going to happen eventually.

    • A way to solve a problem is an algorithm for some kind of computer (to use the example of a previous post, a hurricane does compute the Navier-Stokes equations for a set of parameters).

      The article gets crazy only when it requires that the algorithm be quick and efficient.

    • by lgw ( 121541 )

Well, if the universe can do it then a simulation must exist that can do it; it's just a question of efficiency. However, that doesn't change anything in your argument. We don't know how the universe actually works, we merely have predictive models. Predictive models are great and all, but they aren't unique: for any set of data, there are an infinity of models that are consistent with that data - and it's hard to say much about all the models you're not using.

      • by mbone ( 558574 )

Well, if the universe can do it then a simulation must exist that can do it; it's just a question of efficiency.

Not true for chaotic systems, which are incredibly common in nature. The coffee and cream in your cup can be simulated, but not computed, and the situation is much, much worse for (say) a hurricane, or the Great Red Spot, or a galaxy.

        I do agree with you about the limitations of predictive models...

        • by lgw ( 121541 )

          Sure it can be computed. If you include the non-determinism of it all, you won't get the same answer as the universe does, but you'll get a perfectly good answer. Without the non-determinism you'll get the exact answer. You might not get it within the lifetime of the universe, but that's a mere efficiency concern.

          • by njnnja ( 2833511 )

            Any time you talk about hitting the end of the universe you are going to have to deal with arrow of time issues, like mbone points out (e.g. you can't unmix the cream in your coffee).

            But your point about predictive models still holds

The distinction that P algorithms are "efficient" and NP algorithms are "inefficient" is merely a convention of complexity theorists. You could easily draw the dividing line further in or out [mit.edu] depending on your purposes. That makes me wonder what motivates their assumption that this particular P/NP-type "efficiency" is necessary for a macroscopic Schrodinger algorithm.
  • by Tablizer ( 95088 ) on Friday April 04, 2014 @12:37PM (#46661967) Journal

    I'm trying to work this into an everyday analogy of a traveling half-dead-cat salesman, but am getting stuck.

    • A traveling cat salesman starts each day by boxing and shrink-wrapping the cats he hopes to sell that day and ends each day by unboxing his unsold inventory and disposing of any that did not survive.

      Use that as a starting point.
    • Maybe you should start with the half that's alive?

  • by Bonker ( 243350 ) on Friday April 04, 2014 @12:37PM (#46661971)

    My understanding is that we have some pretty good examples of 'larger than just a few elementary particles' superposition and observer effects that have been demonstrated.

    For example, birds' touted ability to navigate by way of feeling the Earth's magnetic field is apparently enhanced by the observer effect.

    http://www.wired.com/2009/06/b... [wired.com]

Now... cellular-level effects are still pretty small, but it's an example of a living organism we can hold in our hands (and pet, if you're a bird person) learning to use quantum effects in its everyday life.

    For an example of superposition in living organisms, one needs to look no further than our abundant flora, where superposition apparently increases the efficiency of photosynthesis, without which our current biosphere would pretty much collapse and we'd all die.

    http://mappingignorance.org/20... [mappingignorance.org]

    So, I think we're looking at a bell-curve like thing here. The bigger the 'observability' of a phenomenon, the less likely we are to experience it in our lifetimes. My guess is that huge, say, planetary-scale, examples of superposition are quite possible... just so very unlikely that one hasn't happened observably in human history (and probably the history of the universe.)

    • For example, birds' touted ability to navigate by way of feeling the Earth's magnetic field is apparently enhanced by the observer effect.

      http://www.wired.com/2009/06/b... [wired.com]

Very, very interesting link here, but I must take issue with your description. Saying that the bird's ability to navigate is enhanced by the observer effect made me think that birds measured in experiments more reliably found their destinations, while birds that weren't measured got lost more, or something to that effect. The observer effect deals exclusively with the fact that a measured thing seems to behave differently than an unmeasured thing. I'm imagining the double-slit experiment with a flock of birds... whic

  • Cringeworthy (Score:5, Insightful)

    by quax ( 19371 ) on Friday April 04, 2014 @12:54PM (#46662155)

    From the summary:

    Physicists have always thought [Schrodinger's equation] can be used to describe everything in the universe

    What physicists would that be?

Schrodinger's equation is non-relativistic and doesn't even capture QED.

Only quantum information dilettantes who never graduated beyond the unitary world of simple quantum systems could believe such nonsense.

  • "Physicists have always thought it can be used to describe everything in the universe, even large objects, and perhaps the universe itself."
    Nope. Nope nope nope.

    "So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently."
    What? Why would you claim that?

    Stopped reading TFS there. It's clearly useless wankery.

  • by 140Mandak262Jamuna ( 970587 ) on Friday April 04, 2014 @01:30PM (#46662569) Journal

    So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently.

Superposition holds only for linear systems. All this analysis proves is that nature is not linear. That is all. It does not prove quantum mechanics comes from the NP-hard nature of some equation or another.

  • "Science Vulcan directorate has determined that time travel is....... not fair"

I have always been suspicious of the idea that "god" would allow little "pissants" like us to have quantum computers with thousands or tens of thousands of entangled qubits... to me it seems too good to be true, no different than pulling energy out of nothing.

Such feelings might inform a career path or assumptions about concepts currently out of reach of one's ability to experimentally check; however, they should never be confused with re

  • by JoshuaZ ( 1134087 ) on Friday April 04, 2014 @01:35PM (#46662619) Homepage
Scott Aaronson is a highly respected quantum computing expert at MIT. His initial reaction, at comment #89 at http://www.scottaaronson.com/b... [scottaaronson.com], is that "The abstract of that thing looked so nonsensical that I didn’t make it through to the actual paper. If anyone has and wants to explain it here, that’s fine." Given that, I wouldn't take this too seriously.
    • by igny ( 716218 )
From the summary, it is just circular reasoning. The scientist has a good reason to believe that P=/=NP because other scientists have a good reason to believe that P=/=NP. Therefore there is a connection between the P=NP? problem and quantum theory. One hard-to-understand theory is simply substituted by another hard-to-understand theory, in the hope that since the connection is also hard to understand, everyone would believe it is all connected.

      That also reminded me of reasoning that how brain functions (or w
  • Well of course P is not equal to NP. That was proven with PiV where PiA and PiM may be an indication of P on P but in some instances indicates P on NP. Just goes to show that PiV is proof that P is not equal to NP. Especially where s is a sub of D. ;P

  • by careysub ( 976506 ) on Friday April 04, 2014 @02:20PM (#46663065)

    The summary is actually accurate! [arxiv.org] This was quite a surprise to me, since as many other posters have correctly commented here, these claims are absurd. The Universe is not inconvenienced by the difficulty of computing something about its properties.

    Perhaps this paper should have been released two days ago.

Hmm... the Incomputability of the Universe; maybe this is an avenue for proving that the Universe is not a simulation?

    • Perhaps this paper should have been released two days ago.

      Errr... make that three days ago.

Hmm... the Incomputability of the Universe; maybe this is an avenue for proving that the Universe is not a simulation?

      No, assuming the paper is correct, it merely proves that either P = NP, or that the universe simulation is running on a computer more powerful than a Turing machine.

  • There are experiments that have explored the upper size limits of quantum behavior - the classic double-slit experiment has been performed with electrons, larger elementary particles, and I believe even large molecules (buckyballs, if I recall correctly). The catch is, to observe such behavior with actual particles, the system had to be cooled down, and must be cooled more and more for larger and larger objects. It is interesting to think... this is a very low-entropy state, particles are moving very slowly, and entropy is the "time compass" of the universe - if it is increasing, you are going forward in time. This research makes me think that perhaps the extreme cooling, and the quantum behavior that emerges in such cases, is because you have slowed things down enough for the universe's quantum computations to "catch up". It's almost like supercooling and overclocking the universe itself.

    Yes, this is an incoherent rant: I know just enough quantum mechanics to draw totally unfounded links between things I don't really understand, but I figure it's ok as long as I see my nonsense for what it is.
  • How is that new? (Score:5, Informative)

    by drolli ( 522659 ) on Friday April 04, 2014 @02:51PM (#46663421) Journal

The very reason why physicists build quantum computers is *because* they suspect or propose this. In fact, the observation about computational complexity was what led to the idea of QC.

I have worked on QC (experimentally), and as an experimentalist I understand that the existence of Schroedinger-cat-like states is a prerequisite for the generation of e-bits, which are what a succeeding computation needs for the NP speedup.

So his section 3 is titled wrong, because it implies that arbitrarily large quantum states can be generated (since he uses the word "explained" and not "equivalent"). However, these have not been observed for *arbitrarily large systems*. I have observed such states experimentally, and as a matter of fact we were busy observing the decay into a classical state, which is a standard technique in all experimental groups working in this field.

So if NP != P then QC makes sense, and
a prerequisite for QC is the generation of systems with many e-bits (entanglement measure). Even a large system undergoing quantum dynamics (e.g. the cooled MEMS systems) is not sufficient for claiming (or thinking) that there exists much entanglement in the computational sense.

I am sick and tired of mentally short-circuited papers like this one, which restate the obvious and ignore recent developments. I am sick of theorists who dream of being great philosophers while being utterly ignorant of the hard work many people have done in the last 20 years. The citation pattern in the paper screams "shit". I see no reference to previous literature about entanglement measures. He talks about the "measurement" problem as if it did not receive any attention in the last 80 years (and as a matter of fact it did, theoretically and experimentally). The abstract does not state a clear goal, the paper contains a quantum-mechanics-for-beginners lesson, and the paper does not have a "summary" but "final remarks".

Looking at the previous work of the same author, an incredibly weird comment (http://arxiv.org/pdf/1401.1747v4.pdf) can be found in which he gives his personal definition of what is falsifiable. His central idea does not hold, of course, if I can do one or more of the following:

* Apply trace operations before comparing with the observation, and at the same time reduce the complexity of the theoretical calculation

* Do postselection and compare relative probabilities of experimental outcomes, where the ratio verifies or falsifies the theory.

    Both are valid standard operations in verifying (i.e. not falsifying) quantum theory.

He seems to be a medical data evaluation guy, has no significant publications as first author (and too few impact points for his role), and, as much as I appreciate people of other disciplines getting interested in physics, I would expect that we distinguish a nice college-level summary from serious research.

  • "So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently." Really? The two aren't related. Does whoever wrote this live in Colorado? This story is wrong on so many different levels that it's hard to know where to start. Like it's just a string of phrases all strung together, in the hopes that the writer, if they can't dazzle with their brilliance, at least baffle with their BS...
  • ...I want to punch someone.

    (I blame "The Tao of Physics" for instilling this basic drive in me.)

  • From the summary, it sounds like this theory is saying that a quantum computer is useless, or that it cannot be scaled up, because something in quantum mechanics is preventing a quantum computer from efficiently calculating an NP problem.

But that raises the question: what is the reason for this computational limit? It's not like atoms are actually using computer algorithms to calculate their behavior.

  • Comment removed based on user account deletion
  • by misof ( 617420 ) on Friday April 04, 2014 @07:49PM (#46666091)

Dear Slashdot editors: when it comes to science you don't understand, please don't publish anything that did not go through the peer review process, especially when it comes to important, hard topics such as P != NP. In at least 99% of such cases, you are just creating empty sensationalism and helping spread bad science.

    As for this particular paper, here is what Scott Aaronson thinks about it (repost from his blog at http://www.scottaaronson.com/b... [scottaaronson.com] ):

    At several people’s request, I’ve now taken a look at [the paper] and I can confirm that it’s complete garbage. The author is simply mistaken that solving the Schrödinger equation is “NP-complete” in any interesting sense: his argument for that seems to rely on a rediscovery of the adiabatic algorithm, but he doesn’t mention that the spectral gap could be exponentially small (and hence the annealing time could be exponentially large)—the central problem that’s been the bane of Farhi and his collaborators (and, of course, of D-Wave) for the past 15 years.

    Also, even if you thought (for totally mistaken reasons) that quantum mechanics let you solve NP-complete problems in polynomial time, that might (or might not) suggest to you that quantum mechanics should be replaced by something else. But until you’d actually found a replacement, and given some sort of evidence for its truth, I don’t see how you could claim to have “solved the measurement problem”!!

    As additional problems, the author appears to conflate the P vs. NP problem with the question of whether NP-complete problems can be efficiently solved in the physical world, a common novice mistake. And also, he seems comically unaware of everything that’s been done in quantum computing theory over the past 20 years about the issues he’s writing about—as if he just emerged from a cave.

  • by countach ( 534280 ) on Friday April 04, 2014 @09:30PM (#46666687)

    This argument seems like the equivalent of saying, well it is really hard to calculate the movements of 10 planetary bodies, therefore if you have 10 real planetary bodies, they will get confused and fly into space.
