Medicine / Robotics / Technology

Scientists Tout New Way To Debug Surgical Bots

coondoggie writes "When it comes to having robotic surgeons slicing around inside your brain, heart, or other vital organ, surgeons and patients need to know that a software or hardware glitch isn't going to ruin their day. That's why a new technique developed by researchers at Carnegie Mellon University and the Johns Hopkins University Applied Physics Laboratory, which promises to reliably detect software bugs and verify the software safety of surgical robots, could be a significant development."
  • I guess it's really a public beta...

  • For real safety (Score:3, Informative)

    by fustakrakich ( 1673220 ) on Monday April 08, 2013 @01:25PM (#43393047) Journal

    Do not connect these things to the internet, please.

    • but... how else can the doctor perform surgery from the beach? You don't want any sand in your open wound, do you?

    • by Dahamma ( 304068 )

      The original idea for this robotic surgical equipment was remote operation, so it's pretty certain that's going to happen eventually.

  • ruin their day ? (Score:4, Interesting)

    by middlemen ( 765373 ) on Monday April 08, 2013 @01:27PM (#43393071)
    This is typical "doctor" talk. They are worried about ruining their day or about malpractice suits. How about, for a change, thinking about not ruining a patient's life with your negligence!
    • Re: (Score:3, Insightful)

      by Anonymous Coward
      This is what you get with a market-based approach to healthcare.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        No, this is what you get with a market-based approach to healthcare coupled with a government-granted monopoly in the form of ridiculous licensing laws.

        If you increased the competition these people face, they wouldn't be so smug anymore.

      • I am not convinced there is a reliable system short of market forces for rooting out
          * People who would take advantage of the system (whatever it is)
          * People who would charge unreasonable rates (of course, insurance seriously hampers this)
          * Underachievers

      • by gatkinso ( 15975 )

        Because somehow a surgical mistake has far more dire consequences in the US than in, say, the UK or Canada.

    • by ColdWetDog ( 752185 ) on Monday April 08, 2013 @01:36PM (#43393197) Homepage

      If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.

      Not that I'm sure the touted system will work as planned (it never does), but trying to minimize bad outcomes is sort of the whole point of improving medical practice.

      It's not just BMWs and trophy wives.

      • If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.

        How do you get to ruin his day? He has insurance. Do you know if, or how many times, your doctor has been accused of or sued for malpractice? Of course not. That information is secret.

        trying to minimize bad outcomes is sort of the whole point to improving medical practice.

        One of the best ways to do that is to make information about doctors public.

  • by jamesl ( 106902 ) on Monday April 08, 2013 @01:31PM (#43393129)

    Just run it through the autoclave.

    • Be sure to leave the lithium batteries inside.

      (We never did like that particular autoclave anyway.)

      • (We never did like that particular autoclave anyway.)

        You need to completely disassemble something before it goes in an autoclave so every surface is exposed. If you don't do that, you don't kill all the bacteria. Which means you failed at your job -- with potentially lethal results. If you ever put something with a battery compartment in there to begin with, that tells me before you even close the door that you should be ejected out of a fourth-floor window in a fiery blob, as a lesson to other technicians not to mess around with safety procedures.

    • Before or after it is taken out of the patient? You could debug both at the same time...

  • by Zerth ( 26112 ) on Monday April 08, 2013 @01:31PM (#43393139)

    wow, does it solve the halting problem as well?

    • by Jane Q. Public ( 1010737 ) on Monday April 08, 2013 @01:45PM (#43393299)

      "wow, does it solve the halting problem as well?"

      Pretty close to my thoughts as well.

      Reliably finding all "bugs" is not possible without an intimate knowledge of exactly what the software is trying to accomplish. You can find certain kinds of errors algorithmically (what amount to "compile time" errors), and even prevent many kinds of run-time errors.

      But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.
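
      To make that concrete, here's a minimal sketch (every name in it is invented): the function below passes every compile-time check and is perfectly memory-safe, but it's still a bug, because the bug lives in the intent, not in the code's formal properties.

      /* Hypothetical illustration: compiles cleanly and is memory-safe,
         yet no generic checker can know the plan called for the LEFT kidney. */
      #include <stdio.h>

      enum organ { LEFT_KIDNEY, RIGHT_KIDNEY };

      static enum organ plan_incision(void)
      {
          return RIGHT_KIDNEY;   /* semantically wrong, formally unobjectionable */
      }

      int main(void)
      {
          printf("cutting: %s\n",
                 plan_incision() == LEFT_KIDNEY ? "left kidney" : "right kidney");
          return 0;
      }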

      • by radtea ( 464814 ) on Monday April 08, 2013 @02:35PM (#43393797)

        But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.

        Back in the '80s, David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, which is exactly the opposite of the argument being made in the article.

        That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.

        With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.

        Algorithmic testing of the kind described here will catch some bugs, but provably correct algorithms can still run into practical implementation details that will result in arbitrarily large deviations from ideal behaviour. For example: the inertial navigation module on the Ariane V could have had provably correct code (for all I know it actually did) and the rocket still would have destroyed itself.
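
        For reference, the Ariane failure was exactly this kind of implementation detail: flight code reused from Ariane 4 narrowed a 64-bit float to a 16-bit integer, and the larger Ariane 5 trajectory values overflowed it. A toy version in C (values illustrative, not the real telemetry):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            /* Plausible on an Ariane 5 trajectory, impossible on an Ariane 4 one. */
            double horizontal_bias = 40000.0;

            int32_t wide   = (int32_t)horizontal_bias;  /* fits: 40000 < 2^31 */
            int16_t packed = (int16_t)wide;             /* doesn't fit: wraps on typical
                                                           platforms, INT16_MAX is 32767 */
            printf("value sent to guidance: %d\n", packed);  /* garbage, not 40000 */
            return 0;
        }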

        Most problems with embedded systems are of the kind, "The hardware did something unexpected and the software responded by doing something inappropriate." Anticipating the things the (incorrectly assembled, failure-prone, unexpectedly capable) hardware might do and figuring out what the appropriate response is constitutes the greater part of building robust embedded code, and this kind of verificationist approach, while useful as a starting point, won't address those issues at all.
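
        A hedged sketch of what that "greater part" looks like in practice (all names and limits invented): most of the work is deciding when not to trust the sensor.

        #include <math.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define POS_MIN_MM    0.0
        #define POS_MAX_MM  150.0
        #define MAX_STEP_MM   2.0   /* physically plausible change per control tick */

        static double last_pos_mm = 0.0;

        /* Returns false (demand a safe stop) on any implausible reading. */
        static bool plausible(double pos_mm)
        {
            if (isnan(pos_mm) || pos_mm < POS_MIN_MM || pos_mm > POS_MAX_MM)
                return false;                      /* encoder glitch or disconnect */
            if (fabs(pos_mm - last_pos_mm) > MAX_STEP_MM)
                return false;                      /* faster than the arm can move */
            last_pos_mm = pos_mm;
            return true;
        }

        int main(void)
        {
            double readings[] = { 0.5, 1.2, 250.0 /* glitch */, 1.9 };
            for (int i = 0; i < 4; i++)
                printf("%6.1f -> %s\n", readings[i],
                       plausible(readings[i]) ? "ok" : "FAIL SAFE");
            return 0;
        }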

        • I think referring to it as "useful as a starting point" very much undervalues this approach. None of that other stuff you referenced matters, at all, if the algorithm is guaranteed to screw up under certain circumstances. The example they gave in the article was of a provably screwed-up algorithm.

          However, I'm really commenting along the lines of "tone" or "emphasis". It is true that once you have a known-good algorithm, there's still other stuff that can go wrong that you need to pay attention to in how…

          Back in the '80s, David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, which is exactly the opposite of the argument being made in the article.

          That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.

          With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.

          More accurately, this is a distinction between stateless and stateful systems. Stateless machines always perform the same operations. That makes them much easier to test than stateful systems, whose behavior depends on memory of prior events.

          Software is always stateful, although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way, but registers and memories are pretty fundamental, so almost all modern digital chips are stateful even if they don't embed CPUs and software.
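
          A minimal illustration of the distinction (invented example): the stateless function can be tested value-by-value, while the stateful one can only be tested sequence-by-sequence, because its output depends on every call before it.

          #include <stdint.h>
          #include <stdio.h>

          /* Stateless: output depends only on the argument. */
          static int16_t clamp(int16_t x)
          {
              return x > 100 ? 100 : (x < -100 ? -100 : x);
          }

          /* Stateful: output depends on the whole history of calls. */
          static int32_t integrate(int16_t x)
          {
              static int32_t acc = 0;
              acc += x;
              return acc;
          }

          int main(void)
          {
              printf("%d %d\n", clamp(500), clamp(500));      /* same in, same out */
              printf("%d %d\n", integrate(5), integrate(5));  /* 5, then 10 */
              return 0;
          }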

          • "Software is always stateful although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way but registers and memories are pretty fundamental so almost all modern digital chips are stateful even if they don't embed cpu's and software."

            This is all true, but the very nature of the kinds of applications being discussed demands that they be stateful.

    • It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org].

      It's not often done because it's extraordinarily time-consuming; the time it takes goes up exponentially with program length/complexity, I believe.
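
      A toy illustration of why the cost explodes (invented example, with exhaustive checking standing in for real formal verification): verifying this 16-bit function takes 65,536 cases, every added input bit doubles that, and a second 16-bit input squares it.

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      static uint16_t fast_div3(uint16_t x)
      {
          return (uint16_t)(((uint32_t)x * 43691u) >> 17);  /* multiply-shift trick */
      }

      int main(void)
      {
          for (uint32_t x = 0; x <= 0xFFFF; x++)            /* the whole input space */
              assert(fast_div3((uint16_t)x) == x / 3);      /* compare against the spec */
          puts("fast_div3 verified for all 16-bit inputs");
          return 0;
      }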

      • It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org].

        It's not often done because it's extraordinarily time-consuming; the time it takes goes up exponentially with program length/complexity, I believe.

        But it's not doable to automatically ensure the specs themselves are correct. I.e., if you have correct specifications for an adult, those might not be appropriate for a child or infant.

    • wow, does it solve the halting problem as well?

      The undecidability of the "halting problem" applies only to arbitrary programs with arbitrary input. Certain defined classes of problems with defined inputs are very solvable.

    • by Anonymous Coward

      Yes.

      The halting problem says that you can't determine if a program halts or not in the general case. You definitely can check that for a chosen restricted set of programs (usually relatively small ones, but the core control system of a robot or the avionics control system for an airplane are small enough). See model checking [wikipedia.org] for how this is done in general. The article actually talks about a different technique, which is to develop a methodology for formally proving that class of programs correct, which…
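
      For a feel of what explicit-state model checking does, here is a deliberately tiny sketch (the controller, its states, and the property are all invented; real checkers handle vastly larger state spaces): enumerate every reachable state and test a safety property in each one.

      #include <stdbool.h>
      #include <stdio.h>

      enum mode { IDLE, ARMED, CUTTING, N_MODES };
      #define STATE(m, lock) ((m) * 2 + (lock))   /* pack (mode, interlock) into 0..5 */

      int main(void)
      {
          bool reached[N_MODES * 2] = { false };
          int  queue[N_MODES * 2], head = 0, tail = 0;

          queue[tail++] = STATE(IDLE, 0);          /* initial state */
          reached[STATE(IDLE, 0)] = true;

          while (head < tail) {                    /* breadth-first state exploration */
              int s = queue[head++], m = s / 2, lock = s % 2;

              if (m == CUTTING && !lock) {         /* the safety property to check */
                  puts("UNSAFE state reachable!");
                  return 1;
              }
              int next[3] = {
                  STATE(m, 1),                                          /* engage interlock */
                  m == CUTTING ? s : STATE(m, 0),                       /* release, never mid-cut */
                  STATE(m == CUTTING ? IDLE : (lock ? m + 1 : m), lock) /* advance mode */
              };
              for (int i = 0; i < 3; i++)
                  if (!reached[next[i]]) {
                      reached[next[i]] = true;
                      queue[tail++] = next[i];
                  }
          }
          puts("all reachable states satisfy the property");
          return 0;
      }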

    • wow, does it solve the halting problem as well?

      That's not insightful, it's clueless about the subject. The halting problem is very rarely a problem in program verification. It's possible to write a program for which halting is undecidable. That's a bug in the program.

      The Microsoft Static Driver Verifier, in practice, runs into problems it can't decide about 5% of the time. The goal is to determine that the driver is memory-safe to be in the kernel and calls all the driver APIs with valid parameters. If it does that, the driver can't crash the rest of the kernel.

      • by radtea ( 464814 )

        The software world isn't as aware of the technology.

        It's not "less aware of" so much as "this technology is less useful/more difficult to apply in the kind of complex semantic domains where software operates".

        Hardware has very limited semantics compared to software: the number of distinct types on a chip is small compared to the number of distinct types in even a moderately complex application.

        Each type of thing has its own set of behaviours that have to be encoded in the checking software. This encoding process has proven thus far to have very high difficulty…

    • In the general case it is true that you cannot tell if a program will halt. However, most programs do halt, and most programmers understand why. Theorem provers present their users with a language that does not admit the general case; halting is guaranteed, as part of a more important guarantee that all programs can be reduced to a normal form.

      Which means that the halting problem is solved for a program written in a language like Coq.
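
      A small illustration of that guarantee, written in Lean 4 syntax since the idea is the same as in Coq (example invented): the checker accepts this definition only because the recursive call is on a structurally smaller argument, so "does it halt?" never has to be asked of an accepted program.

      -- In a total language, structural recursion is why every program halts.
      def sumList : List Nat → Nat
        | []      => 0
        | x :: xs => x + sumList xs   -- accepted: xs is structurally smaller

      #eval sumList [1, 2, 3]         -- 6; evaluation is guaranteed to terminate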

      The practical challenge seems to be that formal detailed requirements and specifications for algorithms are about as hard to write as the algorithms themselves.

      • The practical challenge seems to be that formal detailed requirements and specifications for algorithms are about as hard to write as the algorithms themselves.

        Yes; it's intuitively obvious that this must be the case, but I think someone managed to prove it a few years back. However, there's some doubt about whether the proof is valid...

  • by QuietLagoon ( 813062 ) on Monday April 08, 2013 @01:39PM (#43393229)
    ... will lead to sloppy software being written. Why write high-quality software when you know any bugs will be caught later on?
    • Because it's going to be used on a person's organs?
    • by Anonymous Coward

      I bet you say the same thing about stack traces, right?

      • I bet you say the same thing about stack traces, right?

        No. I prefer such safety back-stops.

        The reason why I wrote my prior comment is that I read that the GNU/Linux people do not want to adopt BSD's strlcpy function because strlcpy encourages sloppy coding. The rationale given for that assertion is that strlcpy checks for the destination buffer being too small and zero-terminates regardless (unlike strncpy, which returns a non-zero-terminated buffer if the destination buffer is too small).

        I think such types of rationale for not implementing s…
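
        For reference, the semantic difference being argued about, as a simplified sketch (this is not OpenBSD's actual implementation): strlcpy always NUL-terminates and reports the length it wanted, so truncation is detectable; strncpy can hand back an unterminated buffer.

        #include <stdio.h>
        #include <string.h>

        static size_t my_strlcpy(char *dst, const char *src, size_t size)
        {
            size_t srclen = strlen(src);
            if (size > 0) {
                size_t n = srclen < size - 1 ? srclen : size - 1;
                memcpy(dst, src, n);
                dst[n] = '\0';             /* always NUL-terminated */
            }
            return srclen;                 /* >= size means truncation happened */
        }

        int main(void)
        {
            char buf[8];
            size_t needed = my_strlcpy(buf, "surgical robot", sizeof buf);
            printf("copied \"%s\", truncated: %s\n", buf,
                   needed >= sizeof buf ? "yes" : "no");
            return 0;
        }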

    • The proof verifier won't verify sloppy software without a proof of correctness. It's much harder to write a formal proof of correctness for sloppy code, so anything that can pass the verification stage will most likely be well written, simple, and very human-readable, since a human has to read and understand it sufficiently to write the formal proof of correctness for it. Writing the formal proofs will also probably be the largest part of the workload in the software development process, so it makes no sense…
    • will lead to sloppy software being written.

      As opposed to what is being written now, right?
  • I wanted to read the article to find out more about the method, but their website isn't working. Too much traffic.
  • by Anonymous Coward

    I believe that all people fall into one of two categories: either they can write good code or they can come up with clever names for things. Clearly this software was written by the former group.

  • Misleading summary (Score:4, Insightful)

    by MasseKid ( 1294554 ) on Monday April 08, 2013 @01:58PM (#43393463)
    This debugs the algorithm, not the software, nor the hardware used. There is an enormous difference.
    • The whole article is written as if this were a total replacement debugging/design solution for all software.

      It is really dealing with mechanical and human motion behaviours like inertial overshoot, path avoidance, and realistic feedback to the operator, rather than things like buffer overruns, atomic accesses, and race conditions.

  • by Anonymous Coward

    Huey, Louie and Dewey, please report to the Medical Bay.

    (I must have gotten the drone mfr/model from the novelization. I don't think it's ever mentioned in the movie.)

  • by necro81 ( 917438 ) on Monday April 08, 2013 @03:04PM (#43394023) Journal
    posting to undo an accidental moderation
  • Am I the only one who read the title and laughed because of a Patton Oswalt joke?

    Source: http://www.youtube.com/watch?v=qbai-yBRyHg [youtube.com]

  • by TheSkepticalOptimist ( 898384 ) on Monday April 08, 2013 @03:47PM (#43394347)

    bool DidBrainBotOperateSuccessfullyOn(Patient patient)
    {
        if (patient != null && patient.IsVegetable && !patient.WasVegetable)
            throw new PendingLawsuitException(patient.Lawyer, Doctor.Lawyers,
                Hospital.Lawyers, University.Lawyers, Manufacturer.Lawyers);
        return true;
    }

  • Here's what they actually do according to the paper:

    The problem is to construct a control system such that the robot scalpel (controlled by a person through a force-feedback device) will not exit some pre-defined boundaries, e.g. boundaries that prevent the scalpel from cutting a nerve. The previous system was flawed in that it didn't take into account delays in the response of the scalpel. By creating a more accurate model, and using methods that are able to prove things about the dynamic system, they prove that using the new model the scalpel stays within those boundaries.
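
    A back-of-the-envelope sketch of that kind of delay-aware constraint (all numbers and names invented; the paper's actual dynamic model is far richer): motion toward a boundary is allowed only if the tool could still stop before crossing it, counting the distance covered during the response delay.

    #include <stdbool.h>
    #include <stdio.h>

    #define T_DELAY_S   0.004   /* worst-case actuation delay, seconds     */
    #define A_MAX_MM_S2 500.0   /* guaranteed braking deceleration, mm/s^2 */

    /* v_mm_s: speed toward the boundary; dist_mm: remaining clearance */
    static bool motion_is_safe(double v_mm_s, double dist_mm)
    {
        if (v_mm_s <= 0.0)
            return true;                           /* moving away from the boundary */
        double d_stop = v_mm_s * T_DELAY_S         /* travel before brakes engage   */
                      + v_mm_s * v_mm_s / (2.0 * A_MAX_MM_S2);  /* braking run      */
        return dist_mm > d_stop;
    }

    int main(void)
    {
        /* at 50 mm/s the stop distance is ~2.7 mm, so 3 mm is fine, 2 mm is not */
        printf("%s\n", motion_is_safe(50.0, 3.0) ? "allowed" : "blocked");
        printf("%s\n", motion_is_safe(50.0, 2.0) ? "allowed" : "blocked");
        return 0;
    }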

  • Let's stick these folks who believe in this verification technique underneath the robot as the first patients.

    Just do some pointless surgery, open 'em up, and stitch them back up.

    If they survive then trials on other people can start.

     

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...