For the first time in medicine, a robot performed laparoscopic surgery without the guiding hand of a surgeon. An article in Science Robotics describes an improved Smart Tissue Autonomous Robot (STAR) that successfully completed a demanding procedure on a pig’s soft tissue, marking an important milestone toward fully automated robotic surgery.
“One of the most complex and sensitive operations in surgery may be automated, according to our research: we can connect the two ends of an intestine,” Axel Krieger, senior author and assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering, said in a press statement. He added that STAR performed the procedure in four animals and produced significantly better results than humans performing the same procedure.
Surgeons perform many abdominal and pelvic operations laparoscopically, working through small incisions with the guidance of a camera. In many cases, laparoscopic anastomosis, the joining of two tubular structures such as blood vessels or intestines, is the preferred approach. Although the operation is minimally invasive, it can cause major complications for the patient if any leakage occurs because of improper suturing.
In the future, autonomous robotic surgery could improve efficiency, patient safety, and consistency of care. Performing an anastomosis autonomously is challenging, however, because it demands precise imaging, tissue monitoring, and surgical planning, and many of these steps must adapt quickly when a problem develops during surgery.
The new STAR improves on an earlier generation of the system, which could suture an animal’s intestine but required more human assistance and a wider incision.
The newest STAR can adjust its surgical plan in real time, thanks to improved robotic precision and suturing instruments, a 3D imaging system, and machine learning-based tracking algorithms.
“Using machine learning, computer vision, and advanced control approaches, we were able to follow the target tissue movement in response to patient breathing, detect the tissue deformations between suturing steps, and operate the robot under motion limitations,” the researchers wrote in the report.
A convolutional neural network (CNN)-based machine learning method predicts tissue motion and supports suture planning. From anastomosis procedures, the researchers collected 9,294 samples of different motion profiles, which they fed into CNNs to learn how tissue moves during surgery.
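To make the idea of learning from motion profiles concrete, here is a minimal sketch of how recorded tissue positions could be turned into supervised training pairs (a short window of past motion as input, the next displacement as the prediction target). The window length, sampling rate, and function names are illustrative assumptions, not the paper’s setup.

```python
# Sketch: building supervised pairs from a recorded tissue-motion profile.
# The breathing-like signal below is synthetic; real profiles would come
# from the imaging system.
import numpy as np

def make_training_pairs(profile: np.ndarray, window: int = 16):
    """profile: (T, 2) array of tissue positions over time (e.g. in mm)."""
    inputs, targets = [], []
    for t in range(window, len(profile) - 1):
        inputs.append(profile[t - window:t])          # recent motion history
        targets.append(profile[t + 1] - profile[t])   # next-step displacement
    return np.stack(inputs), np.stack(targets)

# Example: a synthetic, roughly breathing-frequency motion profile.
t = np.linspace(0, 10, 500)
profile = np.stack(
    [2.0 * np.sin(2 * np.pi * 0.25 * t), 0.5 * np.cos(2 * np.pi * 0.25 * t)],
    axis=1,
)
X, y = make_training_pairs(profile)
print(X.shape, y.shape)  # (483, 16, 2) (483, 2)
```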
Working in coordination with a camera, the robot scans the stabilized tissue and generates suture plans. Using advanced computer vision and a CNN-based landmark detection technique, STAR develops two initial plans for joining the adjacent tissue. Once an operator approves a plan, the robot places a suture and then checks for tissue deformation.
If the tissue shifts more than 3 millimeters from the position assumed in the surgical plan, suture planning and approval start over. Every stitch goes through this process.
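The suture-and-verify loop described above could be sketched roughly as follows. The robot, camera, and operator interfaces (generate_suture_plans, place_suture, and so on) are hypothetical placeholders for illustration; only the 3-millimeter re-planning threshold comes from the article.

```python
# Sketch of the per-stitch plan / execute / verify cycle.
DEFORMATION_LIMIT_MM = 3.0

def run_anastomosis(robot, camera, operator):
    # Scan the tissue, generate candidate suture plans, and let the
    # operator approve one before any stitch is placed.
    plan = operator.select_plan(robot.generate_suture_plans(camera.scan_tissue()))

    while plan.has_remaining_sutures():
        target = plan.next_suture()
        robot.place_suture(target)

        # Re-image the tissue and measure how far it has drifted from
        # the position assumed in the approved plan.
        if camera.measure_displacement(target) > DEFORMATION_LIMIT_MM:
            # Tissue deformed beyond tolerance: re-plan and re-approve.
            plan = operator.select_plan(
                robot.generate_suture_plans(camera.scan_tissue())
            )
```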
The tissue-motion CNNs, with four convolutional layers, three dense layers, and two outputs for tracking tissue motion, were trained on an NVIDIA GeForce GTX GPU. The landmark detection algorithm, built on a cascaded U-Net architecture, was trained and tested on an NVIDIA T4 GPU.
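As a rough illustration of a network with those layer counts, the following PyTorch sketch stacks four convolutional layers, three dense layers, and two outputs. The channel widths, kernel sizes, and input resolution are assumptions made for the example, not the published configuration.

```python
# Sketch: a small motion-tracking CNN with 4 conv layers, 3 dense layers,
# and 2 regression outputs (e.g. an in-plane displacement estimate).
import torch
import torch.nn as nn

class TissueMotionCNN(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a 64-dim feature vector
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # two outputs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))

# Example forward pass on a dummy 64x64 single-channel frame.
model = TissueMotionCNN()
print(model(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2])
```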
The researchers assessed anastomosis quality using measures such as needle placement adjustments, suture spacing, suture bite size, and completion time. They found that the autonomous STAR outperformed both expert surgeons and robot-assisted procedures in consistency and precision.
What makes STAR remarkable, according to Krieger, is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention.