
eoSurgical essay competition winning entry

How immersive will surgical simulation become in the 21st century?

Simulation has the potential to revolutionise surgery. For trainees, simulation enables practice of skills and procedures in a risk-free environment, irrespective of the availability of patients. For experienced surgeons, simulation enables practice of complex or rarely performed procedures before attempting them in real life.

Already, surgical simulation forms an important component of training for minimally invasive procedures[1], and immersive virtual environments have been developed for laparoscopic simulators[2]. Recent advances in technology have raised the tantalising possibility of immersive simulation of open surgical procedures long before the end of the 21st century.

Björk and Holopainen identified four facets of immersion: spatial, sensori-motor, cognitive and emotional[3]. With respect to these four facets, I will discuss existing technologies that will enhance the immersiveness of surgical simulation.

 

Spatial immersion

Spatial immersion requires realistic representations of simulated objects. This is simpler for minimally invasive procedures, which require only two-dimensional representations of the simulated surgery on a flat screen. By contrast, open procedures require three-dimensional representations of simulated objects.

The gold standard would be life-scale holographic projections with which users can interact. However, holographic technology will need significant advances before it becomes viable for surgical simulation.

A more immediately viable alternative is augmented reality, with Microsoft HoloLens the most promising candidate[4]. HoloLens creates pseudo-holographic images in real-world space. Although these images are not true holograms, they replicate the key property of holograms: true three-dimensional images are formed.

There are currently two important limitations of Microsoft HoloLens to be overcome. The first is the extremely limited field of view for the pseudo-holographic images (described as the size of a “deck of cards”). The second is that users cannot approach close enough to touch the holograms, which instead disappear from view.

 

Microsoft HoloLens simulation[9]. Top: three-dimensional virtual femurs, viewable through the HoloLens. Bottom: the field of view through the HoloLens headset.

 

Sensori-motor immersion

Sensori-motor immersion can be described as the responsiveness of simulated objects to user actions (motor component) and vice versa (sensory component).

Again, this is simpler for simulation of minimally invasive procedures, where the procedure occurs through laparoscopic instruments. The instruments can act as interfaces between the user and the simulated objects: the motions of the instruments can be translated into a response by simulated objects, and the instruments can be manipulated to provide haptic (sensory) feedback to the user.

For simulation of open procedures, the motor component of sensori-motor immersion would require motion-sensing technology. Microsoft HoloLens already has integrated motion-sensing and object-detection technology. This means that the user's bare hands can be detected by the HoloLens and can directly interact with the holograms, without the need for external motion-sensing controllers.

On the other hand, it is easier to provide haptic feedback through external controllers. Although rudimentary haptic holograms (holograms which can be directly felt) have been developed[5], holograms which can mimic the hardness of bodily structures, such as bones, remain far off.

Gosselin et al. developed a method of providing haptic feedback through external controllers[6]. “Very realistic” haptic feedback for simulated maxillofacial procedures was provided by mounting surgical instruments onto robotic levers, which provided directional resistance depending on where the instrument was relative to a virtual object. Future development could focus on developing haptic “glove” controllers, which fit over the user's hands and provide haptic sensations directly to the hands rather than through instruments.
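As a rough illustration of the general principle behind such directional resistance, the sketch below (in Python) implements simple penalty-based haptic rendering, in which the feedback force grows with how deeply the instrument tip has penetrated a virtual object. The spherical geometry, the stiffness value and all of the names are assumptions made for illustration only; this is not the method or the parameters reported by Gosselin et al.

import numpy as np

STIFFNESS = 800.0  # N/m; an assumed spring constant for bone-like resistance


def contact_force(tool_tip, centre, radius, stiffness=STIFFNESS):
    # Feedback force (in newtons) pushing the tool tip out of a virtual sphere:
    # zero outside the sphere; inside, a spring force proportional to the
    # penetration depth, acting along the outward surface normal.
    offset = tool_tip - centre
    distance = np.linalg.norm(offset)
    penetration = radius - distance
    if penetration <= 0.0 or distance == 0.0:
        return np.zeros(3)
    normal = offset / distance  # outward surface normal
    return stiffness * penetration * normal


bone_centre = np.array([0.0, 0.0, 0.0])
bone_radius = 0.02                     # a 2 cm virtual bone fragment
tip = np.array([0.0, 0.0, 0.015])      # instrument tip 5 mm inside the surface
print(contact_force(tip, bone_centre, bone_radius))  # ~[0, 0, 4] N, pushing outwards

In a real simulator a calculation of this kind would run against the full simulated anatomy, typically at around 1 kHz, with the resulting force commanded to whatever actuators (robotic levers or, in future, haptic gloves) hold the instrument.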

As the century progresses, development of haptic feedback for surgical simulation may be rendered unnecessary if robotic surgery becomes widespread, as tactile feedback is minimal in robotic surgery. As with laparoscopic surgery, the robot will act as an interface between the user and the simulated reality.

 

Haptic interface from Gosselin et al. [6] Left: Design of the haptic interface. Right: Interface in use. The interface is attached to surgical drills with sensors, and a simulated object is visible on the screen.

 

Cognitive immersion

Cognitive immersion in surgical simulation ideally consists of two elements. Firstly, the software should be programmed such that users are able to choose from a myriad of possibilities and perform any correct or incorrect action, rather than simply re-enact a sequence of predetermined events. For example, a slip of the hand might result in a severed nerve or artery.

Secondly, each repeated procedure should provide unique challenges by simulating the unique anatomy of a different patient. In addition to encouraging constant cognitive engagement and decision making, this would also allow simulated practice runs of unique and complex cases before performing the real surgery. Methods already exist for three-dimensional rendering of MRI and CT scans which can reproduce the specific anatomy of individual patients[7,8]. The next steps are to integrate these into 3D simulation technology (discussed under Spatial immersion) and to develop haptic response systems. For the latter, machine learning might be used to “teach” the software to identify the various parts of the anatomy, so that appropriate haptic feedback can be provided for each part.
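To illustrate how identified anatomy might then drive haptic feedback, the sketch below (in Python) maps tissue labels in a segmented scan volume to stiffness values that a haptic rendering loop could query at the instrument-tip position. The label volume, tissue classes and stiffness figures are invented for illustration; in practice the labels would come from a trained segmentation model applied to the individual patient's CT or MRI data.

import numpy as np

# Assumed tissue classes and per-tissue spring stiffness (N/m); the figures are
# illustrative rather than measured material properties.
TISSUE_STIFFNESS = {0: 0.0,     # background / air
                    1: 200.0,   # soft tissue
                    2: 2000.0}  # bone

# Toy label volume standing in for the output of a segmentation model,
# with 1 mm isotropic voxels.
labels = np.zeros((50, 50, 50), dtype=np.uint8)
labels[10:40, 10:40, 10:40] = 1        # a block of soft tissue
labels[20:30, 20:30, 20:30] = 2        # bone embedded within it
VOXEL_SIZE_MM = 1.0


def stiffness_at(position_mm):
    # Haptic stiffness of the tissue under a given instrument-tip position
    # (in millimetres, in the scan's coordinate frame).
    idx = np.floor(np.asarray(position_mm) / VOXEL_SIZE_MM).astype(int)
    if np.any(idx < 0) or np.any(idx >= labels.shape):
        return 0.0                     # outside the scanned volume
    return TISSUE_STIFFNESS[int(labels[tuple(idx)])]


print(stiffness_at([25.0, 25.0, 25.0]))  # 2000.0 -> bone
print(stiffness_at([12.0, 12.0, 12.0]))  # 200.0  -> soft tissue
print(stiffness_at([2.0, 2.0, 2.0]))     # 0.0    -> background

A simple lookup of this kind keeps the per-frame cost low enough for real-time haptic rates, while the expensive machine-learning step (segmenting the scan) happens once, before the simulation starts.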

 

Emotional immersion

Emotional immersion refers to a sense of being invested in the simulated reality. This often involves a system of rewards and losses, which is more relevant to the gaming industry. As such, the details of its implementation are beyond the scope of this essay.

Nevertheless, greater emotional immersion in surgical simulation may be beneficial: firstly, it may increase a trainee's motivation to practise; secondly, by instilling a desire to succeed and a fear of failure, it may better prepare users for the pressures and atmosphere of real-life surgery.

 

Conclusions

The technologies that will enable immersive surgical simulations are already in existence. Once the limitations in individual technologies (discussed above) have been overcome, the next step would be to integrate all aspects of immersion into one seamless experience: a three-dimensional virtual body of a unique patient will be recreated (Spatial immersion), which the user will be able to feel, touch and manipulate (Sensori-motor immersion). The user will analyse this body and perform any combination of appropriate actions or mistakes (Cognitive immersion). Emotional immersion, meanwhile, would instil an intrinsic motivation to succeed or a fear of failure.

Completely immersive surgical simulation, far from lying within the realms of science fiction, looms before our eyes in the 21st century. Whether it is brought into reality will depend on the availability of financial backing, and on the collective will of the technological and surgical communities.

 

Submitted by Chan Hee Koh.

 

References

  1. LSC | Laparoscopic Skills Curriculum [Internet]. [cited 2016 Jan 10]. Available from: http://skills.eosurgical.com/
  2. Intuitive Surgical - da Vinci Si Surgical System - Skills Simulator [Internet]. [cited 2016 Jan 10]. Available from: http://www.intuitivesurgical.com/products/skills_simulator/
  3. Bjork S, Holopainen J. Patterns in Game Design. Charles River Media; 2005. 452 p.
  4. Microsoft HoloLens [Internet]. Microsoft HoloLens. [cited 2016 Jan 10]. Available from: https://www.microsoft.com/microsoft-hololens/en-us
  5. Long B, Seah SA, Carter T, Subramanian S. Rendering Volumetric Haptic Shapes in Mid-air Using Ultrasound. ACM Trans Graph. 2014 Nov;33(6):181:1–181:10.
  6. Gosselin F, Bouchigny S, Mégard C, Taha F, Delcampe P, d’Hauthuille C. Haptic systems for training sensorimotor skills: A use case in surgery. Robot Auton Syst. 2013 Apr;61(4):380–9.
  7. Anastasi G, Bramanti P, Di Bella P, Favaloro A, Trimarchi F, Magaudda L, et al. Volume rendering based on magnetic resonance imaging: advances in understanding the three-dimensional anatomy of the human knee. J Anat. 2007 Sep;211(3):399–406.
  8. Perandini S, Faccioli N, Zaccarella A, Re T, Mucelli RP. The diagnostic contribution of CT volumetric rendering techniques in routine practice. Indian J Radiol Imaging. 2010 May;20(2):92–7.
  9. Microsoft HoloLens: Partner Spotlight with Case Western Reserve University - YouTube [Internet]. [cited 2016 Jan 11]. Available from: https://www.youtube.com/watch?v=SKpKlh1-en0