Healthcare has been notoriously slow to harness the huge technological advancements of the last 25 years. Indeed, much of our current surgical practice has remained unchanged for several decades. Whilst we have attempted to make surgical procedures less traumatic by using keyhole instruments where appropriate, the underlying technology behind these minimally invasive procedures remains relatively rudimentary. It is only recently that we have begun to use so-called surgical ‘robots’, which allow surgeons to perform more complicated procedures with flexible, articulating instruments. However, with increasing collaboration between engineers, computer scientists and surgeons, we are beginning to catch a glimpse of an exciting future – the next surgical revolution.
The initial step will be to improve the image resolution of the monitors used in surgery to match, at minimum, the quality offered by modern televisions, e.g. 2K/4K resolution. Currently, the global standard is equivalent to the television sets of the early 1980s. But beyond the clarity of the image, we can think about operating outside ‘white light’, extending our vision into the near-infrared spectrum. This will allow us to visualise anatomical and pathological structures with the help of fluorescent dyes, and to identify disease in a manner that will make subsequent surgery more precise, with the promise of decreased collateral damage.
Advances in imaging technology, in particular CT and MRI scans, have meant that we are able to meticulously plan treatment strategies. But this does not translate directly into the same precision surgically. If one views the goal of surgery, broadly speaking, as the removal of diseased tissue, then not infrequently either some diseased tissue is left behind, or an unnecessary amount of healthy tissue is removed along with the pathology. We have thus far been unable to take the wealth of information obtained from these detailed scans and apply it during an operation – until now. By using the power of augmented reality (AR), we can develop ‘X-ray vision’ and see into our patients. Using patient-specific scans, three-dimensional holograms can be generated and projected directly onto a patient undergoing surgery, viewed either through specialised headsets such as Microsoft HoloLens or on the operating screens.
AR opens up a whole new world in which we are no longer limited by the naked eye. This has the potential to enhance and accelerate the training of surgeons, limit collateral damage, and allow the targeting of disease with far greater precision and accuracy.
In addition, as theatres become more technologically advanced, we will see more in-theatre imaging machines, allowing patients to be scanned whilst being operated on and providing real-time imaging and direct navigation. We have begun to see such operating suites in research facilities and in neurosurgery, for example, but they will become more widespread. Alongside the potential use of imaging devices, operating theatres will become ‘smarter’, incorporating the wealth of real-time information coming from surgical instruments and the patient’s physiology to build up a rich mine of data. Understanding how to harness this data and transform it into meaningful improvements in patient outcomes will be the next big challenge for data scientists.
Academic hubs such as University College London (UCL), where multidisciplinary research groups are able to tackle these problems, are best positioned to drive this innovation forward. I am delighted that my research team is pushing these boundaries and exploring the huge potential of image-guided surgery, paving the way for its adoption by the wider medical community. We are already developing smart algorithms to make surgery more efficient and precise, cancer-specific dyes, high-fidelity holograms and more. We are on the verge of the next revolution in surgery, and it is going to be an exciting decade.