January 2021, Volume 71, Issue 1

Review Article

The role of simulators in improving vitreoretinal surgery training — A systematic review

Authors: Taha Muneer Ahmed (Aga Khan University, Karachi, Pakistan)
Muhammad Abdul Rehman Siddiqui (Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan)

Abstract

Objective: To conduct an appraisal of current evidence regarding the effectiveness of EyeSi®-based training in vitreoretinal surgery.

Methods: The systematic review was conducted in July 2020, and comprised a literature search of the Cochrane Library, PubMed and Embase for articles regarding simulation training in vitreoretinal surgery. The shortlisted articles were subjected to qualitative analysis. Existing evidence was assessed, and predictions were made on how the outcomes may be applied to improve vitreoretinal surgery training. The risk of bias of each study was assessed in line with the guidelines of the Cochrane Handbook.

Results: Of the 124 articles identified, 7(5.6%) were shortlisted; 5(71.4%) established construct validity, 1(14.3%) discriminate validity and 1(14.3%) concurrent validity. Analysis disclosed minimal bias in the selected studies.

Conclusion: Current evidence on simulation training in vitreoretinal surgery suggests it is a thoroughly validated training tool with minimal risk of bias. Vitreoretinal surgery training programmes should adopt simulation training and gauge its impact on patient-related outcomes.

Keywords: Vitreoretinal surgery, Virtual reality, Training, EyeSi, Ophthalmic simulators, Surgical simulation.

 

Introduction

 

For centuries, surgical training has been conducted under the master-apprentice model of "see one, do one, teach one".1 The limitation of this model, however, has been its dependence on patients.2 This is a drawback that the use of simulators during training may largely resolve. In the past 20 years, many specialities, including cardiothoracic,3 laparoscopic,4 and ophthalmic surgery,5 have begun to embrace the role of simulation training in bridging this gap. In ophthalmology, the literature on simulator training is ample, with the existing articles on simulated vitreoretinal surgery principally aiming to establish whether the scoring metrics of the EyeSi® are valid, and whether skills acquired on the simulator transfer to real-life vitreoretinal surgery (VRS).6 A systematic review of the existing literature is necessary to assess the extent to which simulation training achieves these aims and whether there is a benefit to augmenting vitreoretinal training programmes with simulation training. The current systematic review was planned to assess available studies evaluating the use of simulators in VRS training.

 

Materials and Methods

 

The systematic review was conducted in July 2020, and comprised a literature search of the Cochrane Library, PubMed and Embase for articles regarding simulation training in vitreoretinal surgery. The keywords used were: "vitreoretinal surgery", "virtual reality", "EyeSi" and "training". Only studies providing qualitative results evaluating the impact of the EyeSi® simulator on training were included; the rest were excluded.

The abstracts of all the identified studies were screened by two authors. The full texts were subsequently reviewed by both authors, and articles mutually found to be appropriate were included.

All relevant data from the included studies were exported onto a worksheet. The date of publication, number and designation of participants, skills trained, and outcomes of each study were noted. The skills trained on the EyeSi® simulator were classified according to the inbuilt vitreoretinal EyeSi® modules, consisting of navigation training, bimanual training, forceps training, laser photocoagulation, internal limiting membrane peeling (ILM-peel), vitrectomy, posterior hyaloid and retinal detachment. Outcomes were classified as operating time, skill assessment and skill acquisition. The risk of bias of each study was assessed using the guidelines of the Cochrane Handbook.7 The included studies were independently reviewed by both authors and were categorised as carrying unclear, low or high risk of bias.
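
The extraction worksheet described above can be pictured as one record per study, with a field for each noted attribute and a per-item risk-of-bias rating. The following is a minimal illustrative sketch, not the review's actual worksheet; all field names and the example entry are assumptions for demonstration.

```python
from dataclasses import dataclass, field

RISK_LEVELS = {"low", "high", "unclear"}  # categories used in the review

@dataclass
class StudyRecord:
    first_author: str
    year: int
    participants: str      # number and designation of participants
    skills_trained: list   # inbuilt EyeSi vitreoretinal modules used
    outcomes: list         # operating time, skill assessment, skill acquisition
    risk_of_bias: dict = field(default_factory=dict)  # bias item -> risk level

    def rate_bias(self, item, level):
        # Record a risk-of-bias judgement for one Cochrane Handbook item.
        if level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {level!r}")
        self.risk_of_bias[item] = level

# Hypothetical worksheet entry, loosely modelled on the study summaries below.
record = StudyRecord(
    first_author="Solverson",
    year=2009,
    participants="12 novices; 7 experienced vitreoretinal surgeons",
    skills_trained=["navigation training"],
    outcomes=["skill assessment"],
)
record.rate_bias("blinding of outcome assessment", "high")
print(record)
```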

 

Results

 

Of the 124 articles identified, 7(5.6%) were shortlisted (Figure-1).

None of the included studies had all bias items assessed as carrying low risk. The bias item that ranked poorest overall was blinding of outcome assessment. Of the studies, only Vergmann et al. included a protocol which allowed for assessment of reporting bias.8 Allocation bias and performance bias could not be assessed for 2(28.6%) studies as they were single-group studies (Figure-2).

The included studies were all published between 2004 and 2019. Collective attributes of the studies were summarised (Table-1).

Of the 7 studies, 5(71.4%) established construct validity,8-12 1(14.3%) established discriminate validity13 and 1(14.3%) established concurrent validity.14

Attributes of each individual study were also noted separately (Table-2).

Rossi et al.10 explored how simulated performance on the EyeSi® correlates with real-life VRS performance. The study comprised 3 groups of students, residents and surgeons, who were required to perform 3 separate intraocular navigation and ILM-peel tasks. The participants' completion times and scores on the EyeSi® performance curve were recorded. A statistically significant difference was found between the graded performance of students and surgeons (p=0.003) and between residents and students (p=0.05) across all 3 tasks.

Vergmann et al.8 evaluated whether more real-life surgical experience was associated with better scores on the EyeSi® VRS simulator. A total of 35 participants were allocated into 3 experience-based cohorts of students, residents and surgeons. Each group then performed and was graded on 6 simulated VRS modules on the EyeSi®. The participants then received feedback and repeated all 6 modules. Measures of association between simulator scores and experience group were determined. The surgeons' group had the highest overall scores, followed by the residents and then the students (p<0.01). Of note, the less experienced surgeons showed greater improvement on the second attempt, while the more experienced surgeons did not.

Thomsen et al.11 correlated past cataract surgery experience with scores on the VRS modules of the EyeSi®. The study recruited 12 residents, 6 with no past ophthalmic surgical experience and 6 with past cataract surgery experience, alongside 3 surgeons. All participants completed and were graded on 11 VRS modules on the EyeSi®. There were significant differences in mean test scores between the surgeons and the novice residents (p=0.023) and between the surgeons and the experienced residents (p=0.003).

Solverson et al.12 evaluated the ability of the EyeSi® simulator to differentiate between novices and experienced vitreoretinal surgeons. The novice group consisted of 12 participants, comprising residents, interns and ophthalmic staff, while the expert group consisted of 7 experienced vitreoretinal surgeons. Both groups completed and were graded on the same navigational microdexterity module on the EyeSi® simulator. The total error score differed significantly between the groups, being 24.1 for the novices and 11.3 for the experts (p<0.05).

Cissé et al.9 conducted a study comparing the scores of 6 experienced vitreoretinal surgeons, each performing >100 procedures per year, with those of 15 residents with no past VRS experience. Both groups completed and were graded on the same 4 modules on the EyeSi®. The surgeons achieved significantly better scores than the residents on the navigation (p=0.01), forceps (p<0.01), epiretinal membrane peeling-1 (p=0.02) and epiretinal membrane peeling-2 (p=0.04) modules. No difference was noted between the groups on the 2 vitrectomy modules (p=0.17 and p=0.26).

Deuchler et al.14 evaluated the efficacy of the EyeSi® simulator in preparing surgeons for VRS and its potential to predict a surgeon's performance in the upcoming procedure. Four participating cataract surgeons performed 9 vitrectomies immediately following a warmup on the EyeSi® vitreoretinal module. The same surgeons also performed 12 vitrectomies without any EyeSi® warmup. The warmups were graded on the simulator, and the vitrectomies were recorded and graded according to the Global Rating Assessment of Skills in Intraocular Surgery (GRASIS) score by two masked observers. A warmup period on the EyeSi® prior to surgery was associated with significantly improved subsequent surgical performance (p=0.0302). Furthermore, each surgeon's surgical experience in years was positively correlated with their scores on the EyeSi® (p=0.0003).

Mellum et al.13 examined whether distracting factors had any impact on the surgical performance of 19 novice surgeons who completed a basic training programme on the EyeSi® until a minimum eligibility score was reached. Once familiarised, the surgeons completed 4 vitreoretinal modules on the simulator without any distracting factors to determine a reference score. The surgeons then completed the same 4 modules under each of 4 distracting factors: auditory distraction, fasting, interrupted sleep and 24-hour sleep deprivation. All distracting factors resulted in lower performance compared to the reference scores (p=0.0007).
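
The group comparisons reported across these studies all rest on the same analytic move: testing whether score distributions differ between experience groups. As a rough illustration only, and not the analysis code of any included study, a nonparametric two-group comparison of this kind might look like the following Python sketch, with invented scores loosely echoing the Solverson et al. error-score comparison.

```python
# Illustrative only: the scores below are invented, not data from any of the
# included studies, and the original authors may have used different tests.
from scipy.stats import mannwhitneyu

novice_error_scores = [26.0, 21.5, 30.2, 18.9, 24.7, 23.1]  # hypothetical
expert_error_scores = [12.4, 9.8, 13.0, 10.5, 11.9]         # hypothetical

stat, p_value = mannwhitneyu(novice_error_scores, expert_error_scores,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
# A p-value below the significance threshold would indicate that the error
# score separates novices from experts, i.e. evidence of construct validity.
```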

 

Discussion

 

Prior to the widespread adoption of a new technology, its validity must be verified. Gallagher et al.15 defined a set of measures to gauge validity. The first is face validity, a subjective validation primarily aimed at determining whether an instrument is capable of measuring what it is intended to measure. The next is content validity, a subjective but more rigorous validation done by methodically examining test contents. Content and face validity do not carry significant weight.15 Then comes construct validity, the ability of an instrument to identify and discriminate between the variables it measures; this is judged on the basis of the instrument's ability to differentiate novices from experts. Concurrent validity judges how closely scores on the instrument correlate with scores on a well-established gold-standard instrument. Discriminate validity is the degree to which the scores generated by an instrument correlate with the factors with which they are expected to correlate. Finally, predictive validity assesses whether evaluations made by the instrument accurately predict actual performance.

One study has shown the EyeSi® to have concurrent validity.14 Deuchler et al.14 found that GRASIS scores, the current gold standard for scoring ophthalmic surgeries, were strongly correlated with EyeSi® proficiency scores for the ILM-peel and retinal detachment modules. Deuchler et al.14 also found that EyeSi® proficiency scores across these modules were strongly linked with the total number of years of VRS experience. Through these two metrics, the concurrent validity of the EyeSi® vitreoretinal simulator was established.

The majority of the existing literature validating the EyeSi® does so by establishing the construct validity of the VRS simulator. Of the included studies, Cissé et al.,9 Vergmann et al.,8 Rossi et al.,10 Thomsen et al.11 and Solverson et al.12 established construct validity of the simulator. The navigation training module was validated by all 5 studies. Navigation training is the most basic and central module of the EyeSi® and is thought to underpin the validity of the other modules. The validation of this module serves as a benchmark for how accurately the EyeSi® can differentiate between novices and experts at the most fundamental level.

Deuchler et al.14 also established the predictive validity of the simulator by looking at how EyeSi® scores before surgery correlated with GRASIS performance during subsequent real surgeries.

Mellum et al.13 established the discriminate validity of the EyeSi® by examining how distracting factors that are well known to result in poor surgical performance led to decreased performance scores on the simulator. In essence, variables known to adversely affect surgical performance also negatively affected scores on the EyeSi®, indicating that the simulator has discriminate validity.

Through these studies, the EyeSi® effectively stands validated at all levels of Gallagher's 6 criteria for instrument validation.15 However, further studies on discriminate and predictive validity are necessary given the small sample sizes of the current studies investigating those outcomes. Moreover, while the evidence regarding basic modules on the EyeSi®, such as navigation training, is robust, more evidence is needed regarding advanced procedural modules, such as retinal detachment and membrane peeling.
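
Pulling these threads together, the validation evidence discussed above maps onto Gallagher's framework as summarised in the following plain sketch (a restatement of the studies already cited, not an additional quality grading):

```python
# Gallagher's validity types mapped to the first authors of the included
# studies that establish each for the EyeSi vitreoretinal modules. Face and
# content validity are subjective checks not tied to specific studies here.
validity_evidence = {
    "construct":    ["Cissé", "Vergmann", "Rossi", "Thomsen", "Solverson"],
    "concurrent":   ["Deuchler"],
    "predictive":   ["Deuchler"],
    "discriminate": ["Mellum"],
}

for validity_type, studies in validity_evidence.items():
    print(f"{validity_type:>12}: {', '.join(studies)}")
```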

There exists an extensive body of evidence supporting the validity of this instrument in realistically simulating VRS scenarios. The next step in assessing the effectiveness of the EyeSi® would be its integration into ophthalmic training programmes, comparing resident training programmes with access to the EyeSi® against those without. Notably, numerous cataract surgical training programmes have done this and have reported positive outcomes from supplementing training with the EyeSi®.16 The most noteworthy finding from these programmes has been the decreased complication rates of EyeSi®-augmented residents compared to EyeSi®-naïve residents.17 It would be interesting to see whether similar benefits of EyeSi®-augmented cataract surgery training carry over to EyeSi®-augmented VRS training.

The risk-of-bias evaluation indicated that none of the included studies carried low risk across all bias items. Bias primarily arose from a lack of blinding of participants and of outcome assessment. The automated nature of EyeSi® grading, however, diminishes the importance of potentially biased outcome assessment. Consequently, on holistic examination, all studies generally ranked low in bias risk and adhered to acceptable reporting standards.

The main remaining obstacle to broader implementation of the EyeSi® simulator is its significant purchase cost of approximately £100,000 to £150,000, along with recurrent yearly maintenance costs of £5,000 to £10,000. These costs, however, may be diluted by sharing them amongst several hospitals in a region and designating one regional training centre where trainees share access to the EyeSi®. This is of particular benefit in countries with limited resources, where the quality of healthcare training systems may vary vastly between training centres. Having a designated training centre where trainees from different programmes jointly practise surgical skills may help bridge this gap. Standardisation improves process reliability and may aid more consistent training across the region. It may also encourage discourse and collaboration between regional training programmes, and may lead to the development of a standardised curriculum that better aligns training programmes in developing countries with international standards. Furthermore, grading and assessment of trainees may be conducted on the simulator in a more objective manner, free from the innate human error associated with current observational grading metrics, such as GRASIS.15 Current assessment methods are also limited by their dependence on the availability of a senior surgeon to manually assess each trainee, another human limitation that simulators may reduce.
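
To make the dilution concrete, a back-of-the-envelope calculation, using the midpoints of the quoted cost ranges together with an assumed five-hospital consortium amortising the purchase over ten years, suggests a per-hospital burden of roughly £4,000 a year:

```python
# Back-of-the-envelope sketch of the cost-sharing argument above. The
# consortium size and amortisation period are assumptions for illustration;
# the cost figures are the midpoints of the ranges quoted in the text.
purchase_cost = 125_000     # GBP, midpoint of £100,000 to £150,000
annual_maintenance = 7_500  # GBP, midpoint of £5,000 to £10,000
hospitals = 5               # assumed regional consortium size
years = 10                  # assumed amortisation period

per_hospital_per_year = (purchase_cost / years + annual_maintenance) / hospitals
print(f"~£{per_hospital_per_year:,.0f} per hospital per year")  # ~£4,000
```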

These costs also need to be balanced against the patient-safety benefit, given that under current training models novice surgeons partly develop their skills by operating on real patients' eyes. The literature shows that VRS performed by trainee surgeons is associated with greater complication rates.18 The EyeSi® shifts this necessary yet dangerous period of a trainee's learning curve away from the eyes of vulnerable patients: trainees receive the authentic training they need, and patients are not exposed to what, with the advent of the EyeSi®, may be considered unnecessary operative risk.

 

Strengths and limitations

 

The strength of the current review is that it used well-established metrics of gauging simulator effectiveness, such as Gallagher's criteria,15 to standardise the findings of the various studies and allow a degree of stratification and generalisability between them. Another strength is that it was conducted as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.19 A limitation of the review is that unpublished articles were not searched for, which might have contributed to publication bias.

 

Conclusion

 

The EyeSi® simulator was found to have the ability to assess and predict VRS proficiency at all levels. A risk-of-bias analysis of the included studies showed no significant bias in study design or execution. As the vitreoretinal module of the EyeSi® becomes more widely adopted in training programmes, further studies comparing the effect of EyeSi®-augmented training on patient outcomes are needed, as these will gauge the impact of the EyeSi® on VRS training.

 

Disclaimer: None.

Conflict of Interest: None.

Source of Funding: None.

 

References

 

1.      Kotsis SV, Chung KC. Application of the "see one, do one, teach one" concept in surgical training. Plast Reconstr Surg 2013;131:1194-201. doi: 10.1097/PRS.0b013e318287a0b3.

2.      Li L, Yu F, Shi D, Shi J, Tian Z, Yang J, et al. Application of virtual reality technology in clinical medicine. Am J Transl Res 2017;9:3867-80.

3.      Shaharan S, Neary P. Evaluation of surgical training in the era of simulation. World J Gastrointest Endosc 2014;6:436-47. doi: 10.4253/wjge.v6.i9.436.

4.      Nagendran M, Gurusamy KS, Aggarwal R, Loizidou M, Davidson BR. Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst Rev 2013;2013:e006575. doi: 10.1002/14651858.

5.      Khalifa YM, Bogorad D, Gibson V, Peifer J, Nussbaum J. Virtual reality in ophthalmology training. Surv Ophthalmol 2006;51:259-73. doi: 10.1016/j.survophthal.2006.02.005.

6.      Thomsen AS, Subhi Y, Kiilgaard JF, la Cour M, Konge L. Update on simulation-based surgical training and assessment in ophthalmology: a systematic review. Ophthalmology 2015;122:1111-30.e1. doi: 10.1016/j.ophtha.2015.02.028.

7.      Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928. doi: 10.1136/bmj.d5928.

8.      Vergmann AS, Vestergaard AH, Grauslund J. Virtual vitreoretinal surgery: validation of a training programme. Acta Ophthalmol 2017;95:60-5. doi: 10.1111/aos.13209.

9.      Cissé C, Angioi K, Luc A, Berrod JP, Conart JB. EYESI surgical simulator: validity evidence of the vitreoretinal modules. Acta Ophthalmol 2019;97:e277-82. doi: 10.1111/aos.13910.

10.    Rossi JV, Verma D, Fujii GY, Lakhanpal RR, Wu SL, Humayun MS, et al. Virtual vitreoretinal surgical simulator as a training tool. Retina 2004;24:231-6. doi: 10.1097/00006982-200404000-00007.

11.    Thomsen ASS, Kiilgaard JF, la Cour M, Brydges R, Konge L. Is there inter-procedural transfer of skills in intraocular surgery? A randomized controlled trial. Acta Ophthalmol 2017;95:845-51. doi: 10.1111/aos.13434.

12.    Solverson DJ, Mazzoli RA, Raymond WR, Nelson ML, Hansen EA, Torres MF, et al. Virtual reality simulation in acquiring and differentiating basic ophthalmic microsurgical skills. Simul Healthc 2009;4:98-103. doi: 10.1097/SIH.0b013e318195419e.

13.    Mellum ML, Vestergaard AH, Grauslund J, Vergmann AS. Virtual vitreoretinal surgery: effect of distracting factors on surgical performance in medical students. Acta Ophthalmol 2020;98:378-83. doi: 10.1111/aos.14259.

14.    Deuchler S, Wagner C, Singh P, Müller M, Al-Dwairi R, Benjilali R, et al. Clinical Efficacy of Simulated Vitreoretinal Surgery to Prepare Surgeons for the Upcoming Intervention in the Operating Room. PLoS One 2016;11:e0150690. doi: 10.1371/journal.pone.0150690.

15.    Gallagher AG, Ritter EM, Satava RM. Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training. Surg Endosc 2003;17:1525-9. doi: 10.1007/s00464-003-0035-4.

16.    Ferris JD, Donachie PH, Johnston RL, Barnes B, Olaitan M, Sparrow JM. Royal College of Ophthalmologists' National Ophthalmology Database study of cataract surgery: report 6. The impact of EyeSi virtual reality training on complications rates of cataract surgery performed by first and second year trainees. Br J Ophthalmol 2020;104:324-29. doi: 10.1136/bjophthalmol-2018-313817.

17.    Ahmed TM, Hussain B, Siddiqui MAR. Can simulators be applied to improve cataract surgery training: a systematic review. BMJ Open Ophthalmol 2020;5:e000488. doi: 10.1136/bmjophth-2020-000488.

18.    Shah MA, Shah SM, Desai A. Visual Outcome of Cataract Surgery Complications Repair at a Cataract Training Centre of Western Central India. Ophthalmology Research: An International Journal 2020;12:14-20. doi: 10.9734/OR/2020/v12i330148.

19.    Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009;339:b2700. doi: 10.1136/bmj.b2700.

 
