Comment on: “Deep learning-based surgical phase recognition in laparoscopic cholecystectomy”
Ann Hepatobiliary Pancreat Surg 2025 Feb;29(1):95-6
Published online February 28, 2025;  https://doi.org/10.14701/ahbps.24-149
Copyright © 2025 The Korean Association of Hepato-Biliary-Pancreatic Surgery.

Hinpetch Daungsupawong1, Viroj Wiwanitkit2

1Private Academic Consultant, Vientiane, Lao People’s Democratic Republic,
2Saveetha Medical College and Hospital, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
Correspondence to: Hinpetch Daungsupawong, PhD
Private Academic Consultant, Lak 52 Phonhong, Vientiane 10000, Lao People’s Democratic Republic
E-mail: hinpetchdaung@gmail.com
ORCID: https://orcid.org/0009-0002-5881-2709
Received July 30, 2024; Accepted August 19, 2024.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Dear Editor:

We would like to discuss the article entitled “Deep learning-based surgical phase recognition in laparoscopic cholecystectomy” [1]. That article described a deep learning model developed to automatically identify surgical phases in laparoscopic cholecystectomy using a combined dataset of 120 videos, drawn from the publicly available Cholec80 dataset together with 40 videos recorded at the authors’ institution between July and December 2022. The authors divided the data into training and testing sets at a ratio of 2:1. The model was evaluated on its ability to recognize the surgical phases without any pre- or post-processing, providing a direct assessment of the trained model’s raw performance. Their results showed an overall accuracy of up to 91.2%.
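
For illustration only, the video-level 2:1 split described above could be sketched in Python as follows; the variable names, random seed, and pool size are our own assumptions rather than the original authors’ code.

    import random

    video_ids = [f"video_{i:03d}" for i in range(120)]  # hypothetical 120-video pool
    random.seed(42)
    random.shuffle(video_ids)

    n_train = len(video_ids) * 2 // 3      # 2:1 ratio -> 80 training and 40 testing videos
    train_videos = video_ids[:n_train]
    test_videos = video_ids[n_train:]

    print(len(train_videos), len(test_videos))  # 80 40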

One of that study’s main limitations was its small dataset, specifically the 40 institutionally recorded videos, which might not adequately capture the diversity of surgical techniques and the differences between surgeons, potentially resulting in poor model performance in specific situations and limited generalizability to real-world applications. Furthermore, although omitting pre- and post-processing allows a realistic assessment of the model’s raw performance, enhancements that could improve accuracy, such as noise reduction or frame stabilization, which are important during surgical video recording, might be neglected. The evaluation criteria used, particularly accuracy and the F1 score, may not adequately capture performance across all surgical operations, indicating a need for additional measurements to better understand the nuances of surgical outcomes, including sensitivity for complications, the variability inherent in surgical procedures, and a comprehensive assessment of patient recovery time.
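
As a concrete example of such additional measurements, per-phase precision, recall, and F1 scores could be reported alongside overall accuracy. The sketch below uses the commonly cited Cholec80 phase labels and placeholder labels purely for illustration; it is not the evaluation code of the original study.

    from sklearn.metrics import accuracy_score, classification_report

    phases = ["Preparation", "Calot triangle dissection", "Clipping and cutting",
              "Gallbladder dissection", "Gallbladder packaging",
              "Cleaning and coagulation", "Gallbladder retraction"]

    # y_true and y_pred would hold per-frame phase indices from the test videos.
    y_true = [0, 1, 1, 2, 3, 4, 5, 6]   # placeholder ground-truth labels
    y_pred = [0, 1, 2, 2, 3, 4, 5, 6]   # placeholder model predictions

    print("Overall accuracy:", accuracy_score(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=phases, zero_division=0))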

To increase the model’s robustness and usefulness in multiple surgical contexts, future research should focus on expanding the dataset to include a broader range of laparoscopic surgical procedures from various institutions. Collaboration with other surgical centers to obtain a wider range of video data could improve the model’s generalizability and its detection accuracy for frequently misclassified phases such as clipping and cutting. Furthermore, using data augmentation techniques to generate synthetic training data may improve the model’s ability to learn from less frequently observed surgical motions.
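
A minimal sketch of frame-level data augmentation with torchvision is given below; the specific transforms are assumptions chosen for illustration and are not drawn from the original study.

    import torchvision.transforms as T

    augment = T.Compose([
        T.RandomResizedCrop(224, scale=(0.8, 1.0)),    # simulates small camera shifts and zoom
        T.ColorJitter(brightness=0.2, contrast=0.2),   # simulates lighting variation
        T.RandomHorizontalFlip(p=0.5),
        T.ToTensor(),
    ])

    # 'frame' would be a PIL image extracted from a surgical video:
    # augmented_frame = augment(frame)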

Future research could also use transfer learning to exploit models pre-trained on related visual tasks, which could improve initial performance when labeled data are limited. Exploring unsupervised or semi-supervised learning methods could make use of the large amount of unannotated surgical video to improve model training without comprehensive manual annotation. Finally, real-time assessment tools used during surgery might provide rapid feedback to the surgical team, enabling better training, strengthening the model’s predictive capabilities, and ultimately improving surgical outcomes.
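
As a rough sketch of the transfer learning idea mentioned above, an ImageNet-pretrained backbone could be adapted to a seven-phase classifier as follows; the backbone choice and the number of phases are assumptions for illustration only.

    import torch.nn as nn
    import torchvision.models as models

    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for param in backbone.parameters():
        param.requires_grad = False            # freeze the pretrained feature extractor

    num_phases = 7                             # e.g., the seven Cholec80 phases
    backbone.fc = nn.Linear(backbone.fc.in_features, num_phases)  # new trainable classification head

    # Only the new head is updated when fine-tuning on labeled surgical frames.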

ACKNOWLEDGEMENTS

AI declaration: The authors used a computational language-editing tool in the preparation of this article.

FUNDING

None.

CONFLICT OF INTEREST

No potential conflict of interest relevant to this article was reported.

AUTHOR CONTRIBUTIONS

Conceptualization: All authors. Data curation: All authors. Methodology: All authors. Visualization: All authors. Writing - original draft: HD. Writing - review & editing: All authors.

References
  1. Yang HY, Hong SS, Yoon J, Park B, Yoon Y, Han DH, et al. Deep learning-based surgical phase recognition in laparoscopic cholecystectomy. Ann Hepatobiliary Pancreat Surg 2024. https://doi.org/10.14701/ahbps.24-091 [in press].

 
