Artificial intelligence-assisted detection of internal openings in anal fistulas using endorectal ultrasound: a YOLOv11-based diagnostic system
Highlight box
Key findings
• The YOLOv11n model demonstrated high precision (0.925) and perfect recall (1.0) for real-time detection of internal openings (IO) of anal fistulas (AF) in endorectal ultrasound images.
• The model achieved exceptional accuracy, with a mean average precision (mAP) of 0.99 at an intersection-over-union (IoU) threshold of 0.5.
• Although the mAP across IoU thresholds from 0.50 to 0.95 was lower (0.602), the model reliably detected true positives at optimal confidence thresholds (0.4 to 0.6).
What is known and what is new?
• Endorectal ultrasound is recognized as essential for diagnosing AF, but precise identification of IO remains challenging due to operator-dependent variability.
• An artificial intelligence (AI) system based on YOLOv11 was developed to automatically detect and accurately localize IO of AF in real-time, thereby reducing subjective variability. The proposed system shows potential for integration into ultrasound devices, facilitating intraoperative navigation, and serving as a standardized educational tool to shorten the learning curve for radiologists and surgeons.
What is the implication, and what should change now?
• This AI system can enhance diagnostic efficiency and standardization, thereby supporting clinical decision-making among colorectal surgeons.
• Future research should include multi-center validation to improve generalizability and integration of the system into educational platforms to enhance surgical training and planning.
Introduction
Background
Anal fistula (AF) is a common anorectal disease characterized by significant recurrence and potential postoperative complications, such as fecal incontinence and anal stenosis (1). Accurate identification of the internal opening (IO) is essential for successful AF surgery. Clinicians frequently apply guidelines like the Goodsall rule to predict fistula trajectories and IO locations. However, the reliability of this rule has been questioned. One study reported an overall accuracy of 74.75% for determining IO locations, with higher predictive accuracy for posterior fistulas (73%) than anterior ones (52.4%) (2). Furthermore, its utility decreases markedly in complex cases, limiting clinical applicability.
Rationale and knowledge gap
Recent advances in imaging have significantly improved AF diagnosis. Techniques such as three-dimensional transperineal ultrasound (3D-TPUS) and its contrast-enhanced variant using SonoVue (SVE 3D-TPUS) enable highly accurate IO identification; the IO detection rate reaches 97.1% with SVE 3D-TPUS compared to 80.9% with conventional 3D-TPUS (3,4). Multi-slice spiral computed tomography (CT) combined with 3D reconstruction also provides high diagnostic accuracy, effectively mapping complex fistula anatomy and clearly visualizing fistula tracts (5). Nonetheless, these methods are operator-dependent, carry a risk of allergic reactions to contrast agents, increase medical costs, and expose patients to ionizing radiation.
Simultaneously, artificial intelligence (AI), particularly object detection algorithms based on the You Only Look Once (YOLO) architecture, has significantly advanced in medical imaging (6-10). For instance, YOLO-based methodologies have proven successful in breast cancer detection (11) and in thyroid nodule detection with malignancy classification (12). These developments illustrate the potential of YOLO models for rapid, automated, and accurate image analysis.
Despite progress in imaging technology and AI, no studies have applied YOLO-based models, including the recent YOLOv11 version, to detect IO in AF using ERUS or 3D-TPUS images. This knowledge gap is particularly critical because current diagnostic methods heavily depend on operator expertise and have significant variability. Thus, there is an urgent need for standardized, reliable AI-assisted tools to improve objectivity, reduce examiner dependency, and ultimately enhance diagnostic accuracy and surgical outcomes for AF.
Objective
The objective of this study is to develop an AI-enhanced ultrasound diagnostic system based on the YOLO architecture to accurately and rapidly locate the IO in AF. This system aims to support intraoperative navigation, provide dynamic annotations during clinical examinations, and lay the foundation for future AI-integrated training tools offering real-time feedback. This approach seeks to shift traditional experience-based training toward systematic, data-driven learning, ultimately standardizing ERUS interpretation and improving surgical planning and training outcomes. We present this article in accordance with the TRIPOD reporting checklist (available at https://tgh.amegroups.com/article/view/10.21037/tgh-25-122/rc).
Methods
Overall framework and real-time deployment
Endoanal ultrasound images were acquired in the long-axis view of the anal canal, emphasizing the IO of AF. These images underwent preprocessing for subsequent analysis.
An AI model (YOLOv11n) was developed and trained using the preprocessed ERUS images to detect the IO of AF in the long-axis view. Model parameters were fine-tuned throughout the training process to optimize performance. The goal was to ensure high accuracy and robustness in clinical practice by enabling the model to effectively learn data patterns associated with the IO of AF.
For real-time clinical deployment, a client-server architecture was adopted. During ultrasound examinations, the video stream was securely transmitted over a local network to a dedicated workstation hosting the trained model. The workstation performed frame-by-frame inference and displayed the results with an end-to-end latency below 300 ms, providing immediate feedback without requiring direct integration into the proprietary ultrasound console.
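As an illustration of this deployment pattern, the following minimal sketch shows a workstation-side loop that reads a networked video stream, runs single-frame inference, and displays annotated output. It assumes the Ultralytics Python API together with OpenCV; the stream URL, weights path, and confidence threshold are placeholders rather than the deployed configuration, which is not published.

```python
# Sketch of the workstation-side inference loop (stream URL, weights path,
# and confidence threshold are placeholders; the deployed code is unpublished).
import cv2
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # fine-tuned YOLOv11n weights

# Receive the ultrasound video stream over the local network, e.g. as an
# RTSP feed exposed by a frame-grabber on the ultrasound cart (assumption).
cap = cv2.VideoCapture("rtsp://192.168.1.10:8554/erus")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Single-frame inference; conf=0.4 sits in the optimal range reported later.
    results = model.predict(frame, imgsz=640, conf=0.4, verbose=False)
    annotated = results[0].plot()  # draw predicted IO boxes and confidences
    cv2.imshow("ERUS IO detection", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```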
Materials
Dataset
This study analyzed ERUS images collected from The Fifth People’s Hospital of Chengdu between January 2023 and May 2025. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The Ethics Committee of The Fifth People’s Hospital of Chengdu approved the study protocol [approval No. 2025-012 (Branch)-01], and informed consent was obtained from all individual participants. The ultrasound team systematically captured long-axis views of the anal canal showing the IO of AF using linear array mode endoanal ultrasound. After meticulous screening, 238 high-quality images were selected. The final dataset was complete, with no missing data for images or their annotations. Two physicians, each with over five years of experience in anorectal ultrasound, independently annotated these images according to the established criteria in the Chinese Expert Consensus on Anorectal Ultrasonography (2024 Edition) (13). Demographics of the 40 participants (25 male, 15 female; mean age, 34.2 years) were also recorded.
The annotated dataset was divided into training, validation, and test sets at an 85%:10%:5% ratio. The training set was used to optimize parameters and improve the model’s ability to recognize anatomical features of IO in AF. The validation set allowed monitoring and fine-tuning of the model during training. The test set assessed the model’s generalization ability on unseen data. Evaluation of the test set occurred under a blinded protocol, with the model’s predictions compared automatically against reference annotations without human intervention, thereby eliminating potential bias.
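For reproducibility, such a split can be scripted; the sketch below shows one way to partition a YOLO-format dataset at an 85%:10%:5% ratio. The directory layout and random seed are illustrative assumptions, not the study's actual configuration.

```python
# Sketch of an 85/10/5 split for a YOLO-format dataset (paths and seed are
# illustrative; the authors' actual directory layout is not reported).
import random
import shutil
from pathlib import Path

random.seed(42)  # fixed seed for a reproducible split (assumption)

images = sorted(Path("dataset/raw/images").glob("*.png"))
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.85 * n)],             # 85%
    "val": images[int(0.85 * n): int(0.95 * n)],  # 10%
    "test": images[int(0.95 * n):],               # 5%
}

for split, files in splits.items():
    for img in files:
        # Each image has a matching YOLO-format annotation file.
        label = Path("dataset/raw/labels") / f"{img.stem}.txt"
        for src, kind in ((img, "images"), (label, "labels")):
            dst_dir = Path("dataset") / kind / split
            dst_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy(src, dst_dir / src.name)
```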
Data augmentation
During experiments, various randomized data augmentation techniques were applied using the Ultralytics platform’s built-in RandAugment functionality. This approach aimed to increase the diversity of training data and enhance the model’s generalizability.
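The exact augmentation settings are not reported beyond the use of the platform's built-in randomized augmentation; the sketch below shows how augmentation is typically controlled through Ultralytics train-time hyperparameters, with illustrative values rather than the authors' settings.

```python
# Sketch: randomized augmentation via Ultralytics train-time hyperparameters
# (values are illustrative; the authors' exact settings are not reported).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(
    data="erus.yaml",  # hypothetical dataset configuration
    hsv_h=0.015,       # random hue jitter
    hsv_s=0.7,         # random saturation jitter
    hsv_v=0.4,         # random brightness jitter
    degrees=10.0,      # random rotation (degrees)
    translate=0.1,     # random translation (fraction of image size)
    scale=0.5,         # random scaling gain
    fliplr=0.5,        # horizontal flip probability
    mosaic=1.0,        # mosaic augmentation probability
)
```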
AI model
The YOLO model is an advanced, deep learning-based object detection framework capable of localizing and classifying objects simultaneously in a single forward pass. Compared with traditional multi-stage methods, this single-stage approach significantly enhances detection efficiency, making it particularly suitable for real-time applications in medical imaging. YOLO analyzes complete image data, effectively integrating contextual cues essential for accurate identification and localization of anatomical structures and pathological features. The combination of speed and accuracy makes YOLO especially effective for lesion detection, organ segmentation, and real-time diagnostic guidance.
This study employed the YOLOv11 algorithm, a recent iteration of the YOLO series, to detect the IO of AF in ERUS images. YOLOv11 provides improved accuracy, speed, and computational efficiency, making it highly suited for rapid analysis in medical imaging applications. YOLOv11 was selected over YOLOv12 because training parameters for YOLOv12 were not publicly available on the Ultralytics platform (14). The enhanced network architecture of YOLOv11 is illustrated in Figure 1.
YOLOv11 was chosen due to its superior feature extraction and multi-scale fusion capabilities, which are crucial for identifying subtle anatomical structures in medical images. The model consists of three main components: Backbone, Neck, and Head. The Backbone employs convolutional layers and the C3k2 module, a key architectural improvement in YOLOv11, to efficiently extract multi-level features from input images. The Neck integrates a Spatial Pyramid Pooling Fast (SPPF) layer and an optimized C2f module to enhance feature fusion across multiple scales. Finally, the Head performs object classification and bounding box regression to localize the IO of AF. This comprehensive design balances computational efficiency with representational power, making it particularly effective for detecting small, low-contrast targets in complex ultrasound environments. For comparison, we summarize the core architectures of YOLOv8 through YOLOv11 in Table S1.
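The Backbone-Neck-Head composition described above can be inspected directly from the pre-trained weights; a brief sketch assuming the Ultralytics API:

```python
# Sketch: inspect the YOLOv11n module composition (Backbone -> Neck -> Head)
# through the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.info(verbose=True)  # per-layer summary, including C3k2 and SPPF modules

# The underlying PyTorch module list can also be walked directly:
for i, layer in enumerate(model.model.model):
    print(i, layer.__class__.__name__)
```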
Training parameters and setting
Training environment
The model was trained using high-performance hardware and supporting software. The hardware configuration included an Intel® Xeon® Platinum 8352V processor (base frequency: 2.10 GHz, 12 virtual CPU cores) and two RTX 3080 GPUs (20 GB memory each). The software environment included PyTorch 2.5.1, Python 3.12 (Ubuntu 22.04), and CUDA 12.4. Detailed hyperparameters utilized during training are provided in Table 1.
Table 1
| Category | Configuration |
|---|---|
| CPU | 12 vCPUs, Intel Xeon Platinum 8352V CPU @ 2.10GHz |
| GPU | 2 RTX 3080s (20GB each) |
| System environment | PyTorch 2.5.1, Python 3.12 (Ubuntu 22.04), CUDA 12.4 |
| Training platform | Ultralytics 8.3.49 |
The table was reused from an open access article from Yang et al. (12) under the terms of the Creative Commons Attribution 4.0 International License. CPU, central processing unit; GPU, graphics processing unit.
Throughout training, images were processed at a resolution of 640×640 pixels using a batch size of 120 over 100 epochs. Preliminary experiments indicated that performance stabilized within this budget, and early stopping and regularization were employed to reduce overfitting. Model performance was evaluated on the validation set after each epoch to enable continuous monitoring. Upon completing training, the test set was used to comprehensively assess the model’s generalization and effectiveness.
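A minimal sketch of this training configuration, assuming the Ultralytics API (and combinable with the augmentation hyperparameters sketched earlier); the dataset configuration file and the early-stopping patience value are placeholders, as they are not reported in the text:

```python
# Sketch of the training run described above (dataset YAML and patience are
# placeholders; resolution, batch size, and epoch count follow the text).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")       # pre-trained weights (see Transfer learning)
results = model.train(
    data="erus.yaml",            # hypothetical dataset configuration
    imgsz=640,                   # 640x640 input resolution
    batch=120,                   # batch size of 120
    epochs=100,                  # 100 training epochs
    patience=20,                 # early-stopping patience (value assumed)
    device=[0, 1],               # two RTX 3080 GPUs
)
metrics = model.val()            # evaluate on the validation split
```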
Transfer learning
Given the limited availability of ultrasound images, transfer learning was employed to enhance model performance and avoid the overfitting that can result from training a deep model from scratch on a small dataset. A pre-trained YOLOv11n.pt model from the Ultralytics platform, featuring an architecture with SPPF layers and C2f modules, was adopted. This model, originally trained on a large dataset, exhibited strong feature extraction capabilities, providing a robust foundation for our specific application.
Transfer learning adapted the pre-trained model to identify critical structures in endoanal ultrasound images, thereby improving model stability and generalization. Fine-tuning involved updating only higher-level feature layers, reducing hardware and time requirements while improving classification accuracy for the IO of AF. This approach aligns with previous research highlighting the effectiveness of transfer learning in medical image classification tasks with limited data availability.
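Updating only higher-level feature layers can be approximated with the freeze argument of the Ultralytics training API, which keeps the first N layers fixed; the value below is an assumption, as the paper does not report the freeze depth:

```python
# Sketch: fine-tune only higher-level layers by freezing the first N layers
# (N=10 is an assumption; the paper does not report the freeze depth).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(
    data="erus.yaml",  # hypothetical dataset configuration
    freeze=10,         # keep the first 10 layers (roughly the backbone) fixed
    epochs=100,
)
```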
Model evaluation metrics
The YOLOv11n model’s performance was evaluated using key metrics including precision (P), recall (R), and mean average precision (mAP). Specifically, mAP@0.5 represents average precision calculated at an intersection-over-union (IoU) threshold of 0.5, a widely accepted standard for object detection assessment. Additionally, mAP@[0.50:0.95] provides a comprehensive evaluation by averaging mAP across IoU thresholds ranging from 0.5 to 0.95, thereby rigorously measuring model accuracy at various detection levels.
The F1-score, P, R, and precision-recall (PR) curves were analyzed to thoroughly evaluate model performance. The F1-score represents a balanced measure between precision and recall. The precision curve illustrates the model’s capacity to avoid false positives, while the recall curve reflects its ability to identify relevant targets. The PR curve delineates the trade-off between precision and recall at varying confidence thresholds. Together, these metrics provide a detailed understanding of the model’s strengths and weaknesses, ensuring reliable clinical performance across multiple detection scenarios.
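For reference, these metrics follow the standard object-detection definitions, where TP, FP, and FN denote true positives, false positives, and false negatives at a given IoU and confidence threshold:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F1 = \frac{2 \cdot P \cdot R}{P + R}

\mathrm{AP} = \int_{0}^{1} P(R)\,\mathrm{d}R, \qquad
\mathrm{mAP@[0.50{:}0.95]} = \frac{1}{10} \sum_{t \in \{0.50,\,0.55,\,\ldots,\,0.95\}} \mathrm{mAP@}t
```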
Statistical analysis
Statistical analysis and data visualization were performed using Python 3.12 with the scikit-learn 1.3.2 and matplotlib 3.8.0 libraries. The primary performance metrics—including P, R, mAP@0.5, and mAP@[0.50:0.95]—were calculated descriptively based on the model’s output on the independent test set.
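As an illustration of this analysis pipeline, the sketch below derives a PR curve with scikit-learn and plots it with matplotlib from per-detection confidence scores and match flags. The arrays are illustrative rather than the study's data, and a full detection evaluation would additionally count unmatched ground-truth IOs as false negatives.

```python
# Sketch: derive and plot a PR curve from per-detection confidence scores
# and TP/FP match flags (illustrative arrays, not the study's data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# y_true: 1 if a detection matched a ground-truth IO at IoU >= 0.5, else 0.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
scores = np.array([0.95, 0.90, 0.62, 0.88, 0.75, 0.41, 0.83, 0.57])

precision, recall, _ = precision_recall_curve(y_true, scores)

plt.plot(recall, precision, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-recall curve (illustrative)")
plt.show()
```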
Results
The YOLOv11n model demonstrated excellent detection of IO of AF in ERUS images. As shown in Table 2, it achieved a precision of 0.925 and recall of 1.0 on the validation set, signifying high accuracy and comprehensive true-positive identification. Additionally, the model reached a mAP@0.5 of 0.99, indicating exceptional detection accuracy at the 50% IoU threshold. Although the mAP@[0.50:0.95] was lower (0.602), it still indicated strong performance across a broader range of IoU criteria. The model attained a peak F1-score of 0.963 [95% confidence interval (CI): 0.83–1.0] within confidence thresholds from 0.4 to 0.6 (Figure 2), reflecting an optimal balance between accuracy and sensitivity.
Table 2
| Model | Precision | Recall | mAP@0.5 | mAP@[0.50:0.95] |
|---|---|---|---|---|
| YOLO11n.pt | 0.925 | 1.0 | 0.99 | 0.602 |
mAP, mean average precision.
Analysis of the impact of confidence thresholds showed consistently high precision between 0.2 and 0.8 (Figure 3), underscoring the model’s low false-positive rate. In contrast, recall declined sharply beyond a threshold of 0.4 (Figure 4), demonstrating reduced sensitivity at higher confidence levels. The PR curve (Figure 5) further confirmed stable performance across varying recall values, which is crucial for clinical utility.
Figure 6 illustrates a steady reduction in loss metrics accompanied by an increase in precision and recall over epochs, indicating effective model learning and generalization without overfitting. Figure 7 provides representative detection examples, displaying ultrasound images with predicted bounding boxes and confidence scores, offering intuitive insights into the model’s practical effectiveness. Furthermore, we submitted a video demonstrating the model’s ability to autonomously and accurately detect the IO of AF in real-time ultrasound scans (Video 1). This video provides compelling evidence of the model’s practical utility in a clinical setting.
Discussion
Key findings
This study presents a YOLOv11-based model designed for real-time identification of IO of AF in ERUS images. The model achieved exceptional performance, with recall of 1.0, precision of 0.925, and mAP@0.5 of 0.99. Architectural enhancements, including the SPPF layer for integrating multi-scale features, the C2f module for inter-stage feature interaction, and the single-stage detection method, collectively contributed to efficient and accurate identification of intricate anatomical structures, fulfilling criteria for real-time clinical use.
Strengths and limitations
This model facilitates real-time, automated detection of IO in AF, effectively minimizing missed and incorrect diagnoses, as evidenced by its perfect R of 1.0 and high P of 0.925. These capabilities support straightforward integration into ultrasound systems for intraoperative guidance, aiding comprehensive surgical removal of the fistula tract and potentially lowering recurrence rates.
However, the current study has several limitations. First, it utilized a single-center dataset with a relatively small sample size, acquired from only one type of ultrasound device and probe. Second, the study population was relatively homogeneous, potentially introducing selection bias and limiting the model’s applicability to more diverse populations or complex fistula types. Consequently, the generalizability of the findings may be limited: variations in image resolution, gain, contrast settings, and hardware-specific artifacts across institutions and equipment, coupled with differences in patient demographics and disease complexity, may significantly reduce model performance. Regarding image annotations, although they were performed by two experts, the study did not formally assess inter-reader variability, an important aspect in evaluating the consistency of the reference standard and its impact on model training. Furthermore, the model’s clinical utility is inherently constrained by its singular focus on localizing the IO. Although this targeted approach addresses a critical step in surgical planning, it does not provide comprehensive anatomical assessment; the model cannot identify other surgically relevant structures, such as the full course of the fistula tract and its relationship to the sphincter complex, which are essential for classifying fistula types and guiding precise sphincter-preserving procedures. Subsequent research should therefore include large-scale multicenter datasets using diverse ultrasound systems and probes. Such efforts could enhance clinical applicability and generalizability, ultimately leading to integrated AI systems that map the entire fistula anatomy.
Comparison with similar research
This study demonstrates that the YOLOv11-based approach significantly enhances the accuracy of IO identification in AF compared to conventional imaging methods and existing AI models. The findings, indicated by a mAP of 0.99 at an IoU threshold of 0.5 (mAP@0.5) and an R of 1.0, surpass traditional ultrasound performance [approximately 87% to 97.2% (15-17)] and outperform the CNN model by Han et al. (18) using CT images (P=0.98, R=0.87), the CNN model by Yang et al. (19) using magnetic resonance imaging (MRI) images (accuracy of 92%), and the transfer learning approach employing an MRI-based ResNet-34 model (sensitivity of 96.97%, specificity of 94.94%) (20). Although three-dimensional endoanal ultrasound (3D-EAUS) has shown higher accuracy than conventional MRI in IO identification (98.2% vs. 94.6%) (21), it does not achieve the performance level observed in the present model. Moreover, the single-stage detection architecture used here not only improves recognition accuracy but also provides precise imaging support for real-time intraoperative guidance, enhancing clinical decision-making.
Explanations of findings
The high mAP@0.5 and R indicate the model’s strong capability for IO detection. However, the mAP@[0.50:0.95] score of 0.602 highlights challenges associated with precise pixel-level localization under stringent IoU thresholds. Difficulties arise from inherent uncertainties in defining orifice boundaries via ultrasound, as well as variations in pathological manifestations, including differences in inflammation and fistula types. Nonetheless, the model demonstrates stable and reliable detection capabilities across various localization strictness levels.
Implications and actions needed
The model provides reliable imaging support for prompt identification of anatomical landmarks, aiding surgical decision-making. Beyond decision support, its integration into the real-time clinical workflow is crucial. The system could function as an intraoperative guidance tool by providing real-time overlay annotations on live ultrasound feeds in operating rooms. Such functionality would help surgeons precisely locate the IO and understand its spatial relationship to the sphincter complex during procedures, translating imaging findings directly into surgical action. Its high precision also holds significant promise for medical education, specifically for enhancing 3D anatomical understanding, dynamically visualizing pathological progression, and providing a standardized closed-loop training system with immediate feedback.
To realize these potentials, future efforts should establish a clear validation roadmap, beginning with a pilot external validation study using data from other centers and ultrasound systems to initially assess performance variability and robustness. Subsequently, large-scale multicenter validation across diverse clinical settings and equipment should be conducted to improve model generalizability. Finally, prospective clinical testing must be performed to evaluate the model’s real-time performance and clinical utility.
Integration of this AI system into clinical ultrasound platforms for intraoperative guidance and simulation-based training will enhance educational outcomes and surgical preparation. Successful integration requires collaboration with manufacturers to develop compatible software and ensure seamless interoperability with hospital systems. A user-centered design approach is also crucial for optimizing interface clarity and intuitiveness, ensuring smooth adoption into clinical practice.
Conclusions
The AI system developed in this study, based on the YOLOv11 architecture, achieves accurate, real-time detection of AF IO in ERUS images (mAP@0.5=0.99, R=1.0), providing clinicians with reliable diagnostic support. This research validates the clinical efficacy of this technology for AF diagnosis. Its exceptional accuracy and real-time capabilities establish the foundation for future applications in medical education. Future studies will focus on developing a comprehensive AI-integrated training module for AF ERUS education. This module will integrate three core components: a multi-subtype AF ERUS case library (including typical and atypical IO cases and imaging artifacts), a real-time feedback system (comparing trainee interpretations against the YOLOv11 model’s results and senior physician annotations), and a graded assessment module (evaluating IO recognition accuracy and its relevance to surgical planning). Designed specifically for radiologists and colorectal surgeons, this module will further standardize ERUS training and bridge AI technology with clinical education requirements.
Acknowledgments
None.
Footnote
Reporting Checklist: The authors have completed the TRIPOD reporting checklist. Available at https://tgh.amegroups.com/article/view/10.21037/tgh-25-122/rc
Data Sharing Statement: Available at https://tgh.amegroups.com/article/view/10.21037/tgh-25-122/dss
Peer Review File: Available at https://tgh.amegroups.com/article/view/10.21037/tgh-25-122/prf
Funding: This study received funding from
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://tgh.amegroups.com/article/view/10.21037/tgh-25-122/coif). Z.L. is from Glory Wireless Co. Ltd., Chengdu, China. The other authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study protocol was approved by the Ethics Committee of The Fifth People’s Hospital of Chengdu [approval No. 2025-012 (Branch)-01], and informed consent was obtained from all individual participants.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Gaertner WB, Burgess PL, Davids JS, et al. The American Society of Colon and Rectal Surgeons Clinical Practice Guidelines for the Management of Anorectal Abscess, Fistula-in-Ano, and Rectovaginal Fistula. Dis Colon Rectum 2022;65:964-85. [Crossref] [PubMed]
- El-Gazzaz G, Hull TL, Mignanelli E, et al. Obstetric and cryptoglandular rectovaginal fistulas: long-term surgical outcome; quality of life; and sexual function. J Gastrointest Surg 2010;14:1758-63. [Crossref] [PubMed]
- Wen Y, Tong X. Transperineal Anal Three-dimensional US. Radiology 2022;304:527-8. [Crossref] [PubMed]
- Yang J, Li Q, Li H, et al. Preoperative assessment of fistula-in-ano using SonoVue enhancement during three-dimensional transperineal ultrasound. Gastroenterol Rep (Oxf) 2024;12:goae002. [Crossref] [PubMed]
- Huang JG, Zhang ZY, Li L, et al. Multi-slice spiral computed tomography in diagnosing unstable pelvic fractures in elderly and effect of less invasive stabilization. World J Clin Cases 2022;10:4470-9. [Crossref] [PubMed]
- Liu Z, Wei L, Song T. Optimized YOLOv11 model for lung nodule detection. Biomed Signal Process Control 2025;107:107830.
- Wu H, Xu Q, He X, et al. SPE-YOLO: A deep learning model focusing on small pulmonary embolism detection. Comput Biol Med 2025;184:109402. [Crossref] [PubMed]
- Yao Z, Jin T, Mao B, et al. Construction and Multicenter Diagnostic Verification of Intelligent Recognition System for Endoscopic Images From Early Gastric Cancer Based on YOLO-V3 Algorithm. Front Oncol 2022;12:815951. [Crossref] [PubMed]
- Huang H, Balaji S, Aslan B, et al. Quantitative Susceptibility Mapping MRI with Computer Vision Metrics to Reduce Scan Time for Brain Hemorrhage Assessment. Int J Imaging Syst Technol 2025;35:e70070. [Crossref] [PubMed]
- Mayya AM, Alkayem NF. Correction: A novel ensemble learning approach for grouping the state-of-the-art YOLOV10 and YOLOV11 models for kidney stone detection in CT and Ultrasound images. J Imaging Inform Med 2025; Epub ahead of print. [Crossref]
- Aly GH, Marey M, El-Sayed SA, et al. YOLO Based Breast Masses Detection and Classification in Full-Field Digital Mammograms. Comput Methods Programs Biomed 2021;200:105823. [Crossref] [PubMed]
- Yang J, Luo Z, Wen Y, et al. Artificial intelligence-enhanced ultrasound imaging for thyroid nodule detection and malignancy classification: a study on YOLOv11. Quant Imaging Med Surg 2025;15:7964-76. [Crossref] [PubMed]
- Chinese expert consensus of anorectal ultrasound examination (2024 edition). Chinese Journal of Ultrasonography 2024;33:829-42.
- Cheng C, Cheng X, Li D, et al. Drill pipe detection and counting based on improved YOLOv11 and Savitzky-Golay. Sci Rep 2025;15:16779. [Crossref] [PubMed]
- Emile SH, Magdy A, Youssef M, et al. Utility of Endoanal Ultrasonography in Assessment of Primary and Recurrent Anal Fistulas and for Detection of Associated Anal Sphincter Defects. J Gastrointest Surg 2017;21:1879-87. [Crossref] [PubMed]
- Li J, Chen SN, Lin YY, et al. Diagnostic Accuracy of Three-Dimensional Endoanal Ultrasound for Anal Fistula: A Systematic Review and Meta-analysis. Turk J Gastroenterol 2021;32:913-22. [Crossref] [PubMed]
- Zhang MM, Jin Y, Chen ZJ. The application value of preoperative intracavitary ultrasound measurement in the diagnosis of anal fistula. Zhejiang J Trauma Surg 2020;25:352-4.
- Han L, Chen Y, Cheng W, et al. Deep Learning-Based CT Image Characteristics and Postoperative Anal Function Restoration for Patients with Complex Anal Fistula. J Healthc Eng 2021;2021:1730158. [Crossref] [PubMed]
- Yang J, Han S, Xu J. Deep Learning-Based Magnetic Resonance Imaging Features in Diagnosis of Perianal Abscess and Fistula Formation. Contrast Media Mol Imaging 2021;2021:9066128. [Crossref] [PubMed]
- Brillantino A, Iacobellis F, Reginelli A, et al. Preoperative assessment of simple and complex anorectal fistulas: Tridimensional endoanal ultrasound? Magnetic resonance? Both? Radiol Med 2019;124:339-49. [Crossref] [PubMed]
- Yuan J, Chen XY, Chang SX, et al. Feasibility study of artificial intelligence algorithm in diagnosis of internal orifice of anal fistula based on T1 enhanced imaging. Anhui Medical and Pharmaceutical Journal 2023;27:447-52.
Cite this article as: Wang X, Ma X, Tong X, Luo Z, Liu T, Yang J, Liu X, Wen Y. Artificial intelligence-assisted detection of internal openings in anal fistulas using endorectal ultrasound: a YOLOv11-based diagnostic system. Transl Gastroenterol Hepatol 2026;11:46.

