EBioMedicine. 2021 Mar; 65: 103238.
A deep learning-based system for bile duct annotation and station recognition in linear endoscopic ultrasound
Liwen Yao
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Jun Zhang
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Jun Liu
eDepartment of Gastroenterology, Wuhan Union Hospital, Huazhong University of Science and Technology, Wuhan, China
Liangru Zhu
eDepartment of Gastroenterology, Wuhan Union Hospital, Huazhong University of Science and Technology, Wuhan, China
Xiangwu Ding
fWuhan Puai Hospital, Wuhan, China
Di Chen
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Huiling Wu
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive Organisation Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Zihua Lu
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Wei Zhou
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Lihui Zhang
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Bo Xu
fWuhan Puai Hospital, Wuhan, China
Shan Hu
dWuhan EndoAngel Medical Technology Company, Wuhan, China
Biqing Zheng
dWuhan EndoAngel Medical Technology Company, Wuhan, China
Yanning Yang
gDepartment of Ophthalmology, Renmin Hospital of Wuhan University, 99 Zhangzhidong Road, Wuhan 430060, Hubei Province, China
Honggang Yu
aDepartment of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
bKey Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
cHubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
Received 2020 Oct 12; Revised 2021 January 22; Accepted 2021 January 25.
Summary
Background
Detailed evaluation of the bile duct (BD) is a main focus during endoscopic ultrasound (EUS). The aim of this study was to develop a system for EUS BD scanning augmentation.
Methods
The scanning was divided into 4 stations. We developed a station classification model and a BD segmentation model with 10681 images and 2529 images, respectively. 1704 images and 667 images were applied to classification and segmentation internal validation. For classification and segmentation video validation, 264 and 517 video clips were used. For the man-machine competition, an independent data set containing 120 images was applied. 799 images from two other hospitals were used for external validation. A crossover study was conducted to evaluate the system's effect on reducing the difficulty of ultrasound image interpretation.
Findings
For classification, the model achieved an accuracy of 93.3% in the image set and 90.1% in the video set. For segmentation, the model had a Dice of 0.77 in the image set, and a sensitivity of 89.48% and specificity of 82.3% in the video set. For external validation, the model achieved 82.6% accuracy in classification. In the man-machine competition, the models achieved 88.3% accuracy in classification and 0.72 Dice in BD segmentation, which is comparable to that of experts. In the crossover study, trainees' accuracy improved from 60.8% to 76.3% (P < 0.01, 95% C.I. 20.9–27.2).
Interpretation
We developed a deep learning-based system for EUS BD scanning augmentation.
Funding
Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Hubei Province Major Science and Technology Innovation Project, National Natural Science Foundation of China.
Keywords: Endoscopic ultrasound, Biliary tract, Training, Deep learning
Abbreviations: BD, bile duct; EUS, endoscopic ultrasound; DCNN, deep convolutional neural network; ERCP, Endoscopic Retrograde Cholangiopancreatography
Research in context
Evidence before this study
We searched PubMed for papers published between Jan 1, 2001, and March 1, 2020, with the keywords "machine learning", "artificial intelligence" OR "deep learning" AND "endoscopic ultrasound". No restrictions on study type or language were implemented. Our search retrieved studies on the use of deep learning for computer-aided diagnosis of pancreas lesions but no studies to improve ultrasonographic interpretation in the biliary system.
Added value of this study
We constructed a deep learning-based system, BP MASTER, for real-time endoscopic ultrasound biliary scanning augmentation. The system underwent internal and external validation on images and videos, and was subsequently compared with the performance of endoscopists. The effect of the system on reducing the difficulty of ultrasonographic interpretation was evaluated among trainees on prospectively collected endoscopic ultrasound videos. Our study confirmed the feasibility of using deep learning for endoscopic ultrasound biliary augmentation.
Implications of all the available evidence
Endoscopic ultrasound provides improved imaging functions and has provided multiple treatment methods in biliary disease, but EUS systems have been hesitantly adopted by some gastroenterologists due to the steep learning curve and heavy reliance on the operator. Our study shows that the BP MASTER system can recognize the standard stations for bile duct scanning and prompt physicians with the corresponding operation instruction. Moreover, the system can also segment the bile duct with high precision and automatically measure the bile duct diameter. With the system's augmentation, the trainees improved their accuracy of station recognition. The BP MASTER system has the potential to play an important role in endoscopic ultrasound biliary scanning augmentation.
1. Introduction
Endoscopic ultrasound (EUS) has excellent performance for the diagnosis of biliary disease, such as choledocholithiasis, bile duct (BD) obstruction, ampullary carcinoma and common BD carcinoma. In BD evaluation, EUS is closest to endoscopic retrograde cholangiopancreatography (ERCP), which is the gold standard [1,2]. For choledocholithiasis diagnosis, the sensitivity was 0.97 for EUS and 0.87 for magnetic resonance cholangiopancreatography [3].
The multi-station imaging technique is the standard scanning procedure in EUS BD evaluation. The stations comprise anatomical landmarks which can be used to locate the transducer and to identify areas that have not been scanned. EUS of the BD can be done from the following stations: Station 1: the fundus of the stomach (liver); Station 2: body of the stomach (and antrum); Station 3: duodenal bulb; Station 4: descending duodenum [4], [5], [6]. Comprehensive evaluation of the BD is frequently the main focus of imaging during EUS, and in such situations multi-station imaging is necessary to scan the whole BD [7].
However, EUS is one of the most challenging endoscopic procedures to learn and requires integration of both cognitive and endoscopic skills [8,9]. The cognitive portion of the procedure is exceedingly difficult to learn. Most experienced endosonographers believe that the key to acquiring competence in both components of the EUS procedure is pattern recognition obtained through repetitive examinations. Such experience can be acquired only at a training center performing a large volume of cases. Because few centers provide this experience, other training options are needed [10]. Therefore, an augmentation system is greatly needed when performing EUS BD examination and training. Ideally, a station recognition model could provide transducer location information as well as operation instructions. A BD annotation model could assist endosonographers in visualizing the BD.
EUS-guided biliary puncture is an emerging technique that combines the advantages of the endoscopic and percutaneous approaches, without the inconvenience and discomfort of an indwelling external catheter [11]. Puncture route selection is critical in successful BD puncture cases [12]. The choice among different routes is mainly based on the degree and location of the duct dilation [13,14]. Three routes have been proposed for BD puncture. The first route is transmural puncture of the intrahepatic BD by transesophageal and transgastric puncture. The second and third methods are transduodenal puncture of the extrahepatic BD via the proximal duodenum and the second portion of the duodenum, respectively. The prerequisite for choosing a suitable puncture route is to determine the obstruction position and the degree of duct dilation. Station recognition and BD annotation augmentation have the potential to improve the comprehensive evaluation of the BD.
In recent years, deep learning has made tremendous progress in the field of digestive endoscopy [15]. Previous work from our group showed that Deep Convolutional Neural Networks (DCNNs), one of the representative algorithms of deep learning, could accurately recognize the stations of EUS pancreas scanning in a real-time manner [16]. However, the role of deep learning in EUS biliary scanning remains unknown.
In our current study, we constructed a deep learning-based system, BP MASTER, for real-time station recognition and BD annotation in linear EUS. There were two reasons why radial images were not utilized: First, linear EUS is superior in the delineation of the area from the hepatic portal region to the superior BD [17]. Second, EUS-guided biliary puncture is conducted by linear EUS, while radial EUS can only be applied for diagnosis. For station recognition, a deep learning-based image classification model was constructed. For BD annotation, a BD segmentation model was constructed to detect the BD within the digital image from the endoscopy processor and output the BD boundary as a green line. The system underwent internal and external validation on images and videos, and was subsequently compared with the performance of EUS endoscopists. The effect of the system on reducing the difficulty of ultrasonographic interpretation was evaluated among trainees on prospectively collected EUS videos. The purpose of this study is to explore the role of deep learning in linear EUS BD scanning.
2. Method
2.1. System framework
Four DCNN models were incorporated into the BP MASTER system to achieve two primary functions: First, to position the station where the transducer is located and provide the corresponding operation instructions. Second, to annotate the CBD and provide diameter measurement when endoscopists freeze the frames. DCNN1 was applied to filter out white light images and input the ultrasound images to DCNN2. DCNN2 was applied to classify ultrasound images into standard and non-standard categories, and to activate DCNN3 with standard images. DCNN3 was used to recognize BD stations. DCNN4 was used to segment and annotate the BD (Fig. 1). The STARD 2015 reporting guidelines were followed when writing this work.
BP MASTER system framework: DCNN1 was applied to filter out white light images and input the ultrasound images to DCNN2. DCNN2 was applied to classify ultrasound images into standard and non-standard categories, and to activate DCNN3 with standard images. DCNN3 was used to recognize stations. DCNN4 was used to segment and annotate the bile duct.
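The four-model cascade above can be sketched as a simple routing function. The `predict_*` callables below are hypothetical stand-ins for the trained DCNN1–4 models, not the actual implementation:

```python
# Sketch of the four-model cascade: DCNN1 filters white-light frames,
# DCNN2 filters non-standard views, DCNN3 classifies the station,
# DCNN4 segments the bile duct. The predictor callables are placeholders.
def run_bp_master(frame, predict_is_ultrasound, predict_is_standard,
                  predict_station, segment_bile_duct):
    """Route one video frame through the DCNN cascade; return a result dict."""
    if not predict_is_ultrasound(frame):        # DCNN1: drop white-light images
        return {"type": "white_light"}
    if not predict_is_standard(frame):          # DCNN2: drop non-standard views
        return {"type": "non_standard"}
    station = predict_station(frame)            # DCNN3: station 1-4
    mask = segment_bile_duct(frame)             # DCNN4: bile duct contour/mask
    return {"type": "standard", "station": station, "bd_mask": mask}
```

Each stage only runs when the previous stage accepts the frame, which mirrors the "activate DCNN3 with standard images" behavior described in the text.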
2.2. Datasets and preprocessing
For DCNN1 training and validation, 2000 white light images of gastroscopy and 2000 EUS images were applied at a 9 to 1 ratio. For unqualified image filtering, 10001 standard station and 17335 unqualified EUS images were used for training, and 1735 standard station and 1412 unqualified EUS images were used to test DCNN2. The criteria for unqualified images were jointly agreed upon by two EUS experts, and included: obscure images, large lesions, kidney, spleen, abdominal aorta, elastography, and extremely dilated bile/pancreatic duct. Representative examples of the unqualified images are shown in Fig. S1.
Five data sets were used for training, internal validation and external validation of BP MASTER system:
(1) 10681 images from 443 EUS procedures were used to train the model for BD station recognition (DCNN3). 2529 images containing complete and clear BD from the same procedures were applied to train the model for BD annotation (DCNN4). The average age of the patients is 55 years old (standard deviation 12.6). The proportion of men in this dataset is 49.7% (220/443). All the images were from Wuhan Renmin Hospital during December 2016–July 2020.
(2) 1704 images from 44 EUS procedures from Renmin Hospital of Wuhan University during October 2019–December 2019 were used for internal validation. 264 video clips containing 33208 frames from the same procedures were applied for station recognition video validation. 251 positive video clips containing 13753 frames, each frame containing BD, and 300 negative video clips containing 12771 frames without BD were used to test the performance of DCNN4. The average age of the patients is 50.6 years old (standard deviation 13.8). The proportion of men in this dataset is 47.7% (21/44). All the images were from Wuhan Renmin Hospital during October 2019–December 2019.
(3) 120 images from 44 EUS procedures from Renmin Hospital of Wuhan University during September 2019–May 2020 were used to compare the performance of DCNN3 and DCNN4 with that of EUS experts (man-machine competition). The average age of the patients is 56 years old (standard deviation 12.1). The proportion of men in this dataset is 61.4% (27/44).
(4) For the external validation, an external testing data set containing 799 images from 20 examinations (Wuhan Union Hospital) and 89 examinations (Wuhan Puai Hospital) was collected. The average age of the patients is 59 years old (standard deviation 10.3). The proportion of men in this dataset is 31.2% (34/109).
The sample distribution for each data set is shown in Table 1. The 4 stations and representative images predicted by the DCNN models are shown in Fig. 2. Images from the same person were not split among the data sets. The procedures were performed with Olympus EU-ME1 and EU-ME2 (Olympus Medical Systems Co., Tokyo, Japan) processors and adapted endoscopes.
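The constraint that images from the same person never appear in more than one data set can be enforced with a group-wise split. A minimal sketch under illustrative identifiers (this is not the study's actual partitioning code):

```python
import random

def split_by_patient(image_to_patient, val_fraction=0.2, seed=42):
    """Split image IDs into train/validation so no patient spans both sets.

    image_to_patient maps each image ID to its patient/procedure ID.
    The split is done over patients, then images follow their patient.
    """
    patients = sorted(set(image_to_patient.values()))
    rng = random.Random(seed)            # fixed seed for reproducibility
    rng.shuffle(patients)
    n_val = max(1, int(len(patients) * val_fraction))
    val_patients = set(patients[:n_val])
    train, val = [], []
    for img, pat in image_to_patient.items():
        (val if pat in val_patients else train).append(img)
    return train, val
```

Splitting at the patient level (rather than the image level) prevents near-duplicate frames from the same procedure leaking between training and validation, which would inflate validation accuracy.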
Flowchart of the study development and validation.
Table 1
Baseline data.
| | Patient (n) | Station 1 | Station 2 | Station 3 | Station 4 | Total |
|---|---|---|---|---|---|---|
| DCNN3 training set (frames) | 443 | 1518 | 5768 | 1071 | 2324 | 10681 |
| DCNN4 training set (frames) | 443 | 360 | 692 | 799 | 678 | 2529 |
| DCNN3 internal validation set (frames) | 44 | 312 | 619 | 333 | 440 | 1704 |
| DCNN3 video validation set (clips/frames) | 44 | 42/4295 | 76/10498 | 96/10134 | 50/8281 | 264/33208 |
| DCNN4 internal validation set (frames) | 44 | 72 | 160 | 283 | 206 | 721 |
| DCNN4 video validation set (clips/frames) | 44 | 69/4762 | 76/4484 | 43/1931 | 63/2576 | 251/13753 |
| Man-machine competition set (frames) | 44 | 30 | 30 | 30 | 30 | 120 |
| External validation set (frames) | 109 | 335 | 148 | 204 | 112 | 799 |
| Crossover study set (videos) | 29 | 22 | 29 | 29 | 23 | 29 |
A schematic illustration of the stations for the visualization of the bile duct in linear EUS and representative images predicted by DCNN3.
2.3. Annotation
Two EUS experts, A and B, from Wuhan Renmin Hospital labeled all images and video clips by consensus. Their labels were used as the gold standard for all training and validation.
For the man-machine competition, expert C and senior endoscopists D, E and F were required to classify each image in the comparison data set and then annotate the BD based on the classification results. Both the endoscopists' and the model's results were compared with the ground truth annotated by experts A and B.
Regarding the annotators' level of expertise, expert endoscopists were defined as those with at least 10 years, and senior endoscopists as those with at least 5 years, of experience in performing EUS examination and treatment.
2.4. Training of DCNN models
We used ResNet for image classification and UNet++ for image segmentation. Both networks were trained on an NVIDIA GeForce GTX 2080. The technical details and neural network architecture are illustrated in the supplementary materials. ResNet-50, a mature DCNN architecture pretrained on data from ImageNet (1.28 million images from 1000 object classes), was used to train DCNN1, 2 and 3. We replaced the final classification layer with another fully connected layer using transfer learning, retrained the networks using our datasets, and fine-tuned the parameters to fit our needs. The dataset was randomly divided into five subsets, and each subset was validated individually with the remainder used for training, in Google's TensorFlow [18]. Three methods, dropout [19], data augmentation [20] and early stopping [21], were used to minimize the risk of overfitting.
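Of the three regularization methods named above, early stopping is the easiest to illustrate outside a framework. A minimal, framework-agnostic sketch of the rule (the patience value here is an assumption, not a parameter reported in the paper):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change counted as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Deep learning frameworks ship equivalent callbacks (e.g. a `patience`-based early-stopping callback in Keras); the class above only shows the underlying rule.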
For BD annotation, UNet++, a novel and powerful architecture for medical image segmentation, was implemented to develop DCNN4 [22,23]. With the original EUS image as the input, at a resolution of 512 × 512, and the expert-marked map as the output, UNet++ was used to train and test DCNN4 in image-to-image style in Keras. According to the results of internal validation, we obtained the best segmentation threshold by stepwise increase, and the threshold was set at 0.55 (Fig. S5). Besides BD annotation, DCNN4 also provides a diameter measurement result when endoscopists freeze the video. The details of how the system identifies whether the video is frozen and the diameter measurement algorithm are provided in the supplementary materials.
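The segmentation-threshold selection can be illustrated as a sweep that binarizes the network's probability maps at candidate thresholds and keeps the one with the best mean Dice on validation data. The candidate grid below is illustrative; only the final value 0.55 comes from the text:

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def best_threshold(prob_maps, true_masks, thresholds):
    """Pick the binarization threshold maximizing mean Dice on validation data."""
    scores = {t: np.mean([dice(p >= t, m) for p, m in zip(prob_maps, true_masks)])
              for t in thresholds}
    return max(scores, key=scores.get)
```

Sweeping the threshold on held-out data, as sketched here, avoids tuning it on the training set, where a too-low threshold would look deceptively good.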
2.5. Structure of the BP MASTER system
For station recognition prediction, we used the Random Forest Classifier model [24] and the rule of 'output results only when three of the five consecutive images show the same result' to smooth noise. The FPS (frames per second) for running the system on videos was 4.78 on a GPU. The speed of the DCNN in the clinical setting to output a prediction per frame in the endoscopy center of Renmin Hospital of Wuhan University was 200–300 ms, including time consumed in the client (image capture, image re-sizing, and rendering images based on predicted results), network communication, and the server (reading and loading images, running the three networks, and saving images). For BD annotation, the system was set to segment and output the result at 15 FPS. All the models were trained and run on a server with an NVIDIA RTX2080Ti GPU (with 8 GB GPU memory).
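The 'three of five consecutive frames' rule can be sketched as a sliding-window majority filter (a hypothetical re-implementation of the rule, not the deployed code):

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5, votes=3):
    """Output a station label only when at least `votes` of the last `window`
    frame-level predictions agree; otherwise emit None to suppress noise."""
    recent = deque(maxlen=window)   # sliding window of recent frame labels
    out = []
    for label in frame_labels:
        recent.append(label)
        top, count = Counter(recent).most_common(1)[0]
        out.append(top if count >= votes else None)
    return out
```

A single misclassified frame inside the window cannot flip the displayed station, which is what makes the on-screen prompt stable at video rate.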
2.6. Ultrasonographic interpretation study
2.6.1. Prospective data set collection
To evaluate the effect of BP MASTER, we prospectively and consecutively enrolled patients undergoing EUS examinations, and their corresponding videos were collected between July 2020 and August 2020. The study was approved by the Ethics Committee of Renmin Hospital of Wuhan University (WDRY2019-K091), under trial registration number ChiCTR1900028648 of the Primary Registries of the WHO Registry Network. Informed consent was obtained from each participant. Patients with lower gastrointestinal EUS, radial EUS or no standard station scanned were excluded.
2.6.2. Study procedure
With the prospectively collected videos, a crossover study was performed to evaluate the effect of BP MASTER in improving trainees' station recognition and BD segmentation. Eight primary trainees and four advanced trainees were included in this study. The primary trainees who participated had more than one year of gastroenterology fellowship experience and none had any prior experience or training in EUS, while the advanced trainees had handled at least 100 training EUS procedures. All the trainees were required to read the reference on CBD multi-station scanning and were provided 20 typical images of each station for learning a week in advance [4].
Clinicians were provided videos and were requested to record the time point of first recognizing each station. Using a crossover design, the trainees were randomly and equally divided into 2 groups. The randomization was generated by random grouping software. Group A first read the videos and images without BP MASTER augmentation, and group B first read with BP MASTER augmentation. After a washout period of two weeks, the arrangement was reversed such that group A performed reads with augmentation and group B read the videos without (Fig. 4). With the model augmentation, readers had the option to take it into consideration or disregard it based on their judgment. The time point and accuracy at which BP MASTER first recognized each station, as well as the segmentation result, were also recorded.
The crossover study design: a. Study design. 12 trainees were divided into 2 groups to perform reads with and without model augmentation in random order, with a two-week washout period between. b. Unaugmented read, with original EUS videos. c. Augmented read, with model-labeled videos. EUS, endoscopic ultrasound.
2.7. Statistical analysis
For station classification evaluation, we used accuracy as the metric, defined as the number of correctly classified images divided by the total number of images. Similarly, per-frame accuracy was defined as the number of correctly classified frames divided by the total number of frames.
The standard deviation was calculated as: s = sqrt(Σ(x_i − x̄)² / (n − 1)).
For segmentation evaluation, intersection over union (IOU) was defined as the relative area of overlap between the predicted bounding box (A) and the ground-truth bounding box (B), IOU = |A ∩ B| / |A ∪ B|. The ground truth was labeled by experts A and B.
When IOU > threshold, the prediction is a true positive; when IOU < threshold, the prediction is a false positive; when the model segmentation area = 0, it is a false negative.
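The IOU-based counting rule above can be expressed directly in code; a minimal sketch using boolean masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, truth).sum() / union

def classify_prediction(pred, truth, threshold=0.5):
    """Label one segmentation prediction as TP / FP / FN by the IOU rule above."""
    if pred.sum() == 0:
        return "FN"                  # model produced no segmentation at all
    return "TP" if iou(pred, truth) > threshold else "FP"
```

Counting TP/FP/FN this way is what yields the precision and recall at the 0.3 and 0.5 IOU thresholds reported in Table 3.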
Inter-observer agreement between the endoscopists and the DCNN was evaluated using Cohen's kappa coefficient.
For the crossover study, we compared the time point accuracy for each trainee with and without augmentation.
To assess whether the trainees achieved significant increases in performance with model augmentation, McNemar's test was performed on the differences in the same metric across all 12 trainees. P < 0.05 was considered statistically significant. All calculations were performed using SPSS 23 (IBM, Chicago, Illinois, USA).
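The paper used SPSS; as an illustration only, a continuity-corrected McNemar's test on paired correct/incorrect outcomes can be computed as follows (a sketch, not the authors' analysis code):

```python
import math

def mcnemar_test(before, after):
    """Continuity-corrected McNemar's test on paired binary outcomes
    (True = correct answer). Returns (chi-square statistic, p-value)."""
    # discordant pairs: b = correct only before, c = correct only after
    b = sum(1 for x, y in zip(before, after) if x and not y)
    c = sum(1 for x, y in zip(before, after) if not x and y)
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of the chi-square distribution with 1 df
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p
```

McNemar's test only uses the discordant pairs, which is why it suits paired before/after reads of the same videos by the same trainee.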
2.8. Role of the funding source
The funder had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
3. Results
3.1. Internal and external validation
For white light and ultrasound image classification, DCNN1 achieved an accuracy of 100%. In standard and non-standard image classification, DCNN2 achieved an accuracy of 87.4%. The confusion matrices of DCNN1 and 2 are shown in Fig. S2 and S3, respectively.
For DCNN3, the model had an accuracy of 93.3% in the image validation set (Fig. S4). In the video validation set, the model had a per-frame accuracy of 90.1% (Table 2). As for BD segmentation performance, DCNN4 had a Dice of 0.77. The recall and precision at 50% IOU were 85% and 98.2%. In the video validation set, the sensitivity among positive video clips was 89.5% and the specificity among negative video clips was 82.3% (Table 3).
Table 2
DCNN3 station recognition accuracy in internal, external and video validation.
| | Internal validation | External validation | Video validation |
|---|---|---|---|
| Station 1 (%) | 86.3 | 83.6 | 82.5 |
| Station 2 (%) | 99.5 | 82.4 | 95.9 |
| Station 3 (%) | 93.7 | 76.5 | 91.4 |
| Station 4 (%) | 89.8 | 91.1 | 87.1 |
| Total | 93.3 | 83.9 | 90.1 |
Table 3
DCNN4 segmentation performance.
| Internal validation | Video validation | ||||||
|---|---|---|---|---|---|---|---|
| | Dice | Precision>0.5 | Recall>0.5 | Precision>0.3 | Recall>0.3 | Sensitivity (%) | Specificity (%) |
| Station 1 | 0.83 | 0.94 | 0.99 | 0.98 | 0.99 | 89.5 | – |
| Station 2 | 0.72 | 0.75 | 0.98 | 0.84 | 0.98 | 89 | – |
| Station 3 | 0.76 | 0.81 | 0.98 | 0.9 | 0.98 | 90.2 | – |
| Station 4 | 0.69 | 0.87 | 0.69 | 0.93 | 0.85 | 82.6 | – |
| Total | 0.77 | 0.85 | 0.95 | 0.92 | 0.97 | 88.1 | 82.3% |
Among the retrospectively collected images from Wuhan Puai Hospital and Wuhan Union Hospital, DCNN3 achieved an accuracy of 82.6%.
3.2. Man-machine competition
In the testing dataset for the man-machine competition, DCNN3 correctly classified the BD stations with an accuracy of 88.3%. The accuracy for expert C and endoscopists D, E and F was 90%, 85.8%, 74.2% and 84.2%, respectively. For BD annotation, the model had a Dice of 0.59. Among the images that contained BD, the Dice was 0.72 for the model and 0.74, 0.65, 0.67 and 0.65 for the endoscopists, respectively (Fig. 5). Among all images, the Dice for expert C and endoscopists D, E and F was 0.61, 0.54, 0.55 and 0.54 (Table S1). The inter-observer agreement between DCNN3 and the experts is shown in Table S2.
The accuracy, Dice, recall and precision in the man-machine competition. In the man-machine competition, the accuracy for DCNN3, expert C, and endoscopists D, E and F was 88.3%, 90%, 85.8%, 74.2% and 84.2%, respectively. Among the images with bile duct, the Dice for DCNN4, expert C, and endoscopists D, E and F was 0.72, 0.74, 0.65, 0.67 and 0.65, respectively.
3.3. Crossover study
In the crossover study, trainees achieved a time point accuracy of 60.8% without augmentation as a group. With augmentation, the accuracy significantly improved from 60.8% to 76.3% (P < 0.01, 95% C.I. 20.9–27.2). The underlying model had an accuracy of 86.2%. The performances of individual trainees are reported in Table 4. All 12 trainees made significant improvements with the augmentation.
Table 4
Trainees' station recognition accuracy with and without augmentation.
| | | Without augmentation (%) | With augmentation (%) | Increase (%) (95% C.I.) | P value |
|---|---|---|---|---|---|
| Model | | – | 86.2 | – | – |
| Group A | Trainee A | 72.4 | 83.6 | 11.2 (5.2–21.6) | <0.01 |
| | Trainee B | 43.1 | 73.3 | 30.2 (17.6–41.4) | <0.01 |
| | Trainee C | 45.7 | 62.9 | 17.2 (4.42–29.3) | <0.01 |
| | Trainee D | 56 | 77.6 | 21.6 (9.46–32.8) | <0.01 |
| | Trainee E* | 70 | 78.3 | 8.3 (−2.7 to 19.6) | <0.01 |
| | Trainee F* | 65.8 | 77.5 | 11.7 (0.5–23.3) | <0.01 |
| Group B | Trainee G | 57.8 | 73.3 | 15.5 (3.3–27.1) | <0.01 |
| | Trainee H | 69.8 | 83.6 | 13.8 (2.9–24.3) | <0.01 |
| | Trainee I | 63.8 | 74.1 | 10.3 (−1.6 to 21.9) | <0.01 |
| | Trainee J | 42.2 | 62.1 | 19.8 (7.0–31.8) | <0.01 |
| | Trainee K* | 69.2 | 82.5 | 13.3 (2.8–24.4) | <0.01 |
| | Trainee L* | 73.3 | 86.7 | 13.4 (2.6–23.0) | <0.01 |
| Total | | 60.8 | 76.3 | 15.5 (20.9–27.2) | <0.01 |
4. Discussion
In this study, we constructed an artificial intelligence-assisted linear EUS system, which can recognize the standard stations for BD scanning and prompt physicians with the corresponding operation instruction. Moreover, the system can also segment the BD with high precision and automatically measure the BD diameter, which could simplify the physician's operation. In the comparison with endoscopists, the DCNN3 accuracy was better than that of the senior EUS endoscopists and comparable with that of the EUS expert.
EUS provides improved imaging functions and has been available on the market since the 1980s, but EUS systems have been hesitantly adopted by some gastroenterologists due to the steep learning curve and heavy reliance on the operator [25]. Although efforts to shorten the EUS learning curve have been ongoing, and some specialized centers use computer-based simulators or live animal models to improve the learning curve, EUS is still not fully applied globally [26], [27], [28]. In particular, although EUS-BD has a significant clinical impact on the treatment of patients, the performance of EUS-BD is still limited to tertiary referral centers [29]. Since EUS strongly relies on the training, skills and experience of endoscopists, the development of a real-time ultrasonographic interpretation system is essential for the widespread adoption of EUS.
The standard stations contain specific anatomical landmarks and represent the precise location where the transducer is scanning. Therefore, the stations can serve as navigation marks under ultrasonography. Each station has specific operating techniques, and the doctor can complete ultrasound endoscopic scanning by identifying the standard stations. On the other hand, different parts of the BD can be observed from each station, and station recognition can remind endoscopists of the parts they have missed. Therefore, in recent years, EUS training has gradually focused on standard station scanning instruction. Wani et al developed a scoring tool to evaluate the learning curve of advanced ultrasound endoscopy trainees [30,31]. The tool utilizes a 4-point scoring system: 1 (superior) = achieves independently, 2 (advanced) = achieves with minimal verbal instruction, 3 (intermediate) = achieves with multiple verbal instructions or hands-on assistance, and 4 (novice) = unable to complete, requiring the trainer to take over. The tool is scored based on the scanning performance of the advanced trainees at each station. If endoscopists can obtain positioning information and the corresponding operation method from a real-time augmentation system, they can achieve competence in EUS in a shorter time. In our crossover study, the augmentation from our system significantly improved the accuracy of station recognition and BD segmentation by the endoscopists. The results from the crossover study indicate that the system has the potential to shorten the learning curve in the future.
Since the initial report on the use of BD puncture after failed ERCP in 2004, several studies have reported BD puncture as an effective salvage technique for achieving biliary cannulation after failed ERCP [32], [33], [34], [35]. The BD puncture techniques comprise three methods based on the approach route: TG, from the second portion of the duodenum in a short endoscopic position, and from the bulb of the duodenum in a long endoscopic position. Though there is no formal consensus on how to decide between the intrahepatic and extrahepatic approaches, studies have suggested that endoscopists should choose the approach according to the bile duct anatomy. Therefore, comprehensive BD scanning is critical for route selection. The system in our current study can contribute to comprehensive BD scanning and, thus, to the evaluation of dilation and stricture. Moreover, the function of automatically measuring the diameter of the BD can further improve the diagnostic sensitivity for BD dilation.
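One simple way automatic diameter measurement could work on a segmentation mask (a minimal sketch under our own assumptions, not the paper's actual method) is to take the widest run of duct pixels and scale it by the physical pixel spacing:

```python
def estimate_duct_diameter_mm(mask, mm_per_pixel):
    """Estimate duct diameter as the longest horizontal run of duct
    pixels in a binary mask, scaled by the physical pixel spacing.

    mask: list of rows, each a list of 0/1 values (1 = duct pixel).
    """
    widest = 0
    for row in mask:
        run = 0
        for v in row:
            run = run + 1 if v else 0
            widest = max(widest, run)
    return widest * mm_per_pixel

# Synthetic circular "duct" of radius 10 pixels on a 64x64 grid.
mask = [[1 if (x - 32) ** 2 + (y - 32) ** 2 <= 100 else 0
         for x in range(64)] for y in range(64)]
diameter = estimate_duct_diameter_mm(mask, mm_per_pixel=0.1)  # ~2.1 mm
```

The slight overestimate versus the true 2.0 mm diameter comes from pixel discretization; a deployed system would calibrate spacing from the ultrasound depth setting.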
Regarding dissemination of the system, it runs stably and smoothly on a computer with an RTX 3070 graphics card. The price of such a configuration is about $2000, which is affordable for a practicing gastroenterologist in private practice. The system runs fully automatically and gives real-time guidance to endoscopists. Therefore, this system should be easy to disseminate among practicing gastroenterologists in private practice.
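Real-time capability on such hardware can be checked with a simple frame-rate benchmark; this sketch uses a stand-in workload, since the actual model inference code is not public:

```python
import time

def measure_fps(process_frame, n_frames: int = 50) -> float:
    """Run a per-frame callable n_frames times and return frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Stand-in workload; a deployment would call the segmentation and
# station-recognition networks on each captured EUS frame instead.
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
# Endoscopic video is typically ~25-30 frames/s, so sustained throughput
# above that threshold would indicate real-time operation.
```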
There are several limitations to our study. First, though the accuracy of this system has been fully validated, its effect was only tested in an augmentation reading study. In the future, a randomized study evaluating the effect of the system will be needed. Second, a lesion identification function has not been added to this system. This is because, although EUS can evaluate the nature of stenosis and dilatation of the BD, its accuracy is not as good as that of SpyGlass cholangioscopy, and the role of EUS in the biliary system is mainly focused on screening and treatment. However, a lesion identification system is under development in our unpublished work.
In conclusion, we constructed an EUS BD scanning augmentation system based on deep learning. The accuracy of this system has been validated both internally and externally. In the future, this system has the potential to play an important role in EUS training and quality control.
Contributors
YNY and HGY conceived and designed the study; LHZ, ZHL, HLW, XWD, LRZ, BX, WZ, DC collected the images; SH and BQZ trained and tested the models; LWY, JL and JL contributed to image annotation, data analysis and manuscript writing. All authors contributed to critical revision of the manuscript, and all authors read and approved the final version for submission.
Data sharing
Individual de-identified participant data underlying the results reported in this article, together with the study protocol, will be shared with investigators whose proposed use of the data has been approved by an independent review committee. The data can only be used to achieve the aims in the approved proposal. Data disclosure begins nine months after and ends 36 months after article publication. To gain access, data requesters will need to sign a data access agreement. Proposals should be directed to the corresponding author.
Declaration of Competing Interest
None.
Acknowledgments
Project of Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision (grant no. 2018BCC337); Hubei Province Major Science and Technology Innovation Project (grant no. 2018-916-000-008); National Natural Science Foundation of China (grant no. 81770899).
Footnotes
Appendix. Supplementary materials
References
1. Giljaca V, Gurusamy KS, Takwoingi Y, Higgie D, Poropat G, Štimac D. Endoscopic ultrasound versus magnetic resonance cholangiopancreatography for common bile duct stones. Cochrane Database Syst Rev. 2015;(2).
2. Polkowski M, Regula J, Tilszer A, Butruk E. Endoscopic ultrasound versus endoscopic retrograde cholangiography for patients with intermediate probability of bile duct stones: a randomized trial comparing two management strategies. Endoscopy. 2007;39(4):296–303.
3. Meeralam Y, Al-Shammari K, Yaghoobi M. Diagnostic accuracy of EUS compared with MRCP in detecting choledocholithiasis: a meta-analysis of diagnostic test accuracy in head-to-head studies. Gastrointest Endosc. 2017;86(6):986–993.
4. Sharma M, Pathak A, Shoukat A, Rameshbabu CS, Ajmera A, Wani ZA. Imaging of common bile duct by linear endoscopic ultrasound. World J Gastrointest Endosc. 2015;7(15):1170–1180.
5. Irisawa A, Yamao K. Curved linear array EUS technique in the pancreas and biliary tree: focusing on the stations. Gastrointest Endosc. 2009;69(2 Suppl):S84–S89.
6. Committee EFS, Yamao K, Irisawa A, Inoue H, Matsuda K, Kida M. Standard imaging techniques of endoscopic ultrasound-guided fine-needle aspiration using a curved linear array echoendoscope. Dig Endosc. 2007;19:S180–S205.
7. Domagk D, Oppong KW, Aabakken L, Czakó L, Gyökeres T, Manes G. Performance measures for ERCP and endoscopic ultrasound: a European Society of Gastrointestinal Endoscopy (ESGE) quality improvement initiative. Endoscopy. 2018;50(11):1116–1127.
8. Wani S, Keswani R, Hall M, Han S, Ali MA, Brauer B. A prospective multicenter study evaluating learning curves and competence in endoscopic ultrasound and endoscopic retrograde cholangiopancreatography among advanced endoscopy trainees: the Rapid Assessment of Trainee Endoscopy Skills study. Clin Gastroenterol Hepatol. 2017;15(11).
9. Shahidi N, Ou G, Lam E, Enns R, Telford J. When trainees reach competency in performing endoscopic ultrasound: a systematic review. Endosc Int Open. 2017;5(4):E239–E243.
10. Kim GH, Bang SJ, Hwang JH. Learning models for endoscopic ultrasonography in gastrointestinal endoscopy. World J Gastroenterol. 2015;21(17):5176–5182.
11. Tsuchiya T, Itoi T, Sofuni A, Tonozuka R, Mukai S. Endoscopic ultrasonography-guided rendezvous technique. Dig Endosc. 2016;28:96–101.
12. Okuno N, Hara K, Mizuno N, Hijioka S, Tajika M, Tanaka T. Endoscopic ultrasound-guided rendezvous technique after failed endoscopic retrograde cholangiopancreatography: which approach route is the best? Intern Med. 2017;56(23):3135–3143.
13. Sarkaria S, Sundararajan S, Kahaleh M. Endoscopic ultrasonographic access and drainage of the common bile duct. Gastrointest Endosc Clin. 2013;23(2):435–452.
14. Gupta K, Perez-Miranda M, Kahaleh M, Artifon EL, Itoi T, Freeman ML. Endoscopic ultrasound-assisted bile duct access and drainage: multicenter, long-term analysis of approach, outcomes, and complications of a technique in evolution. J Clin Gastroenterol. 2014;48(1):80–87.
15. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.
16. Zhang J, Zhu L, Yao L, Ding X, Chen D, Wu H. Deep learning-based pancreas segmentation and station recognition system in EUS: development and validation of a useful training tool (with video). Gastrointest Endosc. 2020.
17. Kaneko M, Katanuma A, Maguchi H, Takahashi K, Osanai M, Yane K. Prospective, randomized, comparative study of delineation capability of radial scanning and curved linear array endoscopic ultrasound for the pancreaticobiliary region. Endosc Int Open. 2014;2(3):E160.
18. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, editors. 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16). 2016. TensorFlow: a system for large-scale machine learning.
20. Mikołajczyk A, Grochowski M, editors. 2018 International Interdisciplinary PhD Workshop (IIPhDW). IEEE; 2018. Data augmentation for improving deep learning in image classification problem.
21. Duvenaud D, Maclaurin D, Adams R, editors. Early stopping as nonparametric variational inference. Artificial Intelligence and Statistics; 2016.
22. Deng J, Dong W, Socher R, Li L-J, Kai L, Li F-F. 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009. ImageNet: a large-scale hierarchical image database; pp. 248–255.
23. Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. Springer; 2018. UNet++: a nested U-Net architecture for medical image segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; pp. 3–11.
24. He K, Zhang X, Ren S, Sun J, editors. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. Deep residual learning for image recognition.
25. Wani S, Coté GA, Keswani R, Mullady D, Azar R, Murad F. Learning curves for EUS by using cumulative sum analysis: implications for American Society for Gastrointestinal Endoscopy recommendations for training. Gastrointest Endosc. 2013;77(4):558–565.
26. Bhutani MS, Wong RF, Hoffman BJ. Training facilities in gastrointestinal endoscopy: an animal model as an aid to learning endoscopic ultrasound. Endoscopy. 2006;38(9):932–934.
27. Fritscher-Ravens A, Cuming T, Dhar S, Parupudi SVJ, Patel M, Ghanbari A. Endoscopic ultrasound-guided fine needle aspiration training: evaluation of a new porcine lymphadenopathy model for in vivo hands-on teaching and training, and review of the literature. Endoscopy. 2013;45(2):114–120.
28. Konge L, Clementsen PF, Ringsted C, Minddal V, Larsen KR, Annema JT. Simulator training for endobronchial ultrasound: a randomised controlled trial. Eur Respir J. 2015;46(4):1140–1149.
29. Eisen GM, Dominitz JA, Faigel DO, Goldstein JA, Petersen BT, Raddawi HM. Guidelines for credentialing and granting privileges for endoscopic ultrasound. Gastrointest Endosc. 2001;54(6):811–814.
30. Wani S, Han S, Simon V, Hall M, Early D, Aagaard E. Setting minimum standards for training in EUS and ERCP: results from a prospective multicenter study evaluating learning curves and competence among advanced endoscopy trainees. Gastrointest Endosc. 2019;89(6).
31. Wani S, Hall G, Keswani RN, Aslanian HR, Casey B, Burbridge R. Variation in aptitude of trainees in endoscopic ultrasonography, based on cumulative sum analysis. Clin Gastroenterol Hepatol. 2015;13(7):1318–25.e2.
32. Brauer BC, Chen YK, Fukami N, Shah RJ. Single-operator EUS-guided cholangiopancreatography for difficult pancreaticobiliary access (with video). Gastrointest Endosc. 2009;70(3):471–479.
33. Maranki J, Hernandez A, Arslan B, Jaffan A, Angle J, Shami V. Interventional endoscopic ultrasound-guided cholangiography: long-term experience of an emerging alternative to percutaneous transhepatic cholangiography. Endoscopy. 2009;41(6):532–538.
34. Kim Y, Gupta K, Mallery S, Li R, Kinney T, Freeman ML. Endoscopic ultrasound rendezvous for bile duct access using a transduodenal approach: cumulative experience at a single center. A case series. Endoscopy. 2010;42(6):496–502.
Articles from EBioMedicine are provided here courtesy of Elsevier
Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7921468/