<kbd id="afajh"><form id="afajh"></form></kbd>
<strong id="afajh"><dl id="afajh"></dl></strong>
    <del id="afajh"><form id="afajh"></form></del>
        1. <th id="afajh"><progress id="afajh"></progress></th>
          <b id="afajh"><abbr id="afajh"></abbr></b>
          <th id="afajh"><progress id="afajh"></progress></th>

[Paper Walkthrough] Medical AI | Fully Automated Echocardiogram Interpretation in Clinical Practice | Circulation | 2018


          2020-11-21 05:19

Contents:
• 0 The Paper

• 1 Overview

• 2 Pipeline

• 3 Technical Details

  • 3.1 Preprocessing

  • 3.2 Convolutional Network

  • 3.3 VGG Classification Network Architecture

  • 3.4 Image Segmentation

• 4 Problems Encountered

0 The Paper

The paper was published in 2018 in the medical journal Circulation (impact factor 20+): "Fully Automated Echocardiogram Interpretation in Clinical Practice". This post is a review of my overall reading and can serve as a guided tour. The algorithmic side of the paper is not difficult: the classification model is VGG, the segmentation model is U-Net, and the loss functions and image processing are standard, so it is essentially an application of established methods to the medical domain. The post interleaves excerpts from the English original with my own interpretation and distilled notes.

1 Overview

          Using 14 035 echocardiograms spanning a 10-year period, we trained and evaluated convolutional neural network models for multiple tasks, including automated identification of 23 viewpoints and segmentation of cardiac chambers across 5 common views. The segmentation output was used to quantify chamber volumes and left ventricular mass, determine ejection fraction, and facilitate automated determination of longitudinal strain through speckle tracking. Results were evaluated through comparison to manual segmentation and measurements from 8666 echocardiograms obtained during the routine clinical workflow. Finally, we developed models to detect 3 diseases: hypertrophic cardiomyopathy, cardiac amyloid, and pulmonary arterial hypertension.


          Convolutional neural networks accurately identified views (eg, 96% for parasternal long axis), including flagging partially obscured cardiac chambers, and enabled the segmentation of individual cardiac chambers. The resulting cardiac structure measurements agreed with study report values (eg, median absolute deviations of 15% to 17% of observed values for left ventricular mass, left ventricular diastolic volume, and left atrial volume). In terms of function, we computed automated ejection fraction and longitudinal strain measurements (within 2 cohorts), which agreed with commercial software-derived values (for ejection fraction, median absolute deviation=9.7% of observed, N=6407 studies; for strain, median absolute deviation=7.5%, n=419, and 9.0%, n=110) and demonstrated applicability to serial monitoring of patients with breast cancer for trastuzumab cardiotoxicity. Overall, we found automated measurements to be comparable or superior to manual measurements across 11 internal consistency metrics (eg, the correlation of left atrial and ventricular volumes). Finally, we trained convolutional neural networks to detect hypertrophic cardiomyopathy, cardiac amyloidosis, and pulmonary arterial hypertension with C statistics of 0.93, 0.87, and 0.85, respectively.


2 Pipeline

First classify the data by view, then perform segmentation.

Preprocessing entailed automated downloading of echocardiograms in Digital Imaging and Communications in Medicine format, separating videos from still images, extracting metadata (eg, frame rate, heart rate), converting them into numeric arrays for matrix computations, and deidentifying images by overwriting patient health information. We next used convolutional neural networks (described later) for automatically determining echocardiographic views. Based on the identified views, videos were routed to specific segmentation models (parasternal long axis [PLAX], parasternal short axis, apical 2-chamber [A2c], apical 3-chamber, and apical 4-chamber [A4c]), and the output was used to derive chamber measurements, including lengths, areas, volumes, and mass estimates. Next, we generated 2 commonly used automated measures of left ventricular (LV) function: ejection fraction and longitudinal strain. Finally, we derived models to detect 3 diseases: hypertrophic cardiomyopathy, pulmonary arterial hypertension, and cardiac amyloidosis.

My notes: the view-determination step is essentially an image classification task, and the routing step tells us segmentation is organized as five view-specific models (PLAX, parasternal short axis, A2c, A3c, A4c). The segmentation output then feeds the chamber measurements.
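The preprocessing code is not published, but the steps the paper names (read a DICOM file, extract frame-rate and heart-rate metadata, convert pixel data to a numeric array, overwrite the burned-in patient-information region) map naturally onto pydicom. A minimal sketch under those assumptions follows; the strip location in `deidentify` is a placeholder, not the paper's actual region.

```python
import numpy as np
import pydicom  # pip install pydicom

def load_echo(path):
    """Read one echo DICOM file and return (frames, metadata), per the paper's pipeline."""
    ds = pydicom.dcmread(path)
    frames = ds.pixel_array                 # cine loop: (n_frames, H, W[, 3])
    meta = {
        # Standard DICOM keywords; availability varies by vendor and output setting.
        "frame_rate": getattr(ds, "CineRate", None),    # (0018,0040)
        "heart_rate": getattr(ds, "HeartRate", None),   # (0018,1088)
        "n_frames": int(getattr(ds, "NumberOfFrames", 1)),
    }
    return frames.astype(np.float32), meta

def deidentify(frames, top_rows=60):
    """Blank the top rows of every frame, where burned-in patient text is assumed to sit."""
    frames = frames.copy()
    frames[:, :top_rows] = 0
    return frames
```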

3 Technical Details

Specifically, 277 echocardiograms collected over a 10-year period were used to derive a view classification model (Table II in the online-only Data Supplement). The image segmentation model was trained from 791 images divided over 5 separate views (Table III in the online-only Data Supplement). Comparison of automated and manual measurements was made against 8666 echocardiograms, with the majority of measurements made from 2014 to 2017 (Table IV in the online-only Data Supplement). For this purpose, we used all studies where these measurements were available (ie, there was no selection bias). The number of images used for training the different segmentation models was not planned in advance, and models were retrained as more data accrued over time. From initial testing, we recognized that at least 60 images would be needed, and we allocated more training data and resources to A2c and A4c views because these were more central to measurements for both structure and function.


From this we can see: [only a few hundred labeled samples were enough for training].

3.1 Preprocessing

We identified 260 patients at UCSF who met guideline-based criteria for hypertrophic cardiomyopathy: "unexplained left ventricular (LV) hypertrophy (maximal LV wall thickness ≥ 15 mm) associated with nondilated ventricular chambers in the absence of another cardiac or systemic disease that itself would be capable of producing the magnitude of hypertrophy evident in a given patient." [9] These patients were selected from 2 sources: the UCSF Familial Cardiomyopathy Clinic and the database of clinical echocardiograms. Patients had a variety of thickening patterns, including upper septal hypertrophy, concentric hypertrophy, and predominantly apical hypertrophy. A subset of patients underwent genetic testing. Overall, 18% of all patients had pathogenic or likely pathogenic mutations. We downloaded all echocardiograms within the UCSF database corresponding to these patients and confirmed evidence of hypertrophy. We excluded bicycle, treadmill, and dobutamine stress echocardiograms because these tend to include slightly modified views or image annotations that could have confounding effects on models trained for disease detection. We also excluded studies of patients conducted after septal myectomy or alcohol septal ablation and studies of patients with pacemakers or implantable defibrillators. Control patients were also selected from the UCSF echocardiographic database. For each hypertrophic cardiomyopathy (HCM) case study, ≤5 matched control studies were selected, with matching by age (in 10-year bins), sex, year of study, ultrasound device manufacturer, and model. This process was simplified by organizing all of our studies in a nested format in a python dictionary so we can look up studies by these characteristics. Given that the marginal cost of analyzing additional samples is minimal in our automated system, we did not perform a greedy search for matched controls. Case, control, and study characteristics are described in Table V in the online-only Data Supplement. We did not require that cases were disease-free, only that they did not have HCM.

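The supplement mentions organizing all studies in a nested python dictionary so that controls can be looked up by these matching characteristics. A minimal sketch of that idea (the key names and study-record fields are my assumptions):

```python
from collections import defaultdict

def age_bin(age):
    """10-year age bins, matching the paper's binning scheme."""
    return int(age // 10) * 10

def build_index(studies):
    """Index study records by (age bin, sex, study year, device manufacturer, model)."""
    index = defaultdict(list)
    for s in studies:  # each s: dict describing one echocardiogram study
        key = (age_bin(s["age"]), s["sex"], s["year"], s["manufacturer"], s["model"])
        index[key].append(s)
    return index

def matched_controls(index, case, k=5):
    """Up to k controls matching the case on all five characteristics (no greedy search)."""
    key = (age_bin(case["age"]), case["sex"], case["year"], case["manufacturer"], case["model"])
    return [s for s in index[key] if not s.get("hcm", False)][:k]
```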

(Supplement) Additionally, each echocardiogram contains periphery information unique to different output settings on ultrasound machines used to collect the data. This periphery information describes additional data collected (i.e. electrocardiogram, blood pressure, etc.). To improve generalizability across institutions, we wanted the classification of views to use ultrasound data and not metadata presented in the periphery. To address this issue, every image is randomly cropped between 0-20 pixels from each edge and resized to 224x224 during training. This provides variation in the periphery information, which guides the network to target more relevant features and improves the overall robustness of our view classification models.

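A sketch of the described augmentation (random 0-20 px crop from each edge, then resize to 224x224), assuming scikit-image for resizing, which the supplement itself uses for the classifier input:

```python
import numpy as np
from skimage.transform import resize  # pip install scikit-image

def random_edge_crop(img, rng, max_crop=20, out_size=(224, 224)):
    """Crop 0-20 px off each edge at random, then resize to the network input size."""
    h, w = img.shape[:2]
    top, bottom, left, right = rng.integers(0, max_crop + 1, size=4)
    cropped = img[top:h - bottom, left:w - right]
    # order=1 is linear interpolation; preserve_range keeps the intensity scale.
    return resize(cropped, out_size, order=1, preserve_range=True)

rng = np.random.default_rng(0)
frame = np.random.rand(300, 400)           # stand-in for one echo frame
augmented = random_edge_crop(frame, rng)   # shape (224, 224)
```

This perturbs the periphery (where vendor-specific overlays live) while leaving the central ultrasound content mostly intact, which is exactly the variation the supplement wants the classifier to learn to ignore.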

(Supplement) Training data comprised 10 random frames from each manually labeled echocardiographic video. We trained our network on approximately 70,000 pre-processed images. For stochastic optimization, we used the ADAM optimizer [2] with an initial learning rate of 1e-5 and mini-batch size of 64. For regularization, we applied a weight decay of 1e-8 on all network weights and dropout with probability 0.5 on the fully connected layers. We ran our tests for 20 epochs or ~20,000 iterations, which takes ~3.5 hours on a Nvidia GTX 1080. Runtime per video was 600 ms on average. Accuracy was assessed by 5-fold cross-validation at the individual image level. When deploying the model, we would average the prediction probabilities for 10 randomly selected images from each video.

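The stated optimization setup is easy to reproduce; here is a minimal PyTorch sketch (the framework choice is mine, and the one-layer placeholder merely stands in for the classifier of section 3.3):

```python
import torch
import torch.nn as nn

# Placeholder standing in for the 23-way view classifier (see section 3.3).
model = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 23))

# Adam with the supplement's initial learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=1e-8)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(64, 1, 224, 224)          # one mini-batch of 64 grayscale frames
y = torch.randint(0, 23, (64,))          # dummy view labels
loss_fn(model(x), y).backward()
optimizer.step()
```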

[Preprocessing done in the paper]

1. Bicycle, treadmill, and dobutamine stress echocardiograms were excluded, because these tend to include slightly modified views or image annotations that could confound the disease-detection models;
2. Studies of patients after septal myectomy or alcohol septal ablation were also excluded, as were studies of patients with pacemakers or implantable defibrillators.
3. Control patients were likewise drawn from the UCSF echocardiographic database.
4. To improve robustness across machines, 0 to 20 pixels are randomly cropped from each edge of every image, which is then resized to 224x224 during training.
5. Ten frames are drawn from each labeled video as training inputs, giving ~70,000 inputs in total; the convolutions are 2D, and at inference time the mean of the 10 frames' predicted probabilities is taken as the video-level prediction (see the sketch after this list).
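A minimal sketch of that deployment rule; `predict_probs` is a hypothetical callable wrapping the trained classifier:

```python
import numpy as np

def predict_video(frames, predict_probs, n_samples=10, rng=None):
    """Video-level view prediction: mean class probability over random frames.

    frames:        array of shape (n_frames, 224, 224)
    predict_probs: maps a (n, 224, 224) batch to (n, 23) class probabilities
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(frames), size=min(n_samples, len(frames)), replace=False)
    mean_probs = predict_probs(frames[idx]).mean(axis=0)  # average over sampled frames
    return int(mean_probs.argmax()), mean_probs
```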

3.2 Convolutional Network

We first developed a model for view classification. Typical echocardiograms consist of ≥70 separate videos representing multiple viewpoints. Furthermore, with rotation and adjustment of the zoom level of the ultrasound probe, sonographers actively focus on substructures within an image, thus creating many variations of these views. Unfortunately, none of these views is labeled explicitly. Thus, the first learning step involves teaching the machine to recognize individual echocardiographic views. Models are trained using manual labels assigned to individual images. Using the 277 studies described earlier, we assigned 1 of 30 labels to each video (eg, parasternal long axis or subcostal view focusing on the abdominal aorta). Because discrimination of all views (subcostal, hepatic vein versus subcostal, inferior vena cava) was not necessary for our downstream analyses, we ultimately used only 23 view classes for our final model (Table IX in the online-only Data Supplement). The training data consisted of 7168 individually labeled videos.


3.3 VGG Classification Network Architecture

[Quick summary]: echocardiogram data spans many viewpoints, so the views must be classified first. Thirty view labels were defined and videos from 277 manually annotated studies were used to train the classification model; of the 30 classes, 23 were kept for the final model. The training data amounted to 7168 individually labeled videos.

The VGG network [1] takes a fixed-sized input of grayscale images with dimensions 224x224 pixels (we use scikit-image to resize by linear interpolation). Each image is passed through ten convolution layers, five max-pool layers, and three fully connected layers. (We experimented with a larger number of convolution layers but saw no improvement for our task). All convolutional layers consist of 3x3 filters with stride 1 and all max-pooling is applied over a 2x2 window with stride 2. The convolution layers consist of 5 groups of 2 convolution layers, which are each followed by 1 max pool layer. The stack of convolutions is followed by two fully connected layers, each with 4096 hidden units, and a final fully connected layer with 23 output units. The output is fed into a 23-way softmax layer to represent 23 different echocardiographic views. This final step represents a standard multinomial logistic regression with 23 mutually exclusive classes. The predictors in this model are the output nodes of the neural network. The view with the highest probability was selected as the predicted view.

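A PyTorch sketch of that layout (the framework and the per-block channel widths are my assumptions; the paper specifies only the layer counts, filter sizes, and FC widths):

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 stride-1 convolutions followed by 2x2 stride-2 max-pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2, 2),
    )

class ViewVGG(nn.Module):
    """VGG-style 23-way echocardiographic view classifier."""
    def __init__(self, n_views=23):
        super().__init__()
        # 5 groups of 2 conv layers = 10 conv layers; 5 poolings: 224 -> 7.
        self.features = nn.Sequential(
            conv_block(1, 64), conv_block(64, 128), conv_block(128, 256),
            conv_block(256, 512), conv_block(512, 512),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, n_views),  # softmax is applied in the loss / at inference
        )

    def forward(self, x):              # x: (N, 1, 224, 224) grayscale
        return self.classifier(self.features(x))

logits = ViewVGG()(torch.rand(2, 1, 224, 224))   # -> shape (2, 23)
```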

3.4 Image Segmentation

To train image segmentation models, we derived a CNN based on the U-net architecture described by Ronneberger et al [3]. The U-net-based network we used accepts a 384x384 pixel fixed-sized image as input, and is composed of a contracting path and an expanding path with a total of 23 convolutional layers. The contracting path is composed of twelve convolutional layers with 3x3 filters followed by a rectified linear unit and four max pool layers each using a 2x2 window with stride 2 for down-sampling. The expanding path is composed of ten convolutional layers with 3x3 filters followed by a rectified linear unit, and four 2x2 up-convolution layers. Every up-convolution in the expansion path is concatenated with a feature map from the contracting path with same dimension. This is performed to recover the loss of pixel and feature locality due to downsampling images, which in turn enables pixel-level classification. The final layer uses a 1x1 convolution to map each feature vector to the output classes. Separate U-net CNN networks were trained to perform segmentation on images from PLAX, PSAX (at the level of the papillary muscle), A4c, A3c, and A2c views. Training data was derived for each class of echocardiographic view via manual segmentation. We performed data augmentation techniques including cropping and blacking out random areas of the echocardiographic image in order to improve model performance in the setting of a limited amount of training data. The rationale is that models that are robust to such variation are likely to generalize better to unseen data. Training data underwent varying degrees of cropping (or no cropping) at random amounts for each edge of the image. Similarly, circular areas of random size set at random locations in the echocardiographic image were set to 0-pixel intensity to achieve "blackout". This U-net architecture and the data augmentation techniques enabled highly efficient training, achieving accurate segmentation from a relatively low number of training examples. Finally, in addition to pixelwise cross-entropy loss, we included a distance-based loss penalty for misclassified pixels. The loss function was based on the distance from the closest pixel with the same misclassified class in the ground truth image. This helped mitigate erroneous pixel predictions across the images. We used an Intersection Over Union (IoU) metric for assessment of results. The IoU takes the number of pixels which overlap between the ground truth and automated segmentation (for a given class, such as left atrial blood pool) and divides them by the total number of pixels assigned to that class by either method. It ranges between 0 and 100.


[Brief interpretation]

The model is U-Net, with 2D input images of size 384x384. The architecture is conventional: a contracting path and an expanding path with 23 convolutional layers in total. The contracting path has twelve 3x3 convolutional layers plus four max-pool layers, each pooling over a 2x2 window with stride 2 for downsampling; the expanding path has ten 3x3 convolutional layers plus four 2x2 up-convolution layers.

Because training data was limited, augmentation was used: edge cropping (as mentioned earlier) and blacking out random circular regions of the image. Besides cross-entropy, the loss adds a distance-based penalty for misclassified pixels, where the distance is measured from a misclassified pixel to the nearest ground-truth pixel of the same (wrongly predicted) class. My reading is that the farther away the stray pixel, the larger the penalty, which encourages the predicted pixels to stay spatially coherent; one plausible implementation is sketched below.
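The paper does not spell out the exact weighting, so the following is only one plausible reading of the penalty, using SciPy's Euclidean distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_penalty(pred, gt, n_classes):
    """Mean distance penalty over the image (zero at correctly classified pixels).

    pred, gt: integer class maps of shape (H, W). A pixel predicted as class c
    but labeled otherwise is penalized by its Euclidean distance to the nearest
    ground-truth pixel of class c.
    """
    penalty = np.zeros(pred.shape, dtype=np.float64)
    for c in range(n_classes):
        gt_c = gt == c
        if not gt_c.any():
            continue
        dist_to_c = distance_transform_edt(~gt_c)  # 0 on class-c pixels, grows with distance
        wrong_c = (pred == c) & ~gt_c              # pixels wrongly predicted as c
        penalty[wrong_c] = dist_to_c[wrong_c]
    return penalty.mean()
```

Added to the pixelwise cross-entropy, such a term makes far-away false positives cost much more than small boundary disagreements.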

Results are assessed with the IoU metric, reported on a 0 to 100 scale.
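A minimal IoU computation for one class, matching the definition quoted above:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over union for one class (eg, left atrial blood pool), 0-100."""
    pred_mask, gt_mask = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * np.logical_and(pred_mask, gt_mask).sum() / union
```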

4 Problems Encountered

During the training process, we found that our CNN models readily segmented the LV across a wide range of videos from hundreds of studies, and we were thus interested in understanding the origin of the extreme outliers in our Bland-Altman plots (Figure 4). We undertook a formal analysis of the 20 outlier cases where the discrepancy between manual and automated measurements for LV end diastolic volume was highest (>99.5th percentile). This included 10 studies where the automated value was estimated to be much higher than manual (DiscordHI) and 10 where the reverse was seen (DiscordLO). For each study, we repeated the manual LV end diastolic volume measurement. For every 1 of the 10 studies in DiscordHI, we determined that the automated result was in fact correct (median absolute deviation=8.6% of the repeat manual value), whereas the prior manual measurement was markedly inaccurate (median absolute deviation=70%). It is unclear why these incorrect values had been entered into our clinical database. For DiscordLO (ie, much lower automated value), the results were mixed. For 2 of the 10 studies, the automated value was correct and the previous manual value erroneous; for 3 of the 10, the repeated value was intermediate between automated and manual. For 5 of the 10 studies in DiscordLO, there were clear problems with the automated segmentation. In 2 of the 5, intravenous contrast had been used in the study, but the segmentation algorithm, which had not been trained on these types of data, attempted to locate a black blood pool. The third poorly segmented study involved a patient with complex congenital heart disease with a double outlet right ventricle and membranous ventricular septal defect. The fourth study involved a mechanical mitral valve with strong acoustic shadowing and reverberation artifact. Finally, the fifth poorly segmented study had a prominent calcified false tendon in the LV combined with a moderately sized pericardial effusion. This outlier analysis thus highlighted the presence of inaccuracies in our clinical database as well as the types of studies that remain challenging for our automated segmentation algorithms.


[My understanding]: during training the CNN proved well able to locate and segment the LV across a large number of videos, so the authors dug into the extreme outliers. The upshot: for some of the outliers the manual value in the clinical database was wrong, and for the rest the automated prediction was wrong. Among the genuinely mis-segmented studies, some failed because intravenous contrast had been used, which the model had never been trained on; the others involved highly unusual hearts (eg, complex congenital heart disease with a double-outlet right ventricle and membranous ventricular septal defect, a mechanical mitral valve with strong acoustic shadowing, or a calcified false tendon with pericardial effusion).
