Funding: supported by SK Hynix AICC (P23.03); by a National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (2023R1A2C3004880); by the Ministry of Education (2020R1A6A1A03047902 and 2022R1A6A1A03052954); by the Basic Science Research Program through the NRF funded by the Ministry of Education (RS-2024-00415450); by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II191906, Artificial Intelligence Graduate School Program (POSTECH)); by the BK21 FOUR project; and by Glocal University 30 projects.
Abstract: Defect inspection is critical in semiconductor manufacturing for improving product quality at reduced production costs. A newly introduced manufacturing process is often accompanied by a new set of defects that can cause serious damage to the manufacturing system. Therefore, distinguishing new defects from existing ones provides crucial clues for fixing issues in the newly introduced process. We present a multi-task hybrid transformer (MT-former) that distinguishes novel defects from known defects in electron microscope images of semiconductors. MT-former consists of upstream and downstream training stages. In the upstream stage, the encoder of a hybrid transformer is trained by solving both classification and reconstruction tasks on the existing defects. In the downstream stage, the shared encoder is fine-tuned by simultaneously learning the classification task and a deep support vector domain description (Deep-SVDD) to detect new defects among the existing ones. We also design the hybrid transformer with convolutional modules and an efficient self-attention module, trained with focal loss. Our model is evaluated on real-world data from SK Hynix and on publicly available data from the magnetic tile defect and HAM10000 datasets. On the SK Hynix data, MT-former achieved higher AUC than a Deep-SVDD model, by 8.19% for anomaly detection and by 9.59% for classifying the existing classes. Furthermore, the best AUC achieved by the proposed model on the public datasets (magnetic tile defect 67.9%, HAM10000 70.73%) implies that MT-former would be a useful model for separating new types of defects from existing ones.
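The downstream objective described above combines a classification loss (with focal weighting) and a Deep-SVDD term that pulls embeddings of known defects toward a fixed center, so that new defects can be flagged by their distance from that center. A minimal sketch of such a combined loss follows; the function names, the λ weighting, and the γ = 2 focusing parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Focal loss: -(1 - p_t)^gamma * log(p_t), averaged over the batch.
    probs: (N, C) class probabilities; targets: (N,) integer labels."""
    p_t = probs[np.arange(len(targets)), targets]
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

def deep_svdd_loss(embeddings, center):
    """One-class Deep-SVDD objective: mean squared distance of the
    encoder's embeddings to a fixed hypersphere center."""
    return np.mean(np.sum((embeddings - center) ** 2, axis=1))

def multitask_loss(probs, targets, embeddings, center, lam=0.5, gamma=2.0):
    """Joint fine-tuning loss: classification plus lam-weighted Deep-SVDD."""
    return focal_loss(probs, targets, gamma) + lam * deep_svdd_loss(embeddings, center)

def anomaly_score(embedding, center):
    """At test time, a large distance from the center suggests a new defect."""
    return float(np.sum((embedding - center) ** 2))
```

At inference, thresholding `anomaly_score` separates novel defects from known classes, while the classification head labels the known ones.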
Funding: supported by the following sources: the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03047902); NRF grants funded by the Ministry of Science and ICT (MSIT) (2023R1A2C3004880, 2021M3C1C3097624); Korea Medical Device Development Fund grants funded by the Korea government (MSIT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Numbers: 1711195277, RS-2020-KD000008, 1711196475, RS-2023-00243633); an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II191906, Artificial Intelligence Graduate School Program (POSTECH)); and the BK21 FOUR program.
Abstract: In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E ones. In segmentation, various features (e.g., cell area, number of cells, and the distance between cell nuclei) have been successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF achieved a classification accuracy of 98.00%, compared with the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
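The classification step above fuses deep feature vectors extracted from the PAH, VHE, and segmented images before classifying. The abstract does not specify StepFF's exact fusion procedure, so the sketch below is a simple hypothetical version: per-modality feature vectors are concatenated and passed through a linear softmax head. All function names, dimensions, and weights are illustrative assumptions.

```python
import numpy as np

def fuse_features(pah_feat, vhe_feat, seg_feat):
    """Hypothetical fusion: concatenate deep feature vectors extracted
    from the PAH, VHE, and segmented image domains into one vector."""
    return np.concatenate([pah_feat, vhe_feat, seg_feat], axis=-1)

def softmax_classify(fused, weights, bias):
    """Linear head over the fused vector; returns class probabilities.
    The trained weights would come from the downstream classifier."""
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()
```

The design point illustrated is that each modality contributes complementary information (raw photoacoustic contrast, virtual-stain morphology, and segmented cell statistics), so the fused vector gives the classifier a richer input than any single domain.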