Funding: Project supported by the National Natural Science Foundation of China (Nos. 12332016, 12172209, and 12202258) and the Shanghai Gaofeng Project for University Academic Program Development.
Abstract: Traditional methods for measuring single-cell mechanical characteristics face several challenges, including lengthy measurement times, low throughput, and a requirement for advanced technical skills. To overcome these challenges, a novel machine learning (ML) approach based on convolutional neural networks (CNNs) is implemented, aimed at predicting cells' elastic modulus and constitutive equations from their deformations while passing through micro-constriction channels. In the present study, computational fluid dynamics is used to generate a dataset spanning a range of cell elastic moduli, incorporating three widely used constitutive models that characterize cellular mechanical behavior, i.e., the Mooney-Rivlin (M-R), Neo-Hookean (N-H), and Kelvin-Voigt (K-V) models. Utilizing this dataset, a multi-input convolutional neural network (MI-CNN) algorithm is developed by incorporating cellular deformation data as well as time and positional information. This approach accurately predicts the cell elastic modulus, with a coefficient of determination R^2 of 0.999, a root mean square error of 0.218, and a mean absolute percentage error of 1.089%. The model consistently achieves high-precision predictions of the cellular elastic modulus, with a maximum R^2 of 0.99, even when stochastic noise is added to the simulated data. A significant feature of the present model is its ability to effectively classify the three types of constitutive equations we applied. The model accurately and reliably predicts single-cell mechanical properties, showcasing a robust ability to generalize. We demonstrate that incorporating deformation features at multiple time points can enhance the algorithm's accuracy and generalization. This algorithm presents a possibility for high-throughput, highly automated, real-time, and precise characterization of single-cell mechanical properties.
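As an illustration of the multi-input idea described in this abstract, the following PyTorch sketch fuses a convolutional branch over stacked deformation frames with a small dense branch over scalar time/position features before regressing the elastic modulus. It is a minimal sketch under assumed input shapes (frame count, image size, two scalar features); the class name MultiInputCNN, all layer sizes, and all hyperparameters are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    """Sketch of a multi-input CNN (not the published MI-CNN): a convolutional
    branch encodes a stack of cell-deformation frames, a dense branch encodes
    scalar time/position features, and a fused head regresses the elastic modulus."""
    def __init__(self, n_frames: int = 8, n_scalars: int = 2):
        super().__init__()
        # Image branch: deformation frames treated as input channels.
        self.conv = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        # Scalar branch: time stamp and channel position of the cell.
        self.mlp = nn.Sequential(nn.Linear(n_scalars, 32), nn.ReLU())
        # Fused regression head predicting a single elastic modulus value.
        self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, images: torch.Tensor, scalars: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.conv(images), self.mlp(scalars)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    model = MultiInputCNN()
    imgs = torch.randn(4, 8, 64, 64)   # batch of deformation frame stacks
    meta = torch.randn(4, 2)           # (time, position) per sample
    print(model(imgs, meta).shape)     # torch.Size([4, 1])
```

The published MI-CNN will differ in branch depth and fusion strategy; the sketch only shows how heterogeneous inputs can be concatenated ahead of a shared regression head.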
Funding: Supported in part by the Science and Technology Commission of Shanghai Municipality (Grant Nos. 18411967300 and 20ZR1407800), the Shanghai Municipal Science and Technology Major Project (2018SHZDZX01), and the National Natural Science Foundation of China (81873893).
Abstract: This study aimed to explore the value of deep learning (DL)-assisted quantitative susceptibility mapping (QSM) in glioma grading and molecular subtyping. Forty-two patients with gliomas, who underwent preoperative T2 fluid-attenuated inversion recovery (T2 FLAIR), contrast-enhanced T1-weighted imaging (T1WI+C), and QSM scanning at 3.0 T magnetic resonance imaging (MRI), were included in this study. Histopathology and immunohistochemistry staining were used to determine glioma grades and isocitrate dehydrogenase (IDH) 1 and alpha thalassemia/mental retardation syndrome X-linked gene (ATRX) subtypes. Tumor segmentation was performed manually using the Insight Toolkit-SNAP program (www.itksnap.org). An inception convolutional neural network (CNN) with a subsequent linear layer was employed as the training encoder to capture multi-scale features from MRI slices. Fivefold cross-validation was utilized as the training strategy (seven samples per fold), and the ratio of the sample sizes of the training, validation, and test datasets was 4:1:1. Performance was evaluated by the accuracy and the area under the curve (AUC). With the inception CNN, the single modality of QSM showed better performance in differentiating glioblastomas (GBM) from other grade gliomas (OGG, grade II–III) and in predicting IDH1 mutation and ATRX loss (accuracy: 0.80, 0.77, 0.60) than either T2 FLAIR (0.69, 0.57, 0.54) or T1WI+C (0.74, 0.57, 0.46). When the three modalities were combined, compared with any single modality, the best AUC/accuracy/F1-scores were reached in grading gliomas (OGG and GBM: 0.91/0.89/0.87; low-grade and high-grade gliomas: 0.83/0.86/0.81), predicting IDH1 mutation (0.88/0.89/0.85), and predicting ATRX loss (0.78/0.71/0.67). As a supplement to conventional MRI, DL-assisted QSM is a promising molecular imaging method to evaluate glioma grades, IDH1 mutation, and ATRX loss.
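To make the evaluation protocol concrete, the sketch below runs a stratified fivefold cross-validation and reports mean accuracy, AUC, and F1 score. A plain logistic regression on random stand-in features replaces the inception CNN encoder, and all array shapes and variable names are hypothetical; only the cross-validation and metric mechanics correspond to the protocol described in the abstract.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

# Hypothetical per-patient feature vectors (standing in for encoder outputs from
# QSM / T2 FLAIR / T1WI+C) and binary labels (e.g., GBM vs. OGG or IDH1 status).
rng = np.random.default_rng(0)
X = rng.normal(size=(42, 128))
y = rng.integers(0, 2, size=42)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs, aucs, f1s = [], [], []
for train_idx, test_idx in cv.split(X, y):
    # A simple classifier stands in for the inception CNN + linear layer.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    accs.append(accuracy_score(y[test_idx], pred))
    aucs.append(roc_auc_score(y[test_idx], prob))
    f1s.append(f1_score(y[test_idx], pred))

print(f"accuracy={np.mean(accs):.2f}  AUC={np.mean(aucs):.2f}  F1={np.mean(f1s):.2f}")
```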
Funding: Project supported by the Science and Technology Commission of Shanghai Municipality, China (No. 22511106004), the Postdoctoral Science Foundation of China (No. 2022M723039), the National Natural Science Foundation of China (No. 12071145), and the Shanghai Trusted Industry Internet Software Collaborative Innovation Center, China.
Abstract: Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that draws on human expert knowledge to assist medical image segmentation. However, existing methods often fall into what we call interactive misunderstanding, the essence of which is the dilemma in trading off short- and long-term interaction information. To better use the interaction information at various timescales, we propose an interactive segmentation framework, called interactive MEdical image segmentation with self-adaptive Confidence CAlibration (MECCA), which combines action-based confidence learning and multi-agent reinforcement learning. A novel confidence network is learned by predicting the alignment level of the action with short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to explicitly incorporate confidence into the policy gradient calculation, thus directly correcting the model's interactive misunderstanding. MECCA also enables user-friendly interactions by reducing the interaction intensity and difficulty via label generation and interaction guidance, respectively. Numerical experiments on different segmentation tasks show that MECCA can significantly improve short- and long-term interaction information utilization efficiency with remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.
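The following sketch illustrates a generic confidence-based reward-shaping step inside a REINFORCE-style policy gradient update; the shaping formula, the coefficient beta, and all tensor names are assumptions made for illustration and are not MECCA's exact formulation.

```python
import torch

def shaped_policy_gradient_loss(log_probs, rewards, confidence, beta=0.5):
    """Generic sketch of confidence-based reward shaping (not MECCA's formula):
    each action's environment reward is augmented by a term proportional to the
    confidence network's score, so low-confidence refinement actions are penalized."""
    shaped = rewards + beta * (confidence - 0.5)    # center confidence at 0.5
    return -(log_probs * shaped.detach()).mean()    # REINFORCE-style loss

# Toy usage with random values standing in for one interaction round.
logits = torch.randn(16, 4, requires_grad=True)     # 16 steps, 4 candidate actions
actions = torch.randint(0, 4, (16,))                # sampled refinement actions
log_probs = torch.log_softmax(logits, dim=1).gather(1, actions.unsqueeze(1)).squeeze(1)
rewards = torch.rand(16)                            # per-action segmentation rewards
confidence = torch.rand(16)                         # confidence network outputs in [0, 1]

loss = shaped_policy_gradient_loss(log_probs, rewards, confidence)
loss.backward()                                     # gradients flow back to the policy logits
```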