Funding: This work was funded by the Ongoing Research Funding program (ORF-2025-867), King Saud University, Riyadh, Saudi Arabia.
Abstract: Digital twin technology is revolutionizing personalized healthcare by creating dynamic virtual replicas of individual patients. This paper presents a novel multi-modal architecture leveraging digital twins to enhance precision in predictive diagnostics and treatment planning for phoneme labeling. By integrating real-time images, electronic health records, and genomic information, the system enables personalized simulations for disease progression modeling, treatment response prediction, and preventive care strategies. Dysarthric speech is characterized by articulation imprecision, temporal misalignments, and phoneme distortions, and existing models struggle to capture these irregularities. Traditional approaches, often relying solely on audio features, fail to address the full complexity of phoneme variations, leading to increased phoneme error rates (PER) and word error rates (WER). To overcome these challenges, we propose a novel multi-modal architecture that integrates both audio and articulatory data through a combination of Temporal Convolutional Networks (TCNs), Graph Convolutional Networks (GCNs), Transformer Encoders, and a cross-modal attention mechanism. The audio branch of the model uses TCNs and Transformer Encoders to capture both short- and long-term dependencies in the audio signal, while the articulatory branch leverages GCNs to model spatial relationships between articulators such as the lips, jaw, and tongue, allowing the model to detect subtle articulatory imprecisions. A cross-modal attention mechanism fuses the encoded audio and articulatory features, enabling dynamic adjustment of the model's focus depending on input quality, which significantly improves phoneme labeling accuracy. The proposed model consistently outperforms existing methods, achieving lower PER, WER, and Articulatory Feature Misclassification Rates (AFMR). Specifically, across all datasets, the model achieves an average PER of 13.43%, an average WER of 21.67%, and an average AFMR of 12.73%. By capturing both the acoustic and articulatory intricacies of speech, this comprehensive approach not only improves phoneme labeling precision but also marks substantial progress in speech recognition technology for individuals with dysarthria.
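The cross-modal fusion step described in this abstract can be illustrated with a short sketch. The module name, feature dimensions, and the choice of torch.nn.MultiheadAttention below are assumptions for illustration; the paper does not publish an implementation.

```python
# Minimal sketch of cross-modal attention fusion between an audio branch
# (TCN + Transformer) and an articulatory branch (GCN). All names and sizes
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse encoded audio and articulatory features with cross-attention."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Audio frames act as queries; articulatory features act as keys/values.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio_feats: torch.Tensor, artic_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, T_audio, d_model) from the TCN + Transformer branch
        # artic_feats: (batch, T_artic, d_model) from the GCN branch
        fused, _ = self.attn(query=audio_feats, key=artic_feats, value=artic_feats)
        # Residual connection keeps the audio stream dominant when the
        # articulatory input is noisy, mimicking the "focus adjustment" idea.
        return self.norm(audio_feats + fused)

# Toy usage: 120 audio frames, 60 articulatory frames, 256-dim features.
audio = torch.randn(2, 120, 256)
artic = torch.randn(2, 60, 256)
print(CrossModalFusion()(audio, artic).shape)  # torch.Size([2, 120, 256])
```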
Funding: This work was funded by the National Natural Science Foundation of China, grant number 62071491.
Abstract: Long-term petroleum production forecasting is essential for the effective development and management of oilfields. Due to its ability to extract complex patterns, deep learning has gained popularity for production forecasting. However, existing deep learning models frequently overlook the selective utilization of information from other production wells, resulting in suboptimal performance in long-term production forecasting across multiple wells. To achieve accurate long-term petroleum production forecasts, we propose a spatial-geological perception graph convolutional neural network (SGP-GCN) that accounts for the temporal, spatial, and geological dependencies inherent in petroleum production. Utilizing the attention mechanism, the SGP-GCN captures intricate correlations within production and geological data, forming the representation of each production well. Based on spatial distances and geological feature correlations, we construct a spatial-geological matrix as the weight matrix to enable differential utilization of information from other wells. Additionally, a matrix sparsification algorithm based on production clustering (SPC) is proposed to optimize the weight distribution within the spatial-geological matrix, thereby enhancing long-term forecasting performance. Empirical evaluations show that the SGP-GCN outperforms existing deep learning models, such as CNN-LSTM-SA, in long-term petroleum production forecasting. This demonstrates the potential of the SGP-GCN as a valuable tool for long-term petroleum production forecasting across multiple wells.
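A minimal sketch of how a spatial-geological weight matrix might be built and then sparsified by production clusters is given below. The Gaussian distance kernel, the Pearson-correlation term, the blend parameter alpha, and the function names are all assumptions for illustration, not the authors' published algorithm.

```python
# Sketch: combine spatial proximity and geological similarity into a weight
# matrix, then zero out entries between wells in different production clusters.
import numpy as np

def spatial_geological_matrix(coords: np.ndarray,
                              geo_feats: np.ndarray,
                              sigma: float = 1.0,
                              alpha: float = 0.5) -> np.ndarray:
    """coords: (n_wells, 2) locations; geo_feats: (n_wells, n_features) attributes."""
    # Spatial term: Gaussian kernel over pairwise Euclidean distances.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    spatial = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    # Geological term: absolute Pearson correlation between well feature vectors.
    geo = np.abs(np.corrcoef(geo_feats))
    # Blend the two terms; alpha is an assumed hyperparameter.
    return alpha * spatial + (1 - alpha) * geo

def sparsify_by_cluster(weights: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep weights only between wells assigned to the same production cluster."""
    same_cluster = labels[:, None] == labels[None, :]
    return np.where(same_cluster, weights, 0.0)

# Toy usage: 4 wells, 3 geological features, two production clusters.
rng = np.random.default_rng(0)
W = spatial_geological_matrix(rng.random((4, 2)), rng.random((4, 3)))
print(sparsify_by_cluster(W, np.array([0, 0, 1, 1])))
```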
Abstract: General control non-derepressible 2 (GCN2) is a stress-responsive serine/threonine kinase that, within the integrated stress response (ISR), senses amino acid deprivation and triggers a series of downstream reactions. Activation of GCN2 plays a key regulatory role in cellular oxidative stress, proliferation, autophagy, apoptosis, immunity, proteotoxicity, and angiogenesis, and is associated with the occurrence and development of tumors, myocardial injury, pulmonary fibrosis, and other diseases. This review summarizes the biological functions, structural features, mechanisms of action, and disease associations of GCN2, surveys the current status of GCN2 inhibitor and agonist development, and highlights the clinical potential of GCN2 inhibitors and agonists in antitumor therapy, providing a reference for the development of new drugs targeting GCN2 kinase.
Abstract: Cancer remains one of the leading causes of mortality worldwide, particularly in advanced or metastatic cases, where treatment remains a significant challenge. Accurate cancer staging is critical in clinical practice for determining optimal treatment strategies and assessing patient prognosis. Traditional staging methods primarily rely on imaging and clinical examination data. However, with rapid advancements in genomics and molecular biology, leveraging multi-omics data for early cancer diagnosis and staging has become increasingly important. To enhance the accuracy of cancer classification and staging, this study proposes a novel multi-omics data analysis framework, MOGCWMLP. This framework utilizes graph convolutional networks (GCN) for feature learning across different omics data types and incorporates a weighted multilayer perceptron (MLP) for classification decision-making. Specifically, MOGCWMLP integrates three distinct types of omics data (mRNA, miRNA, and lncRNA) by extracting and fusing their features through a weighted mechanism, thereby maximizing the complementary information among different omics modalities. Experimental results demonstrate that the MOGCWMLP model achieves significantly higher classification accuracy on the lung squamous cell carcinoma (LUSC) dataset compared to existing single-omics and multi-omics models. Notably, the integration of multi-omics data leads to substantial improvements in classification performance. Furthermore, the incorporation of a learnable weighted fusion mechanism enables the dynamic adjustment of each modality's contribution, further optimizing the model's classification effectiveness. This study provides an effective tool for precise cancer diagnosis and personalized treatment, while also offering new insights into the integration of multi-omics data.
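The learnable weighted fusion described in this abstract can be sketched as follows: one embedding per omics view (for example, the output of a per-view GCN) is blended with learnable softmax weights before an MLP classifier. The class name, dimensions, and the softmax weighting scheme are illustrative assumptions, not the MOGCWMLP release.

```python
# Sketch of learnable weighted fusion of per-omics embeddings followed by an
# MLP classifier for stage prediction. All names and sizes are assumptions.
import torch
import torch.nn as nn

class WeightedFusionMLP(nn.Module):
    def __init__(self, n_views: int = 3, d_embed: int = 64, n_classes: int = 4):
        super().__init__()
        # One learnable weight per omics view (e.g. mRNA, miRNA, lncRNA).
        self.view_logits = nn.Parameter(torch.zeros(n_views))
        self.mlp = nn.Sequential(
            nn.Linear(d_embed, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, view_embeddings: torch.Tensor) -> torch.Tensor:
        # view_embeddings: (batch, n_views, d_embed), e.g. per-view GCN outputs
        weights = torch.softmax(self.view_logits, dim=0)            # (n_views,)
        fused = (weights[None, :, None] * view_embeddings).sum(1)   # (batch, d_embed)
        return self.mlp(fused)                                      # stage logits

# Toy usage: 8 samples, 3 omics views, 64-dim embeddings.
logits = WeightedFusionMLP()(torch.randn(8, 3, 64))
print(logits.shape)  # torch.Size([8, 4])
```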