Journal Articles: 223 articles found
1. A Modified CycleGAN for Multi-Organ Ultrasound Image Enhancement via Unpaired Pre-Training
Authors: Haonan Han, Bingyu Yang, Weihang Zhang, Dongwei Li, Huiqi Li. Journal of Beijing Institute of Technology (EI, CAS), 2024, Issue 3, pp. 194-203.
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnoses by doctors. At the same time, paired ultrasound images are difficult to obtain from the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which can help achieve the goals of different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach is effective in obtaining the best quantitative evaluation results using a small number of parameters and less training cost to improve the quality of handheld ultrasound devices.
Keywords: ultrasound image enhancement; handheld devices; unpaired images; pre-train and finetune; cycleGAN
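A minimal PyTorch sketch of the two unpaired ingredients described in the abstract, Gaussian-noise augmentation and a cycle-consistency loss that avoids paired images. The toy generator and random tensors are illustrative assumptions; the paper's enhanced block and discriminators are not reproduced.

```python
import torch
import torch.nn as nn

def add_gaussian_noise(images, std=0.02):
    # Data augmentation mentioned in the abstract: perturb training images with Gaussian noise.
    return images + torch.randn_like(images) * std

class TinyGenerator(nn.Module):
    # Stand-in for the paper's generator; the real enhanced block is not specified here.
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

g_ab, g_ba = TinyGenerator(), TinyGenerator()   # low-quality -> high-quality and back
l1 = nn.L1Loss()

low_q = add_gaussian_noise(torch.rand(4, 1, 64, 64))   # unpaired low-quality batch
high_q = torch.rand(4, 1, 64, 64)                      # unpaired high-quality batch

# Cycle-consistency: translating A->B->A should reconstruct the input without paired labels.
cycle_loss = l1(g_ba(g_ab(low_q)), low_q) + l1(g_ab(g_ba(high_q)), high_q)
cycle_loss.backward()
print(float(cycle_loss))
```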
2. GeoNER: Geological Named Entity Recognition with Enriched Domain Pre-Training Model and Adversarial Training
Authors: MA Kai, HU Xinxin, TIAN Miao, TAN Yongjian, ZHENG Shuai, TAO Liufeng, QIU Qinjun. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2024, Issue 5, pp. 1404-1417.
As important geological data, a geological report contains rich expert and geological knowledge, but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge. While generic named entity recognition models/tools can be utilized for the processing of geoscience reports/documents, their effectiveness is hampered by a dearth of domain-specific knowledge, which in turn leads to a pronounced decline in recognition accuracy. This study summarizes six types of typical geological entities with reference to the ontological system of geological domains and builds a high-quality corpus for the task of geological named entity recognition (GNER). In addition, GeoWoBERT-advBGP (Geological Word-based BERT adversarial training Bi-directional Long Short-Term Memory Global Pointer) is proposed to address the issues of ambiguity, diversity, and nested entities for the geological entities. The model first uses the fine-tuned word granularity-based pre-training model GeoWoBERT (Geological Word-based BERT) and combines the text features extracted using a BiLSTM (Bi-directional Long Short-Term Memory), followed by an adversarial training algorithm to improve the robustness of the model and enhance its resistance to interference, the decoding finally being performed by a global association pointer algorithm. The experimental results show that the proposed model achieves high performance on the constructed dataset and is capable of mining rich geological information.
Keywords: geological named entity recognition; geological report; adversarial training; confrontation training; global pointer; pre-training model
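The abstract does not name the exact adversarial training algorithm; an FGM-style perturbation of the embedding weights, shown below as a hedged PyTorch sketch, is one standard way to implement adversarial training for NER. The `emb_name` matching rule and the epsilon value are illustrative assumptions.

```python
import torch

def fgm_adversarial_step(model, loss_fn, batch, labels, epsilon=1.0, emb_name="embedding"):
    """One FGM-style adversarial step: perturb the embedding weights along the
    loss gradient, add an adversarial loss, then restore the original weights."""
    loss = loss_fn(model(batch), labels)
    loss.backward(retain_graph=True)

    backup = {}
    for name, param in model.named_parameters():
        if emb_name in name and param.grad is not None:
            backup[name] = param.data.clone()
            norm = torch.norm(param.grad)
            if norm > 0:
                param.data.add_(epsilon * param.grad / norm)   # r_adv = eps * g / ||g||

    adv_loss = loss_fn(model(batch), labels)   # loss on the perturbed embeddings
    adv_loss.backward()

    for name, param in model.named_parameters():               # restore clean embeddings
        if name in backup:
            param.data = backup[name]
    return loss.item(), adv_loss.item()
```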
3. DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited by: 1)
Authors: WANG Chen-yao, SHI Feng. 数学杂志 (Journal of Mathematics), 2025, Issue 1, pp. 13-38.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
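A generic sketch, under the assumption of a fully connected trunk with a main solution head and one auxiliary head standing in for an integral term, of how a multi-output network and a weighted multi-task loss can be set up in PyTorch. The sine/cosine targets are placeholders, not the paper's DIDE residuals.

```python
import torch
import torch.nn as nn

class DIDENet(nn.Module):
    """Toy network with a main output u(t) and an auxiliary output for the integral term."""
    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.u_head = nn.Linear(hidden, 1)      # solution value
        self.aux_head = nn.Linear(hidden, 1)    # auxiliary (integral-term) output

    def forward(self, t):
        h = self.trunk(t)
        return self.u_head(h), self.aux_head(h)

net = DIDENet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t = torch.linspace(0.0, 1.0, 128).unsqueeze(1)

# Illustrative residual targets only; the true residuals depend on the specific DIDE.
for step in range(100):
    u, aux = net(t)
    loss_main = (u - torch.sin(t)).pow(2).mean()     # placeholder "equation" task
    loss_aux = (aux - torch.cos(t)).pow(2).mean()    # placeholder auxiliary task
    loss = loss_main + 0.5 * loss_aux                # weighted multi-task objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```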
4. A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (Cited by: 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. Artificial Intelligence Science and Engineering, 2025, Issue 2, pp. 98-121.
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task, arising from different reward functions; multi-domain multi-task, caused by differences in state and action spaces and state transition functions; and scalability-related multi-task, resulting from the dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering more fine-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future directions of research in this area, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: multi-task; multi-agent reinforcement learning; large language models
5. Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. Computers, Materials & Continua, 2025, Issue 9, pp. 5299-5315.
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that utilizes the task correlation of predicting various health status indicators in patients, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms in the key prognostic metrics F1 score and Efficiency.
Keywords: explainable AI; stroke prognosis; multi-task learning; AUC optimization
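A small sketch of the SHAP-stability idea mentioned in the abstract: recompute feature attributions on bootstrap resamples and inspect how much each feature's mean |SHAP| varies. The random forest, synthetic data, and bootstrap count are assumptions for illustration; the paper's adaptive low-rank AUC algorithm is not reproduced.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
importances = []
for _ in range(10):                                   # bootstrap resamples of the data
    idx = rng.choice(len(X), size=len(X), replace=True)
    sv = shap.TreeExplainer(model).shap_values(X[idx])
    if isinstance(sv, list):      # older SHAP versions: one array per class
        sv = sv[1]
    elif sv.ndim == 3:            # newer SHAP versions: (samples, features, classes)
        sv = sv[:, :, 1]
    importances.append(np.abs(sv).mean(axis=0))       # mean |SHAP| per feature

importances = np.array(importances)
print("mean importance:", importances.mean(axis=0).round(3))
print("importance std :", importances.std(axis=0).round(3))   # low std = stable explanation
```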
6. Short-Term Rolling Prediction of Tropical Cyclone Intensity Based on Multi-Task Learning with Fusion of Deviation-Angle Variance and Satellite Imagery
Authors: Wei TIAN, Ping SONG, Yuanyuan CHEN, Yonghong ZHANG, Liguang WU, Haikun ZHAO, Kenny Thiam Choy LIM KAM SIAN, Chunyi XIANG. Advances in Atmospheric Sciences, 2025, Issue 1, pp. 111-128.
Tropical cyclones (TCs) are one of the most serious types of natural disasters, and accurate TC activity predictions are key to disaster prevention and mitigation. Recently, TC track predictions have made significant progress, but the ability to predict their intensity obviously lags behind. At present, research on TC intensity prediction takes atmospheric reanalysis data as the research object and mines the relationship between TC-related environmental factors and intensity through deep learning. However, reanalysis data are non-real-time in nature, which does not meet the requirements for operational forecasting applications. Therefore, a TC intensity prediction model named TC-Rolling is proposed, which can simultaneously extract the degree of symmetry of strong TC convective cloud and the convection intensity, and fuse the deviation-angle variance with satellite images to construct the correlation between TC convection structure and intensity. For TCs' complex dynamic processes, a convolutional neural network (CNN) is used to learn their temporal and spatial features. For real-time intensity estimation, multi-task learning acts as an implicit time-series enhancement. The model is designed with a rolling strategy that aims to moderate the long-term dependence decay problem and improve accuracy for short-term intensity predictions. Since the multiple tasks are correlated, the loss functions for 12 h and 24 h are corrected. After testing on a sample of TCs in the Northwest Pacific, with a root-mean-square error (RMSE) of 4.48 kt for 6 h intensity prediction, 5.78 kt for 12 h, and 13.94 kt for 24 h, TC records from official agencies are used to assess the validity of TC-Rolling.
Keywords: tropical cyclone; intensity; structure; rolling prediction; multi-task
7. MAMGBR: Group-Buying Recommendation Model Based on Multi-Head Attention Mechanism and Multi-Task Learning
Authors: Zongzhe Xu, Ming Yu. Computers, Materials & Continua, 2025, Issue 8, pp. 2805-2826.
As the group-buying model shows significant progress in attracting new users, enhancing user engagement, and increasing platform profitability, providing personalized recommendations for group-buying users has emerged as a new challenge in the field of recommendation systems. This paper introduces a group-buying recommendation model based on multi-head attention mechanisms and multi-task learning, termed the Multi-head Attention Mechanisms and Multi-task Learning Group-Buying Recommendation (MAMGBR) model, specifically designed to optimize group-buying recommendations on e-commerce platforms. The core dataset of this study comes from the Chinese maternal and infant e-commerce platform "Beibei", encompassing approximately 430,000 successful group-buying actions and over 120,000 users. The model focuses on two main tasks: recommending items for group organizers (Task I) and recommending participants for a given group-buying event (Task II). In model evaluation, MAMGBR achieves an MRR@10 of 0.7696 for Task I, marking a 20.23% improvement over baseline models. Furthermore, in Task II, where complex interaction patterns prevail, MAMGBR utilizes auxiliary loss functions to effectively model the multifaceted roles of users, items, and participants, leading to a 24.08% increase in MRR@100 under a 1:99 sample ratio. Experimental results show that, compared to benchmark models such as NGCF and EATNN, MAMGBR's integration of multi-head attention mechanisms, expert networks, and gating mechanisms enables more accurate modeling of user preferences and social associations within group-buying scenarios, significantly enhancing recommendation accuracy and platform group-buying success rates.
Keywords: group-buying recommendation; multi-head attention mechanism; multi-task learning
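A minimal sketch of the multi-head attention building block the model is named after, using PyTorch's built-in `nn.MultiheadAttention`: an initiator embedding attends over the item embeddings of a candidate group-buying event. The embedding sizes, head count, and indices are illustrative assumptions; the paper's Beibei features, expert networks, and gating are not reproduced.

```python
import torch
import torch.nn as nn

n_users, n_items, dim, heads = 1000, 500, 64, 4   # hypothetical sizes

user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

# One initiator (query) attends over the items of a candidate group-buying event (keys/values).
initiator = user_emb(torch.tensor([[3]]))                # (batch=1, 1, dim)
history = item_emb(torch.tensor([[10, 42, 7, 99]]))      # (batch=1, 4, dim)

fused, weights = attn(query=initiator, key=history, value=history)
print(fused.shape, weights.shape)   # torch.Size([1, 1, 64]) torch.Size([1, 1, 4])
```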
8. DPCIPI: A pre-trained deep learning model for predicting cross-immunity between drifted strains of Influenza A/H3N2
Authors: Yiming Du, Zhuotian Li, Qian He, Thomas Wetere Tulu, Kei Hang Katie Chan, Lin Wang, Sen Pei, Zhanwei Du, Zhen Wang, Xiao-Ke Xu, Xiao Fan Liu. Journal of Automation and Intelligence, 2025, Issue 2, pp. 115-124.
Predicting cross-immunity between viral strains is vital for public health surveillance and vaccine development. Traditional neural network methods, such as BiLSTM, can be ineffective due to the lack of lab data for model training and the overshadowing of crucial features within sequence concatenation. The current work proposes a less data-consuming model incorporating a pre-trained gene sequence model and a mutual information inference operator. Our methodology utilizes gene alignment and deduplication algorithms to preprocess gene sequences, enhancing the model's capacity to discern and focus on distinctions among input gene pairs. The model, i.e., the DNA Pretrained Cross-Immunity Protection Inference model (DPCIPI), outperforms state-of-the-art (SOTA) models in predicting hemagglutination inhibition titer from influenza viral gene sequences only. The improvement in binary cross-immunity prediction is 1.58% in F1, 2.34% in precision, 1.57% in recall, and 1.57% in accuracy. For multilevel cross-immunity prediction, the improvement is 2.12% in F1, 3.50% in precision, 2.19% in recall, and 2.19% in accuracy. Our study showcases the potential of pre-trained gene models to improve predictions of antigenic variation and cross-immunity. With expanding gene data and advancements in pre-trained models, this approach promises significant impacts on vaccine development and public health.
Keywords: cross-immunity prediction; pre-trained model; deep learning; influenza strains; hemagglutination inhibition
9. KitWaSor: Pioneering pre-trained model for kitchen waste sorting with an innovative million-level benchmark dataset
Authors: Leyuan Fang, Shuaiyu Ding, Hao Feng, Junwu Yu, Lin Tang, Pedram Ghamisi. CAAI Transactions on Intelligence Technology, 2025, Issue 1, pp. 94-114.
Intelligent sorting is an important prerequisite for the full quantitative consumption and harmless disposal of kitchen waste. The existing object detection method based on an ImageNet pre-trained model is an effective way of sorting. Owing to significant domain gaps between natural images and kitchen waste images, it is difficult to reflect the characteristics of diverse scales and dense distribution in kitchen waste based on an ImageNet pre-trained model, leading to poor generalisation. In this article, the authors propose the first pre-trained model for kitchen waste sorting, called KitWaSor, which combines both contrastive learning (CL) and masked image modelling (MIM) through self-supervised learning (SSL). First, to address the issue of diverse scales, the authors propose a mixed masking strategy by introducing an incomplete masking branch based on the original random masking branch. It prevents the complete loss of small-scale objects while avoiding excessive leakage of large-scale object pixels. Second, to address the issue of dense distribution, the authors introduce semantic consistency constraints on the basis of the mixed masking strategy. That is, object semantic reasoning is performed through semantic consistency constraints to compensate for the lack of contextual information. To train KitWaSor, the authors construct the first million-level kitchen waste dataset across seasonal and regional distributions, named KWD-Million. Extensive experiments show that KitWaSor achieves state-of-the-art (SOTA) performance on the two most relevant downstream tasks for kitchen waste sorting (i.e., image classification and object detection), demonstrating the effectiveness of the proposed KitWaSor.
Keywords: contrastive learning; kitchen waste; masked image modeling; pre-trained model; self-supervised learning
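A hedged sketch of MAE-style random patch masking with two branches at different mask ratios, which is one plausible reading of the abstract's "random masking branch" plus "incomplete masking branch". The patch count, ratios, and seed are assumptions; the paper's exact mixed masking strategy and semantic consistency constraints are not reproduced.

```python
import torch

def random_patch_mask(n_patches, mask_ratio, generator=None):
    """Return a boolean mask over patches; True = masked (hidden from the encoder)."""
    n_masked = int(n_patches * mask_ratio)
    idx = torch.rand(n_patches, generator=generator).argsort()   # random patch order
    mask = torch.zeros(n_patches, dtype=torch.bool)
    mask[idx[:n_masked]] = True
    return mask

n_patches = 196                                   # e.g. a 14x14 patch grid
g = torch.Generator().manual_seed(0)

full_branch = random_patch_mask(n_patches, mask_ratio=0.75, generator=g)     # standard MIM branch
partial_branch = random_patch_mask(n_patches, mask_ratio=0.40, generator=g)  # "incomplete" branch keeps more patches

print(full_branch.sum().item(), partial_branch.sum().item())   # 147 masked vs. 78 masked
```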
10. Joint Retrieval of PM2.5 Concentration and Aerosol Optical Depth over China Using Multi-Task Learning on FY-4A AGRI
Authors: Bo LI, Disong FU, Ling YANG, Xuehua FAN, Dazhi YANG, Hongrong SHI, Xiang’ao XIA. Advances in Atmospheric Sciences, 2025, Issue 1, pp. 94-110.
Aerosol optical depth (AOD) and fine particulate matter with a diameter of less than or equal to 2.5 μm (PM2.5) play crucial roles in air quality, human health, and climate change. However, the complex correlation of AOD and PM2.5 and the limitations of existing algorithms pose a significant challenge to realizing the accurate joint retrieval of these two parameters at the same location. On this point, a multi-task learning (MTL) model, which enables the joint retrieval of PM2.5 concentration and AOD, is proposed and applied to the top-of-the-atmosphere reflectance data gathered by the Fengyun-4A Advanced Geosynchronous Radiation Imager (FY-4A AGRI), and compared with two single-task learning models, namely Random Forest (RF) and Deep Neural Network (DNN). Specifically, MTL achieves a coefficient of determination (R²) of 0.88 and a root-mean-square error (RMSE) of 0.10 in AOD retrieval. In comparison to RF, the R² increases by 0.04, the RMSE decreases by 0.02, and the percentage of retrieval results falling within the expected error range (Within-EE) rises by 5.55%. The R² and RMSE of PM2.5 retrieval by MTL are 0.84 and 13.76 μg m⁻³, respectively. Compared with RF, the R² increases by 0.06, the RMSE decreases by 4.55 μg m⁻³, and the Within-EE increases by 7.28%. Additionally, compared to DNN, MTL shows an increase of 0.01 in R² and a decrease of 0.02 in RMSE in AOD retrieval, with a corresponding increase of 2.89% in Within-EE. For PM2.5 retrieval, MTL exhibits an increase of 0.05 in R², a decrease of 1.76 μg m⁻³ in RMSE, and an increase of 6.83% in Within-EE. The evaluation suggests that MTL is able to provide simultaneously improved AOD and PM2.5 retrievals, demonstrating a significant advantage in efficiently capturing the spatial distribution of PM2.5 concentration and AOD.
Keywords: AOD; PM2.5; FY-4A; multi-task learning; joint retrieval
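A minimal sketch of the hard-parameter-sharing pattern implied by the abstract: one shared trunk over the reflectance-derived features, with separate regression heads for AOD and PM2.5. Feature dimensions, hidden sizes, loss weights, and the random tensors are assumptions; the FY-4A AGRI preprocessing is not shown.

```python
import torch
import torch.nn as nn

class JointRetrievalNet(nn.Module):
    """Shared trunk with two regression heads, one for AOD and one for PM2.5."""
    def __init__(self, n_features=16, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.aod_head = nn.Linear(hidden, 1)
        self.pm25_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.aod_head(h).squeeze(-1), self.pm25_head(h).squeeze(-1)

net = JointRetrievalNet()
x = torch.rand(32, 16)            # placeholder TOA reflectance / geometry features
aod_true = torch.rand(32)         # placeholder labels
pm25_true = 100 * torch.rand(32)

aod_pred, pm25_pred = net(x)
mse = nn.MSELoss()
loss = mse(aod_pred, aod_true) + 0.01 * mse(pm25_pred, pm25_true)  # weight balances the two scales
loss.backward()
```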
11. Skillful bias correction of offshore near-surface wind field forecasting based on a multi-task machine learning model
Authors: Qiyang Liu, Anboyu Guo, Fengxue Qiao, Xinjian Ma, Yan-An Liu, Yong Huang, Rui Wang, Chunyan Sheng. Atmospheric and Oceanic Science Letters, 2025, Issue 5, pp. 28-35.
Accurate short-term forecasting of offshore wind fields is still challenging for numerical weather prediction models. Based on three years of 48-hour forecast data from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS) over 14 offshore weather stations along the coast of Shandong Province, this study introduces a multi-task learning (MTL) model (TabNet-MTL), which significantly improves the forecast bias of near-surface wind direction and speed simultaneously. TabNet-MTL adopts the feature engineering method, utilizes mean square error as the loss function, and employs the 5-fold cross-validation method to ensure the generalization ability of the trained model. It demonstrates superior skill in wind field correction across different forecast lead times over all stations compared to its single-task version (TabNet-STL) and three other popular single-task learning models (Random Forest, LightGBM, and XGBoost). Results show that it significantly reduces the root mean square error of the ECMWF-IFS wind speed forecast from 2.20 to 1.25 m s⁻¹ and increases the forecast accuracy of wind direction from 50% to 65%. As an explainable deep learning model, the weather stations and long-term temporal statistics of near-surface wind speed are identified as the most influential variables for TabNet-MTL in constructing its feature engineering.
Keywords: forecast bias correction; wind field; multi-task learning; feature engineering; explainable AI
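A hedged sketch of the 5-fold cross-validated bias-correction setup described in the abstract, using a single-task Random Forest (one of the paper's baselines) on synthetic data rather than TabNet-MTL and the real ECMWF-IFS/station features, which are assumptions I do not reproduce.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                          # placeholder predictors (NWP fields, station info)
speed_nwp = 8 + X[:, 0] + rng.normal(0.5, 2.0, 500)     # synthetic biased raw forecast
speed_obs = 8 + X[:, 0] + rng.normal(0.0, 1.0, 500)     # synthetic observed wind speed

rmses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    # The raw NWP forecast is appended as an extra predictor for the correction model.
    model.fit(np.c_[X[train_idx], speed_nwp[train_idx]], speed_obs[train_idx])
    pred = model.predict(np.c_[X[test_idx], speed_nwp[test_idx]])
    rmses.append(mean_squared_error(speed_obs[test_idx], pred) ** 0.5)

print("raw RMSE      :", round(mean_squared_error(speed_obs, speed_nwp) ** 0.5, 2))
print("corrected RMSE:", round(float(np.mean(rmses)), 2))
```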
12. Big Texture Dataset Synthesized Based on Gradient and Convolution Kernels Using Pre-Trained Deep Neural Networks
Authors: Farhan A. Alenizi, Faten Khalid Karim, Alaa R. Al-Shamasneh, Mohammad Hossein Shakoor. Computer Modeling in Engineering & Sciences, 2025, Issue 8, pp. 1793-1829.
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly. Providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images. Therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels in size, organized into 600 classes. It is uploaded to the Kaggle site and Google Drive. This dataset is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
Keywords: big texture dataset; data generation; pre-trained deep neural network
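One generic way to turn pre-trained convolution kernels into synthetic textures is gradient ascent on a chosen kernel's activation, sketched below. It is not the paper's exact procedure or its filtering steps; the VGG16 backbone, layer/channel indices, learning rate, and the weight download (torchvision >= 0.13 weights API, internet required) are all assumptions.

```python
import torch
import torchvision

# ImageNet-pretrained convolution kernels serve as the "texture generators".
vgg = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layer, channel = 10, 37            # arbitrary conv layer / kernel to drive the texture

img = torch.rand(1, 3, 150, 150, requires_grad=True)   # 150x150 matches the BigTex patch size
opt = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    x = img
    for i, module in enumerate(vgg):
        x = module(x)
        if i == layer:
            break
    loss = -x[0, channel].mean()   # gradient ascent on one kernel's mean activation
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)

texture = img.detach().squeeze(0)  # one candidate texture for a new synthetic class
```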
13. MolP-PC: a multi-view fusion and multi-task learning framework for drug ADMET property prediction
Authors: Sishu Li, Jing Fan, Haiyang He, Ruifeng Zhou, Jun Liao. Chinese Journal of Natural Medicines, 2025, Issue 11, pp. 1293-1300.
The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property predictions. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Keywords: molecular ADMET prediction; multi-view fusion; attention mechanism; multi-task deep learning
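A minimal sketch of one way an attention-gated fusion of the three views (1D fingerprint, 2D graph, 3D geometry embeddings) could look: a learned gate produces one weight per view and the fused vector is their weighted sum. The per-view encoders, dimensions, and gating form are assumptions; MolP-PC's actual architecture may differ.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Attention-gated fusion of fingerprint, graph, and geometry embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Linear(3 * dim, 3)   # one attention score per view

    def forward(self, fp_emb, graph_emb, geo_emb):
        views = torch.stack([fp_emb, graph_emb, geo_emb], dim=1)        # (batch, 3, dim)
        weights = torch.softmax(self.gate(views.flatten(1)), dim=-1)    # (batch, 3)
        return (weights.unsqueeze(-1) * views).sum(dim=1)               # weighted sum -> (batch, dim)

fusion = GatedFusion(dim=128)
fp, graph, geo = (torch.rand(8, 128) for _ in range(3))   # placeholder per-view encoder outputs
fused = fusion(fp, graph, geo)
print(fused.shape)   # torch.Size([8, 128]); feed this into per-task ADMET heads
```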
14. Multilingual Text Summarization in Healthcare Using Pre-Trained Transformer-Based Language Models
Authors: Josua Käser, Thomas Nagy, Patrick Stirnemann, Thomas Hanne. Computers, Materials & Continua, 2025, Issue 4, pp. 201-217.
We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization on German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform the task of abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization on German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntax constructs, the difference in performance between models trained in English and German, and the impact of translating the source text to English before conducting the summarization. We conducted an evaluation of four PLMs (GPT-3, a translation-based approach also utilizing GPT-3, a German language model, and a domain-specific biomedical model approach). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of results, which was manually evaluated considering five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study proves that text summarization models can simplify the search for pre-existing German knowledge in various domains.
Keywords: text summarization; pre-trained transformer-based language models; large language models; technical healthcare texts; natural language processing
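A small sketch of the generic evaluation loop implied by the abstract: summarize with a pre-trained transformer and score against a reference with ROUGE. The checkpoint name, the toy sentences, and the generation lengths are assumptions for illustration (the paper's GPT-3, German, and biomedical models are not loaded here); running it requires downloading the model.

```python
from transformers import pipeline
from rouge_score import rouge_scorer

# Illustrative English summarization checkpoint; not one of the paper's evaluated models.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

source = ("The patient presented with elevated blood pressure and was advised to "
          "reduce sodium intake, increase physical activity, and return for a "
          "follow-up examination in four weeks to reassess medication dosage.")
reference = "The patient has high blood pressure and will be re-examined in four weeks."

summary = summarizer(source, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(summary)
print(scorer.score(reference, summary))   # ROUGE-1 / ROUGE-L precision, recall, F1
```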
15. A multi-task learning method for blast furnace gas forecasting based on coupling correlation analysis and inverted transformer
Authors: Sheng Xie, Jing-shu Zhang, Da-tao Shi, Yang Guo, Qi Zhang. Journal of Iron and Steel Research International, 2025, Issue 10, pp. 3280-3297.
Accurate forecasting of blast furnace gas (BFG) production is an essential prerequisite for reasonable energy scheduling and management to reduce carbon emissions. The coupling forecasting of BFG generation and consumption dynamics was taken as the research object. A multi-task learning (MTL) method for BFG forecasting was proposed, which integrated a coupling correlation coefficient (CCC) and an inverted transformer structure. The CCC method could enhance key information extraction by establishing relationships between multiple prediction targets and relevant factors, while MTL effectively captured the inherent correlations between BFG generation and consumption. Finally, a real-world case study was conducted to compare the proposed model with four benchmark models. Results indicated significant reductions in average mean absolute percentage error by 33.37%, achieving 1.92%, with a computational time of 76 s. The sensitivity analysis of hyperparameters such as learning rate, batch size, and units of the long short-term memory layer highlights the importance of hyperparameter tuning.
Keywords: byproduct gases forecasting; coupling correlation coefficient; multi-task learning; inverted transformer; bi-directional long short-term memory; blast furnace gas
16. Effective distributed convolutional neural network architecture for remote sensing images target classification with a pre-training approach (Cited by: 3)
Authors: LI Binquan, HU Xiaohui. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2019, Issue 2, pp. 238-244.
How to recognize targets with similar appearances from remote sensing images (RSIs) effectively and efficiently has become a big challenge. Recently, the convolutional neural network (CNN) has been preferred for target classification due to its powerful feature representation ability and better performance. However, the training and testing of a CNN mainly rely on a single machine, which has natural limitations and bottlenecks in processing RSIs due to limited hardware resources and huge time consumption. Besides, overfitting is a challenge for the CNN model due to the imbalance between RSI data and the model structure. When a model is complex or the training data are relatively small, overfitting occurs and leads to poor predictive performance. To address these problems, a distributed CNN architecture for RSI target classification is proposed, which dramatically increases the training speed of the CNN and the system scalability. It improves the storage ability and processing efficiency of RSIs. Furthermore, a Bayesian regularization approach is utilized to initialize the weights of the CNN extractor, which increases the robustness and flexibility of the CNN model. It helps prevent overfitting and avoid the local optima caused by limited RSI training images or an inappropriate CNN structure. In addition, considering the efficiency of the Naïve Bayes classifier, a distributed Naïve Bayes classifier is designed to reduce the training cost. Compared with other algorithms, the proposed system and method perform the best and increase the recognition accuracy. The results show that the distributed system framework and the proposed algorithms are suitable for RSI target classification tasks.
Keywords: convolutional neural network (CNN); distributed architecture; remote sensing images (RSIs); target classification; pre-training
17. Knowledge Enhanced Pre-Training Model for Vision-Language-Navigation Task (Cited by: 1)
Authors: HUANG Jitao, ZENG Guohui, HUANG Bo, GAO Yongbin, LIU Jin, SHI Zhicai. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2021, Issue 2, pp. 147-155.
The Vision-Language-Navigation (VLN) task is a cross-modality task that combines natural language processing and computer vision. This task requires the agent to automatically move to the destination according to the natural language instruction and the observed surrounding visual information. To make the best decision at every step during navigation, the agent should pay more attention to understanding the objects, the object attributes, and the object relationships. However, most current methods process all received textual and visual information equally. Therefore, this paper integrates more detailed semantic connections between visual and textual information through three pre-training tasks (object prediction, object attribute prediction, and object relationship prediction). The model learns better fusion representation and alignment between these two types of information to improve the success rate (SR) and generalization. The experiments show that, compared with the former baseline models, the SR on the unseen validation set (Val Unseen) increased by 7%, and the SR weighted by path length (SPL) increased by 7%; the SR on the test set (Test) increased by 4%, and the SPL increased by 3%.
Keywords: pre-training; cross-modality; deep learning; scene graph
18. Pre-training Assessment Through the Web
Authors: Kenneth Wong, Reggie Kwan, Jimmy SF Chan. 厦门大学学报(自然科学版) (Journal of Xiamen University, Natural Science) (CAS, CSCD, PKU Core), 2002, Issue S1, p. 297.
Web-based training is growing quickly in popularity among professionals in industrial organizations and large enterprises. The savings in cost and time are significant. Instructor-led trainings are bounded by time and place, not to mention the cost involved in traveling, accommodation, and the training venue. However, in most online training courses, all trainees are given the same training materials and teaching paradigms. The problem of differentiating the trainees' abilities is the main concern. We need a pre-training test to identify and classify the weaknesses and strengths of different trainees so as to devise appropriate training programs for them. Adapting a Web-based Computer Adaptive Test (CAT) for the pre-training test makes web-based training more efficient. The advantages of CAT are self-pacing, efficiency, time and cost saving, immediate scoring and feedback, accuracy, and security, etc. (Rudner, 1998; UMN, 1999; Novell, 2000; Linacre, 2000; Windowsglore, 2000). Moreover, a Web-based CAT also gives greater flexibility and convenience. This paper describes how this CAT tool is built, how it helps instructors identify the strengths and weaknesses of trainees, and how to assure quality in the CAT system.
Keywords: CAT; test; pre-training assessment through the Web
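A self-contained sketch of the core CAT loop: pick the unanswered item with the highest Fisher information at the current ability estimate, then update the estimate from the response. The Rasch (1PL) model, the item bank, and the fixed-step ability update are simplifying assumptions (a real CAT would use maximum-likelihood or Bayesian updates); this is not the paper's system.

```python
import math
import random

def prob_correct(theta, b):
    """Rasch (1PL) model: probability that a trainee of ability theta answers an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, remaining):
    # Choose the item maximizing Fisher information p(1-p) at the current ability estimate.
    return max(remaining, key=lambda b: (lambda p: p * (1 - p))(prob_correct(theta, b)))

random.seed(1)
item_bank = [random.uniform(-2, 2) for _ in range(30)]   # hypothetical item difficulties
true_ability = 0.8
theta, remaining = 0.0, list(item_bank)

for _ in range(10):                                      # fixed-length 10-item adaptive test
    b = next_item(theta, remaining)
    remaining.remove(b)
    answered_correctly = random.random() < prob_correct(true_ability, b)
    # Crude ability update: step up on a correct answer, down on an incorrect one.
    theta += 0.4 if answered_correctly else -0.4

print("estimated ability:", round(theta, 2))
```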
19. Fu-Rec: Multi-Task Learning Recommendation Model Fusing Neighbor-Discrimination and Self-Discrimination
Authors: ZHENG Sirui, HUANG Bo, LIU Jin, ZENG Guohui, YIN Ling, LI Zhi, SUN Tie. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2024, Issue 2, pp. 134-144.
In recent years, self-supervised learning has achieved great success in areas such as computer vision and natural language processing because it can mine supervised signals from unlabeled data and reduce the reliance on manual labels. However, the currently generated self-supervised signals are either neighbor discrimination or self-discrimination, and there is no model that integrates the two. Based on this, this paper proposes Fu-Rec, which integrates neighbor-discrimination contrastive learning and self-discrimination contrastive learning and consists of three modules: (1) neighbor-discrimination contrastive learning, (2) self-discrimination contrastive learning, and (3) a recommendation module. The neighbor-discrimination and self-discrimination contrastive learning tasks are used as auxiliary tasks to assist the recommendation task. The Fu-Rec model effectively utilizes the respective advantages of neighbor-discrimination and self-discrimination to consider the information of the user's neighbors as well as the user and the item itself for the recommendation, which results in better performance of the recommendation module. Experimental results on several public datasets demonstrate the effectiveness of the Fu-Rec model proposed in this paper.
Keywords: self-supervised learning; recommendation system; contrastive learning; multi-task learning
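A minimal sketch of the InfoNCE-style contrastive objective commonly used as the auxiliary self-supervised signal in such recommenders: each anchor embedding should match its own positive view against the other positives in the batch. The placeholder embeddings and temperature are assumptions; Fu-Rec's specific neighbor/self view construction is not reproduced.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.2):
    """InfoNCE loss with in-batch negatives: the diagonal pairs are the true matches."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                   # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0))                   # index i matches positive i
    return F.cross_entropy(logits, labels)

# Placeholder embeddings: e.g. a user representation and a neighbor- or self-augmented view of it.
user_view = torch.rand(16, 64)
augmented_view = user_view + 0.05 * torch.randn(16, 64)

loss = info_nce(user_view, augmented_view)   # add this as an auxiliary term to the recommendation loss
print(float(loss))
```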
20. MDTCNet: Multi-Task Classifications Network and TCNN for Direction of Arrival Estimation
Authors: Yu Jiarun, Wang Yafeng. China Communications (SCIE, CSCD), 2024, Issue 10, pp. 148-166.
Direction-of-arrival (DoA) estimation is one of the hot research areas in signal processing. To overcome the DoA estimation challenge without prior information about the number of signal sources and the number of multipaths in a millimeter wave system, the multi-task deep residual shrinkage network (MTDRSN) and transfer learning-based convolutional neural network (TCNN), together named MDTCNet, are proposed. The sampling covariance matrix based on the received signal is used as the input to the proposed network. A DRSN-based multi-task classifications model is first introduced to estimate the number of signal sources and the number of multipaths simultaneously. Then, the DoAs with multi-signal and multipath are estimated by the regression model. The proposed CNN is applied for DoA estimation with the predicted number of signal sources and paths. Furthermore, model-based transfer learning is also introduced into the regression model. The TCNN inherits part of the network parameters of the already formed optimization model obtained by the CNN. A series of experimental results show that the MDTCNet-based DoA estimation method can accurately predict the number of signal sources and multipaths under a range of signal-to-noise ratios. Remarkably, the proposed method achieves a lower root mean square error compared with some existing deep learning-based and traditional methods.
Keywords: DoA estimation; MDTCNet; millimeter wave system; multi-task classifications model; regression model
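A small NumPy sketch of the network input described in the abstract: simulate narrowband uniform-linear-array snapshots, form the sample covariance matrix, and stack its real and imaginary parts as channels for a CNN. The array geometry, source angles, and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_antennas, n_snapshots, wavelength, spacing = 8, 200, 1.0, 0.5
angles_deg = np.array([-20.0, 35.0])                    # hypothetical source directions

# Narrowband uniform linear array model: x(t) = A s(t) + n(t).
steering = np.exp(-2j * np.pi * spacing / wavelength *
                  np.outer(np.arange(n_antennas), np.sin(np.deg2rad(angles_deg))))
signals = rng.normal(size=(2, n_snapshots)) + 1j * rng.normal(size=(2, n_snapshots))
noise = 0.1 * (rng.normal(size=(n_antennas, n_snapshots)) +
               1j * rng.normal(size=(n_antennas, n_snapshots)))
snapshots = steering @ signals + noise

# Sample covariance matrix; real and imaginary parts are stacked as CNN input channels.
R = snapshots @ snapshots.conj().T / n_snapshots
network_input = np.stack([R.real, R.imag])
print(network_input.shape)   # (2, 8, 8)
```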