Journal Articles
82 articles found
1. Enhancing Multi-Class Cyberbullying Classification with Hybrid Feature Extraction and Transformer-Based Models
Authors: Suliman Mohamed Fati, Mohammed A. Mahdi, Mohamed A. G. Hazber, Shahanawaj Ahamad, Sawsan A. Saad, Mohammed Gamal Ragab, Mohammed Al-Shalabi. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 2109-2131 (23 pages)
Cyberbullying on social media poses significant psychological risks, yet most detection systems over-simplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
Keywords: cyberbullying classification; multi-class classification; BERT models; machine learning; TF-IDF; Word2Vec; social media analysis; transformer models
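The lexical half of the hybrid pipeline above is straightforward to illustrate. Below is a minimal sketch of smoothed TF-IDF features fused by concatenation with dense embeddings; the toy corpus and the 3-d dense vectors standing in for Word2Vec/BERT outputs are invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Smoothed TF-IDF over a whitespace-tokenized toy corpus."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(docs)
    df = Counter(w for toks in tokenized for w in set(toks))
    # Smoothed inverse document frequency, as in common implementations.
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in vocab}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, vecs

def fuse(lexical_vec, dense_vec):
    """Feature-level fusion by simple concatenation."""
    return list(lexical_vec) + list(dense_vec)

docs = ["you are so dumb", "you are so kind"]
vocab, vecs = tfidf_vectors(docs)
# Hypothetical 3-d dense embeddings standing in for Word2Vec/BERT outputs.
dense = {0: [0.1, -0.2, 0.3], 1: [0.0, 0.4, -0.1]}
fused = [fuse(vecs[i], dense[i]) for i in range(len(docs))]
```

Words shared across documents (low IDF) are down-weighted relative to discriminative words like "dumb", which is the lexical-specificity property the abstract attributes to TF-IDF.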
2. Combining transformer and 3DCNN models to achieve co-design of structures and sequences of antibodies in a diffusional manner
Authors: Yue Hu, Feng Tao, Jiajie Xu, Wen-Jun Lan, Jing Zhang, Wei Lan. Journal of Pharmaceutical Analysis, 2025, Issue 6, pp. 1406-1408 (3 pages)
AlphaPanda (AlphaFold2 [1]-inspired protein-specific antibody design in a diffusional manner) is an advanced algorithm for designing the complementarity-determining regions (CDRs) of an antibody targeting a specific epitope, combining transformer [2] models, 3DCNN [3], and diffusion [4] generative models.
Keywords: advanced algorithm; diffusion generative models; 3DCNN; epitope targeting; antibody design; complementarity-determining regions (CDRs); transformer models
3. Millimeter-wave modeling based on transformer model for InP high electron mobility transistor
Authors: ZHANG Ya-Xue, ZHANG Ao, GAO Jian-Jun. Journal of Infrared and Millimeter Waves (《红外与毫米波学报》, PKU Core), 2025, Issue 4, pp. 534-539 (6 pages)
In this paper, the small-signal modeling of the Indium Phosphide High Electron Mobility Transistor (InP HEMT) based on the Transformer neural network model is investigated. The AC S-parameters of the HEMT device are trained and validated using the Transformer model. In the proposed model, eight transformer encoders are connected in series, and each encoder layer consists of a multi-head attention layer and a feed-forward neural network layer. The experimental results show that the measured and modeled S-parameters of the HEMT device match well in the frequency range of 0.5-40 GHz, with errors of less than 1% across frequency. Compared with other models, the proposed model achieves good accuracy, verifying its effectiveness.
Keywords: transformer model; neural network; high electron mobility transistor (HEMT); small-signal model
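The stacked-encoder idea above is easy to sketch. The following toy NumPy implementation stacks eight simplified encoder blocks (single-head self-attention plus a feed-forward layer, each with a residual connection; layer normalization and the paper's multi-head setup are omitted for brevity), with the sequence axis standing in for frequency points; all shapes and weights are illustrative, not the paper's.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encoder_layer(x, wq, wk, wv, w1, w2):
    """One simplified encoder block: single-head self-attention + FFN,
    each followed by a residual add (layer norm omitted)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    att = softmax(q @ k.T / np.sqrt(x.shape[-1])) @ v
    x = x + att                                   # attention sub-layer
    return x + np.maximum(x @ w1, 0.0) @ w2       # ReLU feed-forward sub-layer

rng = np.random.default_rng(0)
d = 16
seq = rng.normal(size=(10, d))      # e.g. 10 frequency points, d features each
weights = [tuple(rng.normal(scale=0.1, size=(d, d)) for _ in range(5))
           for _ in range(8)]       # eight encoder blocks connected in series
out = seq
for w in weights:
    out = encoder_layer(out, *w)
```

Each block preserves the sequence shape, which is what allows the eight encoders to be chained in series as the abstract describes.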
4. Mobility-Aware Edge Caching with Transformer-DQN in D2D-Enabled Heterogeneous Networks
Authors: Yiming Guo, Hongyu Ma. Computers, Materials & Continua, 2025, Issue 11, pp. 3485-3505 (21 pages)
In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention mechanisms to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms other baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared to DQN methods without mobility integration, thus validating the effectiveness of the joint optimization of location prediction and caching decisions.
Keywords: mobile edge caching; D2D; heterogeneous networks; deep reinforcement learning; transformer model; transmission delay optimization
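The DQN caching policy above regresses toward the standard Q-learning target. Here is a tabular sketch of that update rule; the toy states, actions, and rewards are invented for illustration (the paper uses a neural approximator over predicted-location states, not a table).

```python
def td_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update toward the Q-learning target
    r + gamma * max_a' Q(s', a') -- the same target a DQN regresses toward."""
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    return Q[state][action]

# Toy setup: 2 states (user predicted near SBS 0 or SBS 1),
# 2 actions (cache the requested item at SBS 0 or SBS 1).
Q = {0: [0.0, 0.0], 1: [0.0, 0.0]}
# Hypothetical reward: caching at the SBS the user is moving toward
# earns 1.0 (lower retrieval delay).
for _ in range(100):
    td_update(Q, state=0, action=1, reward=1.0, next_state=1)
    td_update(Q, state=1, action=0, reward=1.0, next_state=0)
```

After training, the table prefers the mobility-aware caching action in each state, mirroring how predicted locations steer the DQN's content placement.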
5. High-precision copper-grade identification via a vision transformer with PGNAA
Authors: Jie Cao, Chong-Gui Zhong, Han-Ting You, Yan Zhang, Ren-Bo Wang, Shu-Min Zhou, Jin-Hui Qu, Rui Chen, Shi-Liang Liu. Nuclear Science and Techniques, 2025, Issue 7, pp. 89-99 (11 pages)
The identification of ore grades is a critical step in mineral resource exploration and mining. Prompt gamma neutron activation analysis (PGNAA) technology employs gamma rays generated by the nuclear reactions between neutrons and samples to achieve the qualitative and quantitative detection of sample components. In this study, we present a novel method for identifying copper grade by combining the vision transformer (ViT) model with the PGNAA technique. First, a Monte Carlo simulation is employed to determine the optimal sizes of the neutron moderator, thermal neutron absorption material, and dimensions of the device. Subsequently, based on the parameters obtained through optimization, a PGNAA copper ore measurement model is established. The gamma spectrum of the copper ore is analyzed using the ViT model, whose hyperparameters are optimized using a grid search. To ensure the reliability of the identification results, the test results are obtained through five repeated tenfold cross-validations. Long short-term memory and convolutional neural network models are compared with the ViT method. These results indicate that the ViT method is efficient in identifying copper ore grades, with average accuracy, precision, recall, F1 score, and F1(-) score values of 0.9795, 0.9637, 0.9614, 0.9625, and 0.9942, respectively. When identifying associated minerals, the ViT model can identify Pb, Zn, Fe, and Co minerals with identification accuracies of 0.9215, 0.9396, 0.9966, and 0.8311, respectively.
Keywords: copper-grade identification; vision transformer model; prompt gamma neutron activation analysis; Monte Carlo N-Particle
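The "five repeated tenfold cross-validations" protocol above can be sketched as index generation alone, independent of any model; a minimal stdlib version, with sample count chosen for illustration:

```python
import random

def repeated_kfold(n_samples, k=10, repeats=5, seed=0):
    """Yield (train_idx, test_idx) splits for repeated k-fold CV,
    reshuffling the indices before each repeat."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]   # k disjoint folds
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

# 5 repeats x 10 folds = 50 train/test splits over 100 samples.
splits = list(repeated_kfold(100, k=10, repeats=5))
```

Averaging a metric over all 50 splits is what makes the reported scores (e.g. 0.9795 accuracy) robust to any single lucky partition.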
6. Retinexformer+: Retinex-Based Dual-Channel Transformer for Low-Light Image Enhancement
Authors: Song Liu, Hongying Zhang, Xue Li, Xi Yang. Computers, Materials & Continua, 2025, Issue 2, pp. 1969-1984 (16 pages)
Enhancing low-light images with color distortion and uneven multi-light source distribution presents challenges. Most advanced methods for low-light image enhancement are based on the Retinex model using deep learning. Retinexformer introduces channel self-attention mechanisms in the IG-MSA. However, it fails to effectively capture long-range spatial dependencies, leaving room for improvement. Based on the Retinexformer deep learning framework, we designed the Retinexformer+ network. The "+" signifies our advancements in extracting long-range spatial dependencies. We introduced multi-scale dilated convolutions in illumination estimation to expand the receptive field. These convolutions effectively capture the weakening semantic dependency between pixels as distance increases. In illumination restoration, we used Unet++ with multi-level skip connections to better integrate semantic information at different scales. The designed Illumination Fusion Dual Self-Attention (IF-DSA) module embeds multi-scale dilated convolutions to achieve spatial self-attention. This module captures long-range spatial semantic relationships within acceptable computational complexity. Experimental results on the Low-Light (LOL) dataset show that Retinexformer+ outperforms other state-of-the-art (SOTA) methods in both quantitative and qualitative evaluations, with the computational complexity increased to an acceptable 51.63 GFLOPS. On the LOL_v1 dataset, Retinexformer+ shows an increase of 1.15 in Peak Signal-to-Noise Ratio (PSNR) and a decrease of 0.39 in Root Mean Square Error (RMSE). On the LOL_v2_real dataset, the PSNR increases by 0.42 and the RMSE decreases by 0.18. Experimental results on the Exdark dataset show that Retinexformer+ can effectively enhance real-scene images and maintain their semantic information.
Keywords: low-light image enhancement; Retinex; transformer model
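The receptive-field benefit of the multi-scale dilated convolutions described above follows from a simple formula: for stride-1 layers, rf = 1 + Σ (k−1)·d over the stack. A self-contained sketch (the kernel sizes and dilation rates are illustrative, not the paper's configuration):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 dilated convolutions:
    rf = 1 + sum((k - 1) * d) over the layers."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with `dilation - 1` holes between taps."""
    span = (len(kernel) - 1) * dilation
    return [sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(x) - span)]

# Three 3-tap layers with dilations 1, 2, 4 already see 15 input pixels,
# versus 7 for the same stack without dilation.
rf = receptive_field([3, 3, 3], [1, 2, 4])
```

This quantifies how dilation widens context without extra parameters, which is the stated motivation for using it in illumination estimation.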
7. Generating Abstractive Summaries from Social Media Discussions Using Transformers
Authors: Afrodite Papagiannopoulou, Chrissanthi Angeli, Mazida Ahmad. Open Journal of Applied Sciences, 2025, Issue 1, pp. 239-258 (20 pages)
The rise of social media platforms has revolutionized communication, enabling the exchange of vast amounts of data through text, audio, images, and videos. These platforms have become critical for sharing opinions and insights, influencing daily habits, and driving business, political, and economic decisions. Text posts are particularly significant, and natural language processing (NLP) has emerged as a powerful tool for analyzing such data. While traditional NLP methods have been effective for structured media, social media content poses unique challenges due to its informal and diverse nature. This has spurred the development of new techniques tailored for processing and extracting insights from unstructured user-generated text. One key application of NLP is the summarization of user comments to manage overwhelming content volumes. Abstractive summarization has proven highly effective in generating concise, human-like summaries, offering clear overviews of key themes and sentiments. This enhances understanding and engagement while reducing cognitive effort for users. For businesses, summarization provides actionable insights into customer preferences and feedback, enabling faster trend analysis, improved responsiveness, and strategic adaptability. By distilling complex data into manageable insights, summarization plays a vital role in improving user experiences and empowering informed decision-making in a data-driven landscape. This paper proposes a new implementation framework that fine-tunes and parameterizes Transformer large language models to manage and maintain linguistic and semantic components in abstractive summary generation. The system excels in transforming large volumes of data into meaningful summaries, as evidenced by its strong performance across metrics like fluency, consistency, readability, and semantic coherence.
Keywords: abstractive summarization; transformers; social media summarization; transformer language models
8. Multi-Label Movie Genre Classification with Attention Mechanism on Movie Plots
Authors: Faheem Shaukat, Naveed Ejaz, Rashid Kamal, Tamim Alkhalifah, Sheraz Aslam, Mu Mu. Computers, Materials & Continua, 2025, Issue 6, pp. 5595-5622 (28 pages)
Automated and accurate movie genre classification is crucial for content organization, recommendation systems, and audience targeting in the film industry. Although most existing approaches focus on audiovisual features such as trailers and posters, text-based classification remains underexplored despite its accessibility and semantic richness. This paper introduces the Genre Attention Model (GAM), a deep learning architecture that integrates transformer models with a hierarchical attention mechanism to extract and leverage contextual information from movie plots for multi-label genre classification. To assess its effectiveness, we evaluate multiple transformer-based models, including Bidirectional Encoder Representations from Transformers (BERT), A Lite BERT (ALBERT), Distilled BERT (DistilBERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA), XLNet, and Decoding-enhanced BERT with Disentangled Attention (DeBERTa). Experimental results demonstrate the superior performance of the DeBERTa-based GAM, which employs a two-tier hierarchical attention mechanism: word-level attention highlights key terms, while sentence-level attention captures critical narrative segments, ensuring a refined and interpretable representation of movie plots. Evaluated on three benchmark datasets, Trailers12K, Large Movie Trailer Dataset-9 (LMTD-9), and MovieLens37K, GAM achieves micro-average precision scores of 83.63%, 83.32%, and 83.34%, respectively, surpassing state-of-the-art models. Additionally, GAM is computationally efficient, requiring just 6.10 Giga Floating Point Operations Per Second (GFLOPS), making it a scalable and cost-effective solution. These results highlight the growing potential of text-based deep learning models in genre classification and GAM's effectiveness in improving predictive accuracy while maintaining computational efficiency. With its robust performance, GAM offers a versatile and scalable framework for content recommendation, film indexing, and media analytics, providing an interpretable alternative to traditional audiovisual-based classification techniques.
Keywords: multi-label classification; artificial intelligence; movie genre classification; hierarchical attention mechanisms; natural language processing; content recommendation; text-based genre classification; explainable AI; transformer models; BERT
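The two-tier attention described above can be sketched as attention pooling applied twice: once over word vectors within each sentence, then over the resulting sentence vectors. A minimal NumPy illustration with random stand-in embeddings (the dimensions and scoring vectors are invented; GAM learns these jointly with DeBERTa):

```python
import numpy as np

def attention_pool(vectors, w):
    """Soft attention pooling: score each vector against w,
    softmax the scores, return the weighted sum."""
    scores = vectors @ w
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ vectors

def hierarchical_pool(doc, w_word, w_sent):
    """Word-level attention inside each sentence, then sentence-level
    attention over the pooled sentence vectors."""
    sent_vecs = np.stack([attention_pool(s, w_word) for s in doc])
    return attention_pool(sent_vecs, w_sent)

rng = np.random.default_rng(1)
d = 8
# A toy "plot" of two sentences with 5 and 7 word embeddings each.
doc = [rng.normal(size=(5, d)), rng.normal(size=(7, d))]
plot_vec = hierarchical_pool(doc, rng.normal(size=d), rng.normal(size=d))
```

The resulting single plot vector is what a multi-label classifier head would consume; the two softmax weight sets are also what makes such models interpretable (which words, which sentences).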
9. CT-NET: A Novel Convolutional Transformer-Based Network for Short-Term Solar Energy Forecasting Using Climatic Information (cited 1 time)
Authors: Muhammad Munsif, Fath U Min Ullah, Samee Ullah Khan, Noman Khan, Sung Wook Baik. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1751-1773 (23 pages)
Photovoltaic (PV) systems are environmentally friendly, generate green energy, and receive support from policies and organizations. However, weather fluctuations make large-scale PV power integration and management challenging despite the economic benefits. Existing PV forecasting techniques (sequential and convolutional neural networks (CNNs)) are sensitive to environmental conditions, reducing energy distribution system performance. To handle these issues, this article proposes an efficient, weather-resilient convolutional-transformer-based network (CT-NET) for accurate and efficient PV power forecasting. The network consists of three main modules. First, the acquired PV generation data are forwarded to the pre-processing module for data refinement. Next, to carry out data encoding, a CNN-based multi-head attention (MHA) module is developed in which a single MHA is used to decode the encoded data. The encoder module is mainly composed of 1D convolutional and MHA layers, which extract local as well as contextual features, while the decoder part includes MHA and feedforward layers to generate the final prediction. Finally, the performance of the proposed network is evaluated using standard error metrics, including the mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). An ablation study and comparative analysis with several competitive state-of-the-art approaches revealed a lower error rate in terms of MSE (0.0471), RMSE (0.2167), and MAPE (0.6135) over publicly available benchmark data. In addition, it is demonstrated that our proposed model is less complex, with the lowest number of parameters (0.0135 M), size (0.106 MB), and inference time (2 ms/step), suggesting that it is easy to integrate into the smart grid.
Keywords: solar energy forecasting; renewable energy systems; photovoltaic generation forecasting; time series data; transformer models; deep learning; machine learning
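The three error metrics used for evaluation above are standard; for reference, minimal implementations (the sample values are illustrative):

```python
import math

def mse(y, yhat):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error: sqrt of MSE, in the units of y."""
    return math.sqrt(mse(y, yhat))

def mape(y, yhat):
    """Mean absolute percentage error; assumes no true value is zero,
    which in practice means filtering out zero-generation (night) samples."""
    return sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y) * 100

y, yhat = [2.0, 4.0], [1.0, 5.0]
```

MAPE's division by the true value is why PV forecasting papers often report it alongside MSE/RMSE: it is scale-free across plants of different capacities.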
10. Remote sensing image semantic segmentation algorithm based on improved DeepLabv3+
Authors: SONG Xirui, GE Hongwei, LI Ting. Journal of Measurement Science and Instrumentation, 2025, Issue 2, pp. 205-215 (11 pages)
The convolutional neural network (CNN) method based on DeepLabv3+ has some problems in the semantic segmentation of high-resolution remote sensing images, such as a fixed receptive field for feature extraction, lack of semantic information, high decoder magnification, and insufficient detail retention ability. A hierarchical feature fusion network (HFFNet) was proposed. Firstly, a combination of transformer and CNN architectures was employed for feature extraction from images of varying resolutions, and the extracted features were processed independently. Subsequently, the features from the transformer and CNN were fused under the guidance of features from different sources. This fusion process assisted in restoring information more comprehensively during the decoding stage. Furthermore, a spatial channel attention module was designed in the final stage of decoding to refine features and reduce the semantic gap between shallow CNN features and deep decoder features. The experimental results showed that HFFNet had superior performance on the UAVid, LoveDA, Potsdam, and Vaihingen datasets, and its cross-linking index was better than DeepLabv3+ and other competing methods, showing strong generalization ability.
Keywords: semantic segmentation; high-resolution remote sensing image; deep learning; transformer model; attention mechanism; feature fusion; encoder; decoder
11. Multilingual Virtual Healthcare Assistant
Authors: Geetika Munjal, Piyush Agarwal, Lakshay Goyal, Nandy Samiran. Health Care Science, 2025, Issue 4, pp. 281-288 (8 pages)
This study proposes a virtual healthcare assistant framework designed to provide support in multiple languages for efficient and accurate healthcare assistance. The system employs a transformer model to process sophisticated, multilingual user inputs and gain improved contextual understanding compared to conventional models, including long short-term memory (LSTM) models. In contrast to LSTMs, which process information sequentially and may experience challenges with long-range dependencies, transformers utilize self-attention to learn relationships among every aspect of the input in parallel. This enables them to execute more accurately in various languages and contexts, making them well-suited for applications such as translation, summarization, and conversational tasks. Comparative evaluations revealed the superiority of the transformer model (accuracy rate: 85%) compared with that of the LSTM model (accuracy rate: 65%). The experiments revealed several advantages of the transformer architecture over the LSTM model, such as more effective self-attention, parallel processing, and contextual understanding for better multilingual compatibility. Additionally, our prediction model exhibited effectiveness for disease diagnosis, with accuracy of 85% or greater in identifying the relationship between symptoms and diseases among different demographics. The system provides translation support from English to other languages, with English to French scoring highest (Bilingual Evaluation Understudy score: 0.7), followed by English to Hindi (0.6). The lowest Bilingual Evaluation Understudy score was found for English to Telugu (0.39). This virtual assistant can also perform symptom analysis and disease prediction, with output given in the preferred language of the user.
Keywords: BLEU score; encoder-only transformer model; healthcare chatbot; LSTM; NLP; virtual healthcare
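BLEU, the translation-quality metric reported above, combines n-gram precision with a brevity penalty. A simplified unigram-only sketch (real reported BLEU normally uses up to 4-grams at corpus level, so the numbers here are only illustrative of the mechanics):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU with brevity penalty: clipped unigram precision
    multiplied by exp(1 - r/c) when the candidate is shorter than
    the reference."""
    cand, ref = candidate.split(), reference.split()
    overlap = Counter(cand) & Counter(ref)          # clipped counts
    precision = sum(overlap.values()) / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Toy example with invented sentences: perfect word overlap,
# but the candidate is one word short, so the brevity penalty bites.
score = bleu1("the patient has fever", "the patient has a fever")
```

The brevity penalty is why a translation that merely drops words cannot score 1.0 even with perfect precision, which matters for the per-language gaps (0.7 French vs. 0.39 Telugu) the abstract reports.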
12. From a Study on Translation Strategies for Culture-Loaded Words of ZIZHITONGJIAN from the Perspective of Eco-translatology
Authors: Junyi Zhu. Journal of Contemporary Educational Research, 2025, Issue 8, pp. 293-300 (8 pages)
ZIZHITONGJIAN is a key historical work that reflects not only political events but also many culture-loaded expressions rooted in traditional Chinese life. These expressions, including official titles, ritual words, and historical references, carry strong cultural meaning that is hard to translate, and such words are often described as culture-loaded words. Previous research on ZIZHITONGJIAN has offered valuable insights into its translation, focusing on general strategies, historical context, or selected passages. However, these discussions often remain broad in scope, lacking systematic comparison across different types of English editions. This study uses Hu Gengshen's eco-translatology theory to explore how these culture-loaded words are handled in three kinds of English editions through classical examples. By applying eco-translatology, this study identifies common translation issues across different English editions and offers a methodological reference for future research on classical Chinese texts, especially in handling culture-loaded words with greater cultural and communicative sensitivity.
Keywords: eco-translatology; culture-loaded words; ZIZHITONGJIAN; translation strategy; three-dimensional transformation model
13. Impacts of near-M_s austempering treatment on microstructure evolution and bainitic transformation kinetics of a medium Mn steel
Authors: Yong-gang Yang, Xin-yue Liu, Rui-zhi Li, Yu-lai Chen, Hong-xiang Wu, Guo-min Sun, Zhen-li Mi. Journal of Iron and Steel Research International, 2025, Issue 1, pp. 249-259 (11 pages)
The microstructure evolution and bainitic transformation of an Fe-0.19C-4.03Mn-1.48Si steel subjected to near-M_s austempering treatment were systematically investigated by combining dilatometry, X-ray diffraction, and electron microscopy. Three additional austempering treatments with isothermal temperatures above M_s were used as benchmarks. Results show that an incubation period for the bainitic transformation occurs when the medium Mn steel is treated at austempering temperatures above M_s. However, when subjected to near-M_s isothermal treatment, the medium Mn steel does not show an incubation period and has the fastest bainitic transformation rate. Moreover, the largest volume fraction of bainite, 74.7%, is obtained under the near-M_s austempering treatment after cooling to room temperature. Dilatometry and microstructure evolution analysis indicate that the elimination of the incubation period and the fastest rate of bainitic transformation are related to the preformed martensite, whose presence allows the specimen to generate more bainite in a limited time. Considering bainitic ferrite nucleation at austenite grain boundaries and through autocatalysis at ferrite/austenite interfaces, a model is established to understand the kinetics of bainite formation, and it describes the nucleation rate of bainitic transformation well when compared to the experimental results.
Keywords: medium manganese steel; bainitic transformation; microstructure; near-M_s austempering; transformation modeling
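Isothermal transformation kinetics of this kind are often summarized with a Johnson-Mehl-Avrami-Kolmogorov (JMAK) curve. Note this generic form is a textbook stand-in, not the paper's autocatalytic-nucleation model, and the rate constants below are purely illustrative; only the 74.7% saturation fraction comes from the abstract.

```python
import math

def bainite_fraction(t, k=0.05, n=2.0, f_max=0.747):
    """JMAK-style transformed fraction f(t) = f_max * (1 - exp(-k * t^n)).
    f_max = 0.747 matches the largest bainite volume fraction reported for
    the near-M_s treatment; k and n are illustrative fitting constants."""
    return f_max * (1.0 - math.exp(-k * t ** n))
```

Fitting k and n to dilatometry data is the usual way such curves are compared with experiment; a suppressed incubation period (as reported for the near-M_s case) shows up as transformation starting immediately at t > 0 rather than after a delay.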
14. Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, Issue 1, pp. 96-106 (11 pages)
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs. However, methods that focus on learning improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may only achieve limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for heuristics to iteratively solve the TSP. Our framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the 2-opt algorithm's performance. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. The experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods, and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, our pre-trained model M can be applied to guide the SA algorithm (named M-SA (ours)), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA (ours) achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
Keywords: traveling salesman problem; deep reinforcement learning; simulated annealing algorithm; transformer model; whale optimization algorithm
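The 2-opt-with-simulated-annealing core that RL-SA builds on can be sketched without any learning component: reverse a random tour segment, and accept worsening moves with probability exp(−Δ/T). The toy instance (four cities on a unit square) and the schedule parameters are illustrative; in RL-SA the segment choice would come from the learned policy rather than uniform sampling.

```python
import math
import random

def tour_length(tour, dist):
    """Total cycle length of a tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt_sa(tour, dist, temp=1.0, cooling=0.995, steps=2000, seed=0):
    """2-opt local search with a simulated-annealing acceptance rule:
    improving moves are always taken, worsening moves with prob exp(-d/T)."""
    rng = random.Random(seed)
    cur, cur_len = list(tour), tour_length(tour, dist)
    best, best_len = list(cur), cur_len
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]   # 2-opt reversal
        delta = tour_length(cand, dist) - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = list(cur), cur_len
        temp *= cooling
    return best, best_len

# Four cities at the corners of a unit square; the optimal tour length is 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best, best_len = two_opt_sa([0, 2, 1, 3], dist)
```

Tracking the best-so-far tour separately from the current one is what lets the annealer escape local optima without ever returning a worse answer than it has seen.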
15. UAF-based integration of design and simulation model for system-of-systems
Authors: FENG Yimin, GE Ping, SHAO Yanli, ZOU Qiang, LIU Yusheng. Journal of Systems Engineering and Electronics, 2025, Issue 1, pp. 108-126 (19 pages)
Model-based system-of-systems (SoS) engineering (MBSoSE) is becoming a promising solution for the design of SoS with increasing complexity. However, bridging the models from the design phase to the simulation phase poses significant challenges and requires an integrated approach. In this study, a unified requirement modeling approach is proposed based on the unified architecture framework (UAF). Theoretical models are proposed which compose formalized descriptions from both top-down and bottom-up perspectives. Based on the description, the UAF profile is proposed to represent the SoS mission and constituent systems (CS) goals. Moreover, the agent-based simulation information is also described based on the overview, design concepts, and details (ODD) protocol as the complementary part of the SoS profile, which can be transformed into different simulation platforms based on eXtensible Markup Language (XML) technology and the model-to-text method. In this way, the design of the SoS is simulated automatically in the early design stage. Finally, the method is implemented and an example is given to illustrate the whole process.
Keywords: model-based systems engineering; unified architecture framework (UAF); system-of-systems engineering; model transformation; simulation
16. Micro-expression recognition algorithm based on graph convolutional network and Transformer model (cited 1 time)
作者 吴进 PANG Wenting +1 位作者 WANG Lei ZHAO Bo 《High Technology Letters》 EI CAS 2023年第2期213-222,共10页
Micro-expressions are spontaneous, unconscious movements that reveal true emotions.Accurate facial movement information and network training learning methods are crucial for micro-expression recognition.However, most ... Micro-expressions are spontaneous, unconscious movements that reveal true emotions.Accurate facial movement information and network training learning methods are crucial for micro-expression recognition.However, most existing micro-expression recognition technologies so far focus on modeling the single category of micro-expression images and neural network structure.Aiming at the problems of low recognition rate and weak model generalization ability in micro-expression recognition, a micro-expression recognition algorithm is proposed based on graph convolution network(GCN) and Transformer model.Firstly, action unit(AU) feature detection is extracted and facial muscle nodes in the neighborhood are divided into three subsets for recognition.Then, graph convolution layer is used to find the layout of dependencies between AU nodes of micro-expression classification.Finally, multiple attentional features of each facial action are enriched with Transformer model to include more sequence information before calculating the overall correlation of each region.The proposed method is validated in CASME II and CAS(ME)^2 datasets, and the recognition rate reached 69.85%. 展开更多
Keywords: micro-expression recognition; graph convolutional network (GCN); action unit (AU) detection; Transformer model
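The graph convolution layer over AU nodes mentioned in the abstract above can be sketched with the common GCN formulation H' = D^(-1/2)(A+I)D^(-1/2) H W. The 3-node adjacency and feature sizes below are illustrative assumptions, not the paper's actual AU graph.

```python
# One graph convolution layer over AU nodes (standard GCN propagation rule).
# The tiny adjacency matrix and dimensions are illustrative only.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)      # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3 AU nodes, chain graph
H = np.random.rand(3, 8)                                 # per-node input features
W = np.random.rand(8, 4)                                 # learned weight matrix
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4)
```

Stacking such layers lets each AU node aggregate features from its neighbors, which is how dependencies between facial muscle regions are modeled before the Transformer stage.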
Formal Verification of TASM Models by Translating into UPPAAL (Cited: 1)
17
Authors: 胡凯, 张腾, 杨志斌, 顾斌, 蒋树, 姜泮昌 《Journal of Donghua University (English Edition)》 EI CAS, 2012, No. 1, pp. 51-54 (4 pages)
Timed abstract state machine (TASM) is a formal specification language used to specify and simulate the behavior of real-time systems. Formal verification of a TASM model can be fulfilled through model-checking activities by translating it into UPPAAL. Firstly, the translational semantics from TASM to UPPAAL is presented through the atlas transformation language (ATL). Secondly, the implementation of the proposed model transformation tool TASM2UPPAAL is provided. Finally, a case study is given to illustrate the automatic transformation from a TASM model to an UPPAAL model.
Keywords: timed abstract state machine (TASM); formal verification; model transformation; atlas transformation language (ATL); UPPAAL
Model Transformer Evaluation of High-Permeability Grain-Oriented Electrical Steels (Cited: 1)
18
Authors: Masayoshi Ishida, Seiji Okabe, Takeshi Imamura, Michiro Komatsubara (Kawasaki Steel Corporation, Kurashiki 712-8511, Japan) 《Journal of Materials Science & Technology》 SCIE EI CAS CSCD, 2000, No. 2, pp. 223-227 (5 pages)
The dependence of transformer performance on material properties was investigated using two laboratory-processed 0.23 mm thick grain-oriented electrical steels, domain-refined with electrolytically etched grooves, having different magnetic properties. The iron loss at 1.7 T, 50 Hz and the flux density at 800 A/m of material A were 0.73 W/kg and 1.89 T, respectively; those of material B were 0.83 W/kg and 1.88 T. Model stacked and wound transformer core experiments using the tested materials exhibited performance well reflecting the material characteristics. In a three-phase stacked core with step-lap joints excited to 1.7 T, 50 Hz, the core loss, the exciting current, and the noise level were 0.86 W/kg, 0.74 A, and 52 dB, respectively, with material A; and 0.97 W/kg, 1.0 A, and 54 dB with material B. The building factors for the core losses of the two materials were almost the same in both core configurations. The effect of higher harmonics on transformer performance was also investigated.
Keywords: model transformer; evaluation of high-permeability grain-oriented electrical steels
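The building factor referred to in the abstract above is the ratio of the assembled core's loss to the material's (Epstein-measured) loss; the claim that the two materials' building factors are "almost the same" can be checked directly from the quoted figures for the three-phase stacked core at 1.7 T, 50 Hz.

```python
# Building factor = core loss / material iron loss, using the values
# quoted in the abstract (three-phase stacked core, 1.7 T, 50 Hz).
material_loss = {"A": 0.73, "B": 0.83}   # W/kg, material iron loss
core_loss     = {"A": 0.86, "B": 0.97}   # W/kg, measured in the model core

for m in ("A", "B"):
    bf = core_loss[m] / material_loss[m]
    print(f"material {m}: building factor = {bf:.2f}")
# Both ratios come out near 1.17-1.18, consistent with the statement
# that the building factors of the two materials are almost the same.
```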
A Deep Learning Ensemble Method for Forecasting Daily Crude Oil Price Based on Snapshot Ensemble of Transformer Model
19
Authors: Ahmed Fathalla, Zakaria Alameer, Mohamed Abbas, Ahmed Ali 《Computer Systems Science & Engineering》 SCIE EI, 2023, No. 7, pp. 929-950 (22 pages)
The oil industry is an important part of a country's economy, and the price of crude oil is influenced by a wide range of variables. Therefore, how accurately countries can predict its behavior and which predictors to employ are two main questions. In this view, we propose utilizing deep learning and ensemble learning techniques to boost crude oil price forecasting performance. The suggested method is based on a deep-learning snapshot ensemble of the Transformer model. To examine the superiority of the proposed model, this paper compares the proposed deep learning ensemble model against different machine learning and statistical models for daily Organization of the Petroleum Exporting Countries (OPEC) oil price forecasting. Experimental results demonstrated the outperformance of the proposed method over statistical and machine learning methods. More precisely, the proposed snapshot ensemble of the Transformer achieved relative improvements in forecasting performance over autoregressive integrated moving average ARIMA(1,1,1), ARIMA(0,1,1), autoregressive moving average ARMA(0,1), vector autoregression (VAR), random walk (RW), support vector machine (SVM), and random forest (RF) models of 99.94%, 99.62%, 99.87%, 99.65%, 7.55%, 98.38%, and 99.35%, respectively, according to the mean square error metric.
Keywords: deep learning; ensemble learning; Transformer model; crude oil price
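The percentage figures in the abstract above are relative improvements in mean square error, which conventionally follow (MSE_baseline - MSE_proposed) / MSE_baseline x 100. The MSE values below are made up solely to demonstrate the formula, not taken from the paper.

```python
# Relative improvement in MSE, as conventionally defined.
# The numeric inputs are illustrative, not the paper's results.
def relative_improvement(mse_baseline: float, mse_proposed: float) -> float:
    """Percentage reduction in MSE relative to a baseline model."""
    return (mse_baseline - mse_proposed) / mse_baseline * 100.0

print(relative_improvement(100.0, 25.0))  # 75.0: proposed model cuts MSE by 75%
```

Under this definition a 99.9% figure means the proposed model's MSE is roughly a thousandth of the baseline's, which explains why the near-perfect improvements over ARIMA-family models contrast with the modest 7.55% over the random walk.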
Vehicle Density Prediction in Low-Quality Videos with Transformer Timeseries Prediction Model (TTPM)
20
Authors: D. Suvitha, M. Vijayalakshmi 《Computer Systems Science & Engineering》 SCIE EI, 2023, No. 1, pp. 873-894 (22 pages)
Recent advancements in low-cost cameras have facilitated surveillance in various developing towns in India, but the video obtained from such surveillance is of low quality. Still, counting vehicles from such videos is necessary to avoid traffic congestion and to allow drivers to plan their routes more precisely. On the other hand, detecting vehicles in such low-quality videos is highly challenging for vision-based methodologies. In this research, a meticulous attempt is made to use low-quality videos to describe traffic in Salem town in India, which is mostly unattempted by most available sources. In this work, the Detection Transformer (DETR) model is used for object (vehicle) detection. Here vehicles are detected in a rush-hour traffic video using a set of loss functions that carry out bipartite matching between estimated attributes and information acquired on real attributes. Every frame in the traffic footage carries its date and time, which is detected and retrieved using Tesseract optical character recognition. The date and time extracted from the input image are combined with the count of recognized objects obtained from the DETR model, furnishing the vehicle report with a timestamp. The Transformer Timeseries Prediction Model (TTPM) is proposed to predict future vehicle density; here the regular NLP layers have been removed and the temporal encoding layer has been modified. The proposed TTPM outperforms existing models with an RMSE of 4.313 and an MAE of 3.812.
Keywords: Detection Transformer; self-attention; Tesseract optical character recognition; Transformer timeseries prediction model; time encoding vector
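The "time encoding vector" named in the keywords above suggests an encoding of the kind used in standard Transformers, where each time step is mapped to interleaved sine/cosine values. The sketch below follows that standard sinusoidal formulation; the paper's exact modification of the temporal encoding layer is not specified here.

```python
# Standard sinusoidal time/positional encoding, as a sketch of the kind of
# time encoding vector TTPM's temporal layer might use (an assumption here).
import numpy as np

def time_encoding(position: int, d_model: int) -> np.ndarray:
    """Map one time step to a d_model-dim vector of interleaved sin/cos."""
    i = np.arange(d_model // 2)
    angles = position / (10000 ** (2 * i / d_model))
    enc = np.empty(d_model)
    enc[0::2] = np.sin(angles)   # even dimensions: sine
    enc[1::2] = np.cos(angles)   # odd dimensions: cosine
    return enc

vec = time_encoding(position=5, d_model=8)
print(vec.shape)  # (8,)
```

Because each dimension oscillates at a different frequency, nearby time steps receive similar vectors while distant ones diverge, letting the model attend over the traffic-density sequence by relative time.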