critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via immunohistochemistry (IHC) testing is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, including two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, particularly due to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.
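The accuracy, precision, recall, and F1 figures reported above all derive from a test-set confusion matrix. As a minimal illustration of how such macro-averaged metrics are computed, here is a sketch in plain Python; the confusion matrix shown is illustrative only, not the paper's actual results.

```python
# Macro-averaged metrics for a 4-class subtype classifier.
# Rows = true class, columns = predicted class.
def macro_metrics(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls = [], []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, actually other
        fn = sum(cm[k]) - tp                       # actually k, predicted other
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precisions) / n
    recall = sum(recalls) / n
    f1 = 2 * precision * recall / (precision + recall)  # one common convention
    return correct / total, precision, recall, f1

# Illustrative counts for HER2, Luminal, Normal-like, Triple Negative:
cm = [[9, 0, 0, 0],
      [1, 8, 0, 0],
      [0, 0, 9, 0],
      [0, 0, 1, 8]]
acc, p, r, f1 = macro_metrics(cm)  # acc = 34/36 ≈ 0.944
```

Note that computing macro-F1 as the harmonic mean of macro-precision and macro-recall is one convention; per-class F1 averaging is another, and papers do not always state which they use.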
Analysing the heat states of transformers under DC bias requires careful consideration of their structural and material characteristics. Although the 3D finite element method (FEM) is a reliable way of simulating all the details of transformers, it may be time-consuming or encounter convergence difficulties due to the complex internal structure of the transformer. To address these issues, this paper proposes a fast calculation model for estimating the top-oil temperature rise and the winding hotspot temperature rise of single-phase three-limb transformers under DC bias. The model is based on the coupling principle of electric circuits, magnetic circuits, and thermal circuits, and it treats the winding loss and core loss of the transformer under DC bias as the key factors linking electromagnetic and thermal effects. All the model parameters can be obtained from nameplate data and regular test data, ensuring the method's engineering practicality. The results were compared with 3D FEM, demonstrating favourable performance in terms of computational speed and availability.
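For context, the quantity this model estimates (top-oil temperature rise) is classically described by an exponential first-order response, as in the IEC 60076-7 loading guide. The sketch below shows that standard exponential model only, as a simplified stand-in for the paper's coupled circuit model; all numerical values are assumed for illustration.

```python
import math

# Classic first-order top-oil temperature-rise model (IEC 60076-7 style):
# the rise moves exponentially from its initial toward its ultimate value.
def top_oil_rise(t_min, rise_initial, rise_ultimate, tau_min):
    """Top-oil rise above ambient (K) at time t (minutes)."""
    return rise_ultimate + (rise_initial - rise_ultimate) * math.exp(-t_min / tau_min)

# Illustrative step: from a 20 K rise toward an ultimate 55 K rise,
# with an assumed oil time constant of 150 min.
rise_start = top_oil_rise(0, 20.0, 55.0, 150.0)       # 20 K at t = 0
rise_late = top_oil_rise(10_000, 20.0, 55.0, 150.0)   # approaches 55 K
```

The paper's contribution is precisely that the ultimate rise and losses feeding such a thermal model are derived from coupled electric/magnetic circuits under DC bias rather than assumed constants.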
The rise of social media platforms has revolutionized communication, enabling the exchange of vast amounts of data through text, audio, images, and videos. These platforms have become critical for sharing opinions and insights, influencing daily habits, and driving business, political, and economic decisions. Text posts are particularly significant, and natural language processing (NLP) has emerged as a powerful tool for analyzing such data. While traditional NLP methods have been effective for structured media, social media content poses unique challenges due to its informal and diverse nature. This has spurred the development of new techniques tailored for processing and extracting insights from unstructured user-generated text. One key application of NLP is the summarization of user comments to manage overwhelming content volumes. Abstractive summarization has proven highly effective in generating concise, human-like summaries, offering clear overviews of key themes and sentiments. This enhances understanding and engagement while reducing cognitive effort for users. For businesses, summarization provides actionable insights into customer preferences and feedback, enabling faster trend analysis, improved responsiveness, and strategic adaptability. By distilling complex data into manageable insights, summarization plays a vital role in improving user experiences and empowering informed decision-making in a data-driven landscape. This paper proposes a new implementation framework by fine-tuning and parameterizing Transformer Large Language Models to manage and maintain linguistic and semantic components in abstractive summary generation. The system excels in transforming large volumes of data into meaningful summaries, as evidenced by its strong performance across metrics like fluency, consistency, readability, and semantic coherence.
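Abstractive summaries like those described above are commonly scored with n-gram overlap metrics alongside human ratings of fluency and coherence. As an illustrative sketch (not the paper's evaluation code), a ROUGE-1-style unigram F1 between a generated summary and a reference can be computed as:

```python
from collections import Counter

# ROUGE-1-style F1: unigram overlap between candidate and reference,
# a standard automatic sanity check for abstractive summaries.
def rouge1_f1(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped per-word matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("users praise the fast delivery",
                  "customers praise fast delivery times")
```

Here "praise", "fast", and "delivery" overlap, giving precision and recall of 3/5 each and an F1 of 0.6; production evaluations would use a full ROUGE implementation with stemming and bigram variants.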
Cyberbullying is a significant issue in the Arabic-speaking world, affecting children, organizations, and businesses. Various efforts have been made to combat this problem through models using machine learning (ML) and deep learning (DL) approaches that utilize natural language processing (NLP) methods, and by proposing relevant datasets. However, most of these endeavors have focused predominantly on the English language, leaving a substantial gap in addressing Arabic cyberbullying. Given the complexities of the Arabic language, transfer learning techniques and transformers present a promising approach to enhancing the detection and classification of abusive content by leveraging large pretrained models trained on extensive datasets. Therefore, this study proposes a hybrid model using transformers trained on extensive Arabic datasets, then fine-tunes the hybrid model on a newly curated Arabic cyberbullying dataset collected from social media platforms, in particular Twitter. The following two hybrid transformer models are introduced: the first combines CAmelid Morphologically-aware pretrained Bidirectional Encoder Representations from Transformers (CAMeLBERT) with Arabic Generative Pre-trained Transformer 2 (AraGPT2), and the second combines Arabic BERT (AraBERT) with Cross-lingual Language Model-RoBERTa (XLM-R). Two strategies, namely feature fusion and ensemble voting, are employed to improve model accuracy. Experimental results, measured through precision, recall, F1-score, accuracy, and Area Under the Curve-Receiver Operating Characteristic (AUC-ROC), demonstrate that the combined CAMeLBERT and AraGPT2 model using feature fusion outperformed traditional DL models, such as Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM), as well as other independent Arabic-based transformer models.
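The two combination strategies named above differ in where the models are merged: feature fusion concatenates the encoders' embeddings before a shared classifier head, while ensemble voting combines each model's final label. A toy sketch with made-up vectors and labels (not the paper's implementation):

```python
from collections import Counter

def feature_fusion(emb_a, emb_b):
    """Concatenate two sentence embeddings into one fused feature vector."""
    return emb_a + emb_b

def ensemble_vote(labels):
    """Majority vote over the labels predicted by each model."""
    return Counter(labels).most_common(1)[0][0]

# Toy embeddings from two hypothetical encoders (lengths 2 and 3):
fused = feature_fusion([0.1, 0.9], [0.4, 0.2, 0.3])          # length 5
label = ensemble_vote(["bullying", "bullying", "neutral"])    # "bullying"
```

In practice the fused vector would feed a trained classification layer, and the vote would run over model outputs per input text; the sketch only shows the combination logic itself.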
As the plasma current power in tokamak devices increases, significant stray magnetic fields are generated around the equipment. These stray magnetic fields can disrupt the operation of electronic power devices, particularly transformers in switched-mode power supplies. Testing flyback converters with transformers under strong background magnetic fields highlights electromagnetic compatibility (EMC) issues for such switched-mode power supplies. This study utilizes finite element analysis software to simulate the electromagnetic environment of switched-mode power supply transformers and investigates the impact of variations in different magnetic field parameters on the performance of switched-mode power supplies under strong stray magnetic fields. The findings indicate that the EMC issues are associated with transformer core saturation and can be alleviated through appropriate configuration of the core size, air gap, fillet radius, and installation direction. This study offers novel solutions for addressing EMC issues in high magnetic field environments.
X (formerly known as Twitter) is one of the most prominent social media platforms, enabling users to share short messages (tweets) with the public or their followers. It serves various purposes, from real-time news dissemination and political discourse to trend spotting and consumer engagement. X has emerged as a key space for understanding shifting brand perceptions, consumer preferences, and product-related sentiment in the fashion industry. However, the platform's informal, dynamic, and context-dependent language poses substantial challenges for sentiment analysis, particularly when attempting to detect sarcasm, slang, and nuanced emotional tones. This study introduces a hybrid deep learning framework that integrates Transformer encoders, recurrent neural networks (i.e., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)), and attention mechanisms to improve the accuracy of fashion-related sentiment classification. These methods were selected for their proven strength in capturing both contextual dependencies and sequential structures, which are essential for interpreting short-form text. The model was evaluated on a dataset of 20,000 fashion tweets. The experimental results demonstrate a classification accuracy of 92.25%, outperforming conventional models such as Logistic Regression, Linear Support Vector Machine (SVM), and even standalone LSTM by a margin of up to 8%. This improvement highlights the importance of hybrid architectures in handling noisy, informal social media data. The findings offer strong implications for digital marketing and brand management, where timely sentiment detection is critical. Despite the promising results, challenges remain regarding the precise identification of negative sentiments, indicating that further work is needed to detect subtle and contextually embedded expressions.
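The attention mechanism shared by the Transformer encoder and the attention layers mentioned above reduces to scaled dot-product attention: each query scores all keys, the scores pass through a softmax, and the values are mixed with those weights. A minimal plain-Python sketch (toy vectors, single head, no learned projections):

```python
import math

# Scaled dot-product attention over a short token sequence.
def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        m = max(scores)                            # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]        # softmax: sums to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query aligned with the first of two keys: the output leans toward
# the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [0.0]]
ctx = attention(Q, K, V)
```

In the hybrid framework described above, such attention outputs would be combined with LSTM/GRU hidden states before the sentiment classification layer; the sketch isolates only the attention step.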
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of the patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle to handle multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities. Additionally, it shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles linked to transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques have been integrated to boost scalability and efficiency. Initially, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Methods such as model pruning, quantization, and knowledge distillation have been applied to reduce the parameter count and enhance inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations. For further deployment optimization, hardware-aware acceleration strategies, including the use of TensorRT and ONNX-based model compression, were implemented to ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, ensuring viability even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
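Of the optimizations listed above, quantization is the most mechanical: weights are mapped from floating point to a small integer range with a shared scale. The sketch below shows symmetric int8 quantization of a toy weight list; real toolchains (e.g. TensorRT or ONNX Runtime) do this per-channel with calibration data, so this is a simplified illustration, not the paper's pipeline.

```python
# Symmetric int8 quantization: map each weight to round(w / scale),
# where scale is set so the largest |w| lands at 127.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)   # recovers w to within one quantization step
```

Each int8 value occupies a quarter of a float32, which is where the memory and inference-speed gains mentioned above come from; the cost is the bounded rounding error visible in `w_hat`.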
In recent years, the Transformer has achieved remarkable results in the field of computer vision, with its built-in attention layers effectively modeling global dependencies in images by transforming image features into token form. However, Transformers often face high computational costs when processing large-scale image data, which limits their feasibility in real-time applications. To address this issue, we propose Token Masked Pose Transformers (TMPose), an efficient Transformer network for pose estimation. The network applies semantic-level masking to tokens and employs three different masking strategies to optimize model performance, aiming to reduce computational complexity. Experimental results show that TMPose reduces computational complexity by 61.1% on the COCO validation dataset, with negligible loss in accuracy. Additionally, its performance on the MPII dataset is also competitive. This research not only enhances the accuracy of pose estimation but also significantly reduces the demand for computational resources, providing new directions for further studies in this field.
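Token masking saves computation because attention cost grows with the square of the token count, so discarding uninformative tokens before the attention layers shrinks the quadratic term. A miniature sketch of one such strategy, keeping only the k highest-scoring tokens (scores, labels, and k here are all illustrative, not TMPose's actual masking rules):

```python
# Keep only the k highest-scoring tokens (e.g. scored by a lightweight
# predictor or prior attention map) and drop the rest, preserving order.
def mask_tokens(tokens, scores, keep):
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = sorted(ranked[:keep])           # restore original token order
    return [tokens[i] for i in kept]

# Toy image tokens: background patches score low, joint patches score high.
tokens = ["bg", "elbow", "bg", "wrist", "knee", "bg"]
scores = [0.05, 0.90, 0.10, 0.80, 0.70, 0.02]
kept = mask_tokens(tokens, scores, keep=3)   # half the tokens to attend over
```

Halving the token count roughly quarters the attention FLOPs, which is the mechanism behind the complexity reduction reported above.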
Critical for metering and protection in electric railway traction power supply systems (TPSSs), the measurement performance of voltage transformers (VTs) must be monitored in a timely and reliable manner. This paper outlines a three-step method, using RMS data only, for evaluating VTs in TPSSs. First, a kernel principal component analysis approach is used to diagnose the VT exhibiting significant measurement deviations over time, mitigating the influence of stochastic fluctuations in traction loads. Second, a back-propagation neural network is employed to continuously estimate the measurement deviations of the targeted VT. Third, a trend analysis method is developed to assess the evolution of the measurement performance of VTs. Case studies conducted on field data from an operational TPSS demonstrate the effectiveness of the proposed method in detecting VTs with measurement deviations exceeding 1% relative to their original accuracy levels. Additionally, the method accurately tracks deviation trends, enabling the identification of potential early-stage faults in VTs and helping prevent significant economic losses in TPSS operations.
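The core intuition behind the RMS-only approach is peer comparison: load fluctuations affect all VTs on the same section similarly, so a single VT whose readings drift relative to its healthy peers reveals a measurement deviation. The sketch below illustrates that intuition with a simple regression-through-the-origin gain estimate on synthetic readings; it is a deliberately simplified stand-in for the paper's KPCA-plus-neural-network pipeline.

```python
# Fit the gain g in "target ≈ g * reference" by least squares through
# the origin; a gain drifting away from 1.0 flags measurement deviation.
def fitted_gain(target, reference):
    num = sum(t * r for t, r in zip(target, reference))
    den = sum(r * r for r in reference)
    return num / den

# Synthetic RMS voltages (kV): reference = median of healthy peer VTs,
# drifted = a VT reading 1.5% high across the same load fluctuations.
reference = [27.4, 27.9, 27.1, 28.2, 27.6]
drifted = [v * 1.015 for v in reference]

gain = fitted_gain(drifted, reference)
deviation_pct = (gain - 1.0) * 100    # ≈ 1.5, above the 1% threshold cited
```

Because the same load profile appears in both series, the fluctuations cancel out of the gain estimate, which is the same cancellation the paper achieves more robustly with KPCA.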
The internal component structure of a converter transformer plays an extremely important role in the generation and propagation of vibration noise. To comprehensively reveal the influence of component structure on the vibration and noise of converter transformers, this paper conducted vibration and noise experiments on different combinations of three iron core structures, four winding structures, two oil tank structures, two foot insulation structures, and three positioning structures under different frequency harmonic excitations in a semi-anechoic chamber environment. The results show that the configuration minimising noise in converter transformers comprises the following components: an entanglement internal screen winding within the coil assembly, a 7.2 mm six-step-123 iron core, a cross-shaped reinforced oil tank, bottom foot insulation, an upper eccentric-circle design, and lower pouring positioning.
This paper investigates main transformer selection and layout schemes for new energy step-up substations. From the perspective of engineering design, it analyzes the principles of main transformer selection, the key parameters, and their matching with the characteristics of new energy sources, and explores layout methods and optimization strategies. Combined with typical case studies, optimization suggestions are proposed for the design of main transformers in new energy step-up substations. The research shows that rational transformer selection and scientific layout schemes can better adapt to the characteristics of new energy projects while effectively improving land-use efficiency and economic viability, providing technical experience to support the design of new energy projects.
Under the complex operating conditions of new-type power systems, pre-planned frequency stability control schemes built around strategy tables with an "offline simulation, online matching" workflow carry a high risk of mismatch, and inappropriate control actions may even trigger secondary disturbances, seriously threatening the secure and stable operation of the power system. This paper proposes a response-driven emergency load-shedding strategy for frequency stability that accounts for the impact of pre-plan mismatch. The strategy acts after the pre-planned control and serves as a beneficial supplement to it, effectively improving system frequency stability. First, a load-shedding calculation method based on system frequency response (SFR) model identification is established: an SFR identification method based on sparse frequency measurements is proposed, an SFR model incorporating stability control is built on that basis, and the load-shedding amount is solved iteratively according to the frequency stability control objective. Second, a Transformer-network-based model for mining frequency-control-sensitive nodes is established; by analyzing the mapping between the frequency time series at key generator bus nodes and the frequency-control-sensitive nodes, response-driven online identification of these sensitive nodes is achieved. Finally, the total control amount is rapidly allocated according to the sensitive-node ranking to construct the emergency frequency stability control scheme. The effectiveness of the proposed method is verified on a ten-thousand-node simulation system of an actual AC/DC hybrid grid.
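The link between the shed load amount and the frequency objective can be seen in the simplest SFR abstraction, the single-machine swing equation: a power deficit drives the frequency down, and shedding load reduces the deficit and lifts the nadir. The Euler-integration sketch below illustrates only that qualitative relationship; inertia, damping, and deficit values are assumed, and the paper's identified multi-machine SFR model is far richer.

```python
# Single-machine swing-equation view: 2H/f0 * d(df)/dt = P - D * df/f0,
# integrated by forward Euler to find the frequency nadir.
def frequency_nadir(deficit_pu, shed_pu, H=4.0, D=1.0, f0=50.0,
                    dt=0.01, t_end=10.0):
    df = 0.0                               # frequency deviation (Hz)
    nadir = 0.0
    power = -(deficit_pu - shed_pu)        # net imbalance after shedding
    t = 0.0
    while t < t_end:
        ddf = (power - D * df / f0) * f0 / (2 * H)
        df += ddf * dt
        nadir = min(nadir, df)
        t += dt
    return f0 + nadir                      # lowest frequency reached (Hz)

without_shed = frequency_nadir(0.10, 0.00)   # 10% deficit, no shedding
with_shed = frequency_nadir(0.10, 0.06)      # shedding 6% lifts the nadir
```

The iterative solution described in the abstract effectively inverts this relationship: it searches for the smallest `shed_pu` that keeps the nadir above the stability threshold.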
To address the problems that building-polygon simplification methods in map generalization rely on hand-crafted rules, offer a low degree of automation, and struggle to exploit existing simplification results, this paper proposes a building-polygon simplification model based on the Transformer mechanism. The model first maps building polygons into a bounded grid space and expresses the polygon coordinate strings as grid sequences, thereby obtaining token sequences of the polygons before and after simplification and constructing paired simplification samples. A Transformer architecture is then used to build the model: based on the sample data, the masked self-attention mechanism learns the dependencies between point sequences, and the model generates the new simplified polygon point by point, accomplishing the simplification. During training, the model uses structured sample data together with a cross-entropy loss function that ignores specific indices to improve simplification quality. The experiments comprise a main experiment and a generalization test. The main experiment, based on a 1:2000 Los Angeles building dataset, encoded the polygons with grid sizes of 0.2, 0.3, and 0.5 mm and performed simplification to target scales of 1:5000 and 1:10000. The results show that the model performs best at the 0.3 mm grid size, with the simplification results on the validation set agreeing with the manual annotations at a rate above 92.0%; a generalization test on building-polygon data from parts of Beijing verified the model's transferability. A comparative analysis against an LSTM model shows that, at a similar parameter scale, the LSTM fails to converge effectively or to generate usable results. This work confirms the potential of the Transformer for spatial-geometry sequence tasks and its ability to effectively reuse existing simplification samples, providing a route of practical engineering value for intelligent building-polygon simplification.
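The grid-tokenization step described above, snapping each polygon vertex to a cell of a bounded grid and emitting one token id per vertex, can be sketched as follows. The grid size and coordinates are illustrative; the paper's grid resolutions correspond to physical map distances (0.2-0.5 mm) rather than this toy 64 x 64 grid.

```python
# Snap each vertex of a building outline to a cell of a g x g grid and
# emit one integer token id (row * g + col) per vertex, producing the
# sequence a Transformer can consume.
def polygon_to_tokens(vertices, bbox, g=64):
    xmin, ymin, xmax, ymax = bbox
    tokens = []
    for x, y in vertices:
        col = min(g - 1, int((x - xmin) / (xmax - xmin) * g))
        row = min(g - 1, int((y - ymin) / (ymax - ymin) * g))
        tokens.append(row * g + col)
    return tokens

# A simple rectangular outline inside its bounding box:
outline = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
ids = polygon_to_tokens(outline, bbox=(0.0, 0.0, 10.0, 5.0))
```

Decoding reverses the mapping (token id back to cell center), which is how the model's generated token sequence becomes the simplified polygon's coordinates.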
Funding: funded by the Ministry of Higher Education (MoHE) Malaysia through the Fundamental Research Grant Scheme—Early Career Researcher (FRGS-EC), grant number FRGSEC/1/2024/ICT02/UNIMAP/02/8.
Funding: supported by the National Natural Science Foundation of China Joint Fund Key Project (Grant U22B20118).
Funding: funded by the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, through project number "NBU-FFR-2025-1197-01".
Funding: supported by the Natural Science Foundation of Anhui Province (No. 228085ME142), the Comprehensive Research Facility for the Fusion Technology Program of China (No. 20180000527301001228), and the Open Fund of the Magnetic Confinement Fusion Laboratory of Anhui Province (No. 2024AMF04003).
Funding: supported by the Deanship of Research and Graduate Studies at King Khalid University under Small Research Project grant number RGP1/139/45.
Abstract: Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of a patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, amalgamating data from multiple modalities is difficult owing to disparities in resolution, data collection methods, and noise levels. While traditional models such as Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle with multi-modal complexity because they lack the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across the various modalities. It also shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles of transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques were integrated to boost scalability and efficiency. First, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Model pruning, quantization, and knowledge distillation were applied to reduce the parameter count and improve inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of conventional self-attention. For deployment, hardware-aware acceleration strategies, including TensorRT- and ONNX-based model compression, were implemented to ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and further improving computational efficiency for resource-limited deployments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
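The abstract mentions linear attention as one way to reduce the memory and compute cost of standard self-attention. As a rough illustration (not the paper's actual architecture), a kernel-feature linear attention replaces the n×n softmax matrix with a d×d summary of the keys and values, reducing the cost from O(n²d) to O(nd²):

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1, a positive feature map used by common linear-attention variants
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Softmax-free attention: phi(Q) @ (phi(K)^T V), normalized by
    phi(Q) @ (phi(K)^T 1). The (d, d_v) matrix kv summarizes all keys/values,
    so cost scales linearly in sequence length n."""
    Qf, Kf = feature_map(Q), feature_map(K)      # (n, d), positive entries
    kv = Kf.T @ V                                # (d, d_v) key/value summary
    z = Kf.sum(axis=0)                           # (d,) normalization terms
    return (Qf @ kv) / (Qf @ z)[:, None]         # (n, d_v)

rng = np.random.default_rng(0)
n, d = 64, 8
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (64, 8)
```

Because the implicit attention weights are non-negative and sum to one, each output row stays a convex combination of the value rows, mirroring softmax attention's behavior at a fraction of the cost for long token sequences.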
Funding: Supported in part by the Scientific Research Start-Up Fund of Zhejiang Sci-Tech University, under the project "(National Treasury) Development of a Digital Silk Museum System Based on Metaverse and AR" (Project No. 11121731282202-01).
Abstract: In recent years, the Transformer has achieved remarkable results in computer vision, with its attention layers effectively modeling global dependencies in images by transforming image features into tokens. However, Transformers often incur high computational costs when processing large-scale image data, which limits their feasibility in real-time applications. To address this issue, we propose Token Masked Pose Transformers (TMPose), an efficient Transformer network for pose estimation. The network applies semantic-level masking to tokens and employs three different masking strategies to optimize model performance, with the aim of reducing computational complexity. Experimental results show that TMPose reduces computational complexity by 61.1% on the COCO validation dataset with negligible loss of accuracy, and its performance on the MPII dataset is also competitive. This research not only maintains the accuracy of pose estimation but also significantly reduces the demand for computational resources, providing new directions for further studies in this field.
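The abstract does not specify the three masking strategies, so the following is only a plausible sketch of one: saliency-based token pruning, using the token L2 norm as a stand-in importance score. Dropping a fixed fraction of tokens before the Transformer blocks is what yields the quadratic-attention savings:

```python
import numpy as np

def mask_tokens(tokens, keep_ratio=0.4):
    """Keep only the highest-scoring tokens (L2 norm as a stand-in saliency
    score) and return them together with their original positions, so the
    pose head can scatter predictions back onto the full token grid."""
    n = tokens.shape[0]
    k = max(1, int(round(n * keep_ratio)))
    scores = np.linalg.norm(tokens, axis=1)
    keep = np.sort(np.argsort(scores)[::-1][:k])  # kept indices, original order
    return tokens[keep], keep

rng = np.random.default_rng(1)
tokens = rng.normal(size=(196, 64))     # e.g. 14x14 patch tokens, dim 64
kept, idx = mask_tokens(tokens, 0.389)  # mask out roughly 61.1% of tokens
print(kept.shape)                       # (76, 64)
```

Since self-attention cost grows quadratically with token count, keeping ~39% of tokens cuts the attention FLOPs to roughly (0.39)² ≈ 15% of the original, consistent in spirit with the reported 61.1% overall complexity reduction once the unmasked layers are accounted for.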
Funding: Supported by the National Natural Science Foundation of China (No. 52107125), the Applied Basic Research Project of Sichuan Province (No. 2022NSFSC0250), and Chengdu Guojia Electrical Engineering Co., Ltd. (No. KYL202312-0043).
Abstract: The measurement performance of voltage transformers (VTs), critical for metering and protection in electric railway traction power supply systems (TPSSs), must be monitored in a timely and reliable manner. This paper outlines a three-step method for evaluating VTs in TPSSs that uses RMS data only. First, a kernel principal component analysis (KPCA) approach diagnoses which VT exhibits significant measurement deviations over time, mitigating the influence of stochastic fluctuations in traction loads. Second, a back-propagation neural network continuously estimates the measurement deviations of the targeted VT. Third, a trend analysis method assesses the evolution of VT measurement performance. Case studies on field data from an operational TPSS demonstrate that the proposed method detects VTs whose measurement deviations exceed 1% relative to their original accuracy levels. The method also accurately tracks deviation trends, enabling identification of potential early-stage VT faults and helping prevent significant economic losses in TPSS operations.
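The abstract names KPCA as the deviation-diagnosis step but gives no kernel, features, or thresholds, so the following is a generic sketch of KPCA-based anomaly monitoring: fit on normal-operation RMS data, then flag samples whose squared prediction error (distance to the retained principal subspace in feature space) is abnormally large. All data values below are synthetic:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class KPCAMonitor:
    def __init__(self, n_comp=3, sigma=1.0):
        self.n_comp, self.sigma = n_comp, sigma

    def fit(self, X):
        self.X = X
        n = len(X)
        K = rbf_kernel(X, X, self.sigma)
        self.K_mean_rows = K.mean(axis=0)   # column means (K is symmetric)
        self.K_mean_all = K.mean()
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one   # centered kernel matrix
        lam, V = np.linalg.eigh(Kc)                  # ascending eigenvalues
        lam = lam[::-1][: self.n_comp]
        V = V[:, ::-1][:, : self.n_comp]
        self.alpha = V / np.sqrt(np.maximum(lam, 1e-12))  # unit-norm directions
        return self

    def spe(self, x):
        """Squared prediction error of one sample in feature space."""
        k = rbf_kernel(x[None, :], self.X, self.sigma)[0]
        kc = k - self.K_mean_rows - k.mean() + self.K_mean_all
        t = kc @ self.alpha                          # scores on retained comps
        # ||phi_c(x)||^2 with k(x, x) = 1 for the RBF kernel
        norm2 = 1.0 - 2 * k.mean() + self.K_mean_all
        return norm2 - (t ** 2).sum()

rng = np.random.default_rng(2)
X = 1.0 + 0.01 * rng.normal(size=(100, 2))   # synthetic normal-operation ratios
mon = KPCAMonitor(n_comp=3, sigma=0.1).fit(X)
spe_in = mon.spe(np.array([1.0, 1.0]))       # nominal sample: small SPE
spe_out = mon.spe(np.array([1.05, 1.0]))     # 5% deviation: large SPE
print(spe_in, spe_out)
```

In practice the threshold on the SPE statistic would be calibrated on historical normal data (the paper's actual decision rule is not given in the abstract).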
Funding: Supported by the Key R&D Program of Shandong Province (2021CXGC010210).
Abstract: The internal component structure of a converter transformer plays an extremely important role in the generation and propagation of vibration and noise. To comprehensively reveal the influence of component structure on converter-transformer vibration and noise, this paper reports vibration and noise experiments, conducted in a semi-anechoic chamber, on combinations of three iron-core structures, four winding structures, two oil-tank structures, two foot-insulation structures, and three positioning structures under harmonic excitations of different frequencies. The results show that the configuration minimizing converter-transformer noise comprises the following components: an entanglement internal-screen winding in the coil assembly, a 7.2 mm six-step-123 iron core, a cross-shaped reinforced oil tank, bottom foot insulation, an upper eccentric-circle design, and lower pouring positioning.
Abstract: This paper studies main-transformer selection and layout schemes for step-up substations in new-energy projects. From an engineering-design perspective, it analyzes the principles of main-transformer selection, the key parameters, and their matching with the characteristics of new-energy generation, and explores layout methods and optimization strategies. Combined with typical case studies, optimization suggestions are proposed for the design of main transformers in new-energy step-up substations. The research shows that rational main-transformer selection and a scientific layout scheme can better adapt to the characteristics of new-energy projects while effectively improving land-use efficiency and economic viability, providing technical experience to support the design of new-energy projects.
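The abstract does not list its selection parameters, but one routinely cited matching consideration is sizing the main transformer against the plant's AC output rather than its DC nameplate. A toy calculation with assumed values (the DC/AC ratio, power factor, and capacities below are illustrative, not from the paper):

```python
# Illustrative main-transformer sizing check for a PV step-up substation.
# All numbers are assumptions for demonstration, not values from the paper.
installed_dc_kw = 120_000      # assumed DC nameplate capacity
dc_ac_ratio = 1.2              # assumed inverter loading ratio (DC/AC)
power_factor = 0.95            # assumed grid-code reactive-power requirement

ac_kw = installed_dc_kw / dc_ac_ratio        # maximum AC real power
required_kva = ac_kw / power_factor          # apparent-power rating needed
print(round(required_kva))                   # -> 105263
```

Sizing to the DC nameplate instead (126 316 kVA here) would overbuild the transformer by 20%, which is one way rational selection improves the economics the abstract refers to.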
Abstract: Under the complex operating conditions of new-type power systems, plan-based frequency stability control schemes, organized around strategy tables via "offline simulation, online matching", carry a high risk of mismatch and may even cause secondary impacts through improper control actions, seriously threatening the secure and stable operation of the power system. This paper proposes a response-driven emergency load-shedding strategy for frequency stability that accounts for the impact of plan-based mismatch. The strategy acts after plan-based control and serves as a beneficial supplement to it, effectively improving system frequency stability. First, a method for calculating the frequency-stabilizing load-shedding amount is established based on identification of the system frequency response (SFR) model: an SFR identification method using sparse frequency measurements is proposed, an SFR model incorporating stability control is built on that basis, and the load-shedding amount is solved iteratively according to the frequency-stability control objective. Second, a Transformer-network-based model is built to mine frequency-control-sensitive nodes; by analyzing the mapping between the frequency time series at key generator bus nodes and the control-sensitive nodes, response-driven online identification of the sensitive nodes is achieved. Finally, the total control amount is rapidly allocated according to the sensitive-node ranking, yielding an emergency frequency-stability control scheme. The effectiveness of the proposed method is verified on a simulation system of a practical AC/DC hybrid grid with some ten thousand nodes.
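The iterative load-shedding calculation can be illustrated with a toy single-machine SFR model (the paper identifies its SFR model from sparse measurements and includes stability control and governor effects, all of which this sketch omits; the inertia, damping, and limit values below are assumptions):

```python
def freq_nadir(dP_loss, dP_shed, H=5.0, D=1.0, f0=50.0, t_shed=0.5,
               dt=0.01, t_end=20.0):
    """Frequency nadir of a toy SFR model 2H*df/dt = dP - D*f (per unit),
    with load shedding applied as a step at t_shed seconds."""
    f = 0.0          # per-unit frequency deviation
    nadir = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        dP = -dP_loss + (dP_shed if t >= t_shed else 0.0)
        f += dt * (dP - D * f) / (2 * H)   # forward-Euler integration
        nadir = min(nadir, f)
    return nadir * f0                      # deviation in Hz

def min_shed(dP_loss, limit_hz=-0.8, step=0.01):
    """Iteratively increase shedding until the nadir respects the limit,
    mirroring the paper's iterative solve against a stability objective."""
    shed = 0.0
    while freq_nadir(dP_loss, shed) < limit_hz and shed < dP_loss:
        shed += step
    return round(shed, 2)

print(min_shed(0.10))  # minimal shed (p.u.) for a 0.10 p.u. generation loss
```

The real method would repeat this solve on the identified (not assumed) SFR model and then distribute the resulting total across the Transformer-ranked sensitive nodes.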
Abstract: To address the reliance on hand-crafted rules, the low degree of automation, and the difficulty of reusing existing simplification results in building-polygon simplification for map generalization, this paper proposes a building-polygon simplification model based on the Transformer mechanism. The model first maps building polygons into a bounded grid space, expressing each polygon's coordinate string as a grid sequence; this yields token sequences for the polygons before and after simplification, from which paired simplification samples are constructed. A Transformer architecture is then built on these samples: its masked self-attention learns the dependencies between points in the sequence, and the simplified polygon is generated point by point. During training, the model uses the structured sample data with a cross-entropy loss designed to ignore specific indices, improving simplification quality. The experiments comprise a main experiment and a generalization test. The main experiment, on a 1:2,000 Los Angeles building dataset, encodes the polygons at three grid sizes (0.2, 0.3, and 0.5 mm) and simplifies them to target scales of 1:5,000 and 1:10,000. The results show that the model performs best at the 0.3 mm grid size, with over 92.0% agreement between its validation-set simplifications and the manual annotations; a generalization test on building-polygon data from parts of Beijing verifies the model's transferability. A comparison with an LSTM model of similar parameter scale shows that the LSTM fails to converge effectively or produce usable results. This work confirms the potential of Transformers for spatial geometric sequence tasks and their ability to effectively reuse existing simplification samples, offering an approach of practical engineering value for intelligent building-polygon simplification.
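The grid-space tokenization step can be sketched as follows. This is a simplified guess at the encoding (one token per vertex, computed as row·grid_size + col after snapping coordinates to a cell size; the paper's exact grid extent and token vocabulary are not given in the abstract):

```python
def polygon_to_tokens(coords, cell=0.3, grid=64):
    """Snap polygon vertices to a grid x grid space (cell = grid spacing in
    map units) and emit one integer token per vertex as row * grid + col.
    A toy version of mapping a coordinate string to a token sequence."""
    xs, ys = zip(*coords)
    x0, y0 = min(xs), min(ys)          # anchor the grid at the polygon's corner
    tokens = []
    for x, y in coords:
        col = min(int((x - x0) / cell), grid - 1)
        row = min(int((y - y0) / cell), grid - 1)
        tokens.append(row * grid + col)
    return tokens

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(polygon_to_tokens(square))  # -> [0, 3, 195, 192]
```

With such a discrete vocabulary, the simplified polygon becomes a shorter token sequence over the same grid, which is what lets a sequence-to-sequence Transformer with a standard cross-entropy loss be trained directly on before/after simplification pairs.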