Journal Articles
254,659 articles found
1. Short-Term Photovoltaic Power Forecasting Based on a BSimilar-Optimized PTransformer
Authors: 张文广, 蔡浩, 刘科, 孙盼荣. 《动力工程学报》 (Peking University Core), 2026, No. 1, pp. 77-84, 102 (9 pages)
To improve the accuracy of short-term photovoltaic (PV) power forecasting, a time-segmented, channel-independent short-term PV power forecasting method is proposed, optimized by a similar-day algorithm that accounts for PV equipment performance degradation. First, the PTransformer model processes the PV input data in a time-segmented, channel-independent manner to reduce spatial complexity and improve attention over long data sequences. Second, the Transformer encoder's self-attention mechanism captures the dependencies among PV sequence features to produce short-term power forecasts. Finally, cosine similarity is computed and equipment performance degradation is taken into account to select similar days, whose power data are used to optimize the PTransformer model and mitigate lag in the power data. Results show that, compared with typical short-term PV power forecasting methods, the proposed method trains faster, predicts more accurately, and performs well even under complex weather conditions.
Keywords: photovoltaic power; short-term forecasting; performance degradation; Bayesian analysis; Transformer; similar day
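The similar-day selection step in this abstract ranks historical days by cosine similarity to the forecast day. A minimal sketch of that ranking, assuming daily feature vectors (e.g. irradiance and temperature summaries) as inputs; the function name and shapes are illustrative, not from the paper:

```python
import numpy as np

def similar_days(history, query, k=3):
    """Rank historical daily feature vectors by cosine similarity to the
    forecast day's features. `history` is (n_days, n_features); `query`
    is (n_features,). Names and shapes are illustrative assumptions."""
    h = history / np.linalg.norm(history, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = h @ q                       # cosine similarity per day
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar days
```

The paper additionally reweights this similarity for equipment performance degradation, which the sketch omits.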
2. Short-Term Wind Speed Forecasting Fusing Swarm Decomposition with Transformer-KAN
Authors: 史加荣, 张思怡. 《南京信息工程大学学报》 (Peking University Core), 2026, No. 1, pp. 60-68 (9 pages)
To address the inherent instability of wind speed, an SWD-Transformer-KAN forecasting model is proposed that fuses swarm decomposition (SWD), the Transformer, and Kolmogorov-Arnold networks (KAN). First, SWD decomposes the raw wind-speed data to extract key features. Next, a Transformer-KAN model is built for each decomposed sub-series, making full use of the Transformer's temporal modeling capacity and KAN's nonlinear approximation ability. Finally, the predictions for all sub-series are summed to obtain the final wind-speed forecast. Comparative experiments against other models verify the proposed model's effectiveness: SWD-Transformer-KAN achieves the best predictive performance, with a coefficient of determination (R²) as high as 99.91%.
Keywords: wind speed forecasting; swarm decomposition; Transformer; Kolmogorov-Arnold network
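The decompose-predict-sum pipeline described above is a common skeleton for decomposition-based forecasters. A minimal sketch, where `decompose` and `predict` are user-supplied stand-ins (assumptions) for SWD and the per-component Transformer-KAN predictor:

```python
import numpy as np

def decompose_predict_sum(series, decompose, predict):
    """Decomposition-based forecasting skeleton: split the series into
    sub-series, forecast each one, and sum the per-component forecasts."""
    components = decompose(series)       # list of sub-series
    forecasts = [predict(c) for c in components]
    return np.sum(forecasts, axis=0)     # recombine into one forecast
```

Any decomposition whose components sum back to the original series (as SWD's modes do) fits this skeleton unchanged.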
3. Research on Precise Identification of Dam and Levee Seepage Inlets Based on a Transformer Model
Authors: 梁越, 赵硕, 喻金桃, 许彬, 张斌, 龚胜勇, 舒云林. 《岩土工程学报》 (Peking University Core), 2026, No. 1, pp. 187-195 (9 pages)
Seepage is a major safety hazard for dam and levee engineering, and precise identification and localization of seepage inlets is critical to reducing risk. Using simulated data on tracer distribution and transport characteristics at seepage inlets, a Transformer model was trained to determine the optimal parameter settings, its predictive performance under those settings was analyzed, and its reliability was further verified through laboratory model tests. The study shows: (1) At 600 training iterations, the relative error of the predicted maximum flow velocity is smallest and the predicted location of the maximum velocity is closest to the true seepage inlet, giving the best predictions; under these conditions, a data-acquisition duration of 50 s minimizes the relative deviation of the predicted maximum velocity. (2) With the optimal iteration count and acquisition duration, prediction accuracy exceeds 95%; the predicted inlet size and seepage discharge differ only marginally from the true values, and the relative errors of the velocity and position predictions are both low, with position error below 5%. (3) When conductivity measurements are converted to tracer concentrations and fed into the model for velocity-field prediction, the model accurately locates the seepage inlet, with mean relative errors of the velocity and inlet coordinates both below 10%, verifying its effectiveness and accuracy for seepage-inlet localization. These results lay a theoretical foundation and provide technical support for precise identification of dam and levee seepage inlets.
Keywords: dam and levee; seepage inlet; Transformer model; precise identification; laboratory model test
4. Remaining Useful Life Prediction of Lithium-Ion Batteries Based on a Parameter-Optimized LSTM-Transformer Model
Authors: 高建树, 郝世宇, 党一诺. 《汽车工程师》, 2026, No. 1, pp. 32-39 (8 pages)
To improve the accuracy of remaining useful life (RUL) prediction for lithium-ion batteries, an RUL prediction method based on a parameter-optimized long short-term memory (LSTM)-Transformer model is proposed. Grid search selects the model's hyperparameters; the LSTM network extracts long- and short-term dependencies from the battery time series; the Transformer's self-attention mechanism processes global information and optimizes the hyperparameters; and a fully connected layer produces the final life prediction. Experimental validation on the NASA and Center for Advanced Life Cycle Engineering (CALCE) datasets shows that, with shorter sequence lengths, fewer hidden layers, and fewer training epochs, the model outperforms the LSTM model, the Transformer model, and other neural network models across multiple evaluation metrics, offering higher prediction accuracy and robustness. Finally, comparative experiments on different batteries further verify the model's generalization across battery datasets.
Keywords: lithium-ion battery; remaining useful life prediction; parameter optimization; long short-term memory network; Transformer; hybrid model
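The hyperparameter grid search mentioned in this abstract can be sketched in a few lines. `train_eval(params) -> error` and the parameter names below are illustrative assumptions, not the paper's actual search space:

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive hyperparameter grid search: try every combination in
    `grid` (a dict of name -> list of candidate values) and keep the one
    with the lowest validation error."""
    best_params, best_err = None, float("inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        err = train_eval(params)        # lower is better
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```

Grid search is exhaustive, so its cost grows multiplicatively with each added hyperparameter; the paper's small search space keeps this tractable.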
5. Single-Node Lumbar Rehabilitation Exercise Recognition Based on a Transformer-Convolutional Neural Network Model
Authors: 余圣涵, 成贤锴, 郑跃, 杨颖. 《中国组织工程研究》 (Peking University Core), 2026, No. 16, pp. 4125-4136 (12 pages)
Background: Inertial measurement units (IMUs) are widely used for human posture sensing and motion capture, and deep learning has gradually replaced hand-crafted rules and feature engineering in action recognition. Convolutional neural networks (CNNs) perform well at extracting local dynamic features, while Transformers show strong capability in modeling long-range temporal dependencies. Objective: To recognize lumbar rehabilitation exercises under single-inertial-sensor conditions using a fused Transformer-CNN recognition model. Methods: Acceleration and angular-velocity data were collected from six healthy subjects performing lumbar rehabilitation exercises while wearing a single inertial sensor; the data were labeled by exercise type to build a lumbar rehabilitation exercise dataset, on which the Transformer-CNN fusion model was trained to construct an action classifier. Accuracy was assessed with leave-one-out cross-validation, and performance was compared against linear discriminant analysis, support vector machines, multilayer perceptrons, and a standard Transformer. Results and conclusions: On the five-class recognition task, the Transformer-CNN model achieved 96.67% accuracy and an F1-score of 0.9669. With single-sensor input, it showed clear advantages over traditional models in recognition accuracy and generalization, demonstrating the practicality of deep models driven by a single IMU for lumbar rehabilitation exercise classification and providing a foundation for lightweight, easily deployable home lumbar rehabilitation systems.
Keywords: chronic low back pain; rehabilitation training; deep learning; Transformer; single-node inertial sensor; action classification
6. Multi-View Data Health Diagnosis of Rail Transit Vehicle Batteries Based on a Transformer-XGBoost Framework
Authors: 王健, 毛建, 唐超伟, 孙小康, 候晓双, 王春生, 廖垠钦. 《电源技术》 (Peking University Core), 2026, No. 1, pp. 129-142 (14 pages)
Lithium-ion batteries, with their high energy density and long lifetime, are widely used in rail transit and energy-storage systems, but as charge-discharge cycles accumulate, their state of health (SOH) gradually degrades, posing safety risks and maintenance challenges for battery management. Conventional SOH prediction relies mainly on single-view incremental capacity analysis (ICA) and standard data-driven models, which struggle to capture the multi-scale electrochemical and temporal dynamics of battery degradation, limiting both accuracy and robustness. This paper proposes a multi-view SOH prediction method: incremental capacity (IC) curve information from the voltage view and the time view is fused to build multi-view health indicators (HI), and a prediction framework combining a Transformer with extreme gradient boosting (XGBoost) is designed. The Transformer uses dynamic time-window adjustment and a dual-scale attention mechanism to extract temporal features across different degradation stages, while XGBoost incorporates physics-informed constraints to further improve stability and robustness. On the University of Maryland PL13 training set, the method achieves a root mean square error (RMSE) of only 3.13×10⁻³ and a coefficient of determination R² of 0.997; on the PL11 test set, the RMSE is 4.57×10⁻³ and R² reaches 0.994, fully demonstrating its strength in multi-view feature fusion and dynamic temporal modeling.
Keywords: state of health; multi-view data analysis; Transformer; XGBoost; battery management system
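RMSE and R², the two metrics this abstract (and most forecasting entries on this page) reports, are quick to compute. A minimal sketch with assumed array inputs:

```python
import numpy as np

def rmse_r2(y_true, y_pred):
    """Root mean square error and coefficient of determination R^2.
    R^2 = 1 - SS_res / SS_tot, where SS_tot is the variance of y_true
    around its mean; R^2 = 1 means a perfect fit."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return rmse, 1.0 - ss_res / ss_tot
```

Note that R² can be negative when a model predicts worse than the constant mean, so values like the 0.997 reported above indicate a very tight fit.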
7. Mine Water Inflow Prediction Under Water-Inrush Conditions Based on an LSTM-Transformer Model
Authors: 李振华, 姜雨菲, 杜锋, 王文强. 《河南理工大学学报(自然科学版)》 (Peking University Core), 2026, No. 1, pp. 77-85 (9 pages)
Objective: Accurate prediction of mine water inflow is of great significance for preventing mine water hazards and ensuring safe production. To this end, a water-inflow prediction model is constructed for North China-type coalfields threatened by the floor L1-4 limestone aquifer and the Ordovician limestone aquifer. Methods: Based on hydrological monitoring data from a representative mine in Henan, an LSTM-Transformer model is proposed: the LSTM captures the dynamic temporal features of mine water inflow, while the Transformer's multi-head attention mechanism analyzes the complex temporal correlations between aquifer water-level changes and inflow, forming a water-level-driven framework for precise inflow prediction. Results: The LSTM-Transformer model is markedly more accurate than the LSTM, CNN, Transformer, and CNN-LSTM models, with a root mean square error of 20.91 m³/h, a mean absolute error of 16.08 m³/h, and a mean absolute percentage error of 1.12%; compared with the single-factor inflow model, the two-factor water-level-inflow model also produces more stable predictions. Conclusions: The LSTM-Transformer model overcomes the limitations of traditional methods in capturing the dynamic water-level-inflow relationship of complex hydrogeological systems, providing an interpretable, robust solution for dynamic inflow prediction and a new method for mines under similar geological conditions.
Keywords: water inflow prediction; water-level dynamic response; LSTM-Transformer coupled model; time-series prediction; attention mechanism; safe mine production
8. A CNN-Transformer-ARG Model for Predicting the Advance Rate of Double-Shield TBMs
Authors: 刘永胜, 沈军宏, 李达, 候超. 《河海大学学报(自然科学版)》 (Peking University Core), 2026, No. 1, pp. 112-118, 176 (8 pages)
To accurately predict the advance rate of double-shield tunnel boring machines (TBMs), an intelligent prediction model combining a CNN, a Transformer, and an adaptive residual gating (ARG) mechanism is proposed. A two-layer convolutional module extracts local features of the tunneling parameters from different views, the Transformer captures their global features, and the ARG mechanism dynamically weights the extracted local and global features; from monitoring data of past tunneling sections, the model predicts the mean, maximum, and minimum advance rate of future sections. Validation on 927 tunneling records from a mountain rail transit project in Sichuan gives a mean square error, mean absolute error, root mean square error, and coefficient of determination of 0.07, 0.21, 0.26, and 0.86, respectively, all better than three baseline models. Weighting the extracted multi-source features to focus on key information improved prediction accuracy, verifying the effectiveness of the ARG mechanism for multi-source models and offering a reference for handling multi-source feature streams in models of similar structure.
Keywords: double-shield TBM; advance rate prediction; Transformer; adaptive residual gating
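The dynamic weighting of local and global features described above is commonly realized with a learned sigmoid gate. A minimal sketch of one such gate; the formula, parameters `w` and `b`, and the scalar-gate choice are illustrative assumptions, not the paper's ARG definition:

```python
import numpy as np

def gated_fusion(local, global_, w, b):
    """Gate-style fusion: a sigmoid gate g in (0, 1) mixes a local
    (CNN-like) and a global (Transformer-like) feature vector as
    g * local + (1 - g) * global_."""
    z = np.concatenate([local, global_])
    g = 1.0 / (1.0 + np.exp(-(w @ z + b)))  # scalar gate in (0, 1)
    return g * local + (1.0 - g) * global_
```

With zero gate parameters the gate sits at 0.5 and the two feature streams are averaged; training shifts the balance toward whichever stream is more informative.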
9. An Image Classification Algorithm Based on a Hierarchical Feature Fusion Transformer
Authors: 段士玺, 王博. 《电子科技》, 2026, No. 2, pp. 72-78 (7 pages)
To address the difficulty traditional Vision Transformer (ViT) models have with multi-level image classification, this paper proposes HICViT (Hierarchical Feature Fusion Vision Transformer), a ViT-based image classification model. The input passes through ViT extraction modules to produce feature maps at multiple levels, each containing abstract feature representations at a different depth. Guided by hierarchical labels, the ViT features are mapped to multi-level features, and a hierarchical feature fusion strategy integrates information across levels, effectively strengthening classification performance. The model is compared with several state-of-the-art deep learning models on the CIFAR-10, CIFAR-100, and CUB-200-2011 datasets. On CIFAR-10 it achieves classification accuracies of 99.70%, 98.80%, and 97.80% at levels 1, 2, and 3, respectively; on CIFAR-100, 95.23%, 93.54%, and 90.12%; and on CUB-200-2011, 98.09% and 93.66% at levels 1 and 2. The results show that the proposed model outperforms all compared models in classification accuracy.
Keywords: deep learning; convolutional neural network; Transformer; image classification; hierarchical features; feature fusion; multi-head attention; Vision Transformer
10. Short-Text Sentiment Analysis Based on a Transformer Optimized by the Sparrow Search Algorithm
Author: 胡翔. 《微处理机》, 2026, No. 1, pp. 53-58 (6 pages)
Short-text sentiment analysis faces many challenges, such as semantic sparsity, terse expression, and missing context, which leave sentiment features incompletely extracted and hurt classification accuracy. To solve these problems, a short-text sentiment analysis method based on a Transformer optimized by the sparrow search algorithm (SSA) is proposed. The method builds a word-vector matrix to transform the representation of the short text; a Transformer model extracts sentiment features, with SSA introduced to optimize the model's hyperparameters; the extracted features are fed into a fully connected layer with a Softmax classifier, and gradient descent on a cross-entropy loss measures the gap between predicted and true sentiment, completing the analysis. SSA's strong global search capability and fast convergence make it effective at tuning the Transformer's hyperparameters and improving model performance. Experiments show that the proposed method attains low training loss and high classification accuracy, capturing sentiment features well and discriminating clearly among sentiment classes.
Keywords: sparrow search algorithm; Transformer model; short-text sentiment analysis; sentiment features
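The Softmax-plus-cross-entropy classifier head described in this abstract is a standard construction. A minimal single-sample sketch; shapes and names are illustrative assumptions:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Softmax classifier head with cross-entropy loss. `logits` is the
    score vector for one sample; `label` is the true class index."""
    z = logits - np.max(logits)           # stabilize the exponentials
    probs = np.exp(z) / np.sum(np.exp(z))
    loss = -np.log(probs[label])          # cross-entropy for the true class
    return probs, loss
```

Gradient descent on this loss, as the abstract describes, pushes the probability of the true class toward 1.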
11. Electric Vehicle Charging Load Forecasting Based on a Dynamic Sliding Time Window and the Transformer
Authors: 郝爽, 祖国强, 贾明辉, 张志杰, 李少雄. 《河北工业大学学报》, 2026, No. 1, pp. 44-52, 68 (10 pages)
Because electric vehicle (EV) charging behavior is nonlinear and time-varying, traditional forecasting methods struggle to capture the complex features of charging load; this paper therefore proposes an EV charging load forecasting method based on dynamic windows and the Transformer. First, variational mode decomposition (VMD) combined with the firefly algorithm (FA) is introduced: FA optimizes VMD's hyperparameters, and the decomposition extracts modal components at different frequencies, reducing data noise and complexity. Second, a dynamic sliding time window technique sets the window size for each mode according to its volatility and rate of change. The parameters of an LSTM-Transformer model are then adjusted per window, and each modal component with its dynamic sliding window is fed into the model: the LSTM captures short-term dynamics while the Transformer models global dependencies, improving prediction accuracy. Finally, the component forecasts are summed to produce the result. On the Palo Alto EV charging load dataset, the proposed method lowers the mean absolute percentage error by 9.23% relative to a fixed-window VMD-LSTM-Transformer model.
Keywords: EV load forecasting; variational mode decomposition; firefly algorithm; dynamic sliding time window; Transformer
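The sliding-time-window step underlying this method turns one series into supervised (input, target) pairs. A minimal sketch with a fixed window size per call; varying that size per mode, as the paper does, is a straightforward extension. Names and the one-step horizon are assumptions:

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Build (input, target) pairs with a sliding time window: each input
    is `window` consecutive points, and the target is the point `horizon`
    steps after the window ends."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])              # past `window` points
        y.append(series[i + window + horizon - 1])  # point to forecast
    return np.array(X), np.array(y)
```

A larger window gives the model more history per sample but fewer samples overall, which is exactly the trade-off the dynamic-window scheme tunes per modal component.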
12. A Transformative Masterpiece -- Chinese-built bridge in Tanzania boosts trade, connectivity
Author: DERRICK SILIMINA. ChinAfrica, 2026, No. 1, pp. 42-43 (2 pages)
In the Kigongo area of Mwanza Region, northwest Tanzania, fishmonger Neema Aisha remembers how the morning's fresh catch would sour while she queued for the ferry, putting her business at risk.
Keywords: business risk; ferry; bridge; connectivity; trade; fishmonger; transformative
13. M2ATNet: Multi-Scale Multi-Attention Denoising and Feature Fusion Transformer for Low-Light Image Enhancement
Authors: Zhongliang Wei, Jianlong An, Chang Su. Computers, Materials & Continua, 2026, No. 1, pp. 1819-1838 (20 pages)
Images taken in dim environments frequently exhibit issues like insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges to dark image enhancement tasks. Current approaches, while effective in global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, using a multi-scale multi-attention and Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), applied separately to the luminance and color channels to enhance structural and texture modeling. Second, to solve the non-alignment of the luminance and color channels, we introduce the multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model to learn more stably and efficiently, we also fuse multiple types of loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluations of numerical metrics and visual quality demonstrate that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles of the MMAD and CFFT modules in detail preservation and visual fidelity under challenging illumination-deficient environments.
Keywords: low-light image enhancement; multi-scale; multi-attention; Transformer
14. Effect of fluoride roasting on copper species transformation on chrysocolla surfaces and its role in enhanced sulfidation flotation
Authors: Yingqiang Ma, Xin Huang, Yafeng Fu, Zhenguo Song, Sen Luo, Shuanglin Zheng, Feng Rao, Wanzhong Yin. International Journal of Minerals, Metallurgy and Materials, 2026, No. 1, pp. 165-176 (12 pages)
Chrysocolla is difficult to recover by sulfidation flotation, a difficulty closely related to its surface composition. In this study, the effects of fluoride roasting on the surface composition of chrysocolla were investigated, its impact on sulfidation flotation was explored, and the mechanisms involved in both fluoride roasting and sulfidation flotation were discussed. With CaF₂ as the roasting reagent, Na₂S·9H₂O as the sulfidation reagent, and sodium butyl xanthate (NaBX) as the collector, flotation experiments showed that fluoride roasting improved the floatability of chrysocolla, raising the recovery rate from 16.87% to 82.74%. X-ray diffraction analysis revealed that after fluoride roasting, nearly all the Cu on the chrysocolla surface was exposed in the form of CuO, providing a basis for subsequent sulfidation flotation. Microscopy and elemental analyses revealed large quantities of "pagoda-like" grains on the sulfidized surface of the fluoride-roasted chrysocolla, indicating highly crystalline copper sulfide particles and a more pronounced sulfide-formation effect. X-ray photoelectron spectroscopy showed that fluoride roasting increased the relative surface contents of sulfur and copper, and that both the Cu⁺ and polysulfide fractions on the mineral surface increased, enhancing sulfidation and favoring flotation recovery. Fluoride roasting therefore improved copper species transformation and sulfidation on the chrysocolla surface, promoted collector adsorption, and improved the recovery of chrysocolla by sulfidation flotation.
Keywords: sulfidation flotation; chrysocolla; fluoride roasting; copper species transformation; enhanced sulfidation
15. Cell type-dependent role of transforming growth factor-β signaling on postnatal neural stem cell proliferation and migration
Authors: Kierra Ware, Joshua Peter, Lucas McClain, Yu Luo. Neural Regeneration Research, 2026, No. 3, pp. 1151-1161 (11 pages)
Adult neurogenesis continuously produces new neurons critical for cognitive plasticity in adult rodents. While transforming growth factor-β (TGF-β) signaling is known to be important in embryonic neurogenesis, its role in postnatal neurogenesis remains unclear. In this study, to define the precise role of TGF-β signaling in postnatal neurogenesis at distinct stages of the neurogenic cascade both in vitro and in vivo, we developed two novel inducible, cell type-specific mouse models to specifically silence TGF-β signaling in neural stem cells (mGFAPcre-ALK5fl/fl-Ai9) or in immature neuroblasts (DCXcreERT2-ALK5fl/fl-Ai9). Our data showed that exogenous TGF-β treatment inhibited the proliferation of primary neural stem cells while stimulating their migration. These effects were abolished in activin-like kinase 5 (ALK5) knockout primary neural stem cells. Consistent with this, inhibition of TGF-β signaling with SB-431542 in wild-type neural stem cells stimulated proliferation while inhibiting migration. Interestingly, deletion of the TGF-β receptor in neural stem cells in vivo inhibited the migration of postnatally born neurons in mGFAPcre-ALK5fl/fl-Ai9 mice, while abolishing TGF-β signaling in immature neuroblasts in DCXcreERT2-ALK5fl/fl-Ai9 mice did not affect the migration of these cells in the hippocampus. In summary, our data support a dual role of TGF-β signaling in the proliferation and migration of neural stem cells in vitro, and provide novel insights into the cell type-specific requirements of TGF-β signaling for neural stem cell proliferation and migration in vivo.
Keywords: adult neurogenesis; doublecortin; hippocampus; migration; neural stem cells; proliferation; transforming growth factor-β
16. Tracking a High-Tech Transition -- How technology is powering Guangdong's manufacturing transformation
Author: HU FAN. ChinAfrica, 2026, No. 1, pp. 30-32 (3 pages)
The moment a media delegation from the Republic of the Congo arrived at the Othello Kitchenware Museum on 18 November 2025, they were greeted with a vivid show of Guangdong's industrial strength. Standing before them was not a typical exhibition hall, but a building shaped like a gleaming stainless-steel cooking pot.
Keywords: Othello Kitchenware Museum; technology; industrial strength; high-tech transition; Guangdong; manufacturing transformation
17. SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention
Authors: Seyong Jin, Muhammad Fayaz, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon. Computers, Materials & Continua, 2026, No. 1, pp. 511-533 (23 pages)
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block utilizes hierarchical features and channel-specific attention mechanisms to further fuse the multi-scale information transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. Ablation studies clarify the rationale and contribution of the model design by verifying the effectiveness of the proposed HCAD decoder block. These results are expected to contribute greatly to the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
Keywords: attention mechanism; brain tumor segmentation; channel-wise attention decoder; deep learning; medical imaging; MRI; Transformer; U-Net
18. Extreme Attitude Prediction of Amphibious Vehicles Based on Improved Transformer Model and Extreme Loss Function
Authors: Qinghuai Zhang, Boru Jia, Zhengdao Zhu, Jianhua Xiang, Yue Liu, Mengwei Li. 《哈尔滨工程大学学报(英文版)》, 2026, No. 1, pp. 228-238 (11 pages)
Amphibious vehicles are more prone to attitude instability than ships, making it crucial to develop effective methods for monitoring instability risk. However, large inclination events, which can lead to instability, occur only infrequently in both experimental and operational data. This infrequency causes such events to be overlooked by existing prediction models, which lack the precision to accurately predict inclination attitudes in amphibious vehicles. To address this gap in predicting attitudes near extreme inclination points, this study introduces a novel loss function, termed generalized extreme value loss. Subsequently, a deep learning model for improved waterborne attitude prediction, termed iInformer, was developed using a Transformer-based approach. During the embedding phase, a text prototype based on the vehicle's operation log data is constructed to help the model better understand the vehicle's operating environment, and data segmentation techniques are used to highlight local data variation features. Furthermore, to mitigate the poor convergence and slow training caused by the extreme value loss function, a teacher forcing mechanism is integrated into the model, enhancing its convergence capabilities. Experimental results validate the effectiveness of the proposed method and its ability to handle data imbalance: the model achieves over a 60% improvement in root mean square error under extreme value conditions, with significant improvements observed across additional metrics.
Keywords: amphibious vehicle; attitude prediction; extreme value loss function; enhanced Transformer architecture; external information embedding
19. A Transformer-Based Deep Learning Framework with Semantic Encoding and Syntax-Aware LSTM for Fake Electronic News Detection
Authors: Hamza Murad Khan, Shakila Basheer, Mohammad Tabrez Quasim, Raja`a Al-Naimi, Vijaykumar Varadarajan, Anwar Khan. Computers, Materials & Continua, 2026, No. 1, pp. 1024-1048 (25 pages)
With the increasing growth of online news, fake electronic news detection has become one of the most important paradigms of modern research. Traditional detection techniques generally struggle with contextual understanding, sequential dependencies, and data imbalance, making the distinction between genuine and fabricated news a challenging task. To address this problem, we propose a novel hybrid architecture, T5-SA-LSTM, which synergistically integrates the T5 Transformer for semantically rich contextual embeddings with a Self-Attention-enhanced (SA) Long Short-Term Memory (LSTM) network. The LSTM is trained using the Adam optimizer, which provides faster and more stable convergence than Stochastic Gradient Descent (SGD) and Root Mean Square Propagation (RMSProp). The WELFake and FakeNewsPrediction datasets are used, consisting of labeled news articles with fake and real samples. Tokenization and the Synthetic Minority Over-sampling Technique (SMOTE) are applied in preprocessing for linguistic normalization and to address class imbalance. The incorporation of the Self-Attention (SA) mechanism enables the model to highlight critical words and phrases, thereby enhancing predictive accuracy. The proposed model is evaluated using accuracy, precision, recall (sensitivity), and F1-score, achieving 99% accuracy on the WELFake dataset and 96.5% on the FakeNewsPrediction dataset, outperforming competitive schemes such as T5-SA-LSTM (RMSProp), T5-SA-LSTM (SGD), and other models.
Keywords: fake news detection; tokenization; SMOTE; text-to-text transfer Transformer (T5); long short-term memory (LSTM); self-attention mechanism (SA); T5-SA-LSTM; WELFake dataset; FakeNewsPrediction dataset
20. A Survey of Transformer-Based Time Series Forecasting Methods (cited 5 times)
Authors: 陈嘉俊, 刘波, 林伟伟, 郑剑文, 谢家晨. 《计算机科学》 (Peking University Core), 2025, No. 6, pp. 96-105 (10 pages)
Time series forecasting, a key technique for analyzing historical data to predict future trends, is widely used in fields such as finance and meteorology. However, traditional methods such as autoregressive moving average models and exponential smoothing have limitations in handling nonlinear patterns and capturing long-term dependencies. Recently, Transformer-based methods, powered by the self-attention mechanism, have achieved breakthroughs in natural language processing and computer vision and have begun to expand into time series forecasting with notable results; exploring how to apply the Transformer efficiently to time series forecasting has thus become key to advancing the field. This survey first introduces the characteristics of time series and describes the common task categories and evaluation metrics of time series forecasting. It then analyzes the Transformer's basic architecture in depth, selects Transformer-derived models that have drawn wide attention in time series forecasting in recent years, classifies them at the module and architecture levels, and compares and analyzes them along three dimensions: problems solved, innovations, and limitations. Finally, it discusses possible future research directions for time series forecasting Transformers.
Keywords: time series; Transformer model; deep learning; attention mechanism; forecasting
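The self-attention mechanism referenced throughout the entries on this page reduces to one formula, softmax(QKᵀ/√d)V. A toy NumPy sketch; the (seq_len, d) shapes are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The Transformer's core attention step: each query mixes the value
    vectors, weighted by softmax of its scaled dot products with the keys."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted mix of values
```

For time series, Q, K, and V are projections of the input sequence itself, which is how the models surveyed here capture long-term dependencies without recurrence.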