Journal Articles
4,486 articles found
A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson’s Disease Detection with Small-Scale and Imbalanced Datasets
1
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, Issue 1, pp. 1410-1432 (23 pages)
Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Keywords: convolutional neural network, data generation, deep support vector machine, feature extraction, generative artificial intelligence, imbalanced dataset, medical diagnosis, Parkinson’s disease, small-scale dataset
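The bias-correction idea (up-weighting the minority class so the decision boundary is not dragged toward the majority "healthy" class) can be illustrated with a minimal numpy sketch; the synthetic data and the plain weighted logistic model below are stand-ins, not the paper's CNN-DSVM or its custom kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic imbalanced task: 200 "healthy" (majority, class 0) points vs
# 20 "PD" (minority, class 1) points in a 2-D feature space.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(2.0, 1.0, size=(20, 2))])
y = np.array([0] * 200 + [1] * 20)

def fit_logreg(X, y, sample_w, lr=0.1, steps=2000):
    """Weighted logistic regression trained by plain gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (sample_w * (p - y)) / len(y)
    return w

def minority_recall(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    pred = (Xb @ w) > 0
    return pred[y == 1].mean()

w_plain = fit_logreg(X, y, np.ones(len(y)))            # biased toward majority
w_bal = fit_logreg(X, y, np.where(y == 1, 10.0, 1.0))  # offset the 10:1 imbalance
print("minority recall, unweighted:", minority_recall(w_plain, X, y))
print("minority recall, weighted:", minority_recall(w_bal, X, y))
```

Up-weighting shifts the separating hyperplane back toward the majority cluster, which is the same effect the paper attributes to its customized kernel.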
Layered Feature Engineering for E-Commerce Purchase Prediction: A Hierarchical Evaluation on Taobao User Behavior Datasets
2
Authors: Liqiu Suo, Lin Xia, Yoona Chung, Eunchan Kim. Computers, Materials & Continua, 2026, Issue 4, pp. 1865-1889 (25 pages)
Accurate purchase prediction in e-commerce critically depends on the quality of behavioral features. This paper proposes a layered and interpretable feature engineering framework that organizes user signals into three layers: Basic; Conversion & Stability (efficiency and volatility across actions); and Advanced Interactions & Activity (cross-behavior synergies and intensity). Using real Taobao (Alibaba’s primary e-commerce platform) logs (57,976 records for 10,203 users; 25 November–3 December 2017), we conducted a hierarchical, layer-wise evaluation that holds data splits and hyperparameters fixed while varying only the feature set, to quantify each layer’s marginal contribution. Across logistic regression (LR), decision tree, random forest, XGBoost, and CatBoost models with stratified 5-fold cross-validation, performance improved monotonically from Basic to Conversion & Stability to Advanced features. With LR, F1 increased from 0.613 (Basic) to 0.962 (Advanced); boosted models achieved high discrimination (AUC of 0.995) and F1 scores up to 0.983. Calibration and precision–recall analyses indicated strong ranking quality, and we acknowledge potential dataset and period biases given the short (9-day) window. By making feature contributions measurable and reproducible, the framework complements model-centric advances and offers a transparent blueprint for production-grade behavioral modeling. The code and processed artifacts are publicly available, and future work will extend the validation to longer, seasonal datasets and to hybrid approaches that combine automated feature learning with domain-driven design.
Keywords: hierarchical feature engineering, purchase prediction, user behavior dataset, feature importance, e-commerce platform, Taobao
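The layer-wise protocol (fixed split, same model, varying only the feature set) can be sketched as follows; the toy data, the feature-layer names, and the tiny nearest-centroid model are illustrative assumptions, not the paper's actual features or classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a user-behavior table with three nested "layers".
n = 600
basic = rng.normal(size=(n, 2))    # e.g. raw click/view counts (hypothetical)
conv = rng.normal(size=(n, 2))     # e.g. conversion/stability ratios
adv = rng.normal(size=(n, 2))      # e.g. cross-behavior interaction terms
# The label depends on all three layers, so richer feature sets should help.
y = ((basic.sum(1) + conv.sum(1) + adv.sum(1)
      + rng.normal(scale=0.5, size=n)) > 0).astype(int)

def f1(pred, y):
    tp = ((pred == 1) & (y == 1)).sum()
    fp = ((pred == 1) & (y == 0)).sum()
    fn = ((pred == 0) & (y == 1)).sum()
    return 2 * tp / (2 * tp + fp + fn)

def centroid_pred(Xtr, ytr, Xte):
    """Tiny stand-in model: classify by the nearer class centroid."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return (((Xte - c1) ** 2).sum(1) < ((Xte - c0) ** 2).sum(1)).astype(int)

# Hold the split fixed and vary only the feature set (the paper's protocol).
idx = rng.permutation(n)
tr, te = idx[:400], idx[400:]
layers = {
    "Basic": basic,
    "+Conversion&Stability": np.hstack([basic, conv]),
    "+Advanced": np.hstack([basic, conv, adv]),
}
scores = {name: f1(centroid_pred(X[tr], y[tr], X[te]), y[te])
          for name, X in layers.items()}
for name, s in scores.items():
    print(f"{name}: F1 = {s:.3f}")
```

Because everything except the feature set is frozen, any F1 change between rows is attributable to the added layer, which is the point of the hierarchical evaluation.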
High-Precision Junction-to-Case Thermal Resistance Measurement for Power SMDs Based on TDIM
3
Authors: Wu Yuqiang, Zheng Hua, Ma Fengli, Hou Jie, Xu Weixin, Guo Meiyang. 《半导体技术》 (Semiconductor Technology, PKU Core), 2025, Issue 12, pp. 1237-1243 (7 pages)
To address short circuits caused by the coplanarity of the heat-dissipation substrate and the electrical leads of power surface-mount devices (SMDs) in conventional thermal resistance measurement, as well as the measurement errors of the thermocouple method, a high-precision junction-to-case thermal resistance (RθJC) measurement technique based on the transient dual interface method (TDIM) is proposed. A dedicated fixture with a copper-plate boss structure and an insulating positioning plate effectively avoids electrical shorts, and replacing the thermocouple method with TDIM eliminates the heat "wicking" effect and temperature-measurement position errors. For a TO-277 packaged Schottky diode, the measured RθJC was 0.302 K/W, deviating by only 0.67% from the datasheet typical value (0.30 K/W). Performing the two thermal characterizations at the same junction temperature (423.15 K) significantly suppressed temperature-dependent errors. The technique provides a reliable measurement scheme for the thermal management design of power SMDs and is worth promoting in engineering practice.
Keywords: transient dual interface method (TDIM), surface-mount device (SMD), junction-to-case thermal resistance, dedicated fixture, thermal characterization, thermal management, one-dimensional heat flow path
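The quoted 0.67% figure is simply the relative error of the measured RθJC against the datasheet typical value:

```python
# Relative error of the TDIM measurement vs. the datasheet typical value.
r_measured = 0.302   # K/W, measured junction-to-case thermal resistance
r_typical = 0.30     # K/W, datasheet typical value
err_pct = (r_measured - r_typical) / r_typical * 100
print(f"relative error: {err_pct:.2f}%")  # → 0.67%
```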
Standardizing Healthcare Datasets in China: Challenges and Strategies
4
Authors: Zheng-Yong Hu, Xiao-Lei Xiu, Jing-Yu Zhang, Wan-Fei Hu, Si-Zhu Wu. Chinese Medical Sciences Journal, 2025, Issue 4, pp. 253-267, I0001 (16 pages)
Standardized datasets are foundational to healthcare informatization, enhancing data quality and unleashing the value of data elements. Using bibliometrics and content analysis, this study examines China’s healthcare dataset standards from 2011 to 2025. It analyzes their evolution across types, applications, institutions, and themes, highlighting key achievements including substantial growth in quantity, an optimized typology, expansion into innovative application scenarios such as health decision support, and broadened institutional involvement. The study also identifies critical challenges, including imbalanced development, insufficient quality control, and a lack of essential metadata (such as authoritative data element mappings and privacy annotations), which hampers the delivery of intelligent services. To address these challenges, the study proposes a multi-faceted strategy focused on optimizing the standard system’s architecture, enhancing quality and implementation, and advancing both data governance (through authoritative tracing and privacy protection) and intelligent service provision. These strategies aim to promote the application of dataset standards, thereby fostering and securing the development of new productive forces in healthcare.
Keywords: healthcare dataset standards, data standardization, data management
DCS-SOCP-SVM: A Novel Integrated Sampling and Classification Algorithm for Imbalanced Datasets
5
Authors: Xuewen Mu, Bingcong Zhao. Computers, Materials & Continua, 2025, Issue 5, pp. 2143-2159 (17 pages)
When dealing with imbalanced datasets, the traditional support vector machine (SVM) tends to produce a classification hyperplane biased towards the majority class and exhibits poor robustness. This paper proposes a high-performance classification algorithm specifically designed for imbalanced datasets. The proposed method first uses a biased second-order cone programming support vector machine (B-SOCP-SVM) to identify the support vectors (SVs) and non-support vectors (NSVs) in the imbalanced data. It then applies the synthetic minority over-sampling technique (SV-SMOTE) to oversample the support vectors of the minority class and uses the random under-sampling technique (NSV-RUS) multiple times to undersample the non-support vectors of the majority class. Combining the resulting minority-class dataset with the multiple majority-class datasets yields multiple new balanced datasets. Finally, SOCP-SVM classifies each dataset, and the final result is obtained through an ensemble. Experimental results demonstrate that the proposed method performs excellently on imbalanced datasets.
Keywords: DCS-SOCP-SVM, imbalanced datasets, sampling method, ensemble method, integrated algorithm
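The sampling stage can be sketched in plain numpy; the SMOTE-style interpolation and random undersampling below are generic stand-ins applied to all minority/majority points, rather than to the specific SVs and NSVs that B-SOCP-SVM would first identify.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy imbalanced data: 100 majority-class and 12 minority-class points.
X_maj = rng.normal(0.0, 1.0, size=(100, 3))
X_min = rng.normal(1.5, 1.0, size=(12, 3))

def smote_like(X, n_new, k=3):
    """SMOTE-style synthesis: interpolate between a random point and one of
    its k nearest neighbours (a minimal sketch, not the imblearn API)."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        nn = np.argsort(((X - X[i]) ** 2).sum(1))[1:k + 1]  # skip the point itself
        j = rng.choice(nn)
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)

def random_undersample(X, n_keep):
    """NSV-RUS stand-in: keep a random subset of the majority class."""
    return X[rng.choice(len(X), size=n_keep, replace=False)]

# Build one balanced set; the full method repeats the undersampling several
# times and ensembles a SOCP-SVM over the resulting balanced sets.
X_min_bal = np.vstack([X_min, smote_like(X_min, 50 - len(X_min))])
X_maj_bal = random_undersample(X_maj, 50)
print(len(X_min_bal), len(X_maj_bal))  # → 50 50
```

Because each interpolated point lies on a segment between two minority points, the synthetic minority class stays inside the region the real minority samples occupy.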
A Comprehensive Review of Face Detection Techniques for Occluded Faces: Methods, Datasets, and Open Challenges
6
Authors: Thaer Thaher, Majdi Mafarja, Muhammed Saffarini, Abdul Hakim H. M. Mohamed, Ayman A. El-Saleh. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 2615-2673 (59 pages)
Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks, sunglasses, and other obstructions. Addressing this issue is crucial for applications such as surveillance, biometric authentication, and human-computer interaction. This paper provides a comprehensive review of face detection techniques developed to handle occluded faces. Studies are categorized into four main approaches: feature-based, machine learning-based, deep learning-based, and hybrid methods. We analyzed state-of-the-art studies within each category, examining their methodologies, strengths, and limitations on widely used benchmark datasets, and highlighting their adaptability to partial and severe occlusions. The review also identifies key challenges, including dataset diversity, model generalization, and computational efficiency. Our findings reveal that deep learning methods dominate recent studies, benefiting from their ability to extract hierarchical features and handle complex occlusion patterns. More recently, researchers have increasingly explored Transformer-based architectures, such as the Vision Transformer (ViT) and Swin Transformer, to further improve detection robustness under challenging occlusion scenarios. In addition, hybrid approaches, which combine traditional and modern techniques, are emerging as a promising direction for improving robustness. This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world, occlusion-prone environments. Further improvements and broader datasets are required to develop more scalable, robust, and efficient models that can handle complex occlusions in real-world scenarios.
Keywords: occluded face detection, feature-based, deep learning, machine learning, hybrid approaches, datasets
Impact of climate changes on Arizona State precipitation patterns using high-resolution climatic gridded datasets
7
Authors: Hayder H. Kareem, Shahla Abdulqader Nassrullah. Journal of Groundwater Science and Engineering, 2025, Issue 1, pp. 34-46 (13 pages)
Climate change significantly affects the environment, ecosystems, communities, and economies. These impacts often result in both rapid and gradual changes in water resources, environmental conditions, and weather patterns. A geographical study was conducted in the state of Arizona, USA, to examine monthly precipitation concentration rates over time. The analysis used high-resolution 0.5°×0.5° gridded monthly precipitation data from 1961 to 2022, provided by the Climatic Research Unit (CRU). The study analyzed how climatic changes affected the first and last five years of each decade, as well as each decade as a whole, over the specified period, using GIS to meet its objectives. Arizona received 51–568 mm, 67–560 mm, 63–622 mm, and 52–590 mm of rainfall in the 1960s, 1970s, 1980s, and 1990s, respectively. Both the first and second five-year periods of each decade showed acceptable rainfall amounts despite fluctuations. However, rainfall decreased in the 2000s, the 2010s, and the first two years of the 2020s, dropping to 42–472 mm, 55–469 mm, and 74–498 mm, respectively, indicating a downward trend in precipitation. The central part of the state received the highest rainfall, while the eastern and western regions (spanning north to south) received significantly less. Over these recent decades, the average annual rainfall in each five-year period was relatively low and declining due to severe climate change, generally ranging between 35 mm and 498 mm. The central regions consistently received more rainfall than the eastern and western outskirts. Arizona is currently experiencing a decrease in rainfall due to climate change, a situation that could deteriorate further. This highlights the need to optimize the use of existing rainfall and explore alternative water sources.
Keywords: spatial analysis, climate impact, precipitation rates, CRU dataset, GIS, Arizona State, USA
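The decade aggregation behind figures like "51–568 mm in the 1960s" can be sketched with numpy on a stand-in grid; the gamma-distributed values below are synthetic, not the actual CRU data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a CRU-style monthly precipitation grid: 62 years (1961-2022)
# over a small 4x4 patch of cells, in mm per month (synthetic values).
years, ny, nx = 62, 4, 4
monthly = rng.gamma(shape=2.0, scale=15.0, size=(years * 12, ny, nx))

# Annual totals per cell, then per-decade min-max ranges across the patch.
annual = monthly.reshape(years, 12, ny, nx).sum(axis=1)   # shape (62, 4, 4)
for start in range(0, 60, 10):                            # six full decades
    dec = annual[start:start + 10]
    print(f"{1961 + start}-{1970 + start}: {dec.min():.0f}-{dec.max():.0f} mm")
```

The reshape groups each year's 12 monthly layers before summing, so the annual array aligns one-to-one with the 1961-2022 study window.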
A Comprehensive Review of Face Detection/Recognition Algorithms and Competitive Datasets to Optimize Machine Vision
8
Authors: Mahmood Ul Haq, Muhammad Athar Javed Sethi, Sadique Ahmad, Naveed Ahmad, Muhammad Shahid Anwar, Alpamis Kutlimuratov. Computers, Materials & Continua, 2025, Issue 7, pp. 1-24 (24 pages)
Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advances, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial poses; in such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, surveying advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in their application context. Additionally, we examine the datasets used for face detection/recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in face detection and recognition and outlining potential directions for future research.
Keywords: face recognition algorithms, face detection techniques, face recognition/detection datasets
The Development of Artificial Intelligence: Toward Consistency in the Logical Structures of Datasets, AI Models, Model Building, and Hardware?
9
Authors: Li Guo, Jinghai Li. Engineering, 2025, Issue 7, pp. 13-17 (5 pages)
The aim of this article is to explore potential directions for the development of artificial intelligence (AI). It points out that, while current AI can handle the statistical properties of complex systems, it has difficulty effectively processing and fully representing their spatiotemporal complexity patterns. The article also discusses a potential path of AI development in the engineering domain. Based on the existing understanding of the principles of multilevel complexity, it suggests that consistency among the logical structures of datasets, AI models, model-building software, and hardware will be an important direction of AI development and is worthy of careful consideration.
Keywords: consistency, datasets, model building, AI models, artificial intelligence, hardware
A critical evaluation of deep-learning based phylogenetic inference programs using simulated datasets
10
Authors: Yixiao Zhu, Yonglin Li, Chuhao Li, Xing-Xing Shen, Xiaofan Zhou. Journal of Genetics and Genomics, 2025, Issue 5, pp. 714-717 (4 pages)
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Rising deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Keywords: phylogenetic inference, explicit models, sequence evolution, deep learning, molecular sequences, simulated datasets, phylogenetic methods, evolutionary biology
A Large-Scale Chinese Evaluation Benchmark for Face Video Forgery Detection
11
Authors: Bei Yijun, Lou Hengrui, Gao Kewei, Song Jie, Wang Rui, Jin Canghong, Lei Jie, Song Mingli, Hu Bingde, Feng Zunlei. 《中国图象图形学报》 (Journal of Image and Graphics, PKU Core), 2026, Issue 1, pp. 82-98 (17 pages)
Objective: High-fidelity forged face videos produced by AIGC (artificial intelligence generated content) techniques can deceive human visual perception, and current evaluation systems for face forgery detection algorithms lack validation of effectiveness and applicability on Chinese data. This work builds a quantitative evaluation benchmark for Chinese-language scenarios to drive the iteration of forgery detection techniques. Method: We present CHN-DF (Chinese-deepfake), a large-scale dataset of forged Chinese face videos, and detail the full construction pipeline of data collection, forged-sample generation, and quality assessment. Multi-dimensional experiments validate the dataset's complexity, covering cross-modal forgery techniques and a complete set of environmental interference factors, and a systematic evaluation benchmark based on deep detection models is established. Result: We release the first Chinese face video anti-forgery dataset worldwide, containing 434,727 samples. Experiments show the dataset is hard to discriminate: across 16 evaluated models, including SOTA (state-of-the-art) and mainstream anti-forgery models, visual and audio-visual accuracies stay below 85% and 70%, respectively. The benchmark covers visual and audio-visual modalities; in cross-domain generalization tests, model accuracy fluctuates by 19.6% on average, clearly revealing the practical limitations of existing algorithms. Conclusion: The Chinese anti-forgery benchmark fills a gap in the field. Systematic experiments clarify how dataset characteristics relate to algorithm performance, and key development directions such as strengthening model robustness and improving cross-modal generalization are proposed, providing data support and practical guidance for quantitative evaluation in Chinese scenarios and for real-world deployment of face video anti-forgery techniques. The CHN-DF dataset is released online at https://doi.org/10.57760/sciencedb.j00240.00067 and https://github.com/HengruiLou/CHN-DF.
Keywords: deepfake, forged face videos, face anti-forgery benchmark, Chinese dataset, multimodality
Impacts of random negative training datasets on machine learning-based geologic hazard susceptibility assessment
12
Authors: Hao Cheng, Wei Hong, Zhen-kai Zhang, Zeng-lin Hong, Zi-yao Wang, Yu-xuan Dong. China Geology, 2025, Issue 4, pp. 676-690 (15 pages)
This study investigated the impacts of random negative training datasets (NTDs) on the uncertainty of machine learning models for geologic hazard susceptibility assessment of the Loess Plateau, northern Shaanxi Province, China. Based on 40 randomly generated NTDs, the study developed susceptibility assessment models using the random forest algorithm and evaluated their performance using the area under the receiver operating characteristic curve (AUC). Specifically, the means and standard deviations of the AUC values from all models were used to assess the overall spatial correlation between the conditioning factors and the susceptibility assessment, as well as the uncertainty introduced by the NTDs. A risk-and-return methodology was then employed to quantify and mitigate this uncertainty, with log odds ratios used to characterize the susceptibility assessment levels. The risk and return values were calculated from the standard deviations and means of the log odds ratios at various locations. After the mean log odds ratios were converted into probability values, the final susceptibility map was plotted, which accounts for the uncertainty induced by random NTDs. The results indicate that the AUC values of the models ranged from 0.810 to 0.963, with an average of 0.852 and a standard deviation of 0.035, indicating encouraging predictive performance alongside a degree of uncertainty. The risk-and-return analysis reveals that low-risk, high-return areas have lower standard deviations and higher means across the multiple model-derived assessments. Overall, this study introduces a new framework for quantifying the uncertainty of multiple training and evaluation models, aimed at improving their robustness and reliability. Additionally, by identifying low-risk, high-return areas, resource allocation for geologic hazard prevention and control can be optimized, ensuring that limited resources are directed toward the most effective prevention and control measures.
Keywords: landslides, debris flows, collapses, ground fissures, geologic hazard prevention and control, engineering, geologic hazard susceptibility assessment, negative training dataset, average spatial correlation, random forest algorithm, risk and return analysis, geological survey engineering, Loess Plateau area
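The uncertainty bookkeeping (log odds across the 40 NTD models, "risk" and "return" as their standard deviation and mean, then a sigmoid back to probability for the final map) can be sketched as follows; the beta-distributed probabilities are synthetic stand-ins, not the study's model outputs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in: susceptibility probabilities for 5 locations from 40 models,
# each trained on a different random negative training dataset.
p = np.clip(rng.beta(2.0, 2.0, size=(40, 5)), 1e-6, 1 - 1e-6)

log_odds = np.log(p / (1 - p))   # characterize each model's assessment level
risk = log_odds.std(axis=0)      # "risk": spread across the 40 models
ret = log_odds.mean(axis=0)      # "return": mean assessment level

# Convert the mean log odds back to a probability for the final map.
p_final = 1.0 / (1.0 + np.exp(-ret))
print("risk:", np.round(risk, 2))
print("final probabilities:", np.round(p_final, 3))
```

Averaging in log-odds space rather than probability space keeps the aggregation symmetric around 0.5 and well-behaved near the extremes.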
An Occluded Traffic Sign Detection Algorithm Based on Improved RT-DETR
13
Authors: Yu Tianhe, Yang Zhuangzhuang, Hu Jinshuai, Chang Mengyao, Wang Wenlong. 《工程科学学报》 (Chinese Journal of Engineering, PKU Core), 2026, Issue 2, pp. 393-408 (16 pages)
Traffic sign detection suffers from small target sizes and low detection accuracy; in particular, under long-distance shooting and heavy occlusion, traditional detection algorithms often fail to recognize traffic signs accurately. This paper proposes a traffic sign detection algorithm based on an improved RT-DETR. First, given the scarcity of datasets with occluded traffic signs, a traffic sign dataset under occlusion conditions is built. Then, a dilated reparameterization block is introduced into the inverted residual mobile block to construct a lightweight composite dilated residual block that replaces the BasicBlock in the original backbone, enhancing the model's feature extraction capability. Finally, the loss function of RT-DETR is optimized with a proposed DS-IoU joint loss that accelerates model convergence. Experimental results show that the improved algorithm reaches 94.2% mAP on the self-built dataset, 4.7% higher than the original algorithm, and 92.8% and 91.7% mAP on the public TT100K and CCTSDB2021 datasets, improvements of 3.1% and 2.4%, respectively, while Params and GFLOPs drop by 26.0% and 12.5% compared with the original algorithm. The proposed improvements greatly reduce computation and parameter counts and effectively raise detection accuracy for occluded traffic signs.
Keywords: traffic sign detection, RT-DETR, occlusion dataset, lightweight design, joint loss function
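IoU-family losses such as CIoU, SIoU, and the paper's DS-IoU all build on plain intersection-over-union between a predicted and a ground-truth box; a minimal sketch of that base quantity (not the paper's DS-IoU formulation):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A perfect match scores 1.0; these boxes overlap 1x1 over a union of 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.142857...
```

Joint losses then add penalty terms (center distance, aspect ratio, angle) to 1 − IoU so that non-overlapping boxes still receive useful gradients.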
An Improved YOLOv8s Algorithm for Intelligent Detection of Apparent Defects in Reinforced Concrete Bridges
14
Authors: Liao Weizhang, Huang Shuchen, Yuan Wanying, Qin Mingchen. 《科学技术与工程》 (Science Technology and Engineering, PKU Core), 2026, Issue 4, pp. 1676-1687 (12 pages)
To raise the level of intelligence and automation in analyzing the long-term service performance of bridges, and to address the insufficient accuracy of existing defect detection algorithms in complex scenes, an improved YOLOv8s algorithm, YOLOv8s-RC, is proposed for intelligent detection of apparent defects in bridges. The large kernel separable convolutional attention mechanism (LSKA) is introduced into the fast spatial pyramid pooling module (SPPF) of the backbone to enhance defect feature extraction; the weighted feature fusion idea of the bidirectional feature pyramid network (BiFPN) is adopted to optimize feature fusion in the neck and strengthen its effectiveness; and the original CIoU loss function is replaced with the SIoU loss to improve bounding box localization. Ablation results on the CODEBRIM dataset show that YOLOv8s-RC improves precision, recall, F1 score, and mAP@0.5 over the original model by 2.3%, 1.7%, 2.0%, and 1.6%, respectively. The algorithm is notably stronger on small-target and weak-feature defects, and with only 12.2×10^6 parameters and an inference speed of 107.5 FPS it also meets the real-time requirements of deployment on lightweight devices. Generalization tests on the DACL10K dataset show that, compared with Faster-RCNN, SSD, YOLOv5s, and YOLOv8s, YOLOv8s-RC achieves better generalization and prediction accuracy across different bridge defect detection scenarios, providing a powerful technical means for recognizing apparent bridge defects in complex environments.
Keywords: bridge engineering, defect detection, small object detection, YOLOv8s, CODEBRIM dataset
Automated Chinese Essay Scoring with Joint Multi-Scale Feature Learning via Graph Neural Networks
15
Authors: Wen Hongjian, Hu Ruijiao, Wu Baowen, Sun Jiaxing, Li Huan, Zhang Qing, Liu Jie. 《计算机应用》 (Journal of Computer Applications, PKU Core), 2026, Issue 2, pp. 378-385 (8 pages)
Existing automated essay scoring (AES) methods based on pre-trained language models (PLMs) tend to represent essay quality directly with the global semantic features extracted by the PLM, overlooking the relationship between essay quality and finer-grained features. Focusing on Chinese AES, this work analyzes and evaluates essay quality from multiple textual perspectives and proposes a Chinese AES method that uses graph neural networks (GNNs) to jointly learn multi-scale essay features. First, GNNs are used to extract discourse features at the sentence and paragraph levels; these discourse features are then learned jointly with the essay's global semantic features for more accurate scoring. Finally, a Chinese AES dataset is constructed to provide a data foundation for Chinese AES research. Experiments on this dataset show that the proposed method improves the average quadratic weighted kappa (QWK) across six essay prompts by 1.1 percentage points over R2-BERT (a BERT model with regression and ranking), validating the effectiveness of joint multi-scale feature learning for AES. Ablation results further show the contribution of essay features at different scales to scoring performance. To demonstrate the advantage of small models in specific task settings, comparisons were made with the popular general-purpose large language models GPT-3.5-turbo and DeepSeek-V3: a BERT (Bidirectional Encoder Representations from Transformers) model using the proposed method exceeds them by 65.8 and 45.3 percentage points in average QWK across the six prompts, supporting the view that large language models (LLMs) perform poorly on domain-oriented, document-level essay scoring due to the lack of large-scale supervised fine-tuning data.
Keywords: automated Chinese essay scoring, pre-trained language model, graph neural network, Chinese AES dataset, multi-feature learning
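QWK, the metric behind the reported gains, can be computed with a short numpy function; this is a generic implementation of the standard definition, not the paper's code.

```python
import numpy as np

def qwk(a, b, n_classes):
    """Quadratic weighted kappa between two integer ratings in [0, n_classes)."""
    O = np.zeros((n_classes, n_classes))          # observed rating matrix
    for i, j in zip(a, b):
        O[i, j] += 1
    # Expected counts under independence of the two raters' marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / len(a)
    # Quadratic disagreement weights: (i - j)^2, normalized.
    w = np.subtract.outer(np.arange(n_classes), np.arange(n_classes)) ** 2
    w = w / (n_classes - 1) ** 2
    return 1.0 - (w * O).sum() / (w * E).sum()

print(qwk([0, 1, 2, 3], [0, 1, 2, 3], 4))  # perfect agreement → 1.0
```

Because disagreements are weighted quadratically, being off by two score bands costs four times as much as being off by one, which suits ordinal essay scores.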
Research on Pathways for Building High-Quality Datasets of Archival Documentary Heritage
16
Authors: Wang Yujue, Xu Ruiting, Jiao Junjie, Fan Jingya. 《北京档案》 (Beijing Archives, PKU Core), 2026, Issue 1, pp. 7-16 (10 pages)
High-quality datasets have become a key strategic resource in national technological competition and the building of cultural soft power. Archival documentary heritage data, which combines evidentiary, historical-memory, and data-asset attributes, is significant for improving the performance of large language models in vertical cultural domains, for value alignment, and for safeguarding cultural sovereignty. Building on the multi-dimensional value of archival documentary heritage data and on the requirements of high-value application, high knowledge density, and high technical content for high-quality datasets, this study constructs a "nine-dimension requirement matrix" oriented toward large language model training and knowledge services. It systematically analyzes current difficulties in data resource construction, knowledge organization of data, and technological enablement, and proposes a "three-step" implementation path centered on system planning, engineering construction, and quality inspection. The aim is to transform archival documentary heritage data from scattered resources into high-quality, circulable, and trustworthy data assets, providing a theoretical framework and practical reference for building high-quality datasets that support the national cultural digitization strategy and the development of artificial intelligence.
Keywords: high-quality datasets, archival documentary heritage, data elements, national cultural digitization, artificial intelligence
On Parallel Property Rights in Data
17
Authors: Xiong Bingwan, Zhuang Hongshan. 《江苏社会科学》 (Jiangsu Social Sciences, PKU Core), 2026, Issue 1, pp. 179-186, I0004-I0005 (10 pages)
The non-rivalrous and replicable nature of data gives rise to the widespread phenomenon of "parallel holding", providing the factual basis for multiple data processors to share the use value of the same data. However, parallel holders often lack the capacity to contract clearly over the allocation of data entitlements, and tort law struggles to offer exclusive protection for their joint interests. To give data processors stable behavioral expectations, it is necessary to establish at the level of property law a set of parallel property right rules distinct from co-ownership of things, joint works, or the first-to-file patent scheme. As a set of default property rules, the parallel property right in data is a concrete expression of the "one dataset, one right" principle: without defeating the parties' cooperative purposes or major interest expectations, it grants each party non-conflicting rights to use and operate the data, thereby fully exploiting opportunities for data circulation and reuse and promoting the development of the data element market.
Keywords: data property rights, parallel holding, data-specific goods, data by-products, "one dataset, one right" principle
Uniting Youth Strength to Discuss the Information Future: Notes on the First Jiangsu Graduate Academic Innovation Forum on Information Resource Management
18
Authors: Liu Qiong, Liu Guifeng. 《图书情报研究》 (Library and Information Studies), 2026, Issue 1, pp. 90-96 (7 pages)
From 7 to 9 November 2025, the First Jiangsu Graduate Academic Innovation Forum on Information Resource Management was successfully held at Jiangsu University. The forum focused on frontier developments in the discipline of information resource management, aiming to promote interdisciplinary integration and academic innovation. Five keynote reports covered key directions including the disciplinary attributes and paradigm evolution of information resource management, the construction of high-quality datasets, the disciplinary system of intellectual property intelligence, applications of large language models in scientific literature analysis, and semantic representation in digital humanities. Participating experts noted that, amid the disciplinary transition from "library, information and archives management" to "information resource management" and even "digital civilization", young scholars face both an expanding research territory and deeper questions of academic identity and disciplinary direction. In today's technological torrent, the forum not only built a platform for academic exchange but also prompted profound reflection on the discipline's nature and future path.
Keywords: information resource management, library, information and archives management, discipline renaming, datasets, large language models, digital humanities, semantic representation
Establishment and Preliminary Application of an ELISA Method for SMD Residue Detection (Cited: 15)
19
Authors: Zhou Xinmin, Chen Lianyi, Wang Handong, Wang Zongyuan. 《畜牧与兽医》 (Animal Husbandry & Veterinary Medicine, PKU Core), 2003, Issue 10, pp. 8-11 (4 pages)
SMD was coupled to the carrier protein BSA by the glutaraldehyde method to prepare the synthetic antigen SMD-BSA as immunogen, and the coating antigen SMD-OVA was synthesized in the same way; healthy rabbits were immunized to obtain antiserum. Qualitative and quantitative measurements of the antiserum by double immunodiffusion and ELISA showed that it was specific to SMD. An indirect competitive ELISA was established with the prepared antiserum, and its working conditions were optimized: checkerboard titration determined the optimal coating antigen concentration (50 μg/mL), the optimal antiserum dilution (1:100), and the optimal working dilution of the enzyme-labeled antibody (1:500), and an ELISA standard curve was established. The standard curve showed good linearity over the concentration range of 10-2000 μg/L. The detection limit of the method was 63 μg/L, below both the international maximum residue limit (100 μg/kg) and the domestic limit (300 μg/kg).
Keywords: SMD, drug residues, residue detection, ELISA, sulfamonomethoxine, artificial antigen, serum antibody
Preparation of Monoclonal Antibodies against SMD and Establishment of a Detection Method (Cited: 11)
20
Authors: Peng Huijian, Wang Handong, Wang Zongyuan, Tang Na, Zhou Xiaoxin. 《畜牧与兽医》 (Animal Husbandry & Veterinary Medicine, PKU Core), 2003, Issue 9, pp. 12-14 (3 pages)
Using glutaraldehyde as the coupling agent, sulfamonomethoxine (SMD) was conjugated to bovine serum albumin (BSA) or ovalbumin (OVA) to form complete antigens, which were verified by UV spectrophotometric scanning. BALB/c mice were immunized with the artificial antigen, and their spleen cells were fused with myeloma cells (SP2/0-Ag14). Hybridoma cells producing antibodies against SMD were screened by indirect ELISA combined with competitive ELISA. After cloning, three positive hybridoma lines with good specificity were obtained and injected into the peritoneal cavity of mice to produce ascites. An ELISA for measuring SMD was established, with a detection limit below 5 ng/mL.
Keywords: SMD monoclonal antibody, preparation, detection method, sulfamonomethoxine, bovine serum albumin, complete antigen, hybridoma cells, ELISA, indirect enzyme-linked immunosorbent assay, sulfonamides, residue detection