Journal Articles
4,075 articles found
Top-k Disjoint Contrast Pattern Mining
1
Authors: 王小丫, 武优西, 王月华, 李艳. 《小型微型计算机系统》 (PKU Core), 2026, No. 3, pp. 556-562 (7 pages)
Contrast pattern mining focuses on identifying patterns that differ significantly between databases of different classes. However, existing contrast sequential pattern mining algorithms require users to preset a frequency threshold, which makes it difficult for the resulting contrast patterns to reach ideal classification accuracy; moreover, contrast-based mining algorithms do not satisfy the anti-monotonicity property and therefore cannot use a pattern-join strategy to generate candidate patterns. To address these problems, this paper proposes TDCP, a Top-k disjoint contrast pattern mining algorithm. TDCP adopts a Top-k strategy and uses contrast to measure a pattern's classification ability, so no frequency threshold needs to be set manually; it computes pattern support with a position index structure, effectively improving running efficiency; and it combines enumeration and pruning strategies to generate candidate patterns, ensuring completeness of pattern generation while avoiding heavy redundancy. Experiments show that TDCP outperforms competing algorithms in both mining performance and classification effectiveness.
Keywords: sequential pattern mining; contrast pattern; position index; top-k; candidate pattern generation
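The top-k-by-contrast selection the TDCP abstract describes can be illustrated with a minimal sketch. This is not the authors' algorithm: pattern enumeration and the position index are omitted, and contrast is assumed here to be the support-rate difference between the two classes, with a size-k min-heap whose minimum acts as the running admission bar.

```python
import heapq

def support(pattern, db):
    """Fraction of sequences in db containing pattern as a subsequence."""
    def contains(seq, pat):
        it = iter(seq)
        return all(ch in it for ch in pat)  # 'in' consumes the iterator
    return sum(contains(s, pattern) for s in db) / len(db)

def top_k_contrast(patterns, pos_db, neg_db, k):
    """Keep the k patterns with the largest contrast; no frequency
    threshold is needed, mirroring the Top-k strategy."""
    heap = []  # min-heap of (contrast, pattern)
    for p in patterns:
        c = support(p, pos_db) - support(p, neg_db)
        if len(heap) < k:
            heapq.heappush(heap, (c, p))
        elif c > heap[0][0]:
            heapq.heapreplace(heap, (c, p))
    return sorted(heap, reverse=True)
```

With `pos = ["abc", "abd", "acd"]` and `neg = ["bcd", "bdd", "ccd"]`, the pattern "a" (contrast 1.0) and "ab" (contrast 2/3) beat "cd" (negative contrast).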
Top-k Local Co-location Pattern Mining Based on Weighted Voronoi Diagrams
2
Authors: 金灿, 王丽珍, 杨金华. 《计算机科学与探索》 (PKU Core), 2026, No. 3, pp. 730-746 (17 pages)
Local co-location pattern (LCP) mining is an important branch of spatial co-location pattern mining that aims to discover co-location patterns occurring frequently in local regions. LCPs reveal associations among spatial features within local regions rather than globally, and they provide useful guidance in various location-based applications. Existing LCP mining methods cannot effectively identify local regions formed by human activities (anthropogenic factors), and it is difficult to set a suitable frequency threshold for filtering frequent patterns across different regions. To solve these problems, a novel top-k LCP mining method based on weighted Voronoi diagrams (Top-k LCPM-WVD) is proposed. The method identifies the distribution regions of LCPs formed by anthropogenic factors via a weighted Voronoi diagram and uses a top-k mining framework to efficiently mine the k most frequent patterns within each region. A series of optimization strategies is designed on top of this framework to further improve mining efficiency. In addition, to address efficiency on large-scale datasets, a parallel mining scheme is proposed to accelerate the process, achieving a speedup of 1.65 with 4 threads. Extensive experiments on real and synthetic datasets confirm that, compared with state-of-the-art algorithms, Top-k LCPM-WVD discovers interpretable local co-location patterns more efficiently, with efficiency gains of tens of times.
Keywords: spatial pattern mining; local co-location pattern (LCP); weighted Voronoi diagram; top-k; parallelism
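A weighted Voronoi diagram assigns each location to the generator with the smallest weighted distance. The sketch below uses the multiplicatively weighted variant, where a larger weight enlarges a generator's cell; it is an illustration of the geometric primitive, not the paper's region-identification construction.

```python
import math

def weighted_voronoi_cell(point, generators):
    """Index of the generator owning `point` under multiplicatively
    weighted distance d(p, g) / w_g. generators: [((x, y), weight)].
    Generators with larger weights claim larger regions."""
    def wdist(p, g):
        (gx, gy), w = g
        return math.hypot(p[0] - gx, p[1] - gy) / w
    return min(range(len(generators)),
               key=lambda i: wdist(point, generators[i]))
```

With generators at (0, 0) weight 1 and (10, 0) weight 3, the point (4, 0) is Euclidean-closer to the first generator but belongs to the second's weighted cell (4/1 = 4 versus 6/3 = 2).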
Synaptic pruning mechanisms and application of emerging imaging techniques in neurological disorders
3
Authors: Yakang Xing, Yi Mo, Qihui Chen, Xiao Li. 《Neural Regeneration Research》, 2026, No. 5, pp. 1698-1714 (17 pages)
Synaptic pruning is a crucial process in synaptic refinement, eliminating unstable synaptic connections in neural circuits. This process is triggered and regulated primarily by spontaneous neural activity and experience-dependent mechanisms. The pruning process involves multiple molecular signals and a series of regulatory activities governing the "eat me" and "don't eat me" states. Under physiological conditions, the interaction between glial cells and neurons results in the clearance of unnecessary synapses, maintaining normal neural circuit functionality via synaptic pruning. Alterations in genetic and environmental factors can lead to imbalanced synaptic pruning, thus promoting the occurrence and development of autism spectrum disorder, schizophrenia, Alzheimer's disease, and other neurological disorders. In this review, we examine the molecular mechanisms responsible for synaptic pruning during neural development, focusing on how synaptic pruning regulates neural circuits and on its association with neurological disorders. Furthermore, we discuss the application of emerging optical and imaging technologies to observe synaptic structure and function, as well as their potential for clinical translation. Our aim is to enhance the understanding of synaptic pruning during neural development, including the molecular basis underlying the regulation of synaptic function and the dynamic changes in synaptic density, and to investigate the potential role of these mechanisms in the pathophysiology of neurological diseases, thus providing a theoretical foundation for the treatment of neurological disorders.
Keywords: chemokine; complement; experience-dependent synaptic pruning; imaging techniques; neuroglia; signaling pathways; synapse elimination; synaptic pruning
Modeling Pruning as a Phase Transition: A Thermodynamic Analysis of Neural Activations
4
Authors: Rayeesa Mehmood, Sergei Koltcov, Anton Surkov, Vera Ignatenko. 《Computers, Materials & Continua》, 2026, No. 3, pp. 2304-2327 (24 pages)
Activation pruning reduces neural network complexity by eliminating low-importance neuron activations, yet identifying the critical pruning threshold, beyond which accuracy rapidly deteriorates, remains computationally expensive and typically requires exhaustive search. We introduce a thermodynamics-inspired framework that treats activation distributions as energy-filtered physical systems and employs the free energy of activations as a principled evaluation metric. Phase-transition-like phenomena in the free-energy profile, such as extrema, inflection points, and curvature changes, yield reliable estimates of the critical pruning threshold, providing a theoretically grounded means of predicting sharp accuracy degradation. To further enhance efficiency, we propose a renormalized free energy technique that approximates full-evaluation free energy using only the activation distribution of the unpruned network. This eliminates repeated forward passes, dramatically reducing computational overhead and achieving speedups of up to 550× for MLPs. Extensive experiments across diverse vision architectures (MLP, CNN, ResNet, MobileNet, Vision Transformer) and text models (LSTM, BERT, ELECTRA, T5, GPT-2) on multiple datasets validate the generality, robustness, and computational efficiency of our approach. Overall, this work establishes a theoretically grounded and practically effective framework for activation pruning, bridging the gap between analytical understanding and efficient deployment of sparse neural networks.
Keywords: thermodynamics; activation pruning; model compression; sparsity; free energy; renormalization
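The free-energy view of activations can be sketched concretely. This is one reading of the abstract, not the paper's exact formulation: activation magnitudes are treated as energy levels of a Boltzmann-like system, F = -T·ln Z is computed over the surviving activations, and the profile of F over candidate thresholds is where phase-transition-like features would be sought.

```python
import math

def free_energy(activations, temperature=1.0):
    """F = -T * ln(sum_i exp(-a_i / T)), treating each surviving
    activation magnitude as an energy level."""
    z = sum(math.exp(-a / temperature) for a in activations)
    return -temperature * math.log(z)

def free_energy_profile(activations, thresholds, temperature=1.0):
    """Free energy after pruning all activations below each threshold.
    Sharp changes in this profile flag candidate critical thresholds."""
    profile = []
    for t in thresholds:
        kept = [a for a in activations if abs(a) >= t]
        profile.append(free_energy(kept, temperature) if kept else float("inf"))
    return profile
```

Because pruning only removes terms from Z, the profile rises monotonically with the threshold; it is the shape of that rise (inflections, curvature changes) that carries the signal.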
Mitigating Attribute Inference in Split Learning via Channel Pruning and Adversarial Training
5
Authors: Afnan Alhindi, Saad Al-Ahmadi, Mohamed Maher Ben Ismail. 《Computers, Materials & Continua》, 2026, No. 3, pp. 1465-1489 (25 pages)
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches: the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. The proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task, while the suggested channel pruning enables the model to adaptively adjust and reactivate pruned channels during adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods and demonstrated the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers dropped drastically, by 70%.
Keywords: split learning; privacy-preserving split learning; distributed collaborative machine learning; channel pruning; adversarial learning; resource-constrained devices
A Top-k Window Aggregate Query Method for Uncertain Time Series (cited by 1)
6
Authors: 张航, 熊浩然, 何震瀛. 《计算机工程》 (PKU Core), 2025, No. 7, pp. 161-170 (10 pages)
In recent years, analyzing and mining uncertain time series data has drawn increasing attention. Top-k queries, a hot research topic in the database field, aim to retrieve from large-scale data the k results that best match the user's query conditions. Although Top-k queries have been widely applied elsewhere, Top-k queries over uncertain time series remain little studied, even though such queries can effectively help users extract important information from uncertain time series. This paper proposes a new Top-k query problem, the Top-k window aggregate query over uncertain time series, and presents an efficient query method for it. The query can serve as a basic tool to help users explore and analyze uncertain time series data. Existing methods capable of supporting this query either have low query efficiency or require excessive storage. To address this, a two-level Top-k query method based on a sub-window concatenation strategy is proposed, together with an efficient method for computing an upper bound on the threshold, which resolves the threshold-computation difficulty introduced by sub-window concatenation. The method supports Top-k window aggregate queries over uncertain time series efficiently with little precomputed storage. Experiments on real and synthetic datasets show that, compared with a TA-based Top-k query method, the proposed method markedly reduces the storage of precomputed lists; compared with the traversal-based FSEC-S method, the proposed method and its threshold-upper-bound optimization improve average query efficiency by 7.27× and 20.04×, respectively.
Keywords: uncertain time series; top-k query; window; aggregate query; sorted lists; threshold
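The TA-based baseline mentioned in the abstract is Fagin's threshold algorithm over sorted score lists. A generic sketch (not the paper's window-aggregate variant): scan all lists in lock step, score each newly seen item in full, and stop once the k-th best total reaches the threshold formed by the scores at the current scan depth, since no unseen item can then beat it.

```python
import heapq

def threshold_algorithm(lists, k):
    """Fagin's TA. `lists` maps each attribute to a list of
    (item, score) pairs sorted by descending score."""
    by_item = {a: dict(lst) for a, lst in lists.items()}
    top, seen = [], set()  # min-heap of (total, item); scored items
    max_len = max(len(lst) for lst in lists.values())
    for depth in range(max_len):
        threshold = sum(lst[depth][1] for lst in lists.values()
                        if depth < len(lst))
        for lst in lists.values():
            if depth >= len(lst) or lst[depth][0] in seen:
                continue
            item = lst[depth][0]
            seen.add(item)
            total = sum(by_item[a].get(item, 0) for a in lists)
            if len(top) < k:
                heapq.heappush(top, (total, item))
            elif total > top[0][0]:
                heapq.heapreplace(top, (total, item))
        if len(top) == k and top[0][0] >= threshold:
            break  # early termination: sorted access can stop here
    return sorted(top, reverse=True)
```

The early stop is what the precomputed sorted lists ("有序列表" in the keywords) buy: deep list entries are never touched when the top-k stabilizes early.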
A Top-k Combinatorial Skyline Query Algorithm Based on Dispersion Analysis
7
Authors: 董雷刚, 刘国华, 王鑫, 崔晓微. 《计算机应用与软件》 (PKU Core), 2025, No. 2, pp. 72-80 (9 pages)
Existing combinatorial Skyline query algorithms cannot distinguish the dispersion of the data within a combination, and their result sets are very large. To address this, a Top-k combinatorial Skyline query algorithm based on dispersion analysis is proposed. A weight-based dispersion coefficient for combinations and its computation method are introduced; a classifier assigns combinations to different queues; and the queues are then processed in parallel. Experimental results show that the algorithm returns results accurately and effectively according to user-defined conditions and meets the needs of practical applications.
Keywords: combinatorial Skyline; dispersion analysis; top-k; dispersion coefficient; classifier; parallel processing
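The Skyline operator underlying the combinatorial query keeps only non-dominated points. A minimal sketch of that dominance test, with smaller-is-better on every dimension (this is the standard operator, not the paper's combination-level algorithm):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller values preferred)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive O(n^2) skyline: keep points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For points (1, 5), (2, 3), (4, 4), (3, 1), the point (4, 4) is dominated by (2, 3) and is dropped; the other three form the skyline.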
SFPBL: Soft Filter Pruning Based on Logistic Growth Differential Equation for Neural Networks (cited by 1)
8
Authors: Can Hu, Shanqing Zhang, Kewei Tao, Gaoming Yang, Li Li. 《Computers, Materials & Continua》, 2025, No. 3, pp. 4913-4930 (18 pages)
The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline involving training, pruning, and retraining. Existing methods often directly set the pruned filters to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, considering the characteristics of network training. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be effectively managed. In experiments conducted on the CIFAR-10 and ILSVRC-2012 datasets, the pruning of neural networks significantly reduces the floating-point operations while maintaining the same pruning rate. Specifically, when implementing a 30% pruning rate on the ResNet-110 network, the pruned neural network not only decreases floating-point operations by 40.8% but also enhances the classification accuracy by 0.49% compared to the original network.
Keywords: filter pruning; channel pruning; CNN complexity; deep neural networks; filtering theory; logistic model
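The gentle-fast-gentle decay the abstract attributes to the logistic growth equation can be sketched with a logistic schedule over normalized training time. This illustrates only the shape of the schedule, with a hypothetical steepness constant; the paper's adaptive, weight-dependent parameterization is not reproduced.

```python
import math

def logistic_decay_fraction(epoch, total_epochs, steepness=10.0):
    """Cumulative decay fraction following the logistic curve
    1 / (1 + exp(-s*(t - 0.5))) over normalized time t in [0, 1]:
    gentle at the start, fastest mid-training, gentle near convergence."""
    t = epoch / total_epochs
    return 1.0 / (1.0 + math.exp(-steepness * (t - 0.5)))

def soft_prune_weight(weight, epoch, total_epochs):
    """Softly decay a pruned filter's weight toward zero instead of
    zeroing it outright, so filters misjudged early can still recover."""
    return weight * (1.0 - logistic_decay_fraction(epoch, total_epochs))
```

At epoch 0 almost nothing is removed, at mid-training half the weight is gone, and by the end the pruned filter is nearly zeroed, matching the three stages described.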
Differential Privacy Protection for Sensitive Power Data under Top-k Frequent Mining (cited by 2)
9
Authors: 奚增辉, 王卫斌, 屈志坚, 姚嵘, 陆嘉铭. 《电子设计工程》, 2025, No. 10, pp. 112-115, 120 (5 pages)
Because power systems contain massive, dynamically changing data, both sensitive and non-sensitive power data are stored in the database. If a user makes an error when querying data objects, the privacy of sensitive data can be leaked. To avoid this, a differential privacy protection method for sensitive power data under top-k frequent mining is proposed. Top-k items are set up to perform frequent mining over the sensitive power data. Differential privacy is introduced to create a private ledger of sensitive power data; its privacy is analyzed and the differential privacy scheme refined, achieving differential privacy protection for sensitive power data. Experimental results show that, with the top-k frequent mining algorithm in place, host components do not erroneously query sensitive power data, so the differential privacy of sensitive data is well protected.
Keywords: top-k frequent mining; sensitive power data; differential privacy; private ledger
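Differential privacy for released frequency counts is commonly achieved with the Laplace mechanism. A minimal sketch of standard ε-DP noisy top-k selection (this is the textbook mechanism, not the paper's private-ledger scheme; the inverse-transform Laplace sampler is written out for self-containment):

```python
import math
import random

def dp_top_k_counts(counts, k, epsilon, seed=0):
    """Add Laplace(1/epsilon) noise to each item's count (sensitivity 1
    for adding/removing one record), then report the k items with the
    largest noisy counts."""
    rng = random.Random(seed)
    def lap(scale):
        u = rng.random() - 0.5  # inverse-transform Laplace sample
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    noisy = {item: c + lap(1.0 / epsilon) for item, c in counts.items()}
    return sorted(noisy, key=noisy.get, reverse=True)[:k]
```

With well-separated counts and a moderate ε the true top items are almost always reported; privacy comes from the noise masking small, individual-level changes in the counts.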
ACCF: Top-k Flow Measurement Driven by a Time-Prediction Mechanism
10
Authors: 胡永庆, 杨含, 刘子源, 秦广军, 戴庆龙. 《计算机科学》 (PKU Core), 2025, No. 10, pp. 98-105 (8 pages)
To address the reliance of current top-k flow measurement filtering algorithms on fixed counter thresholds, an activity-prediction-based measurement structure, ACCF (Activity Counting Cuckoo Filter), is proposed. ACCF introduces an activity prediction mechanism that uses time series analysis and the exponentially weighted moving average (EWMA) to dynamically compute flow activity, enabling real-time identification and early filtering of potential top-k flows. To counter the precision loss that hash collisions can cause, ACCF adds a Self-Refreshing Storage Table (SRST) that stores flow information along eviction paths. When the number of evictions reaches the configured MaxNumKicks value, the SRST preferentially evicts, within a local scope, the flow entry with the lowest activity, avoiding the loss of important flow information. Experiments show that, under suitable parameter combinations, ACCF with SRST can filter out about 65% of elephant flows in advance and reduce insertions by about 41%, while significantly improving accuracy in top-k flow measurement, showing clear advantages over the traditional Space Saving (SS), CM Sketch, LUSketch, and Cuckoo Counter algorithms.
Keywords: top-k; activity; time series; EWMA; SRST; sketch
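The EWMA update at the core of ACCF's activity score can be sketched directly. This is the generic EWMA with a hypothetical smoothing factor; the paper's activity definition layers more on top of it.

```python
def ewma_update(prev, observation, alpha=0.3):
    """Exponentially weighted moving average: the new observation gets
    weight alpha, history decays geometrically by (1 - alpha)."""
    return alpha * observation + (1 - alpha) * prev

def flow_activity(packet_counts_per_window, alpha=0.3):
    """Fold a flow's per-window packet counts into one activity score:
    a sustained heavy flow ends high, a one-off burst decays away."""
    act = 0.0
    for c in packet_counts_per_window:
        act = ewma_update(act, c, alpha)
    return act
```

A flow sending 10 packets in each of three windows ends with a higher activity than one sending all 30 in the first window, which is exactly the discrimination a fixed counter threshold cannot make.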
TKUL: A Top-k High-Utility Mining Algorithm Based on Utility Lists
11
Authors: 高敏节, 张美春. 《电脑编程技巧与维护》, 2025, No. 10, pp. 38-40 (3 pages)
To address the slow threshold raising and poor pruning of existing high-utility itemset mining algorithms, an algorithm that more efficiently mines the top-k itemsets with the highest utility is proposed. TKUL (mining Top-K high Utility itemsets based on utility Lists) combines the RIUQ, CUDQ, and EPB threshold-raising strategies to speed up obtaining the minimum threshold, greatly reducing the number of non-high-utility itemsets generated, and prunes with the RUI and EUCPM strategies, effectively shrinking the search space and thereby improving the efficiency of high-utility itemset mining.
Keywords: association rules; high-utility itemsets; top-k itemsets
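The threshold-raising idea behind top-k high-utility mining can be illustrated generically: a min-heap of the k best utilities seen so far makes its minimum the current threshold, which only ever rises and prunes later candidates. The RIUQ/CUDQ/EPB strategies govern how quickly that threshold rises; they are not reproduced here.

```python
import heapq

def top_k_by_utility(candidates, k):
    """candidates: iterable of (itemset, utility). Returns the k
    itemsets with the highest utility, raising the admission
    threshold as the scan progresses."""
    heap = []       # min-heap of (utility, itemset)
    threshold = 0   # current minimum-utility threshold; only rises
    for itemset, utility in candidates:
        if utility < threshold:
            continue  # pruned: cannot enter the top-k
        if len(heap) < k:
            heapq.heappush(heap, (utility, itemset))
        elif utility > heap[0][0]:
            heapq.heapreplace(heap, (utility, itemset))
        if len(heap) == k:
            threshold = heap[0][0]
    return sorted(heap, reverse=True)
```

The earlier the threshold climbs toward its final value, the more candidates are skipped before scoring, which is why dedicated raising strategies matter.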
Computation graph pruning based on critical path retention in evolvable networks
12
Authors: XIE Xiaoyan, YANG Tianjiao, ZHU Yun, LUO Xing, JIN Luochen, YU Jinhao, REN Xun. 《High Technology Letters》, 2025, No. 3, pp. 266-272 (7 pages)
The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality in evolvable networks present significant implementation challenges when deployed in resource-constrained environments. Because critical paths are ignored, traditional pruning strategies cannot achieve a desired trade-off between accuracy and efficiency. For this reason, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computational graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution value, and redundant operations are removed as far as possible while ensuring that the critical path is not affected. As a result, computational efficiency is improved while higher accuracy is maintained. On the CIFAR benchmark, experimental results demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00%, while outperforming traditional feature-agnostic grouping methods by an average 8.98% accuracy improvement. Simultaneously, the pruned model attains a 2.41× inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
Keywords: evolvable network; computation graph traversal; dynamic routing; critical path retention pruning
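Critical-path retention presupposes finding the critical (highest-contribution) path through the computation graph first. A standard longest-path sketch over a DAG, with per-node values standing in for the paper's contribution metric (the grouping and removal steps are not reproduced):

```python
def critical_path(edges, value):
    """Longest path in a DAG by summed node value.
    edges: {node: [successor, ...]}, value: {node: contribution}.
    Returns (best_value, path); nodes on this path must be retained."""
    order, seen = [], set()
    def dfs(n):                      # post-order DFS: successors first
        if n in seen:
            return
        seen.add(n)
        for m in edges.get(n, []):
            dfs(m)
        order.append(n)
    for n in value:
        dfs(n)
    best = {}  # node -> (best path value starting here, next node)
    for n in order:                  # successors are already computed
        succ = [(best[m][0], m) for m in edges.get(n, [])]
        tail = max(succ) if succ else (0, None)
        best[n] = (value[n] + tail[0], tail[1])
    start = max(value, key=lambda n: best[n][0])
    path, n = [], start
    while n is not None:
        path.append(n)
        n = best[n][1]
    return best[start][0], path
```

For a diamond graph a→{b, c}→d with values 1, 5, 2, 1, the critical path is a-b-d with total 7; pruning would then be restricted to operations off that path.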
Optimizing BERT for Bengali Emotion Classification: Evaluating Knowledge Distillation, Pruning, and Quantization
13
Authors: Md Hasibur Rahman, Mohammed Arif Uddin, Zinnat Fowzia Ria, Rashedur M. Rahman. 《Computer Modeling in Engineering & Sciences》, 2025, No. 2, pp. 1637-1666 (30 pages)
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We have explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced parameters from 178 to 46 M, while the memory size decreased from 711 to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
Keywords: Bengali NLP; black-box distillation; emotion classification; model compression; post-training quantization; unstructured pruning
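The knowledge distillation step used in pipelines like this is usually Hinton-style temperature-softened distillation. A minimal sketch of that standard loss (not this paper's specific training recipe): the teacher's logits are softened with a temperature T so the student can learn from the relative probabilities of non-target classes.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution,
    exposing the teacher's knowledge about non-target classes."""
    m = max(l / temperature for l in logits)  # subtract max for stability
    exps = [math.exp(l / temperature - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between teacher and student soft targets, scaled
    by T^2 to keep gradient magnitudes comparable across temperatures."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -temperature ** 2 * sum(ti * math.log(si)
                                   for ti, si in zip(t, s))
```

In practice this term is mixed with the ordinary hard-label cross-entropy; the loss is minimized exactly when the student's softened distribution matches the teacher's.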
CLAD: Criterion Learner and Attention Distillation for Automated CNN Pruning
14
Authors: Zheng Li, Jiaxin Li, Shaojie Liu, Bo Zhao, Derong Liu. 《Journal of Automation and Intelligence》, 2025, No. 4, pp. 254-265 (12 pages)
Filter pruning effectively compresses a neural network by reducing both its parameters and computational cost. Existing pruning methods typically rely on pre-designed pruning criteria to measure filter importance and remove those deemed unimportant. However, different layers of the neural network exhibit varying filter distributions, making it inappropriate to apply the same pruning criterion to all layers. Additionally, some approaches apply different criteria from a set of pre-defined pruning rules to different layers, but the limited rule space makes it difficult to cover all layers, and manually designing criteria for every layer is costly and hard to generalize to other networks. To solve this problem, we present a novel neural network pruning method based on a Criterion Learner and Attention Distillation (CLAD). Specifically, CLAD develops a differentiable criterion learner that is integrated into each layer of the network. The learner automatically learns the appropriate pruning criterion from the filter parameters of each layer, eliminating the need for manual design, and is trained end-to-end by gradient optimization to achieve efficient pruning. In addition, attention distillation, which fully utilizes the knowledge of the unpruned network to guide the optimization of the learner and improve the pruned network's performance, is introduced into the learner optimization process. Experiments conducted on various datasets and networks demonstrate the effectiveness of the proposed method. Notably, CLAD reduces the FLOPs of ResNet-110 by about 53% on the CIFAR-10 dataset while simultaneously improving the network's accuracy by 0.05%. Moreover, it reduces the FLOPs of ResNet-50 by about 46% on the ImageNet-1K dataset while maintaining a top-1 accuracy of 75.45%.
Keywords: neural network pruning; model compression; knowledge distillation; feature attention; polar regularization
Greedy Pruning Algorithm for DETR Architecture Networks Based on Global Optimization
15
Authors: HUANG Qiubo, XU Jingsai, ZHANG Yakui, WANG Mei, CHEN Dehua. 《Journal of Donghua University (English Edition)》, 2025, No. 1, pp. 96-105 (10 pages)
End-to-end object detection Transformer (DETR) successfully established the paradigm of the Transformer architecture in the field of object detection. Its end-to-end detection process and its idea of set prediction have made it one of the most popular network architectures in recent years, and there has been an abundance of work improving upon DETR. However, DETR and its variants require a substantial amount of memory resources and computational cost, and the vast number of parameters in these networks is unfavorable for model deployment. To address this issue, a greedy pruning (GP) algorithm is proposed and applied to the variant denoising-DETR (DN-DETR), eliminating redundant parameters in the Transformer architecture of DN-DETR. Considering the different roles of the multi-head attention (MHA) module and the feed-forward network (FFN) module in the Transformer architecture, a modular greedy pruning (MGP) algorithm is further proposed, which separates the two modules and applies their respective optimal strategies and parameters. The effectiveness of the proposed algorithm is validated on the COCO 2017 dataset. The model obtained through the MGP algorithm reduces the parameters by 49% and the number of floating-point operations (FLOPs) by 44% compared to the Transformer architecture of DN-DETR, while the mean average precision (mAP) of the model increases from 44.1% to 45.3%.
Keywords: model pruning; detection Transformer (DETR); Transformer architecture; object detection
Hierarchical Shape Pruning for 3D Sparse Convolution Networks
16
Authors: Haiyan Long, Chonghao Zhang, Xudong Qiu, Hai Chen, Gang Chen. 《Computers, Materials & Continua》, 2025, No. 8, pp. 2975-2988 (14 pages)
3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method reduces 93.47% of redundant kernel computations while maintaining comparable accuracy (1.56% mAP drop). Remarkably, on the more complex NuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (improvements of 1.02% mAP (mean Average Precision) and 0.47% NDS (nuScenes detection score)), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
Keywords: shape pruning; model compression; 3D sparse convolution
High-Speed Retrieval of International Trade Data Based on the Top-k Query Algorithm
17
Author: 汤陈燕. 《湖南邮电职业技术学院学报》, 2025, No. 3, pp. 62-67 (6 pages)
Traditional high-speed data retrieval methods suffer from reduced retrieval accuracy when data similarity is high. To address this, the Top-k query algorithm is introduced and, taking international trade data as an example, a high-speed retrieval method for such data is designed. Wavelet decomposition is used to denoise the self-integrated international trade data; similar international trade data are merged based on the Top-k query algorithm; and the Solr search engine is introduced, with the high-speed retrieval behavior described from several aspects, completing the design of the high-speed retrieval method for international trade data. Comparative experiments verify that the proposed method outperforms traditional methods in both retrieval time and retrieval accuracy in practical applications.
Keywords: Top-k query algorithm; international trade; data retrieval; wavelet decomposition
An Intelligent Retrieval Method for Power Marketing Data Based on the Top-k Query Algorithm
18
Authors: 李颖昕, 王其吉, 岳莹, 许炳灿, 聂明军. 《工业控制计算机》, 2025, No. 7, pp. 117-118 (2 pages)
When facing massive, complex, and ever-changing power marketing data, simple keyword matching can extract and exploit only a small share of the key information in the data, resulting in low retrieval efficiency. To address this, an intelligent retrieval method for power marketing data based on the Top-k query algorithm is studied. First, clustering is used to fuse the power marketing data, achieving preliminary classification and organization. Key feature quantities, which accurately reflect the core information and regularities of the data, are then extracted from the clustered data. On this basis, the Top-k query algorithm assigns confidence scores to the power marketing data; given a user query, it quickly selects from the massive data the k results that best match the user's needs. Finally, taking the characteristics of power marketing data and the query requirements into account, the index structure is calibrated to realize intelligent retrieval. Experimental results show that the proposed method has a clear advantage in retrieval efficiency and better meets the needs of power marketing data retrieval.
Keywords: Top-k query algorithm; power marketing data; data retrieval; intelligent retrieval
A Novel Reduced Error Pruning Tree Forest with Time-Based Missing Data Imputation (REPTF-TMDI) for Traffic Flow Prediction
19
Authors: Yunus Dogan, Goksu Tuysuzoglu, Elife Ozturk Kiyak, Bita Ghasemkhani, Kokten Ulas Birant, Semih Utku, Derya Birant. 《Computer Modeling in Engineering & Sciences》, 2025, No. 8, pp. 1677-1715 (39 pages)
Accurate traffic flow prediction (TFP) is vital for efficient and sustainable transportation management and the development of intelligent traffic systems. However, missing data in real-world traffic datasets poses a significant challenge to maintaining prediction precision. This study introduces REPTF-TMDI, a novel method that combines a Reduced Error Pruning Tree Forest (REPTree Forest) with a newly proposed Time-based Missing Data Imputation (TMDI) approach. The REPTree Forest, an ensemble learning approach, is tailored for time-related traffic data to enhance predictive accuracy and support the evolution of sustainable urban mobility solutions. Meanwhile, the TMDI approach exploits temporal patterns to estimate missing values reliably whenever empty fields are encountered. The proposed method was evaluated using hourly traffic flow data from a major U.S. roadway spanning 2012-2018, incorporating temporal features (e.g., hour, day, month, year, weekday), a holiday indicator, and weather conditions (temperature, rain, snow, and cloud coverage). Experimental results demonstrated that the REPTF-TMDI method outperformed conventional imputation techniques across various missing data ratios, achieving an average 11.76% improvement in correlation coefficient (R). Furthermore, REPTree Forest achieved improvements of 68.62% in RMSE and 70.52% in MAE compared to existing state-of-the-art models. These findings highlight the method's ability to significantly boost traffic flow prediction accuracy, even in the presence of missing data, thereby contributing to the broader objectives of sustainable urban transportation systems.
Keywords: machine learning; traffic flow prediction; missing data imputation; reduced error pruning tree (REPTree); sustainable transportation systems; traffic management; artificial intelligence
A Lightweight IMBS-YOLOv7-Based Quality Grading Detection Method for Agaricus bisporus Mushrooms
20
Authors: 姜凤利, 曹丰千, 王迪, 李美璇, 张芳. 《沈阳农业大学学报》 (PKU Core), 2026, No. 1, pp. 100-112 (13 pages)
[Objective] To improve the accuracy of Agaricus bisporus grading detection and ease model deployment on mobile devices, a lightweight YOLOv7-based grading detection model is proposed. [Methods] First, MobileNetV2 replaces the feature extraction backbone of YOLOv7, with depthwise separable convolutions effectively reducing parameters and speeding up inference. Second, the BiFormer attention mechanism is introduced to strengthen the extraction of fine-grained features such as surface texture and morphological defects. Finally, the SIoU bounding-box regression loss replaces the CIoU loss, significantly improving regression precision and the model's ability to recognize slight surface defects. The improved model is named MBS-YOLOv7. [Results] MBS-YOLOv7 achieves a mean average precision (mAP) of 94.1% on the Agaricus bisporus test set, 1.2% higher than the original YOLOv7, with 32.8% fewer parameters, balancing accuracy and speed. To further lighten the model, IMBS-YOLOv7 is proposed, combining channel pruning with knowledge distillation: sparse training and channel pruning select the optimal pruning rate (0.5), and knowledge distillation with temperature parameter T=10 achieves the best transfer of soft-label information, effectively recovering the accuracy lost to pruning. IMBS-YOLOv7 maintains 94.1% mAP while reaching a detection speed of 121 frames per second, with the model compressed to 12 MB, giving it good edge-deployment capability. [Conclusion] Compared with mainstream detection algorithms such as Faster R-CNN, SSD, YOLOv3, and YOLOv5, IMBS-YOLOv7 delivers the best overall performance on the Agaricus bisporus dataset, meets real-time processing requirements, and provides reliable technical support for online grading detection of Agaricus bisporus.
Keywords: Agaricus bisporus; quality grading; YOLOv7; attention mechanism; knowledge distillation; channel pruning