The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline of training, pruning, and retraining. Existing methods often set the pruned filters directly to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, taking the characteristics of network training into account. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be managed effectively. In experiments on the CIFAR-10 and ILSVRC-2012 datasets, the proposed method significantly reduces floating-point operations at the same pruning rate. Specifically, at a 30% pruning rate on the ResNet-110 network, the pruned network not only decreases floating-point operations by 40.8% but also improves classification accuracy by 0.49% compared to the original network.
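The three-stage schedule described above can be sketched with the logistic growth equation dN/dt = rN(1 - N/K): its growth rate is small early, peaks mid-course, and tapers off near the carrying capacity, matching the gentle-rapid-slow decay pattern. The sketch below is illustrative only, assuming a normalized carrying capacity K = 1 and a hypothetical steepness parameter r; it is not the paper's implementation.

```python
import math

def logistic_decay_rate(epoch, total_epochs, r=8.0):
    """Decay rate shaped like the logistic growth rate dN/dt = r*N*(1-N)
    with K normalized to 1: gentle early, fastest mid-training, gentle
    again as the network converges."""
    t = epoch / total_epochs                      # normalized time in [0, 1]
    s = 1.0 / (1.0 + math.exp(-r * (t - 0.5)))    # logistic growth curve N(t)
    return r * s * (1.0 - s)                      # its derivative, the decay rate

def decay_filter(weight_norm, epoch, total_epochs, base=0.1):
    """Multiplicatively shrink a pruned filter's norm instead of
    zeroing it outright, so early decisions remain reversible."""
    rate = base * logistic_decay_rate(epoch, total_epochs)
    return weight_norm * (1.0 - rate)
```

For example, with `r=8.0` the rate at mid-training (epoch 50 of 100) is roughly fourteen times the rate at the start or end, which reproduces the slow-fast-slow profile the abstract describes.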
Objective: To construct and validate a predictive model for delayed recovery from general anesthesia based on logistic regression and the random forest algorithm. Methods: A total of 1,177 patients under general anesthesia admitted to the recovery room of a tertiary Grade-A hospital in Zhejiang from 2021 to 2023 were selected as subjects and randomly split into a training group and a validation group at a 7:3 ratio. Univariate and multivariate logistic regression analyses were used to construct a predictive model for delayed recovery, presented as a nomogram. The random forest algorithm was used to screen the factors influencing delayed recovery and rank them by importance. The area under the receiver operating characteristic (ROC) curve (AUC) was used to test the models' predictive performance, and calibration curves and decision curves were used for comprehensive evaluation. Results: Delayed recovery occurred in 99 of the 1,177 patients, an incidence of 8.41%. Logistic regression showed that sex, ASA grade, age, operation time, operation type, and infusion volume were independent risk factors for delayed recovery. The random forest algorithm ranked the variables by importance as operation type, age, operation time, infusion volume, ASA grade, and sex. The AUC of the logistic regression model was 0.87 (95% CI 0.83–0.91) in the training group and 0.86 (95% CI 0.81–0.91) in the validation group; the AUC of the random forest model was 0.85 (95% CI 0.49–1.00) in the training group and 0.76 (95% CI 0.26–1.00) in the validation group, indicating good discrimination, high predictive ability, and clinical value. Conclusion: Operation type, age, operation time, infusion volume, ASA grade, and sex are independent risk factors for delayed recovery after general anesthesia. The predictive model built on these factors shows good discrimination and calibration, helps predict delayed recovery in patients under general anesthesia, and can inform the formulation and implementation of clinical nursing interventions.
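The modeling workflow above (7:3 split, logistic regression and random forest, AUC evaluation, variable-importance ranking) can be sketched with scikit-learn. The synthetic data below stands in for the study's six predictors, since the clinical dataset is not available; the feature generation and coefficients are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the six predictors (sex, ASA grade, age,
# operation time, operation type, infusion volume); NOT the study data.
X = rng.normal(size=(1177, 6))
y = (X @ rng.normal(size=6) + rng.normal(scale=0.5, size=1177) > 1.4).astype(int)

# 7:3 train/validation split, as in the study
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Discrimination measured by AUC on the validation group
auc_lr = roc_auc_score(y_va, lr.predict_proba(X_va)[:, 1])
auc_rf = roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1])

# Random-forest variable-importance ranking (highest first)
ranking = np.argsort(rf.feature_importances_)[::-1]
```

On real data one would additionally draw the nomogram, calibration curve, and decision curve; this sketch covers only the fitting and AUC steps.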
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62172132.