Funding: Supported in part by the National Key R&D Program of China under Grant 2021YFB1714100, in part by the National Natural Science Foundation of China (NSFC) under Grants 62371082 and 62001076, and in part by the Natural Science Foundation of Chongqing under Grants CSTB2023NSCQ-MSX0726 and cstc2020jcyjmsxmX0878.
Abstract: To protect user privacy and data security, the integration of Federated Learning (FL) and blockchain has become an emerging research hotspot. However, the limited throughput and high communication complexity of traditional blockchains restrict their application to large-scale FL tasks, and the synchronous training of traditional FL further reduces training efficiency. To address these issues, this paper proposes a Directed Acyclic Graph (DAG) blockchain-enabled generalized Federated Dropout (FD) learning strategy that improves the efficiency of FL while preserving model generalization. Specifically, the DAG maintained by multiple edge servers guarantees the security and traceability of the data, and a Reputation-based Tips Selection Algorithm (RTSA) is proposed to reduce the blockchain consensus delay. Second, semi-asynchronous training among Intelligent Devices (IDs) is adopted to improve training efficiency, and a reputation-based FD technique is proposed to prevent overfitting of the model. In addition, a Hybrid Optimal Resource Allocation (HORA) algorithm is introduced to minimize network delay. Finally, simulation results demonstrate the effectiveness and superiority of the proposed algorithms.
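Federated Dropout itself is a general technique that is independent of the blockchain layer described above: the server extracts a randomly thinned sub-model for each client, the client trains it locally, and the server maps the updated weights back into the full model. The NumPy sketch below is only a minimal illustration of that idea; the keep ratio, the single dense layer, and the simple overwrite-style merge are assumptions for illustration, not the paper's RTSA/HORA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_submodel(full_weight, keep_ratio=0.75):
    """Federated Dropout (sketch): keep a random subset of output units.

    Returns the thinned weight matrix and the row indices kept, so the
    server can later scatter the client's update back into place.
    """
    n_out = full_weight.shape[0]
    kept = rng.choice(n_out, size=int(keep_ratio * n_out), replace=False)
    return full_weight[kept, :], kept

def merge_update(full_weight, client_weight, kept, lr=1.0):
    """Write the client's updated rows back into the full model (simple blend)."""
    merged = full_weight.copy()
    merged[kept, :] = (1 - lr) * merged[kept, :] + lr * client_weight
    return merged

# Toy round: one dense layer, one client.
W = rng.normal(size=(8, 4))             # full server-side weight (8 units, 4 inputs)
W_sub, kept = make_submodel(W, 0.5)     # server sends a 4-unit sub-model
W_sub_trained = W_sub - 0.01 * rng.normal(size=W_sub.shape)  # stand-in for local SGD
W = merge_update(W, W_sub_trained, kept)
print("updated rows:", sorted(kept))
```

In a real round the thinned sub-model would be sent to each device, trained on local data, and the returned rows aggregated across clients before being merged.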
Funding: Supported by the National Council for Scientific and Technological Development of Brazil (CNPQ), the Coordination for the Improvement of Higher Education Personnel-Brazil (CAPES) (Grant PROAP 88887.842889/2023-00-PUC/MG, Grant PDPG 88887.708960/2022-00-PUC/MG-INFORMATICA, and Finance Code 001), the Minas Gerais State Research Support Foundation (FAPEMIG) under Grant No. APQ-01929-22, and the Pontifical Catholic University of Minas Gerais, Brazil.
Abstract: Higher education institutions are becoming increasingly concerned with the retention of their students. This work is motivated by the interest in predicting and reducing student dropout, and consequently in reducing the financial losses of such institutions. Based on a characterization of the dropout problem and the application of a knowledge discovery process, an ensemble model is proposed to improve dropout prediction. The ensemble combines the results of three models: logistic regression, neural networks, and a decision tree. As a result, the model correctly classifies 89% of students as enrolled or dropped out and accurately identifies 98.1% of dropouts. Compared with the Random Forest ensemble method, the proposed model demonstrates desirable characteristics to assist management in proposing actions to retain students.
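The abstract describes an ensemble that combines the outputs of three base learners. The scikit-learn sketch below shows one way such a combination can be wired up; the synthetic feature matrix, the soft-voting rule, and the hyperparameters are placeholders, since the abstract does not state how the three models are actually fused.

```python
# Sketch of a three-model ensemble for dropout prediction (assumed soft voting).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the institution's student records.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ],
    voting="soft",  # average the predicted dropout probabilities of the three models
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```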
Abstract: Objective: To predict the trend of the COVID-19 epidemic in Xi'an using an epidemic-dynamics SEAIQR (susceptible-exposed-asymptomatic-infected-quarantined-removed) model and a Dropout-LSTM (Dropout long short-term memory network) model, and to provide a scientific basis for evaluating the effectiveness of the "dynamic zero-COVID" control strategy. Methods: Considering that this wave of the epidemic in Xi'an featured a large number of asymptomatic infections, time-varying parameters, and staged control measures, a time-varying SEAIQR model with phased interventions was constructed. Considering the temporal structure of the COVID-19 epidemic data and the nonlinear relationships among them, a deep-learning Dropout-LSTM model was also constructed. Daily new confirmed cases in Xi'an from December 9, 2021 to January 31, 2022 were used for fitting, data from February 1 to February 7, 2022 were used to evaluate prediction performance, the effective reproduction number (R_t) was calculated, and the influence of different parameters on the course of the epidemic was assessed. Results: The SEAIQR model predicted that the inflection point of new confirmed cases would appear on December 26, 2021, at about 176 cases, and that "dynamic zero-COVID" would be achieved on January 24, 2022 (R^2 = 0.849). The Dropout-LSTM model captured the temporal and nonlinear characteristics of the data, and its predicted numbers of new confirmed cases agreed closely with the actual situation (R^2 = 0.937). Both the MAE and the RMSE of the Dropout-LSTM model were lower than those of the SEAIQR model, indicating better predictions. At the start of the outbreak, R_0 was 5.63; after comprehensive control measures were implemented, R_t declined gradually and fell below 1.0 on December 27, 2021. With a smaller effective contact rate, earlier implementation of control measures, and a higher immunity threshold, the number of new confirmed cases at the inflection point decreases further. Conclusion: The Dropout-LSTM model achieved relatively accurate epidemic prediction and can inform "dynamic zero-COVID" prevention and control decisions for COVID-19.
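A Dropout-LSTM of the kind described here is typically an LSTM regressor whose recurrent layers are interleaved with dropout before a final prediction head. The Keras sketch below is only an illustrative configuration on synthetic data; the 7-day window, layer sizes, and dropout rate are assumptions, not the values used in the study.

```python
# Illustrative Dropout-LSTM for one-step-ahead case-count prediction (assumed shapes).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 7  # use the previous 7 days of new cases to predict the next day

def make_windows(series, window=WINDOW):
    """Turn a 1-D daily case series into (samples, window, 1) inputs and next-day targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

model = keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32, return_sequences=True),
    layers.Dropout(0.2),          # dropout between the recurrent layers
    layers.LSTM(16),
    layers.Dropout(0.2),
    layers.Dense(1),              # predicted new confirmed cases for the next day
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for the daily case series used in the paper.
cases = np.abs(np.sin(np.linspace(0, 6, 60))) * 150
X, y = make_windows(cases)
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-1:]).ravel())
```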
Funding: Project supported by the National Key Research and Development Program of China (Grant Nos. 2021YFA1202600 and 2023YFE0208600) and in part by the National Natural Science Foundation of China (Grant Nos. 62174082, 92364106, 61921005, 92364204, and 62074075).
Abstract: Artificial neural networks (ANN) have been extensively researched due to their significant energy-saving benefits. Hardware implementations of ANNs with a dropout function would be able to avoid the overfitting problem. This letter reports a dropout neuronal unit (1R1T-DNU) based on one memristor–one electrolyte-gated transistor with an ultralow energy consumption of 25 pJ/spike. A dropout neural network constructed from such devices is verified on the MNIST dataset, demonstrating high recognition accuracies (>90%) over a large range of dropout probabilities up to 40%. The running time can be reduced by increasing the dropout probability without a significant loss in accuracy. Our results indicate the great potential of introducing such 1R1T-DNUs into full-hardware neural networks to enhance energy efficiency and to solve the overfitting problem.
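The software analogue of such a dropout network is an ordinary MNIST classifier whose dropout probability can be swept. The Keras sketch below illustrates that kind of sweep over the 0–40% range mentioned above; the layer sizes and single training epoch are assumptions for illustration and say nothing about the hardware device itself.

```python
# Sketch: MNIST classifier with a configurable dropout probability (software analogue).
from tensorflow import keras
from tensorflow.keras import layers

def build_model(p_drop):
    return keras.Sequential([
        layers.Input(shape=(28, 28)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(p_drop),        # each unit is silenced with probability p_drop
        layers.Dense(128, activation="relu"),
        layers.Dropout(p_drop),
        layers.Dense(10, activation="softmax"),
    ])

(x_tr, y_tr), (x_te, y_te) = keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

# Sweep dropout probabilities in the 0-40% range discussed in the abstract.
for p in (0.0, 0.2, 0.4):
    model = build_model(p)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=1, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_te, y_te, verbose=0)
    print(f"dropout p={p:.1f}: test accuracy {acc:.3f}")
```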
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12171450).
Abstract: This paper proposes a novel method for estimating the sparse inverse covariance matrix for longitudinal data with informative dropouts. Based on the modified Cholesky decomposition, the sparse inverse covariance matrix is modelled by an autoregressive regression model, which guarantees the positive definiteness of the covariance matrix. To account for the informative dropouts, we then propose a penalized estimating equation method using the inverse probability weighting approach. The informative dropout propensity parameters are estimated by the generalized method of moments. The asymptotic properties of the resulting estimators are investigated. Finally, we illustrate the effectiveness and feasibility of the proposed method through Monte Carlo simulations and a practical application.
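For readers unfamiliar with the modified Cholesky decomposition used here, it parametrizes the inverse covariance through autoregressive coefficients and innovation variances. The block below gives the standard longitudinal-data form of that decomposition; the notation is generic and assumed, not copied from the paper.

```latex
% Modified Cholesky decomposition (standard longitudinal-data form; notation assumed).
% For a response vector y_i = (y_{i1},\dots,y_{im})^\top with covariance \Sigma, there
% is a unique lower-triangular T with unit diagonal and a diagonal D such that
% T \Sigma T^\top = D.
\begin{align}
  y_{ij} - \mu_{ij} &= \sum_{k=1}^{j-1} \phi_{jk}\,(y_{ik} - \mu_{ik}) + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim (0, \sigma_j^2), \\
  \Sigma^{-1} &= T^\top D^{-1} T,
  \qquad D = \mathrm{diag}(\sigma_1^2,\dots,\sigma_m^2),
\end{align}
% where the (j,k) entry of T is -\phi_{jk} for k < j. Any choice of autoregressive
% parameters \phi_{jk} and positive variances \sigma_j^2 yields a valid \Sigma^{-1},
% so positive definiteness holds by construction, which is the property the abstract
% refers to; sparsity is then imposed by penalizing the \phi_{jk}.
```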
Funding: Supported in part by the National Institutes of Health, Nos. R01CA237267, R01HL151561, R01EB031102, and R01EB032716.
Abstract: Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which randomly removes certain neurons and their connections, flipover randomly selects neurons and flips the sign of their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
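Going by that description alone, a flipover layer can be sketched as a random mask that multiplies a subset of activations by a negative factor instead of zeroing them. The Keras layer below is such a sketch; the flip rate, the element-wise masking, and the multiplier value are assumptions inferred from the abstract, not the authors' implementation.

```python
# Sketch of a flipover layer inferred from the abstract (parameters are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

class Flipover(layers.Layer):
    """With probability `rate`, multiply a unit's output by -`multiplier`; otherwise pass it through."""

    def __init__(self, rate=0.1, multiplier=1.0, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate
        self.multiplier = multiplier

    def call(self, inputs, training=None):
        if not training:
            return inputs  # like dropout, flipover is active only during training
        flip = tf.cast(tf.random.uniform(tf.shape(inputs)) < self.rate, inputs.dtype)
        # scale is 1 where a unit is kept and -multiplier where it is flipped
        scale = 1.0 - flip * (1.0 + self.multiplier)
        return inputs * scale
```

In use, a `Flipover(rate=0.1)` layer would sit wherever a `Dropout` layer normally goes, for example between the dense layers of the MNIST model sketched earlier.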