Journal Articles
2 articles found
1. A novel hybrid estimation of distribution algorithm for solving hybrid flowshop scheduling problem with unrelated parallel machine (Cited by: 10)
Authors: Sun Zewen, Gu Xingsheng. Journal of Central South University (SCIE, EI, CAS, CSCD), 2017, Issue 8, pp. 1779-1788 (10 pages)
The hybrid flow shop scheduling problem with unrelated parallel machines is a typical NP-hard combinatorial optimization problem that arises widely in the chemical, manufacturing, and pharmaceutical industries. In this work, a novel mathematical model for the hybrid flow shop scheduling problem with unrelated parallel machines (HFSPUPM) was proposed. Additionally, an effective hybrid estimation of distribution algorithm was proposed to solve the HFSPUPM, taking advantage of the features of the mathematical model. In the optimization algorithm, a new individual representation method was adopted. The estimation of distribution algorithm (EDA) structure was used for global search, while the teaching-learning-based optimization (TLBO) strategy was used for local search. Based on the structure of the HFSPUPM, this work presents a series of discrete operations. Simulation results show the effectiveness of the proposed hybrid algorithm compared with other algorithms.
Keywords: hybrid estimation of distribution algorithm; teaching-learning-based optimization strategy; hybrid flow shop; unrelated parallel machine scheduling
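The paper's full method combines an EDA global-search loop with TLBO local search and problem-specific discrete operators, which the abstract does not detail. As a rough illustration of the EDA component alone, the sketch below estimates a position-job probability model from elite permutations and resamples from it; the single-machine total-flow-time objective, function names, and parameters are all illustrative assumptions, not taken from the paper.

```python
import random

def sample_permutation(P, n):
    """Sample a job permutation position by position from probability matrix P."""
    perm, remaining = [], list(range(n))
    for pos in range(n):
        weights = [P[pos][j] for j in remaining]
        job = random.choices(remaining, weights=weights, k=1)[0]
        perm.append(job)
        remaining.remove(job)
    return perm

def eda_schedule(objective, n_jobs, pop_size=30, elite=10, iters=50, lr=0.3, seed=0):
    """Minimal EDA sketch: sample a population, keep the elites,
    and shift the probability model toward their position-job frequencies."""
    random.seed(seed)
    P = [[1.0 / n_jobs] * n_jobs for _ in range(n_jobs)]  # uniform start
    best, best_val = None, float("inf")
    for _ in range(iters):
        pop = [sample_permutation(P, n_jobs) for _ in range(pop_size)]
        pop.sort(key=objective)
        if objective(pop[0]) < best_val:
            best, best_val = pop[0], objective(pop[0])
        # re-estimate the model from the elite subset
        freq = [[0.0] * n_jobs for _ in range(n_jobs)]
        for perm in pop[:elite]:
            for pos, job in enumerate(perm):
                freq[pos][job] += 1.0 / elite
        # smoothed update keeps every probability strictly positive
        for pos in range(n_jobs):
            for j in range(n_jobs):
                P[pos][j] = (1 - lr) * P[pos][j] + lr * freq[pos][j]
    return best, best_val
```

On a toy three-job instance with processing times [5, 1, 3] and a total-flow-time objective, the loop quickly converges on the shortest-processing-time order. The learning-rate-style update (rather than replacing the model outright) is a common EDA choice that prevents premature collapse of the distribution.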
2. An Explanatory Strategy for Reducing the Risk of Privacy Leaks
Authors: Mingting Liu, Xiaozhang Liu, Anli Yan, Xiulai Li, Gengquan Xie, Xin Tang. Journal of Information Hiding and Privacy Protection, 2021, Issue 4, pp. 181-192 (12 pages)
As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, how to interpret the predictions of a black-box model becomes key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this information inevitably puts the dataset or model at risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence of the user-provided data and returns the prediction results. Meanwhile, the model obtains the prediction confidence for each example in the interpretation dataset. Finally, the example whose confidence has the smallest Euclidean distance to the confidence of the prediction data is selected as the explainable data. Experimental results show that the Euclidean distance between the confidence of the interpretation data and that of the prediction data is very small, indicating that the model's prediction for the interpreted data closely matches its prediction for the user's data. Finally, we evaluate the accuracy of the explanatory data by measuring how well the real labels of the interpreted data match their predicted labels, as well as the method's applicability across network models. The results show that the interpretation method has high accuracy and wide applicability.
Keywords: machine learning model; data privacy risks; machine learning explanatory strategies
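The selection step the abstract describes, picking the interpretation example whose confidence vector is nearest in Euclidean distance to the query's confidence vector, can be sketched in a few lines. The data layout (a list of `(example_id, confidence_vector)` pairs) and all names here are illustrative assumptions, not the paper's actual interface.

```python
import math

def pick_explanation(query_conf, explain_set):
    """Return the (id, confidence) pair from explain_set whose confidence
    vector has the smallest Euclidean distance to query_conf."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(explain_set, key=lambda item: dist(item[1], query_conf))
```

For example, for a query confidence of `[0.7, 0.2, 0.1]`, a candidate with confidence `[0.65, 0.25, 0.1]` would be chosen over ones near `[0.1, 0.8, 0.1]` or `[0.3, 0.3, 0.4]`, because only confidence vectors (never raw training data) are compared, which is what limits the privacy exposure.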