Deep learning models hold considerable potential for clinical applications, but training them successfully poses many challenges. Large-scale data collection is required, which is frequently only possible through multi-institutional cooperation. Building large central repositories is one strategy for multi-institution studies. However, this is hampered by data-sharing issues, including patient privacy, data de-identification, regulation, intellectual property, and data storage. These difficulties have made central data storage increasingly impractical. This survey examines 24 research publications on machine learning approaches that incorporate privacy-preservation techniques for multi-institutional data, highlighting the shortcomings of the existing methodologies. The approaches are compared along several dimensions, including performance measures, year and venue of publication, and results of numerical assessments. An analysis of the benefits and drawbacks of each strategy is also provided. The article further discusses the challenges of improving the accuracy of privacy-protection techniques and identifies promising directions for future research. The comparative evaluation of the approaches offers a thorough justification for the survey's purpose.
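One representative technique in this space, which avoids central data storage entirely, is federated averaging: each institution trains locally and only model weights are shared. The sketch below is a minimal illustration of that idea on a toy linear-regression task, not a method from any of the surveyed papers; the data, sites, and hyperparameters are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three institutions each hold a private data subset
# for the same regression task; raw records never leave the institution.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.01 * rng.normal(size=100)
    sites.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """One institution refines the global model on its own data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Federated averaging: only model weights cross institutional boundaries.
w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(w, X, y) for X, y in sites]
    w = np.mean(local_ws, axis=0)

print(np.round(w, 2))  # close to the true coefficients [2, -1]
```

Note that sharing weights alone does not guarantee privacy; several of the surveyed works combine such schemes with differential privacy or secure aggregation for exactly that reason.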
As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, interpreting the predictions of black-box models becomes key to whether people can trust machine-learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this additional information inevitably exposes the dataset or model to the risk of privacy leakage. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence for the user's data and returns the prediction result. Meanwhile, the model computes the prediction confidence for each sample in the interpretation dataset. Finally, the sample whose confidence has the smallest Euclidean distance to the confidence of the user's data is selected as the explanation. Experimental results show that the Euclidean distance between the confidence of the selected interpretation data and that of the prediction data is very small, indicating that the model's predictions on the interpreted data closely match its predictions on the user's data. Finally, we evaluate the accuracy of the explanatory data by measuring the agreement between the real and predicted labels of the interpreted data, as well as the method's applicability across network models. The results show that the interpretation method achieves high accuracy and wide applicability.
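The selection step described above can be sketched in a few lines. The confidence vectors below are random stand-ins for classifier outputs (the classifier itself, the dataset sizes, and the class count are all assumptions for illustration); the distance computation and argmin are the actual selection rule the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for softmax confidence vectors from some classifier:
# interp_conf holds confidences for a public interpretation set,
# user_conf is the confidence for the user's (private) query.
interp_conf = rng.dirichlet(np.ones(3), size=50)  # 50 samples, 3 classes
user_conf = rng.dirichlet(np.ones(3))

# Select the interpretation sample whose confidence vector lies closest
# (in Euclidean distance) to the user's confidence vector.
dists = np.linalg.norm(interp_conf - user_conf, axis=1)
best = int(np.argmin(dists))
print(best, float(dists[best]))
```

Because only a public interpretation sample is returned, the user's private input never needs to be exposed as part of the explanation.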
Protecting private data in smart homes, a popular Internet-of-Things (IoT) application, remains a significant data-security and privacy challenge due to the large-scale development and distributed nature of IoT networks. Recently, smart healthcare has leveraged smart home systems, thereby compounding security concerns regarding the confidentiality of sensitive and private data and, by extension, the privacy of the data owner. Proof-of-authority (PoA)-based blockchain distributed ledger technology (DLT) has emerged as a promising solution for protecting private data from indiscriminate use and thereby preserving the privacy of individuals residing in IoT-enabled smart homes. This review elicits concerns, issues, and problems that have hindered the adoption of blockchain and IoT (BCoT) in some domains and suggests requisite solutions using the aging-in-place scenario. Implementation issues with BCoT are examined, as well as the combined challenges BCoT can pose when utilised for security gains. The study discusses recent findings, opportunities, and barriers, and provides recommendations that could facilitate the continuous growth of blockchain applications in healthcare. Lastly, the study explores, as a solution direction, the potential of a PoA-based permissioned blockchain with a consent-based privacy model for decision-making in the information-disclosure process, including publisher-subscriber contracts for fine-grained access control to ensure secure data processing and sharing, as well as ethical trust in personal-information disclosure. The proposed authorisation framework could guarantee data ownership, conditional access management, scalable and tamper-proof data storage, and a more resilient system against threat models such as interception and insider attacks.
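The consent-based, publisher-subscriber access model can be pictured as an append-only ledger of per-topic grants that subscribers are checked against before any disclosure. The sketch below is a plain-Python caricature of that idea; the class, method names, and time-to-live scheme are invented for illustration and stand in for what would be smart-contract logic on a PoA chain.

```python
import time

class ConsentLedger:
    """Append-only record of consent grants, mimicking on-chain storage."""
    def __init__(self):
        self.grants = []  # (owner, subscriber, topic, expires_at)

    def grant(self, owner, subscriber, topic, ttl_seconds):
        # The data owner publishes consent for one topic, one subscriber.
        self.grants.append((owner, subscriber, topic,
                            time.time() + ttl_seconds))

    def allowed(self, owner, subscriber, topic):
        # A subscriber may read only topics explicitly granted and unexpired.
        now = time.time()
        return any(o == owner and s == subscriber and t == topic and e > now
                   for o, s, t, e in self.grants)

ledger = ConsentLedger()
ledger.grant("alice", "clinic", "heart_rate", ttl_seconds=3600)
print(ledger.allowed("alice", "clinic", "heart_rate"))   # True
print(ledger.allowed("alice", "insurer", "heart_rate"))  # False
```

On an actual PoA blockchain, the append-only property and grant validation would be enforced by the validators rather than by a trusted in-memory object.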
As a widely used machine-learning classifier, a decision tree model can be trained and deployed at a service provider to offer classification services for clients, e.g., remote diagnostics. To address privacy concerns regarding the sensitive information in these services (i.e., the clients' inputs, model parameters, and classification results), we propose a privacy-preserving decision tree classification scheme (PDTC) in this paper. Specifically, we first tailor an additively homomorphic encryption primitive and a secret-sharing technique to design a new secure two-party comparison protocol, in which the numeric inputs of each party can be privately compared as a whole rather than bit by bit. Then, building on the comparison protocol, we exploit the structure of the decision tree to construct PDTC, in which the client's input and the service provider's model parameters are concealed from the counterparty and the classification result is revealed only to the client. A formal simulation-based security model and the accompanying security proof demonstrate that PDTC achieves the desired security properties. In addition, performance evaluation shows that PDTC incurs lower communication and computation overhead than existing schemes.
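The building block underlying such comparison protocols is additively homomorphic encryption: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so one party can form Enc(x − t) without learning x. The toy Paillier cryptosystem below, with tiny insecure primes, illustrates only that property; it is not the authors' PDTC protocol, and a real deployment would use full-size keys and the complete two-party comparison with secret sharing.

```python
import math
import random

# Toy Paillier cryptosystem (insecure parameters, illustration only).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)  # decryption constant; valid because g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Additive homomorphism: Enc(x) * Enc(-t) decrypts to x - t (mod n),
# computed without the evaluating party ever seeing x in the clear.
x, t = 25, 9
diff = (enc(x) * enc(n - t)) % n2
print(dec(diff))  # 16
```

In PDTC this difference never reaches either party in the clear; it is further blinded and secret-shared so that only the comparison outcome needed for tree traversal is revealed.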
RFID (Radio Frequency IDentification) is a pioneering technology that has shaped a new lifestyle for humanity. The number of RFID applications continues to grow, and their widespread use cannot be ignored. An important issue with RFID systems is meeting their privacy requirements during authentication. In 2014, Cai et al. proposed two improved RFID authentication protocols based on R-RAPS (RFID Authentication Protocol Security Enhanced Rules). We investigate the privacy of their protocols under the Ouafi-Phan privacy model and show that they cannot provide private authentication for RFID users. We further show that these protocols are vulnerable to impersonation, DoS, and traceability attacks. We then present two improved, efficient, and secure authentication protocols that ameliorate the performance of Cai et al.'s schemes. Our analysis illustrates that the weaknesses of the discussed protocols are eliminated in our proposals.
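For readers unfamiliar with the setting, the generic shape of a lightweight mutual-authentication exchange can be sketched as below. This is a textbook-style challenge-response with a shared key and fresh nonces on both sides, not Cai et al.'s protocols nor the improved ones proposed here; all names and message layouts are illustrative.

```python
import hashlib
import os

def h(*parts):
    # Keyed hash over the transcript; SHA-256 stands in for whatever
    # lightweight primitive a constrained tag would actually use.
    return hashlib.sha256(b"|".join(parts)).hexdigest()

key = os.urandom(16)  # secret shared by tag and reader in advance

# Reader -> Tag: a fresh challenge nonce.
r_reader = os.urandom(8)

# Tag -> Reader: its own nonce plus a keyed response; the tag's
# identity is never sent in the clear, and the nonces block replay.
r_tag = os.urandom(8)
tag_resp = h(key, r_reader, r_tag)

# Reader verifies with its copy of the key, then proves itself back.
assert tag_resp == h(key, r_reader, r_tag)
reader_resp = h(key, r_tag)

# Tag verifies the reader, completing mutual authentication.
assert reader_resp == h(key, r_tag)
print("mutual authentication succeeded")
```

The attacks discussed in the paper exploit exactly the points this sketch glosses over, e.g., responses that let an adversary link two sessions of the same tag (traceability) or desynchronise the shared state (DoS).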
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 61966011), the Hainan University Education and Teaching Reform Research Project (Grant No. HDJWJG01), the Key Research and Development Program of Hainan Province (Grant No. ZDYF2020033), the Young Talents' Science and Technology Innovation Project of the Hainan Association for Science and Technology (Grant No. QCXM202007), and the Hainan Provincial Natural Science Foundation of China (Grant Nos. 621RC612 and 2019RC107).
Acknowledgement: The associate editor coordinating the review of this paper and approving it for publication was X. Cheng.