Journal articles: 995 articles found
1. On Privacy-Preserved Machine Learning Using Secure Multi-Party Computing: Techniques and Trends
Authors: Oshan Mudannayake, Amila Indika, Upul Jayasinghe, Gyu Myoung Lee, Janaka Alawatugoda. Computers, Materials & Continua, 2025, Issue 11, pp. 2527-2578 (52 pages).
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computing (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
Keywords: cryptography, data privacy, machine learning, multi-party computation, privacy, SMPC, PPML
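The core primitive behind SMPC-based PPML is secret sharing: each party splits its private input into random shares so that only an agreed aggregate is ever reconstructed. The sketch below is a minimal illustration of additive secret sharing; the modulus, party count, and function names are illustrative and not taken from the surveyed protocols.

```python
import secrets

MODULUS = 2**61 - 1  # illustrative prime modulus for share arithmetic

def share(value: int, n_parties: int) -> list[int]:
    """Split a private integer into n additive shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(private_inputs: list[int]) -> int:
    """Each party shares its input; parties locally add the shares they hold,
    and only the total is reconstructed -- no individual input is revealed."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    # Party j holds the j-th share of every input and publishes only a partial sum.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
    return sum(partial_sums) % MODULUS

if __name__ == "__main__":
    salaries = [52000, 61000, 47000]  # private inputs of three parties
    print(secure_sum(salaries))       # 160000, without any party seeing another's input
```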
2. Towards Realizing Dynamic Statistical Publishing and Privacy Protection of Location-Based Data: An Adaptive Sampling and Grid Clustering Approach
Authors: Yan Yan, Sun Zichao, Adnan Mahmood, Zhang Yue, Quan Z. Sheng. China Communications, 2025, Issue 7, pp. 234-256 (23 pages).
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals. The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which facilitates saving the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of data publishing methods.
Keywords: adaptive sampling, differential privacy, dynamic statistical publishing, grid clustering, privacy protection, sliding windows
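The abstract describes a PID control strategy that adapts the publishing interval to how much the data has changed between snapshots. The following sketch shows one plausible reading of that idea as a discrete PID controller over the change magnitude; the gains, bounds, and threshold are assumed values, not the paper's.

```python
class PIDIntervalController:
    """Discrete PID controller that adjusts the next publishing interval
    from the observed change between consecutive snapshots.
    Gains and bounds are illustrative, not the paper's values."""

    def __init__(self, kp=0.8, ki=0.1, kd=0.3, min_interval=1, max_interval=20):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_interval, self.max_interval = min_interval, max_interval
        self.integral = 0.0
        self.prev_error = 0.0

    def next_interval(self, current_interval: float, data_change: float, threshold: float) -> float:
        # Error: how far the observed change is from the tolerated change.
        error = data_change - threshold
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Large changes shrink the interval (publish sooner); small changes stretch it.
        interval = current_interval - adjustment
        return max(self.min_interval, min(self.max_interval, interval))

# Example: consecutive snapshot differences drive the sampling interval.
controller = PIDIntervalController()
interval = 5.0
for change in [0.02, 0.15, 0.40, 0.05]:
    interval = controller.next_interval(interval, data_change=change, threshold=0.10)
    print(f"change={change:.2f} -> next interval={interval:.2f}")
```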
3. Differential Privacy Federated Learning Based on Adaptive Adjustment
Authors: Yanjin Cheng, Wenmin Li, Sujuan Qin, Tengfei Tu. Computers, Materials & Continua, 2025, Issue 3, pp. 4777-4795 (19 pages).
Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters. Therefore, differential privacy is applied in federated learning to further address malicious issues. However, the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. Therefore, we propose an adaptive adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences in factors that affect parameter updates, uses sparse methods to trim local updates, thereby reducing the impact of privacy pruning steps on model accuracy. The two strategies work together to ensure user privacy while the effect of differential privacy on model accuracy is reduced. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results showed that our proposed method performed well in terms of performance and privacy protection.
Keywords: federated learning, privacy protection, differential privacy, deep learning
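As a rough illustration of the two ideas in the abstract, adaptive per-client privacy budgets and clipped, noised local updates, the sketch below allocates a budget proportional to client data volume and applies the standard Gaussian mechanism. The allocation rule, clip norm, and (epsilon, delta) values are assumptions, not the paper's strategy.

```python
import numpy as np

def allocate_budget(data_sizes, eps_min=0.5, eps_max=4.0):
    """Give each client a per-round budget inside [eps_min, eps_max],
    scaled by its share of the total data (an illustrative allocation rule)."""
    sizes = np.asarray(data_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return eps_min + (eps_max - eps_min) * weights / weights.max()

def privatize_update(update, epsilon, clip_norm=1.0, delta=1e-5):
    """Clip a local model update and add Gaussian noise calibrated to (epsilon, delta)."""
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

budgets = allocate_budget([1200, 300, 800])
noisy_updates = [privatize_update(np.random.randn(10), eps) for eps in budgets]
global_update = np.mean(noisy_updates, axis=0)  # server-side averaging
```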
4. Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy
Authors: Zhang Xiangfei, Zhang Qingchen, Jiang Liming. CAAI Transactions on Intelligence Technology, 2025, Issue 3, pp. 929-944 (16 pages).
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism was proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
Keywords: deep learning, differential privacy, information security, privacy protection
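A minimal sketch of the layer-level idea described above: start from an equal per-layer split of the privacy budget and, as training progresses, shift budget away from layers near the output (more noise there) toward layers near the input. The linear shift rule and parameter values are illustrative, not the paper's mechanism.

```python
import numpy as np

def layer_budgets(total_epsilon, n_layers, step, total_steps, shift=0.5):
    """Start from an equal per-layer split of total_epsilon and, as training
    progresses, move budget from layers near the output (more noise) to layers
    near the input (less noise). The linear shift rule is illustrative."""
    base = np.full(n_layers, total_epsilon / n_layers)
    progress = step / total_steps                      # 0 -> 1 over training
    # Tilt: positive for layers near the input, negative near the output.
    tilt = np.linspace(shift, -shift, n_layers) * progress
    budgets = base * (1.0 + tilt)
    return budgets * (total_epsilon / budgets.sum())   # keep the total budget fixed

for step in [0, 500, 1000]:
    eps = layer_budgets(total_epsilon=8.0, n_layers=4, step=step, total_steps=1000)
    print(step, np.round(eps, 3))  # per-layer budgets, input layer first
```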
5. SensFL: Privacy-Preserving Vertical Federated Learning with Sensitive Regularization (Cited: 1)
Authors: Chongzhen Zhang, Zhichen Liu, Xiangrui Xu, Fuqiang Hu, Jiao Dai, Baigen Cai, Wei Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 385-404 (20 pages).
In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
Keywords: vertical federated learning, privacy, defenses
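One plausible reading of the "sensitive regularization" idea is to penalize how strongly a client's embedding depends on its raw input, measured through the input gradient norm. The PyTorch sketch below illustrates that reading; the model, penalty weight, and stand-in task loss are assumptions, not SensFL's actual formulation.

```python
import torch
import torch.nn as nn

# Bottom model of a VFL client: maps raw features to an embedding.
bottom_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def sensitivity_penalty(x: torch.Tensor) -> torch.Tensor:
    """Penalize the gradient norm of the embedding w.r.t. the raw input,
    so the shared embedding carries less information about the original data.
    Illustrative reading of 'sensitive regularization', not the paper's exact loss."""
    x = x.clone().requires_grad_(True)
    emb = bottom_model(x)
    grads = torch.autograd.grad(emb.sum(), x, create_graph=True)[0]
    return grads.pow(2).sum(dim=1).mean()

x = torch.randn(8, 32)                           # a batch of private features
task_loss = bottom_model(x).pow(2).mean()        # stand-in for the top model's task loss
loss = task_loss + 0.1 * sensitivity_penalty(x)  # lambda = 0.1 is illustrative
loss.backward()                                  # gradients now include the privacy term
```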
6. On large language models safety, security, and privacy: A survey (Cited: 1)
Authors: Ran Zhang, Hong-Wei Li, Xin-Yuan Qian, Wen-Bo Jiang, Han-Xiao Chen. Journal of Electronic Science and Technology, 2025, Issue 1, pp. 1-21 (21 pages).
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Keywords: large language models, privacy issues, safety issues, security issues
7. AI-Enhanced Secure Data Aggregation for Smart Grids with Privacy Preservation
Authors: Congcong Wang, Chen Wang, Wenying Zheng, Wei Gu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 799-816 (18 pages).
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
Keywords: smart grid, data security, privacy protection, artificial intelligence, data aggregation
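The abstract mentions homomorphic encryption for aggregating meter data without naming a scheme. The sketch below uses the additively homomorphic Paillier cryptosystem (via the `phe` package) as a stand-in to show the aggregation pattern: meters encrypt readings, the aggregator sums ciphertexts, and only the key holder decrypts the total.

```python
# pip install phe  (Paillier here is a stand-in; the abstract does not
# commit to a specific homomorphic scheme)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each smart meter encrypts its reading before sending it to the aggregator.
readings_kwh = [3.2, 1.7, 4.5, 2.9]
ciphertexts = [public_key.encrypt(r) for r in readings_kwh]

# The aggregator sums ciphertexts without ever seeing individual readings.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

# Only the utility (key holder) can decrypt the neighbourhood total.
print(private_key.decrypt(encrypted_total))  # ~12.3
```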
8. Security and Privacy in Permissioned Blockchain Interoperability: A Systematic Review
Authors: Alsoudi Dua, Tan Fong Ang, Chin Soon Ku, Okmi Mohammed, Yu Luo, Jiahui Chen, Uzair Aslam Bhatti, Lip Yee Por. Computers, Materials & Continua, 2025, Issue 11, pp. 2579-2624 (46 pages).
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems, but it introduces significant security and privacy vulnerabilities. This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains, identifying key properties, attack vectors, and countermeasures. Using PRISMA 2020 guidelines, we analysed 56 peer-reviewed studies published between 2020 and 2025, retrieved from Scopus, ScienceDirect, Web of Science, and IEEE Xplore. The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses, including only English-language journal articles and conference proceedings. Risk of bias in the included studies was assessed using the MMAT. Methods for presenting and synthesizing results included descriptive analysis, bibliometric analysis, and content analysis, with findings organized into tables, charts, and comparative summaries. The review classifies interoperability protocols into relay, sidechain, notary scheme, HTLC, and hybrid types and identifies 18 security and privacy properties along with 31 known attack types. Relay-based protocols showed the broadest security coverage, while HTLC and notary schemes demonstrated significant security gaps. Notably, 93% of studies examined fewer than four properties or attack types, indicating a fragmented research landscape. The review identifies underexplored areas such as ACID properties, decentralization, and cross-chain attack resilience. It further highlights effective countermeasures, including cryptographic techniques, trusted execution environments, zero-knowledge proofs, and decentralized identity schemes. The findings suggest that despite growing adoption, current interoperability protocols lack comprehensive security evaluations. More holistic research is needed to ensure the resilience, trustworthiness, and scalability of cross-chain operations in permissioned blockchain ecosystems.
Keywords: blockchain, security, privacy, attack, threat, interoperability, cross-chain
9. Blockchain and Smart Contracts: An Effective Approach for the Transaction Security & Privacy in Electronic Medical Records
Authors: Amal Al-Rasheed, Hashim Ali, Rahim Khan, Aamir Saeed. Computers, Materials & Continua, 2025, Issue 11, pp. 3419-3436 (18 pages).
In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
Keywords: smart contracts, Internet of Things, privacy, security, blockchain, EMR
10. Single Sign-On Security and Privacy: A Systematic Literature Review
Authors: Abdelhadi Zineddine, Yousra Belfaik, Abdeslam Rehaimi, Yassine Sadqi, Said Safi. Computers, Materials & Continua, 2025, Issue 9, pp. 4019-4054 (36 pages).
With the proliferation of online services and applications, adopting Single Sign-On (SSO) mechanisms has become increasingly prevalent. SSO enables users to authenticate once and gain access to multiple services, eliminating the need to provide their credentials repeatedly. However, this convenience raises concerns about user security and privacy. The increasing reliance on SSO and its potential risks make it imperative to comprehensively review the various SSO security and privacy threats, identify gaps in existing systems, and explore effective mitigation solutions. This need motivated the first systematic literature review (SLR) of SSO security and privacy, conducted in this paper. The SLR is performed based on a rigorous structured research methodology with specific inclusion/exclusion criteria and focuses specifically on the Web environment. Furthermore, it encompasses a meticulous examination and thematic synthesis of 88 relevant publications selected out of 2315 journal articles and conference/proceeding papers published between 2017 and 2024 from reputable academic databases. The SLR highlights critical security and privacy threats relating to SSO systems, reveals significant gaps in existing countermeasures, and emphasizes the need for more comprehensive protection mechanisms. The findings of this SLR will serve as an invaluable resource for scientists and developers interested in enhancing the security and privacy preservation of SSO and designing more efficient and robust SSO systems, thus contributing to the development of the authentication technologies field.
Keywords: single sign-on, authentication, OAuth 2.0, OpenID Connect, security, privacy, mitigation solutions
11. An Efficient and Verifiable Data Aggregation Protocol with Enhanced Privacy Protection
Authors: Yiming Zhang, Wei Zhang, Cong Shen. Computers, Materials & Continua, 2025, Issue 11, pp. 3185-3211 (27 pages).
Distributed data fusion is essential for numerous applications, yet faces significant privacy and security challenges. Federated learning (FL), as a distributed machine learning paradigm, offers enhanced data privacy protection and has attracted widespread attention. Consequently, research increasingly focuses on developing more secure FL techniques. However, in real-world scenarios involving malicious entities, the accuracy of FL results is often compromised, particularly due to the threat of collusion between two servers. To address this challenge, this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection. After analyzing attack methods against prior schemes, we implement key improvements. Specifically, by incorporating cascaded random numbers and perturbation terms into gradients, we strengthen the privacy protection afforded by polynomial masking, effectively preventing information leakage. Furthermore, our protocol features an enhanced verification mechanism capable of detecting collusive behaviors between two servers. Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging algorithm. In scheme efficiency comparisons, while incurring only a marginal increase in verification overhead relative to the baseline scheme, our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme. This result highlights its optimal balance between overall overhead and functionality. A current limitation is that the verification mechanism cannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs. Addressing this limitation will be a focus of future research.
Keywords: data fusion, federated learning, privacy protection, masking, verifiability, fault tolerance
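The protocol above masks gradients with random terms that cancel only in the aggregate. A classic way to get that behaviour is pairwise additive masking, sketched below with illustrative parameters; it is not the paper's cascaded construction, only the underlying cancellation idea.

```python
import random

def pairwise_masks(n_clients, dim, seed_table):
    """Client i adds mask(i, j) and subtracts mask(j, i) for every peer j,
    so all masks cancel when the server sums the uploads."""
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            rng = random.Random(seed_table[(i, j)])      # shared pairwise seed
            m = [rng.uniform(-1, 1) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += m[k]
                masks[j][k] -= m[k]
    return masks

n, dim = 3, 4
gradients = [[round(random.uniform(-1, 1), 3) for _ in range(dim)] for _ in range(n)]
seeds = {(i, j): hash((i, j)) for i in range(n) for j in range(i + 1, n)}
masks = pairwise_masks(n, dim, seeds)

uploads = [[g + m for g, m in zip(gradients[i], masks[i])] for i in range(n)]
aggregate = [sum(u[k] for u in uploads) for k in range(dim)]
true_sum = [sum(g[k] for g in gradients) for k in range(dim)]
print([round(a - t, 9) for a, t in zip(aggregate, true_sum)])  # ~[0.0, 0.0, 0.0, 0.0]
```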
12. FedEPC: An Efficient and Privacy-Enhancing Clustering Federated Learning Method for Sensing-Computing Fusion Scenarios
Authors: Ning Tang, Wang Luo, Yiwei Wang, Bao Feng, Shuang Yang, Jiangtao Xu, Daohua Zhu, Zhechen Huang, Wei Liang. Computers, Materials & Continua, 2025, Issue 11, pp. 4091-4113 (23 pages).
With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods. By ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
Keywords: federated learning, edge computing, clustering, non-IID, privacy
13. P2V-Fabric: Privacy-Preserving Video Using Hyperledger Fabric
Authors: Muhammad Saad, Ki-Woong Park. Computers, Materials & Continua, 2025, Issue 5, pp. 1881-1900 (20 pages).
The proliferation of Internet of Things (IoT) devices introduces substantial security challenges. Currently, privacy constitutes a significant concern for individuals. While maintaining privacy within these systems is an essential characteristic, it often necessitates certain compromises, such as complexity and scalability, thereby complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present our proposed architecture for managing IoT devices using blockchain technology. Our proposed architecture works on and off blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. In this work, the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System (IPFS), and this hash is stored in the blockchain. When accessors want to access a video, they must pass through multiple authentications, which include web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include the proposition of the framework, which uses a single key for every new video, and a novel chaincode algorithm that incorporates verifiable credentials. Analyses are made to show the system's throughput and latency through stress testing. Significant advantages of the proposed architecture are shown by comparing it to existing schemes. The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled Internet of Things (IoT) devices while effectively mitigating the risk of a single point of failure, which provides a reliable solution for security concerns in the IoT landscape. Our future endeavors will focus on scaling the system by integrating innovative methods to enhance security measures further.
Keywords: blockchain, IoT, Hyperledger Fabric, verifiable credentials, privacy
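The core flow, hash the recording, anchor the digest on a ledger, and let accessors verify integrity later, can be sketched without IPFS or Fabric. The snippet below uses a SHA-256 digest and an in-memory list as stand-ins; file names, field names, and the ledger structure are illustrative.

```python
import hashlib
import json
import time

def content_hash(path: str) -> str:
    """Hash a (video) file in chunks; the digest is what gets anchored on-chain."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

ledger = []  # stand-in for a Fabric channel; real chaincode would keep this state

def record_video(path: str, owner: str) -> dict:
    entry = {"owner": owner, "digest": content_hash(path), "timestamp": time.time()}
    ledger.append(entry)
    return entry

def verify_video(path: str, entry: dict) -> bool:
    """Later accessors recompute the digest and compare it with the ledger entry."""
    return content_hash(path) == entry["digest"]

if __name__ == "__main__":
    with open("dashcam_clip.bin", "wb") as f:   # illustrative stand-in for a recording
        f.write(b"\x00" * 1024)
    entry = record_video("dashcam_clip.bin", owner="vehicle-42")
    print(json.dumps(entry, indent=2))
    print("untampered:", verify_video("dashcam_clip.bin", entry))
```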
14. METAseen: analyzing network traffic and privacy policies in Web 3.0 based Metaverse
Authors: Beiyuan Yu, Yizhong Liu, Shanyao Ren, Ziyu Zhou, Jianwei Liu. Digital Communications and Networks, 2025, Issue 1, pp. 13-25 (13 pages).
Metaverse is a new emerging concept building up a virtual environment for the user using Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges now arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering threats to people. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment, which does not rely on complex decryption or reverse engineering techniques. We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check experiment reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in the network traffic with assertions made in the privacy regulations of the Metaverse service providers, we discovered that far more than 49% of the Metaverse data flows needed to be disclosed appropriately.
Keywords: Metaverse, privacy policy, traffic analysis, blockchain, data ontology
15. A Custom Medical Image De-identification System Based on Data Privacy
Authors: ZHANG Jingchen, WANG Jiayang, ZHAO Yuanzhi, ZHOU Wei, LUO Wei, QIAN Qing. 数据与计算发展前沿 (Frontiers of Data and Computing), 2025, Issue 3, pp. 122-135 (14 pages).
[Objective] Medical imaging data has great value, but it contains a significant amount of sensitive information about patients. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs. [Methods] To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system. [Results] Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency. [Conclusions] This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
Keywords: de-identification system, medical image, data privacy, DICOM, data sharing
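A minimal sketch of rule-based DICOM de-identification using the `pydicom` package: selected patient-identifying attributes are overwritten and private vendor tags are dropped. The tag list and replacement values are illustrative and do not reflect the system's configurable rules.

```python
# pip install pydicom
import pydicom

# Tags commonly blanked or replaced during de-identification (illustrative subset).
RULES = {
    "PatientName": "ANONYMIZED",
    "PatientID": "ID-REMOVED",
    "PatientBirthDate": "",
    "PatientAddress": "",
    "InstitutionName": "",
    "ReferringPhysicianName": "",
}

def deidentify(src_path: str, dst_path: str) -> None:
    ds = pydicom.dcmread(src_path)
    for keyword, replacement in RULES.items():
        if hasattr(ds, keyword):
            setattr(ds, keyword, replacement)
    ds.remove_private_tags()        # drop vendor-specific private elements
    ds.save_as(dst_path)

# Usage (paths are illustrative):
# deidentify("raw/ct_scan_0001.dcm", "clean/ct_scan_0001.dcm")
```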
16. Blockchain-Enabled Data Secure Sharing with Privacy Protection Based on Proxy Re-Encryption in Web3.0 Applications
Authors: Ma Jiawei, Zhou Haojie, Wang Sidie, Song Jiyuan, Tian Tian. China Communications, 2025, Issue 5, pp. 256-272 (17 pages).
With the rapid development of Web3.0 applications, the volume of data sharing is increasing, while the inefficiency of big data file sharing and the problem of data privacy leakage are becoming more and more prominent; existing data sharing schemes have found it difficult to meet the growing demand for data sharing. This paper aims at exploring a secure, efficient, and privacy-protecting data sharing scheme for Web3.0 applications. Specifically, this paper adopts InterPlanetary File System (IPFS) technology to realize the storage of large data files and solve the problem of blockchain storage capacity limitation, and utilizes ciphertext-policy attribute-based encryption (CP-ABE) and proxy re-encryption (PRE) technology to realize secure multi-party sharing and fine-grained access control of data. This paper provides the detailed algorithm design and implementation of the data sharing phases and processes, and analyzes the algorithms from the perspectives of security, privacy protection, and performance.
Keywords: blockchain, data sharing, privacy protection, proxy re-encryption, Web3.0
17. The Double-Edged Sword of Artificial Intelligence: Ethics and Privacy Protection in Future Exams
Authors: Haidong Sun, Huarui Lu, Diexiang Zhao. Journal of Contemporary Educational Research, 2025, Issue 3, pp. 25-33 (9 pages).
The widespread application of artificial intelligence (AI) technology in exams has significantly improved the efficiency and fairness of exams; it has also brought challenges of ethics and privacy protection. The article analyzes the fairness, transparency, and privacy protection issues caused by AI in exams and proposes strategic solutions. This article aims to provide guidance for the rational application of AI technology in exams, ensuring a balance between technological progress and ethical protection by strengthening laws and regulations, enhancing technological transparency, strengthening candidates' privacy rights, and improving the management measures of educational examination institutions.
Keywords: artificial intelligence, examination, ethics, privacy protection
18. A Privacy Protection Scheme for Verifiable Data Element Circulation Based on Fully Homomorphic Encryption
Authors: Song Jiyuan, Gao Hongmin, Ye Keke, Shen Yushi, Ma Zhaofeng, Feng Chengzhi. China Communications, 2025, Issue 4, pp. 223-235 (13 pages).
With increasing demand for data circulation, ensuring data security and privacy is paramount, specifically protecting privacy while maximizing utility. Blockchain, while decentralized and transparent, faces challenges in privacy protection and data verification, especially for sensitive data. Existing schemes often suffer from inefficiency and high overhead. We propose a privacy protection scheme using BGV homomorphic encryption and Pedersen secret sharing. This scheme enables secure computation on encrypted data, with Pedersen sharding and verification of the private key, ensuring data consistency and immutability. The blockchain framework manages key shards, verifies secrets, and aids security auditing. This approach allows for trusted computation without revealing the underlying data. Preliminary results demonstrate the scheme's feasibility in ensuring data privacy and security, making data available but not visible. This study provides an effective solution for data sharing and privacy protection in blockchain applications.
Keywords: blockchain technology, data element circulation, data privacy, homomorphic encryption, secret sharing, trusted computation
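Pedersen's scheme is a verifiable variant of Shamir threshold secret sharing. The sketch below shows only the Shamir component used to shard and reconstruct a key, without the Pedersen commitments that make shares verifiable; the field prime and parameters are illustrative.

```python
import secrets

PRIME = 2**127 - 1  # illustrative prime field for the shares

def split(secret: int, n_shares: int, threshold: int):
    """Shamir threshold sharing: evaluate a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)                  # e.g., a private decryption key
shares = split(key, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == key           # any 3 of 5 shares recover the key
assert reconstruct(shares[2:]) == key
```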
19. C-privacy: A social relationship-driven image customization sharing method in cyber-physical networks
Authors: Dapeng Wu, Jian Liu, Yangliang Wan, Zhigang Yang, Ruyan Wang, Xinqi Lin. Digital Communications and Networks, 2025, Issue 2, pp. 563-573 (11 pages).
Cyber-Physical Networks (CPN) are comprehensive systems that integrate information and physical domains, and are widely used in various fields such as online social networking, smart grids, and the Internet of Vehicles (IoV). With the increasing popularity of digital photography and Internet technology, more and more users are sharing images on CPN. However, many images are shared without any privacy processing, exposing hidden privacy risks and making sensitive content easily accessible to Artificial Intelligence (AI) algorithms. Existing image sharing methods lack fine-grained image sharing policies and cannot protect user privacy. To address this issue, we propose a social relationship-driven privacy customization protection model for publishers and co-photographers. We construct a heterogeneous social information network centered on social relationships, introduce a user intimacy evaluation method with time decay, and evaluate privacy levels considering user interest similarity. To protect user privacy while maintaining image appreciation, we design a lightweight face-swapping algorithm based on Generative Adversarial Networks (GAN) to swap faces that need to be protected. Our proposed method minimizes the loss of image utility while satisfying privacy requirements, as shown by extensive theoretical and simulation analyses.
Keywords: cyber-physical networks, customized privacy, face-swapping, heterogeneous information network, deep fakes
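One way to realize the time-decayed intimacy evaluation and privacy-level mapping mentioned in the abstract is shown below. The exponential decay, blend with interest similarity, and thresholds are assumed parameters, not the paper's calibrated model.

```python
import math
import time

def intimacy(interactions, interest_sim, now=None, half_life_days=30.0, alpha=0.7):
    """Time-decayed intimacy between two users, blended with interest similarity.
    Each interaction is (timestamp, weight); the half-life and blend factor alpha
    are illustrative parameters, not the paper's calibrated values."""
    now = now or time.time()
    decay_rate = math.log(2) / (half_life_days * 86400)
    decayed = sum(w * math.exp(-decay_rate * (now - ts)) for ts, w in interactions)
    closeness = 1.0 - math.exp(-decayed)          # squash into [0, 1)
    return alpha * closeness + (1 - alpha) * interest_sim

def privacy_level(score, thresholds=(0.7, 0.4)):
    """Map an intimacy score to a sharing decision for a co-photographed face."""
    if score >= thresholds[0]:
        return "share original"
    if score >= thresholds[1]:
        return "share with face swapped"
    return "do not share"

now = time.time()
friend = [(now - 2 * 86400, 1.0), (now - 10 * 86400, 0.5)]   # recent likes/comments
score = intimacy(friend, interest_sim=0.6, now=now)
print(round(score, 3), privacy_level(score))
```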