The security of the international data encryption algorithm IDEA(16), a mini IDEA cipher, against differential cryptanalysis is investigated. The results show that IDEA(16) is secure against differential cryptanalysis after 5 rounds, while IDEA(8) needs 7 rounds for the same level of security. The transition matrix for IDEA(16) and its eigenvalue of second largest magnitude are computed. The storage method for the transition matrix has been optimized to speed up file I/O. The emphasis of the work lies in finding an effective way of computing this eigenvalue. To lower the time complexity, three mature eigenvalue algorithms are compared with one another, and the subspace iteration algorithm is employed to compute the eigenvalue of second largest magnitude, to a precision of 0.001.
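As a sketch of the eigenvalue step, subspace (orthogonal) iteration on a two-dimensional subspace can recover the two largest eigenvalue magnitudes of a matrix. The function below is an illustrative minimal version, not the paper's optimized implementation, assuming the dominant eigenvalues are real and well separated:

```python
import numpy as np

def top_two_eigenvalues(A, iters=500, seed=0):
    """Orthogonal (subspace) iteration on a 2-dimensional subspace.

    Returns the two largest eigenvalue magnitudes of A, assuming the
    dominant eigenvalues are real and well separated.
    """
    rng = np.random.default_rng(seed)
    # Start from a random orthonormal basis of a 2-dimensional subspace.
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], 2)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)  # multiply by A, re-orthonormalize
    T = Q.T @ A @ Q                 # 2x2 projection of A onto the subspace
    return sorted(np.abs(np.linalg.eigvals(T)), reverse=True)

# Toy example: a matrix whose two largest eigenvalue magnitudes are known.
A = np.diag([3.0, 1.0, 0.5, 0.25])
lam1, lam2 = top_two_eigenvalues(A)
```

In practice the second largest magnitude is what bounds the differential probabilities here, so only `lam2` would be reported.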
With the rapid development of genomic sequencing technology, the cost of obtaining personal genomic data and effectively analyzing it has gradually been reduced. The analysis and utilization of genomic data have gradually entered the public view, and leakage of genomic data privacy has attracted the attention of researchers. The security of genomic data is related not only to the protection of personal privacy but also to the biological information security of the country. However, there is still no effective genomic data privacy protection scheme using the Shangyong Mima (SM) algorithms. In this paper, we analyze the widely used genomic data file formats and design a large genomic data file encryption scheme based on the SM algorithms. Firstly, we design a key agreement protocol based on SM2 asymmetric cryptography and use the SM3 hash function to guarantee the correctness of the key. Secondly, we use SM4 symmetric cryptography to encrypt the genomic data by optimizing the packet processing of files, and improve usability by assisting the computing platform with key management. A software implementation demonstrates that the scheme can be applied to securely transmit genomic data in a network environment and provides an encryption method based on the SM algorithms for protecting the privacy of genomic data.
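The chunk-wise file encryption described above can be sketched as follows. Since SM3/SM4 implementations are not assumed to be available here, SHA-256 in counter mode stands in for the SM-series keystream; the SM2 key agreement step is omitted, and all function names are illustrative:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream from a hash function (SHA-256 here as a
    stand-in for an SM-series primitive)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def encrypt_chunks(data: bytes, key: bytes, nonce: bytes, chunk_size: int = 4096) -> bytes:
    """Encrypt data chunk by chunk, mirroring blockwise processing of
    large genomic files; XOR with the keystream is its own inverse."""
    out = bytearray()
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        ks = keystream(key, nonce + off.to_bytes(8, "big"), len(chunk))
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# Illustrative FASTQ-like content; real genomic files would be streamed from disk.
plaintext = b"@SEQ_1\nGATTACAGATTACA\n+\nIIIIIIIIIIIIII\n" * 500
key, nonce = b"\x01" * 16, b"\x02" * 8
ciphertext = encrypt_chunks(plaintext, key, nonce)
recovered = encrypt_chunks(ciphertext, key, nonce)
```

Because encryption and decryption are the same XOR operation, a second call with the same key and nonce recovers the plaintext.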
With the rapid development of information technology, data security issues have received increasing attention. Data encryption and decryption technology, as a key means of ensuring data security, plays an important role in multiple fields such as communication security, data storage, and data recovery. This article explores the fundamental principles and interrelationships of data encryption and decryption, examines the strengths, weaknesses, and applicability of symmetric, asymmetric, and hybrid encryption algorithms, and introduces key application scenarios for data encryption and decryption technology. It then discusses the challenges and corresponding countermeasures related to encryption algorithm security, key management, and encryption-decryption performance. Finally, it analyzes the development trends and future prospects of data encryption and decryption technology. This article provides a systematic understanding of data encryption and decryption techniques, which has good reference value for software designers.
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making data unreadable and unalterable through secret key control. Despite their individual benefits, compression and encryption both require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratio and security level, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit blocks with a 256-bit secret key. It combines Huffman coding for compression with a Tent map for encryption, and an iterative Arnold cat map further enhances the cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency compared to prior algorithms in the field.
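A minimal sketch of the encryption side, assuming an N x N image: an iterated Arnold cat map scrambles pixel positions, and a Tent-map keystream masks pixel values. This illustrates the two primitives named above, not the full IDCE pipeline (the Huffman compression stage is omitted), and the key values are arbitrary:

```python
import numpy as np

def arnold_cat(img, rounds=3):
    # Forward Arnold cat map: (x, y) -> (x + y, x + 2y) mod N; the matrix
    # [[1, 1], [1, 2]] has determinant 1, so the scrambling is bijective.
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(rounds):
        scr = np.empty_like(out)
        scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

def arnold_cat_inv(img, rounds=3):
    # Inverse map: (x, y) -> (2x - y, y - x) mod N.
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(rounds):
        rec = np.empty_like(out)
        rec[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = rec
    return out

def tent_keystream(x0, mu, length):
    # Tent-map keystream bytes; (x0, mu) act as the secret key.
    out, x = np.empty(length, dtype=np.uint8), x0
    for i in range(length):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(img, x0=0.61803, mu=1.9999, rounds=3):
    scrambled = arnold_cat(img, rounds)
    ks = tent_keystream(x0, mu, img.size).reshape(img.shape)
    return scrambled ^ ks

def decrypt(ct, x0=0.61803, mu=1.9999, rounds=3):
    ks = tent_keystream(x0, mu, ct.size).reshape(ct.shape)
    return arnold_cat_inv(ct ^ ks, rounds)

img = np.random.default_rng(7).integers(0, 256, (8, 8), dtype=np.uint8)
restored = decrypt(encrypt(img))
```

The cat map supplies confusion (position scrambling) and the keystream supplies diffusion of pixel values; both steps are exactly invertible with the key.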
In secure communications, lightweight encryption has become crucial, particularly for resource-constrained applications such as embedded devices, wireless sensor networks, and the Internet of Things (IoT). As these systems proliferate, cryptographic approaches that provide robust security while minimizing computing overhead, energy consumption, and memory usage are becoming increasingly essential. This study examines lightweight encryption techniques utilizing chaotic maps to ensure secure data transmission. Two algorithms are proposed, both employing the logistic map; the first approach utilizes two logistic chaotic maps, while the second employs a single logistic chaotic map. Algorithm 1, with a two-stage mechanism that uses chaotic maps for both transposition and key generation, is distinguished by its robustness, guaranteeing a secure encryption method. The second technique utilizes a single logistic chaotic map; eliminating the second chaotic map decreases computational complexity while maintaining security. The efficacy of both algorithms was evaluated by subjecting them to NIST randomness tests after testing on text files of varying sizes. The findings demonstrate that the double chaotic map method consistently achieves elevated unpredictability and resilience. Conversely, the single chaotic map algorithm markedly lowers the time needed for encryption and decryption. These results suggest that while both algorithms are effective, the choice between them may depend on the specific security and processing speed requirements of practical applications.
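The single-map variant can be sketched as follows: one logistic map drives both the transposition (via the sort order of the chaotic sequence) and the keystream. This is an illustrative reconstruction under assumed key parameters, not the authors' exact algorithm:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    # Logistic map x_{k+1} = r * x_k * (1 - x_k); (x0, r) form the key.
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_encrypt(data: bytes, x0=0.3141, r=3.9999) -> bytes:
    seq = logistic_sequence(x0, r, len(data))
    perm = np.argsort(seq)                                       # transposition order
    ks = ((seq * 1e6).astype(np.int64) % 256).astype(np.uint8)   # keystream bytes
    arr = np.frombuffer(data, dtype=np.uint8)[perm]              # transpose
    return (arr ^ ks).tobytes()                                  # then mask

def chaotic_decrypt(ct: bytes, x0=0.3141, r=3.9999) -> bytes:
    seq = logistic_sequence(x0, r, len(ct))
    perm = np.argsort(seq)
    ks = ((seq * 1e6).astype(np.int64) % 256).astype(np.uint8)
    arr = np.frombuffer(ct, dtype=np.uint8) ^ ks                 # unmask
    inv = np.empty_like(perm)
    inv[perm] = np.arange(len(perm))                             # invert the permutation
    return arr[inv].tobytes()

msg = b"lightweight chaotic encryption demo"
ct = chaotic_encrypt(msg)
pt = chaotic_decrypt(ct)
```

The receiver only needs the key parameters (x0, r) to regenerate both the permutation and the keystream, so no permutation table is transmitted.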
A basic procedure for transforming readable data into encoded forms is encryption, which ensures security when the right decryption keys are used. Hadoop is susceptible to possible cyber-attacks because it lacks built-in security measures, even though it can effectively handle and store enormous datasets using the Hadoop Distributed File System (HDFS). The increasing number of data breaches emphasizes how urgently creative encryption techniques are needed in cloud-based big data settings. This paper presents Adaptive Attribute-Based Honey Encryption (AABHE), a state-of-the-art technique that combines honey encryption with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to provide improved data security. Even if intercepted, AABHE makes sure that sensitive data cannot be accessed by unauthorized parties. With a focus on protecting huge files in HDFS, the suggested approach achieves 98% security robustness and 95% encryption efficiency, outperforming other encryption methods including Ciphertext-Policy Attribute-Based Encryption (CP-ABE), Key-Policy Attribute-Based Encryption (KP-ABE), and Advanced Encryption Standard combined with Attribute-Based Encryption (AES+ABE). By fixing Hadoop's security flaws, AABHE fortifies its protections against data breaches and enhances Hadoop's dependability as a platform for processing and storing massive amounts of data.
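The honey-encryption principle, that decryption with a wrong key yields a plausible-looking decoy rather than obvious garbage, can be illustrated with a toy uniform message space of 4-digit PINs. This sketch omits the attribute-based (CP-ABE) layer entirely; the hash-based PRF and all names are assumptions, not the AABHE construction:

```python
import hashlib

SPACE = 10_000  # message space: 4-digit PINs, assumed uniformly distributed

def prf(key: bytes) -> int:
    # Hash-based PRF mapping a key into the message space (illustrative).
    return int.from_bytes(hashlib.sha256(key).digest(), "big") % SPACE

def he_encrypt(pin: int, key: bytes) -> int:
    # For a uniform space, a modular shift is a valid distribution-
    # transforming encoding: every ciphertext decodes to some valid PIN.
    return (pin + prf(key)) % SPACE

def he_decrypt(ct: int, key: bytes) -> int:
    return (ct - prf(key)) % SPACE

ct = he_encrypt(2468, b"correct key")
right = he_decrypt(ct, b"correct key")
wrong = he_decrypt(ct, b"guessed key")   # still a well-formed 4-digit PIN
```

An attacker brute-forcing keys sees only valid-looking PINs and cannot tell which decryption is real, which is the property AABHE leverages for intercepted data.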
With the increasing demand for data circulation, ensuring data security and privacy is paramount, specifically protecting privacy while maximizing utility. Blockchain, while decentralized and transparent, faces challenges in privacy protection and data verification, especially for sensitive data. Existing schemes often suffer from inefficiency and high overhead. We propose a privacy protection scheme using BGV homomorphic encryption and Pedersen secret sharing. This scheme enables secure computation on encrypted data, with Pedersen secret sharing used to shard and verify the private key, ensuring data consistency and immutability. The blockchain framework manages key shards, verifies secrets, and aids security auditing. This approach allows for trusted computation without revealing the underlying data. Preliminary results demonstrate the scheme's feasibility in ensuring data privacy and security, making data available but not visible. This study provides an effective solution for data sharing and privacy protection in blockchain applications.
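Pedersen's verifiable scheme builds on Shamir secret sharing by adding commitments; the sharding-and-reconstruction core alone can be sketched as below (the commitment/verification steps and the BGV homomorphic layer are omitted, and the field prime is an arbitrary choice):

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the finite field

def make_shares(secret: int, k: int, n: int, rng=None):
    """Split `secret` into n shares, any k of which reconstruct it,
    by evaluating a random degree-(k-1) polynomial with f(0) = secret."""
    rng = rng or random.Random(42)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
recovered = reconstruct(shares[:3])      # any 3 of the 5 shares suffice
```

In the full Pedersen scheme each share additionally comes with a commitment that lets shard holders verify their share without revealing the secret, which is what the blockchain framework audits here.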
Data clustering is an essential technique for analyzing complex datasets and continues to be a central research topic in data analysis. Traditional clustering algorithms, such as K-means, are widely used due to their simplicity and efficiency. This paper proposes a novel Spiral Mechanism-Optimized Phasmatodea Population Evolution Algorithm (SPPE) to improve clustering performance. The SPPE algorithm introduces several enhancements to the standard Phasmatodea Population Evolution (PPE) algorithm. Firstly, a Variable Neighborhood Search (VNS) factor is incorporated to strengthen the local search capability and foster population diversity. Secondly, a position update model incorporating a spiral mechanism is designed to improve the algorithm's global exploration and convergence speed. Finally, a dynamic balancing factor, guided by fitness values, adjusts the search process to balance exploration and exploitation effectively. The performance of SPPE is first validated on the CEC2013 benchmark functions, where it demonstrates excellent convergence speed and superior optimization results compared to several state-of-the-art metaheuristic algorithms. To further verify its practical applicability, SPPE is combined with the K-means algorithm for data clustering and tested on seven datasets. Experimental results show that SPPE-K-means improves clustering accuracy, reduces dependency on initialization, and outperforms other clustering approaches. This study highlights SPPE's robustness and efficiency in solving both optimization and clustering challenges, making it a promising tool for complex data analysis tasks.
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Background: In recent years, there has been a growing trend in the utilization of observational studies that make use of routinely collected healthcare data (RCD). These studies rely on algorithms to identify specific health conditions (e.g., diabetes or sepsis) for statistical analyses. However, there has been substantial variation in algorithm development and validation, leading to frequently suboptimal performance and posing a significant threat to the validity of study findings. Unfortunately, these issues are often overlooked. Methods: We systematically developed guidance for the development, validation, and evaluation of algorithms designed to identify health status (DEVELOP-RCD). Our initial efforts involved conducting both a narrative review and a systematic review of published studies on the concepts and methodological issues related to algorithm development, validation, and evaluation. Subsequently, we conducted an empirical study on an algorithm for identifying sepsis. Based on these findings, we formulated a specific workflow and recommendations for algorithm development, validation, and evaluation within the guidance. Finally, the guidance underwent independent review by a panel of 20 external experts, who then convened a consensus meeting to finalize it. Results: A standardized workflow for algorithm development, validation, and evaluation was established. Guided by specific health status considerations, the workflow comprises four integrated steps: assessing an existing algorithm's suitability for the target health status; developing a new algorithm using recommended methods; validating the algorithm using prescribed performance measures; and evaluating the impact of the algorithm on study results. Additionally, 13 good practice recommendations were formulated with detailed explanations. Furthermore, a practical study on sepsis identification was included to demonstrate the application of this guidance. Conclusions: The establishment of this guidance is intended to aid researchers and clinicians in the appropriate and accurate development and application of algorithms for identifying health status from RCD. This guidance has the potential to enhance the credibility of findings from observational studies involving RCD.
To improve the traffic scheduling capability in operator data center networks, an analysis prediction and online scheduling mechanism (APOS) is designed, considering both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization algorithm (FTO) is embedded into the analysis prediction and online scheduling stages, and an FTO traffic scheduling strategy is proposed. By exploiting FTO's global optimization and multi-modal optimization advantages, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. The experimental results show that the FTO traffic scheduling strategy can schedule traffic in data center networks reasonably and improve load balancing in the operator data center network effectively.
The distillation process is an important chemical process, and the application of data-driven modelling approaches has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables in order to remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight update strategy, constructing the new ensemble learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment problems uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have now modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in actual sensor networks for information collection. The deployment problem of SDFWSN is modeled as a multi-objective optimization problem. The network life cycle, spatiotemporal coverage, detection rate, and false alarm rate of SDFWSN are used as optimization objectives to optimize the deployment of network nodes. This paper proposes an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) to solve the deployment problem of SDFWSN. First, to overcome the shortcomings of the DMOA algorithm, such as its slow convergence and tendency to get stuck in local optima, an encircling and hunting strategy is introduced into the original algorithm to form the EDMOA algorithm. The EDMOA algorithm is extended into the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify the effectiveness of the proposed algorithm, EMODMOA was tested on the CEC 2020 benchmark and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and the optimization results of the objective functions show that the proposed algorithm outperforms the other algorithms in the SDFWSN deployment results. To further demonstrate its superiority, simulations of diverse test cases were also performed, with good results.
With the rapid advancement of cloud computing technology, reversible data hiding in encrypted images (RDH-EI) has developed into an important field of study concentrated on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel reversible data hiding algorithm in encrypted images based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using the Huffman coding technique. Encrypted image generation is then achieved by encrypting the image based on system user attributes and data access rights, with the hierarchical embedding of the group's secret data seamlessly integrated during the encryption process using the CP-ABE scheme. Ultimately, the encrypted image is transmitted to the data hider, enabling independent embedding of the secret data and resulting in the creation of the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the method introduced here considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm facilitates permission-granting and joint-decryption capabilities. Additionally, this paper conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in a multi-user, multi-level cloud service environment to realize the secure storage of carrier images and secret data.
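The conventional MED predictor that AMED extends has a standard closed form; a minimal version (without the paper's dynamic-variable adaptation) looks like this:

```python
import numpy as np

def med_predict(a: int, b: int, c: int) -> int:
    # Median edge detector: a = left neighbor, b = above neighbor,
    # c = upper-left neighbor of the pixel being predicted.
    if c >= max(a, b):
        return min(a, b)   # horizontal/vertical edge above or left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

def prediction_errors(img):
    # Per-pixel MED prediction errors (first row/column skipped); the
    # sharp error histogram around zero is what creates embedding room.
    h, w = img.shape
    err = np.zeros((h, w), dtype=np.int32)
    for i in range(1, h):
        for j in range(1, w):
            err[i, j] = int(img[i, j]) - med_predict(
                int(img[i, j - 1]), int(img[i - 1, j]), int(img[i - 1, j - 1]))
    return err
```

The more accurate the predictor, the more errors concentrate near zero, which is exactly the lever the AMED modification targets.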
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic permission attribute-encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) security under the decisional q-Parallel Bilinear Diffie-Hellman Exponent Assumption (q-PBDHE). In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. On the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that for unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall on the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, on CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
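Of the sampling techniques compared, the simplest is random oversampling of the minority class; a minimal sketch on synthetic metadata-style features (illustrative, not tied to either dataset) is:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until every class matches
    the majority class count (the simplest class-imbalance remedy)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for cls, cnt in zip(classes, counts):
        if cnt < target:
            # Sample with replacement from this class's row indices.
            idx = rng.choice(np.flatnonzero(y == cls), size=target - cnt, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

# Toy flow-metadata matrix: 990 benign flows vs 10 attack flows.
X = np.random.default_rng(1).standard_normal((1000, 4))
y = np.array([0] * 990 + [1] * 10)
Xb, yb = random_oversample(X, y)
```

More elaborate techniques such as SMOTE synthesize new minority samples instead of duplicating rows, but the balancing goal is the same.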
Elliptic curve (EC) based cryptosystems have gained more attention due to their enhanced security over existing public key cryptosystems. A substitution box (S-box) plays a vital role in securing modern symmetric key cryptosystems. However, recently developed EC-based algorithms usually trade off computational efficiency against security, necessitating the design of a new algorithm with the desired cryptographic strength. To address these shortcomings, this paper proposes a new scheme based on a Mordell elliptic curve (MEC) over the complex field for generating distinct, dynamic, and highly uncorrelated S-boxes. Furthermore, we count the exact number of the obtained S-boxes and demonstrate that the permuted version of the presented S-box is statistically optimal. The nonsingularity of the presented algorithm and the injectivity of the resultant output are explored. Rigorous theoretical analysis and experimental results demonstrate that the proposed method is highly effective in generating a large number of dynamic S-boxes with adequate cryptographic properties, surpassing current state-of-the-art S-box generation algorithms in terms of security. Apart from this, the generated S-box is benchmarked using side-channel attacks, and its performance is compared with highly nonlinear S-boxes, demonstrating comparable results. In addition, we present an application of the proposed S-box generator by incorporating it into an image encryption technique. The encrypted and decrypted images are tested by employing extensive standard security metrics, including the Number of Pixel Change Rate, the Unified Average Changing Intensity, information entropy, correlation coefficient, and histogram analysis. Moreover, the analysis is extended beyond conventional metrics to validate the new method using advanced tests, such as the NIST statistical test suite, robustness analysis, and noise and cropping attacks. Experimental outcomes show that the presented algorithm strengthens the existing encryption scheme against various well-known cryptographic attacks.
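As a loose illustration of how a power-residue mapping over a prime field can yield a bijective 8-bit S-box, the sketch below ranks the values of x^3 + b (mod p) for a prime p with gcd(3, p - 1) = 1, so that cubing is injective. This is a simplified stand-in, not the paper's complex-field MEC construction, and the parameters are arbitrary:

```python
from math import gcd

def cubic_residue_sbox(b: int = 17, p: int = 263):
    """Rank x^3 + b (mod p) for x in 0..255 to obtain a permutation of
    0..255. For p = 263, p - 1 = 262 = 2 * 131, so gcd(3, p - 1) = 1 and
    cubing is injective on the field, making all 256 values distinct."""
    assert p > 256 and gcd(3, p - 1) == 1
    vals = [(pow(x, 3, p) + b) % p for x in range(256)]
    order = sorted(range(256), key=lambda x: vals[x])  # rank by residue value
    sbox = [0] * 256
    for rank, x in enumerate(order):
        sbox[x] = rank
    return sbox

sbox = cubic_residue_sbox()
```

Injectivity of the underlying map is what guarantees the S-box is a permutation, which mirrors the injectivity argument the paper makes for its MEC-based generator.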
Dear Editor, This letter studies the problem of stealthy attacks targeting stochastic event-based estimation, alongside proposing measures for their mitigation. A general attack framework is introduced, and the corresponding stealthiness condition is analyzed. To enhance system security, we advocate a single-dimensional encryption method, showing that securing a single data element is sufficient to shield the system from the perils of stealthy attacks.
Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has led to the popularity of machine learning as a service, where trained models are hosted on the cloud and inference can be obtained on real-world data. In most applications, it is important to compress the vision data due to the enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate the efficient classification of videos. However, contrary to the normal practice of reconstructing the full-resolution frames through motion compensation, this work proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more complex representation than raw motion vectors, where each motion vector carries magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensionality of the input signal is greatly reduced, allowing a shallower classification network. The neural network can be trained on motion vectors in two ways: as complex-valued representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20x faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared to full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
To improve the performance of the traditional map matching algorithms in freeway traffic state monitoring systems using the low logging frequency GPS (global positioning system) probe data, a map matching algorithm ...To improve the performance of the traditional map matching algorithms in freeway traffic state monitoring systems using the low logging frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest path algorithm to find reasonable candidate routes between GPS positioning points efficiently, and uses the fuzzy logic inference system to determine the final matched traveling route. According to the implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.展开更多
Funding: Supported by the National Natural Science Foundation of China (60573032, 90604036) and the Participation in Research Project of Shanghai Jiao Tong University.
Abstract: The security of the international data encryption algorithm IDEA(16), a mini IDEA cipher, against differential cryptanalysis is investigated. The results show that IDEA(16) is secure against differential cryptanalysis after 5 rounds, while IDEA(8) needs 7 rounds for the same level of security. The transition matrix for IDEA(16) and its eigenvalue of second largest magnitude are computed. The storage method for the transition matrix has been optimized to speed up file I/O. The emphasis of the work lies in finding an effective way of computing this eigenvalue. To lower the time complexity, three mature eigenvalue algorithms are compared with one another, and the subspace iteration algorithm is employed to compute the eigenvalue of second largest magnitude to a precision of 0.001.
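As a rough illustration of the eigenvalue computation described above, the sketch below runs a two-vector orthogonal iteration, a minimal form of subspace iteration, on a small symmetric test matrix. The matrix, starting vectors, and iteration count are illustrative assumptions, not the paper's actual transition matrix or code.

```python
def matvec(A, v):
    # Dense matrix-vector product over plain Python lists.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def second_largest_eigenvalue(A, iters=500):
    # v1 converges to the dominant eigenvector under repeated multiplication;
    # v2 is re-orthogonalized against v1 each step, so (for a symmetric A)
    # it converges to the eigenvector of the second-largest eigenvalue.
    n = len(A)
    v1 = normalize([1.0] * n)
    v2 = normalize([float(i + 1) for i in range(n)])
    for _ in range(iters):
        v1 = normalize(matvec(A, v1))
        w = matvec(A, v2)
        proj = dot(w, v1)
        w = [x - proj * y for x, y in zip(w, v1)]
        v2 = normalize(w)
    return dot(v2, matvec(A, v2))  # Rayleigh quotient

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0]]  # eigenvalues: (7 ± sqrt(5))/2 and 1
lam2 = second_largest_eigenvalue(A)  # ~2.381966
```

For a large, sparse transition matrix the same loop applies with a sparse matrix-vector product and a convergence test against the stated 0.001 tolerance.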
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1000307) and the National Natural Science Foundation of China (No. 61571024, No. 61971021).
Abstract: With the rapid development of genomic sequencing technology, the cost of obtaining personal genomic data and effectively analyzing it has been gradually reduced. The analysis and utilization of genomic data have gradually entered the public view, and the leakage of genomic data privacy has attracted the attention of researchers. The security of genomic data is not only related to the protection of personal privacy, but also to the biological information security of the country. However, there is still no effective genomic data privacy protection scheme using the Shangyong Mima (SM) algorithms. In this paper, we analyze the widely used genomic data file formats and design an encryption scheme for large genomic data files based on the SM algorithms. Firstly, we design a key agreement protocol based on SM2 asymmetric cryptography and use the SM3 hash function to guarantee the correctness of the key. Secondly, we use SM4 symmetric cryptography to encrypt the genomic data by optimizing the packet processing of files, and improve usability by assisting the computing platform with key management. A software implementation demonstrates that the scheme can securely transmit genomic data in a network environment and provides an encryption method based on the SM algorithms for protecting the privacy of genomic data.
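The chunked file-processing idea above can be sketched with the standard library alone. SM4 and SM3 are not available in Python's stdlib, so a SHA-256 counter-mode keystream stands in for SM4 and SHA-256 for SM3; the chunk size, key, and function names are illustrative assumptions, not the paper's API.

```python
import hashlib

CHUNK = 64 * 1024  # process large genomic files in fixed-size chunks

def keystream_block(key: bytes, counter: int) -> bytes:
    # Stand-in keystream (SHA-256 in counter mode); a real deployment
    # would use SM4 here, and SM3 in place of SHA-256 below.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def encrypt_chunks(data: bytes, key: bytes) -> tuple[bytes, str]:
    # XOR each chunk with keystream; return ciphertext plus a plaintext
    # digest used to check correctness after decryption.
    out = bytearray()
    ctr = 0
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        ks = b""
        while len(ks) < len(chunk):
            ks += keystream_block(key, ctr)
            ctr += 1
        out += bytes(a ^ b for a, b in zip(chunk, ks))
    digest = hashlib.sha256(data).hexdigest()
    return bytes(out), digest

key = b"demo-key"
msg = b"ACGT" * 20000          # toy stand-in for a genomic data file
ct, digest = encrypt_chunks(msg, key)
pt, _ = encrypt_chunks(ct, key)  # XOR keystream: decryption = re-encryption
```

Because the keystream counter advances identically over equal-length inputs, applying the function twice round-trips the data.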
Abstract: With the rapid development of information technology, data security issues have received increasing attention. Data encryption and decryption technology, as a key means of ensuring data security, plays an important role in fields such as communication security, data storage, and data recovery. This article explores the fundamental principles and interrelationships of data encryption and decryption, examines the strengths, weaknesses, and applicability of symmetric, asymmetric, and hybrid encryption algorithms, and introduces key application scenarios for data encryption and decryption technology. It then discusses the challenges, and corresponding countermeasures, related to encryption algorithm security, key management, and encryption-decryption performance. Finally, it analyzes the development trends and future prospects of data encryption and decryption technology. This article provides a systematic understanding of data encryption and decryption techniques, which has good reference value for software designers.
Funding: The authors acknowledge the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Abstract: Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making data unreadable and unalterable through secret key control. Despite their individual benefits, compression and encryption both require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratio and security level, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit blocks with a 256-bit secret key. It combines Huffman coding for compression with a Tent map for encryption, while an iterative Arnold cat map further enhances cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency compared to prior algorithms in the field.
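The Tent-map keystream component of a scheme like the one above can be sketched as follows; the map parameter, seed, and byte quantization are illustrative assumptions, and the Huffman compression and Arnold cat map stages are omitted.

```python
def tent_keystream(x0: float, n: int, mu: float = 1.9999) -> bytes:
    # Iterate the tent map x -> mu*x (x < 0.5) or mu*(1-x) (x >= 0.5)
    # and quantize each chaotic state to one keystream byte.
    x = x0
    out = bytearray()
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_encrypt(data: bytes, x0: float) -> bytes:
    # XOR with the chaotic keystream; applying twice restores the data.
    ks = tent_keystream(x0, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"compress-then-encrypt"
ct = xor_encrypt(msg, 0.37)  # the seed x0 plays the role of a secret key
```

In the full scheme the input here would be the Huffman-compressed bitstream rather than raw data.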
Abstract: In secure communications, lightweight encryption has become crucial, particularly for resource-constrained applications such as embedded devices, wireless sensor networks, and the Internet of Things (IoT). As these systems proliferate, cryptographic approaches that provide robust security while minimizing computing overhead, energy consumption, and memory usage are becoming increasingly essential. This study examines lightweight encryption techniques utilizing chaotic maps to ensure secure data transmission. Two algorithms are proposed, both employing the Logistic map; the first approach utilizes two logistic chaotic maps, while the second employs a single logistic chaotic map. Algorithm 1, featuring a two-stage mechanism that uses chaotic maps for both transposition and key generation, is distinguished by its robustness, guaranteeing a secure encryption method. The second technique utilizes a single logistic chaotic map; eliminating the second chaotic map decreases computational complexity while maintaining security. The efficacy of both algorithms was evaluated by testing them on text files of varying sizes and subjecting the outputs to the NIST randomness tests. The findings demonstrate that the double chaotic map method consistently achieves elevated unpredictability and resilience, whereas the single-map algorithm markedly lowers the time needed for encryption and decryption. These results suggest that while both algorithms are effective, the choice between them may depend on the specific security and processing speed requirements of practical applications.
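A minimal sketch of the single-map, two-stage idea (chaotic transposition followed by a chaotic keystream) is given below. The parameters x0 and r stand in for the secret key and are illustrative assumptions; the paper's actual key schedule and file handling are omitted.

```python
def logistic_sequence(x0: float, r: float, n: int) -> list:
    # Iterate the logistic map x -> r*x*(1-x) and record the trajectory.
    x, xs = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def encrypt(data: bytes, x0: float = 0.61, r: float = 3.99) -> bytes:
    xs = logistic_sequence(x0, r, len(data))
    # Stage 1: transposition - order positions by their chaotic values.
    perm = sorted(range(len(data)), key=lambda i: xs[i])
    shuffled = bytes(data[i] for i in perm)
    # Stage 2: keystream XOR, continuing the same trajectory.
    ks = logistic_sequence(xs[-1], r, len(data))
    return bytes(b ^ (int(k * 256) % 256) for b, k in zip(shuffled, ks))

def decrypt(ct: bytes, x0: float = 0.61, r: float = 3.99) -> bytes:
    # Recompute the same permutation and keystream, then invert both stages.
    xs = logistic_sequence(x0, r, len(ct))
    perm = sorted(range(len(ct)), key=lambda i: xs[i])
    ks = logistic_sequence(xs[-1], r, len(ct))
    shuffled = bytes(b ^ (int(k * 256) % 256) for b, k in zip(ct, ks))
    out = bytearray(len(ct))
    for j, i in enumerate(perm):
        out[i] = shuffled[j]
    return bytes(out)

msg = b"lightweight IoT payload"
ct = encrypt(msg)
```

The double-map variant would drive the transposition and keystream stages from two independent trajectories instead of one.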
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R408), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: A basic procedure for transforming readable data into encoded forms is encryption, which ensures security when the right decryption keys are used. Hadoop is susceptible to possible cyber-attacks because it lacks built-in security measures, even though it can effectively handle and store enormous datasets using the Hadoop Distributed File System (HDFS). The increasing number of data breaches emphasizes how urgently creative encryption techniques are needed in cloud-based big data settings. This paper presents Adaptive Attribute-Based Honey Encryption (AABHE), a state-of-the-art technique that combines honey encryption with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to provide improved data security. Even if intercepted, AABHE makes sure that sensitive data cannot be accessed by unauthorized parties. With a focus on protecting huge files in HDFS, the suggested approach achieves 98% security robustness and 95% encryption efficiency, outperforming other encryption methods including CP-ABE, Key-Policy Attribute-Based Encryption (KP-ABE), and the Advanced Encryption Standard combined with Attribute-Based Encryption (AES+ABE). By fixing Hadoop's security flaws, AABHE fortifies its protections against data breaches and enhances Hadoop's dependability as a platform for processing and storing massive amounts of data.
Funding: Supported by the National Key Research and Development Plan of China (Grant No. 2020YFB1005500).
Abstract: With the increasing demand for data circulation, ensuring data security and privacy is paramount, specifically protecting privacy while maximizing utility. Blockchain, while decentralized and transparent, faces challenges in privacy protection and data verification, especially for sensitive data. Existing schemes often suffer from inefficiency and high overhead. We propose a privacy protection scheme using BGV homomorphic encryption and Pedersen secret sharing. This scheme enables secure computation on encrypted data, with Pedersen sharing used to shard and verify the private key, ensuring data consistency and immutability. The blockchain framework manages key shards, verifies secrets, and aids security auditing. This approach allows for trusted computation without revealing the underlying data. Preliminary results demonstrate the scheme's feasibility in ensuring data privacy and security, making data available but not visible. This study provides an effective solution for data sharing and privacy protection in blockchain applications.
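Pedersen's verifiable sharing builds Shamir-style polynomial sharing together with commitments; the sketch below shows only the polynomial sharding and reconstruction of a private key over a prime field, with the commitment layer omitted. The field prime and threshold are illustrative assumptions.

```python
import secrets

P = 2 ** 61 - 1  # Mersenne prime defining the share field (illustrative)

def make_shares(secret: int, k: int, n: int):
    # (k, n) threshold sharing: the secret is the constant term of a
    # random degree-(k-1) polynomial; shares are points on that polynomial.
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key_secret = 123456789
shares = make_shares(key_secret, k=3, n=5)
```

In Pedersen's scheme each coefficient additionally carries a blinding polynomial and a public commitment, letting shareholders verify their shards, which is the verification role the abstract assigns to the blockchain framework.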
Abstract: Data clustering is an essential technique for analyzing complex datasets and continues to be a central research topic in data analysis. Traditional clustering algorithms, such as K-means, are widely used due to their simplicity and efficiency. This paper proposes a novel Spiral Mechanism-Optimized Phasmatodea Population Evolution algorithm (SPPE) to improve clustering performance. The SPPE algorithm introduces several enhancements to the standard Phasmatodea Population Evolution (PPE) algorithm. Firstly, a Variable Neighborhood Search (VNS) factor is incorporated to strengthen the local search capability and foster population diversity. Secondly, a position update model incorporating a spiral mechanism is designed to improve the algorithm's global exploration and convergence speed. Finally, a dynamic balancing factor, guided by fitness values, adjusts the search process to balance exploration and exploitation effectively. The performance of SPPE is first validated on the CEC2013 benchmark functions, where it demonstrates excellent convergence speed and superior optimization results compared to several state-of-the-art metaheuristic algorithms. To further verify its practical applicability, SPPE is combined with the K-means algorithm for data clustering and tested on seven datasets. Experimental results show that SPPE-K-means improves clustering accuracy, reduces dependency on initialization, and outperforms other clustering approaches. This study highlights SPPE's robustness and efficiency in solving both optimization and clustering challenges, making it a promising tool for complex data analysis tasks.
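As a baseline for what a metaheuristic like SPPE optimizes, plain Lloyd's K-means iterations can be sketched as below; the optimizer would search over the initial centroids passed in, reducing the initialization dependence the abstract mentions. The data and iteration count are illustrative.

```python
def kmeans(points, centroids, iters=20):
    # Plain Lloyd's iterations on 2-D points: assign each point to its
    # nearest centroid, then move each centroid to its cluster mean.
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else c  # keep an empty cluster's centroid in place
            for cl, c in zip(clusters, centroids)
        ]
    return centroids

points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0),
          (10.0, 10.0), (10.0, 11.0), (11.0, 10.0), (11.0, 11.0)]
cents = kmeans(points, [(0.0, 0.0), (10.0, 10.0)])
```

A hybrid such as SPPE-K-means would score candidate centroid sets by a clustering objective and evolve them before (or instead of) running these fixed-point iterations.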
Abstract: Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Funding: Supported by the National Natural Science Foundation of China (82225049, 72104155), the Sichuan Provincial Central Government Guides Local Science and Technology Development Special Project (2022ZYD0127), and the 1·3·5 Project for Disciplines of Excellence, West China Hospital, Sichuan University (ZYGD23004).
Abstract: Background: In recent years, there has been a growing trend in the utilization of observational studies that make use of routinely collected healthcare data (RCD). These studies rely on algorithms to identify specific health conditions (e.g., diabetes or sepsis) for statistical analyses. However, there has been substantial variation in algorithm development and validation, leading to frequently suboptimal performance and posing a significant threat to the validity of study findings. Unfortunately, these issues are often overlooked. Methods: We systematically developed guidance for the development, validation, and evaluation of algorithms designed to identify health status (DEVELOP-RCD). Our initial efforts involved conducting both a narrative review and a systematic review of published studies on the concepts and methodological issues related to algorithm development, validation, and evaluation. Subsequently, we conducted an empirical study on an algorithm for identifying sepsis. Based on these findings, we formulated a specific workflow and recommendations for algorithm development, validation, and evaluation within the guidance. Finally, the guidance underwent independent review by a panel of 20 external experts, who then convened a consensus meeting to finalize it. Results: A standardized workflow for algorithm development, validation, and evaluation was established. Guided by specific health status considerations, the workflow comprises four integrated steps: assessing an existing algorithm's suitability for the target health status; developing a new algorithm using recommended methods; validating the algorithm using prescribed performance measures; and evaluating the impact of the algorithm on study results. Additionally, 13 good practice recommendations were formulated with detailed explanations, and a practical study on sepsis identification was included to demonstrate the application of this guidance. Conclusions: The establishment of this guidance is intended to aid researchers and clinicians in the appropriate and accurate development and application of algorithms for identifying health status from RCD. The guidance has the potential to enhance the credibility of findings from observational studies involving RCD.
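Validating such an identification algorithm against a reference standard (e.g., chart review) typically reports the measures computed below; this helper is an illustrative sketch, not part of the DEVELOP-RCD guidance itself, and the confusion counts are invented.

```python
def validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # Standard performance measures for a phenotyping algorithm compared
    # against a gold standard: tp/fp/fn/tn are confusion-matrix counts.
    return {
        "sensitivity": tp / (tp + fn),   # recall among true cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),
    }

# Hypothetical validation sample: 80 true positives, 20 false positives,
# 10 missed cases, 890 true negatives.
m = validation_metrics(80, 20, 10, 890)
```

Reporting all four measures, rather than accuracy alone, is what lets downstream studies judge how misclassification might bias their estimates.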
Funding: Supported by the National Natural Science Foundation of China (No. 62163036).
Abstract: To improve the traffic scheduling capability in operator data center networks, an analysis-prediction and online scheduling mechanism (APOS) is designed, considering both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization algorithm (FTO) is embedded into the analysis-prediction and online scheduling stages, and an FTO traffic scheduling strategy is proposed. By exploiting the global-optimum and multi-modal optimization advantages of FTO, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. The experimental results show that the FTO traffic scheduling strategy can schedule traffic in data center networks reasonably and effectively improve load balancing in the operator data center network.
Funding: Supported by the National Key Research and Development Program of China (2023YFB3307801), the National Natural Science Foundation of China (62394343, 62373155, 62073142), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017, the Fundamental Research Funds for the Central Universities, Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and the application of data-driven modelling has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight update strategy, yielding the new ensemble learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
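The first step above rests on per-segment variance to separate perturbed from steady-state intervals. A simplified fixed-window sketch is shown below; the window size and threshold are illustrative assumptions, and the K-means clustering of the variances is omitted.

```python
def segment_variances(series, window):
    # Split the series into consecutive fixed-size windows and compute
    # each window's population variance.
    out = []
    for i in range(0, len(series) - window + 1, window):
        seg = series[i:i + window]
        mean = sum(seg) / len(seg)
        out.append(sum((x - mean) ** 2 for x in seg) / len(seg))
    return out

def steady_state_windows(series, window, threshold):
    # Windows whose variance stays below the threshold are treated as
    # steady-state data suitable for model fitting.
    return [i for i, v in enumerate(segment_variances(series, window))
            if v < threshold]

# Hypothetical process variable: two quiet stretches around a disturbance.
series = [1.0] * 10 + [1.0, 5.0, 1.0, 5.0, 1.0] + [2.0] * 5
steady = steady_state_windows(series, window=5, threshold=0.5)
```

The KMDI step instead lets K-means find the variance boundary between the two regimes rather than fixing a threshold by hand.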
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U21A20464 and 62066005, and the Innovation Project of Guangxi Graduate Education under Grant No. YCSW2024313.
Abstract: Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have now modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in actual sensor networks for information collection. The deployment of SDFWSN is modeled as a multi-objective optimization problem, with the network life cycle, spatiotemporal coverage, detection rate, and false alarm rate used as optimization objectives for network node deployment. This paper proposes an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) to solve the deployment problem of SDFWSN. First, to overcome the shortcomings of the DMOA algorithm, such as its slow convergence and tendency to get stuck in local optima, an encircling and hunting strategy is introduced into the original algorithm to form the EDMOA algorithm. The EDMOA algorithm is then extended to the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify its effectiveness, the EMODMOA algorithm was tested on the CEC 2020 benchmark and achieved good results. On the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and the optimization results of the objective functions show that the proposed algorithm outperforms the others on the SDFWSN deployment problem. To further demonstrate its superiority, simulations of diverse test cases were also performed, with good results.
Funding: The National Natural Science Foundation of China (Grant Numbers 62272478, 62102450, 62102451).
Abstract: With the rapid advancement of cloud computing technology, reversible data hiding in encrypted images (RDH-EI) has developed into an important field of study concentrated on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel reversible data hiding algorithm in encrypted images based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using the Huffman coding technique. Encrypted image generation is then achieved by encrypting the image based on system user attributes and data access rights, with the hierarchical embedding of the group's secret data seamlessly integrated during the encryption process using the CP-ABE scheme. Ultimately, the encrypted image is transmitted to the data hider, enabling independent embedding of the secret data and resulting in the creation of the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the method introduced here considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm facilitates permission-granting and joint-decryption capabilities. This paper also conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in a multi-user, multi-level cloud service environment to realize the secure storage of carrier images and secret data.
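The conventional MED predictor that AMED builds on is the standard median edge detector from predictive image coding: given the left (a), top (b), and top-left (c) neighbors of a pixel, it picks the min or max of a and b at an edge, and the planar estimate a + b - c otherwise. A direct sketch:

```python
def med_predict(a: int, b: int, c: int) -> int:
    # Median edge detector (MED), as used in JPEG-LS-style prediction:
    # c above/below both neighbors signals an edge; otherwise assume a plane.
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```

Per the abstract, AMED replaces this fixed min/max/planar choice with dynamically adjusted variables to sharpen the prediction and enlarge the embeddable prediction-error space.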
Abstract: Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-based encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of security monitoring technology based on network behavior against encrypted cyber threats in an ICT convergence environment).
Abstract: With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted equipment. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, though the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Abstract: Elliptic curve (EC) based cryptosystems have gained attention due to their enhanced security over existing public key cryptosystems. A substitution box (S-box) plays a vital role in securing modern symmetric key cryptosystems. However, recently developed EC-based algorithms usually trade off computational efficiency against security, necessitating the design of a new algorithm with the desired cryptographic strength. To address these shortcomings, this paper proposes a new scheme based on a Mordell elliptic curve (MEC) over the complex field for generating distinct, dynamic, and highly uncorrelated S-boxes. Furthermore, we count the exact number of the obtained S-boxes and demonstrate that the permuted version of the presented S-box is statistically optimal. The nonsingularity of the presented algorithm and the injectivity of the resultant output are explored. Rigorous theoretical analysis and experimental results demonstrate that the proposed method is highly effective in generating a large number of dynamic S-boxes with adequate cryptographic properties, surpassing current state-of-the-art S-box generation algorithms in terms of security. Apart from this, the generated S-box is benchmarked against side-channel attacks, and its performance is compared with highly nonlinear S-boxes, demonstrating comparable results. In addition, we present an application of our proposed S-box generator by incorporating it into an image encryption technique. The encrypted and decrypted images are tested with extensive standard security metrics, including the number of pixel change rate, the unified average changing intensity, information entropy, correlation coefficients, and histogram analysis. Moreover, the analysis is extended beyond conventional metrics to validate the new method using advanced tests, such as the NIST statistical test suite, robustness analysis, and noise and cropping attacks. Experimental outcomes show that the presented algorithm strengthens the existing encryption scheme against various well-known cryptographic attacks.
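Two of the S-box properties mentioned above can be checked mechanically: injectivity (the S-box must be a permutation) and resistance to differential cryptanalysis (differential uniformity, lower is better). The sketch below uses a toy 3-bit S-box, the inversion map over GF(2^3), purely as an illustration; it is unrelated to the paper's MEC construction.

```python
def is_bijective(sbox):
    # An n-bit S-box must be a permutation of range(2**n).
    return sorted(sbox) == list(range(len(sbox)))

def differential_uniformity(sbox):
    # Max count, over nonzero input differences dx, of inputs x mapping
    # to the same output difference dy = S(x) ^ S(x ^ dx).
    n = len(sbox)
    worst = 0
    for dx in range(1, n):
        counts = {}
        for x in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            counts[dy] = counts.get(dy, 0) + 1
        worst = max(worst, max(counts.values()))
    return worst

# Toy example: field inversion in GF(2^3) with polynomial x^3 + x + 1,
# a classic almost-perfect-nonlinear (APN) map with uniformity 2.
inv_gf8 = [0, 1, 5, 6, 7, 2, 3, 4]
```

An affine S-box such as [1, 0, 3, 2] scores the worst possible value, which is why nonlinearity checks like this accompany S-box generators.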
Funding: Supported by the National Natural Science Foundation of China (62303353, 62273030, 62573320).
Abstract: Dear Editor, this letter studies the problem of stealthy attacks targeting stochastic event-based estimation, alongside proposing measures for their mitigation. A general attack framework is introduced, and the corresponding stealthiness condition is analyzed. To enhance system security, we advocate a single-dimensional encryption method, showing that securing a single data element is sufficient to shield the system from the perils of stealthy attacks.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R896).
Abstract: Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has led to the popularity of machine learning as a service, where trained algorithms are hosted on the cloud and inference can be obtained on real-world data. In most applications, it is important to compress the vision data due to the enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate the efficient classification of videos. However, contrary to the normal practice of reconstructing the full-resolution frames through motion compensation, this work proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more complex representation of motion vectors, where each motion vector carries magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensions of the input signal are greatly reduced, which allows a shallower network for classification. The neural network can be trained on motion vectors in two ways: as complex representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20× faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared to full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
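Converting a block-motion field into the magnitude and direction tensors used as network input can be sketched as follows; representing the field as a 2-D grid of (dx, dy) pairs is an assumption about the layout, not the paper's exact data format.

```python
import math

def to_mag_dir(motion_field):
    # Split a 2-D field of (dx, dy) motion vectors into two channels:
    # per-block magnitude and per-block direction (radians).
    mags = [[math.hypot(dx, dy) for dx, dy in row] for row in motion_field]
    dirs = [[math.atan2(dy, dx) for dx, dy in row] for row in motion_field]
    return mags, dirs

# One row of two blocks: a moving block and a static one.
mags, dirs = to_mag_dir([[(3.0, 4.0), (0.0, 0.0)]])
```

The alternative "complex representation" mentioned above would instead feed dx + 1j*dy directly to a complex-valued network.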
Abstract: To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems using low logging frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest path algorithm to efficiently find reasonable candidate routes between GPS positioning points, and uses a fuzzy logic inference system to determine the final matched traveling route. In an implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
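The geometric core of matching a GPS point to candidate road links is a point-to-segment distance test; a minimal sketch is below. The Oracle network data model, N-shortest-path search, and fuzzy inference of the abstract are beyond this fragment, and the coordinates are illustrative.

```python
def point_to_segment_distance(p, a, b):
    # Distance from GPS point p to road segment a-b: project p onto the
    # segment's line, clamp the projection to the segment, and measure.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:  # degenerate zero-length segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def nearest_segment(p, segments):
    # Index of the candidate road link closest to the positioning point.
    return min(range(len(segments)),
               key=lambda i: point_to_segment_distance(p, *segments[i]))
```

A production system would run this against only the links returned by a spatial index query around the point, then let route continuity and the fuzzy rules pick among near ties.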