Funding: Supported by the Ministry of Education under the Basic Science Research Program, Grant No. NRF-2013R1A1A2061478.
Abstract: This paper describes a data transmission method using a cyclic redundancy check and inaudible frequencies. The proposed method uses inaudible high frequencies from 18 kHz to 22 kHz generated via the inner speaker of smart devices. The performance of the proposed method is evaluated by conducting data transmission tests between a smartbook and a smartphone. The test results confirm that the proposed method can send 32 bits of data in an average of 235 ms, that the transmission success rate reaches 99.47%, and that the error detection rate of the cyclic redundancy check is 0.53%.
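The abstract pairs a 32-bit payload with a cyclic redundancy check and carries the bits on near-ultrasonic tones between 18 kHz and 22 kHz. The sketch below is a minimal illustration of that idea under assumed parameters (CRC-32, a two-tone frequency plan, and a 5 ms symbol length); it is not the paper's actual modulation or framing scheme.

```python
import binascii
import numpy as np

FS = 44100                   # sample rate in Hz (assumed, not from the paper)
SYMBOL_MS = 5                # per-bit tone duration (assumed)
F0, F1 = 18500.0, 21500.0    # assumed tone pair inside the 18-22 kHz band

def frame_with_crc(payload: int) -> list[int]:
    """Append a CRC-32 to a 32-bit payload and return the frame as a bit list."""
    data = payload.to_bytes(4, "big")
    crc = binascii.crc32(data) & 0xFFFFFFFF
    bits = []
    for byte in data + crc.to_bytes(4, "big"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def bits_to_audio(bits: list[int]) -> np.ndarray:
    """Map each bit to a short inaudible tone (binary FSK-style)."""
    n = int(FS * SYMBOL_MS / 1000)
    t = np.arange(n) / FS
    tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def crc_ok(frame_bits: list[int]) -> bool:
    """Receiver-side check: recompute CRC-32 over the first 32 bits of the frame."""
    as_bytes = bytes(
        int("".join(map(str, frame_bits[i:i + 8])), 2)
        for i in range(0, len(frame_bits), 8)
    )
    data, received_crc = as_bytes[:4], int.from_bytes(as_bytes[4:], "big")
    return (binascii.crc32(data) & 0xFFFFFFFF) == received_crc

bits = frame_with_crc(0xDEADBEEF)
audio = bits_to_audio(bits)   # 64 frame bits -> roughly 320 ms of audio at these assumed settings
assert crc_ok(bits)
```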
Funding: Sponsored by the National Natural Science Foundation of China under Grant Nos. 62172353, 62302114, U20B2046, and 62172115; the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education, Nos. 1331007 and 1311022; the Natural Science Foundation of the Jiangsu Higher Education Institutions, Grant No. 17KJB520044; and the Six Talent Peaks Project in Jiangsu Province, No. XYDXX-108.
Abstract: With the rapid development of information technology, IoT devices play a major role in detecting physiological health data. The exponential growth of medical data requires storage space to be allocated sensibly between cloud servers and edge nodes. Because the storage capacity of edge nodes close to users is limited, hotspot data should be stored in edge nodes as much as possible to ensure timely responses and a high access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. Detecting and deleting redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. This paper proposes a redundant-data detection method that meets privacy-protection requirements: by scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. The check has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not meet the requirements of hot data, the paper proposes a redundant-data deletion scheme that preserves dynamic data integrity, using a Content Extraction Signature (CES) to generate a signature over the remaining hot data after the redundant data is deleted. The feasibility of the scheme is demonstrated through security and efficiency analyses.
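As a loose illustration of flagging redundant (cold) sub-messages without decrypting them, the sketch below compares keyed fingerprints of stored ciphertext sub-blocks against a set of hot-data fingerprints. The HMAC construction, the shared key, and the hot set are assumptions for illustration only; this simplification provides none of the zero-knowledge-style guarantees of the paper's actual scheme.

```python
import hmac
import hashlib

def fingerprint(key: bytes, ciphertext_block: bytes) -> bytes:
    """Keyed fingerprint of a ciphertext sub-block (assumed construction)."""
    return hmac.new(key, ciphertext_block, hashlib.sha256).digest()

def split_hot_cold(key: bytes, stored_blocks: list[bytes], hot_fingerprints: set[bytes]):
    """Scan ciphertext sub-blocks and separate hot ones from redundant (cold) ones."""
    hot, cold = [], []
    for block in stored_blocks:
        (hot if fingerprint(key, block) in hot_fingerprints else cold).append(block)
    return hot, cold

# toy usage: two of the three stored sub-blocks are still "hot"; the third is a deletion candidate
key = b"edge-node-shared-key"                      # hypothetical key
blocks = [b"enc-sub-0", b"enc-sub-1", b"enc-sub-2"]
hot_set = {fingerprint(key, b) for b in blocks[:2]}
hot, cold = split_hot_cold(key, blocks, hot_set)
```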
Abstract: A remote data monitoring system based on virtual instruments usually applies data sharing, acquisition, and remote transmission technology via the Internet. It can perform concurrent data acquisition and processing for multiple users and tasks, and it builds a personalized virtual testing environment that serves more people with fewer instruments. In this paper, we elaborate on the design and implementation of an information sharing platform through a typical example of building a multi-user concurrent virtual testing environment based on the virtual instrument software LabVIEW.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2023YFB4503600) and the National Natural Science Foundation of China (Grant Nos. U23A20299, 62072460, 62172424, 62276270, and 62322214).
Abstract: The interconnection between query processing and data partitioning is pivotal for accelerating massive data processing during query execution, primarily by minimizing the number of scanned block files. Existing partitioning techniques predominantly focus on query accesses to numeric columns when constructing partitions, often overlooking non-numeric columns and thus limiting the optimization potential. Additionally, although these techniques create fine-grained partitions from representative queries to enhance system performance, they suffer notable performance declines under unpredictable fluctuations in future queries. To tackle these issues, we introduce LRP, a learned robust partitioning system for dynamic query processing. LRP first proposes a data and query encoding method that captures comprehensive column access patterns from historical queries. It then employs Multi-Layer Perceptron and Long Short-Term Memory networks to predict shifts in the distribution of historical queries. To create high-quality, robust partitions based on these predictions, LRP adopts a greedy beam search algorithm for optimal partition division and implements a data redundancy mechanism to share frequently accessed data across partitions. Experimental evaluations reveal that LRP yields partitions with more stable performance under incoming queries and significantly surpasses state-of-the-art partitioning methods.
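To make the partition-construction step concrete, here is a hedged sketch of a greedy beam search that chooses range-partition boundaries on a single encoded column so that a predicted workload of range queries scans as few rows as possible. The cost model, beam width, and single-column setting are simplifying assumptions; LRP's encoder, MLP/LSTM predictors, and redundancy mechanism are not reproduced.

```python
def scan_cost(boundaries, queries, domain):
    """Rows scanned if every partition overlapping a query is read in full."""
    edges = [domain[0], *sorted(boundaries), domain[1]]
    parts = list(zip(edges, edges[1:]))
    cost = 0
    for q_lo, q_hi in queries:
        for p_lo, p_hi in parts:
            if p_lo < q_hi and q_lo < p_hi:      # partition overlaps the query range
                cost += p_hi - p_lo
    return cost

def beam_search_partition(queries, domain, max_parts=8, beam_width=4):
    """Greedily add split points, keeping the beam_width cheapest boundary sets."""
    candidates = sorted({v for q in queries for v in q if domain[0] < v < domain[1]})
    beam = [frozenset()]
    for _ in range(max_parts - 1):
        expanded = {b | {c} for b in beam for c in candidates if c not in b}
        if not expanded:
            break
        beam = sorted(expanded, key=lambda b: scan_cost(b, queries, domain))[:beam_width]
    return sorted(min(beam, key=lambda b: scan_cost(b, queries, domain)))

# toy usage: predicted range queries over a key domain of [0, 1000)
predicted = [(0, 100), (90, 200), (600, 900), (650, 700)]
print(beam_search_partition(predicted, domain=(0, 1000)))
```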
Funding: Supported by the National Natural Science Foundation of China (Grant No. U1636115), the PAPD fund, the CICAEET fund, and the Open Foundation of the Guizhou Provincial Key Laboratory of Public Big Data (2017BDKFJJ017).
Abstract: During the prediction of software defect distribution, the data redundancy caused by multi-dimensional measurement leads to a decrease in prediction accuracy. To solve this problem, this paper proposes a novel software defect prediction model based on a neighborhood preserving embedding support vector machine (NPESVM) algorithm. The model uses an SVM as the basic classifier of the software defect distribution prediction model and combines it with the NPE algorithm to keep the local geometric structure of the data unchanged during dimensionality reduction, which avoids the loss of SVM precision caused by data loss after attribute reduction. Compared with single-SVM and LLE-SVM prediction algorithms, the proposed model improves the F-measure of software defect distribution prediction by 3%-4%.
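NPE is not part of common machine-learning toolkits, so the following NumPy/SciPy sketch shows one textbook way to compute an NPE projection (LLE-style local reconstruction weights followed by a generalized eigenproblem) and feed the reduced features to an SVM. The hyperparameters, regularization terms, and toy data are assumptions; the paper's exact formulation and evaluation are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def npe_projection(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Neighborhood Preserving Embedding: a linear map that keeps LLE-style
    local reconstruction weights. Returns a (d, n_components) projection."""
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :n_neighbors]

    # reconstruction weights W: x_i ~ sum_j W_ij x_j, each row sums to 1
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                         # centred local neighbourhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularise the Gram matrix
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()

    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    # generalised eigenproblem  X^T M X a = lambda X^T X a; keep smallest eigenvectors
    A = X.T @ M @ X
    B = X.T @ X + reg * np.eye(d)
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]

# toy usage: reduce software-metric vectors, then classify defect-proneness with an SVM
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
P = npe_projection(X, n_neighbors=10, n_components=5)
clf = SVC(kernel="rbf").fit(X @ P, y)
```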
Funding: Supported by the National Science Foundation of China (Grant No. 60842006).
Abstract: Proxy re-encryption (PRE) has recently attracted considerable attention from researchers. It potentially has many useful applications in network communication and file sharing, and a secure distributed cryptographic file system is one of them. Practical applications of PRE, however, are few, and even fewer have been tested by systematically designed experiments. Applying a couple of representative algorithms proposed by BBS, Ateniese, Shao, et al., a distributed file system is designed. In the system, supporting mechanisms such as data dispersal and dynamic file sharing are well applied, and features such as flexible authorization and data redundancy are embraced. A comparative evaluation shows that the system is practical and efficient.
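The abstract lists data dispersal and data redundancy among the system's mechanisms. As a loose illustration of dispersal with redundancy (not of proxy re-encryption itself, and not necessarily the coding the described system uses), the sketch below splits data into fixed-size chunks plus one XOR parity chunk so that any single lost chunk can be rebuilt.

```python
def disperse(data: bytes, n_chunks: int = 4):
    """Split data into n_chunks equal slices plus one XOR parity slice."""
    size = -(-len(data) // n_chunks)                   # ceiling division
    padded = data.ljust(size * n_chunks, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(n_chunks)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks, bytes(parity)

def recover(chunks, parity, lost_index):
    """Rebuild a single missing chunk from the surviving chunks and the parity."""
    rebuilt = bytearray(parity)
    for j, chunk in enumerate(chunks):
        if j == lost_index:
            continue
        for i, b in enumerate(chunk):
            rebuilt[i] ^= b
    return bytes(rebuilt)

# toy usage: lose chunk 2, then reconstruct it from the other chunks plus parity
chunks, parity = disperse(b"distributed cryptographic file system demo", 4)
assert recover(chunks, parity, 2) == chunks[2]
```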
Funding: Supported by the Common Key Technology Innovation Special of Key Industries of the Chongqing Science and Technology Commission under Grant No. cstc2017zdcy-zdyfX0067.
Abstract: Person re-identification has been a hot research issue in the field of computer vision. In recent years, with the maturity of the theory, a large number of excellent methods have been proposed. However, large-scale data sets and huge networks make training a time-consuming process, and the parameters and values generated during training also take up considerable computer resources. Therefore, we apply a distributed cloud computing method to perform the person re-identification task. Using a distributed data storage method, pedestrian data sets and parameters are stored in cloud nodes. To speed up operational efficiency and increase fault tolerance, we add a data redundancy mechanism that copies and stores data blocks on different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete the training by model parallelism. By comparing and analyzing the accuracy and operation speed of the distributed model on the video-based dataset MARS, the results show that our distributed model achieves a faster training speed.
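The data-distribution step can be illustrated with an ordinary consistent-hashing ring that places replicated data blocks on cloud nodes. The virtual-node count, hash function, and replication factor below are assumptions, and the paper's hash loop optimization algorithm is not reproduced.

```python
import bisect
import hashlib

class HashRing:
    """Consistent hashing ring with virtual nodes and simple replication (sketch)."""

    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._h(f"{node}#{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s: str) -> int:
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def place(self, block_id: str, replicas: int = 2):
        """Return the distinct nodes that should store copies of this block."""
        start = bisect.bisect(self.keys, self._h(block_id)) % len(self.ring)
        chosen, i = [], start
        while len(chosen) < replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            i += 1
        return chosen

# toy usage: place a replicated block of pedestrian data on three cloud nodes
ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.place("MARS-track-00042.bin"))
```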
Funding: Supported by the Opening Project of the State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System (No. CEMEE2014K0301A).
Abstract: Radar anti-jamming performance evaluation is a necessary link in the process of radar development, introduction, and equipment. This paper proposes and discusses applications of generalized rough set theory to address the problems of big data, incomplete data, and redundant data in the construction of an evaluation index system. Firstly, a mass of real-valued data is converted into interval-valued data to avoid an unacceptable number of equivalence classes and classification rules, and the interval similarity relation is employed to classify this interval-valued data. Meanwhile, incomplete data are handled by a new definition of the connection degree tolerance relation for both interval-valued and single-valued data, which describes rough sets better than the traditional limited tolerance relation. Then, an E-condition entropy-based heuristic algorithm is applied to attribute reduction to optimize the evaluation index system, and final decision rules can be extracted for system evaluation. Finally, the feasibility and advantage of the proposed methods are verified by a real example of radar anti-jamming performance evaluation.
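As a rough illustration of classifying interval-valued index data by a similarity relation, the sketch below scores two intervals by overlap over union and groups objects whose pairwise similarity exceeds a threshold on every attribute. The paper's interval similarity relation and connection degree tolerance relation are more elaborate; the measure, threshold, and toy data here are assumptions.

```python
def interval_similarity(a, b):
    """Overlap length over union length for two closed intervals (assumed measure)."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    overlap = max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))
    union = max(a_hi, b_hi) - min(a_lo, b_lo)
    return overlap / union if union > 0 else 1.0

def similar_objects(objects, threshold=0.5):
    """Group objects whose intervals are pairwise similar on every attribute."""
    classes = []
    for name, attrs in objects.items():
        for cls in classes:
            if all(
                all(interval_similarity(attrs[k], objects[m][k]) >= threshold for k in attrs)
                for m in cls
            ):
                cls.append(name)
                break
        else:
            classes.append([name])
    return classes

# toy usage: anti-jamming indicators recorded as normalized intervals per radar
radars = {
    "radar-1": {"jam_margin": (0.60, 0.80), "detect_range": (0.50, 0.70)},
    "radar-2": {"jam_margin": (0.65, 0.85), "detect_range": (0.55, 0.75)},
    "radar-3": {"jam_margin": (0.10, 0.30), "detect_range": (0.20, 0.40)},
}
print(similar_objects(radars))   # radar-1 and radar-2 fall into the same class
```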
Funding: Supported in part by the National Key Research and Development Program (Grant No. 2023YFB3211200) and the National Natural Science Foundation of China (Grant Nos. U21A6003 and L2324213).
Abstract: The rapid growth of the Internet of Things (IoT) and embodied intelligence has increased the demand for sensor nodes that conserve energy and reduce data transmission, especially in resource-limited applications that rely heavily on sensors. Event-based sensors have emerged to meet this demand by reducing data redundancy and lowering power consumption. Within this domain, MEMS (Micro-Electro-Mechanical Systems) inertial switches stand out as promising alternatives to traditional commercial accelerometers and gyroscopes, catering to the widespread need for inertial sensing. This review categorizes the key aspects of optimizing the performance of MEMS inertial switches, with a focus on threshold sensitivity, directional responsiveness, and contact performance. It explores the technological pathways for achieving these objectives and highlights the wide-ranging applications of MEMS inertial switches, especially in scenarios characterized by energy constraints, large-scale deployments, and harsh environments. Additionally, the current challenges faced in the field are analyzed, and future research directions are proposed to enhance the versatility and integration of MEMS inertial switches, thereby promoting their broader adoption and utility.
Funding: Supported in part by NIH R01 (Nos. CA120540 and EB000225) and the Illinois Department of Public Health Ticket for the Cure Grant. E. Y. Sidky was supported in part by a Career Development Award from NIH SPORE (No. CA125183-03).
Abstract: A consistency condition is developed for computed tomography (CT) projection data acquired from a straight-line X-ray source trajectory. The condition states that integrals of normalized projection data along detector lines parallel to the X-ray path must be equal. The projection data are required to be untruncated only along the detector lines parallel to the X-ray path, a less restrictive requirement than Fourier conditions, which necessitate completely untruncated data. The condition is implemented numerically on simple image functions, a discretization error bound is estimated, and detection of motion inconsistencies is demonstrated. The results show that the consistency condition may be used to quantitatively compare the quality of projection data sets obtained from different scans of the same image object.
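The paper's condition applies to a straight-line source trajectory and is not reproduced here. A much simpler classical relative, the first-order Helgason-Ludwig moment condition for parallel-beam data, conveys the same flavor: a low-order moment of every projection must follow a fixed analytic form, and views that violate it (for example, because the object moved) stand out. Everything in the sketch below, including the phantom, geometry, and threshold, is an assumption for illustration only.

```python
import numpy as np
from scipy.ndimage import rotate

# off-centre disk phantom so that its first moment varies with view angle
n = 128
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = ((xx + 0.25) ** 2 + (yy - 0.1) ** 2 < 0.3 ** 2).astype(float)

def projection(image, angle_deg):
    """Parallel-beam line integrals: rotate the image, then sum along rows."""
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.array([projection(phantom, a) for a in angles])

# simulate motion: a few views see a shifted object
moved = np.roll(np.roll(phantom, 8, axis=1), 8, axis=0)
sino[25:28] = [projection(moved, a) for a in angles[25:28]]

# for a static object, the first moment of each view follows A*cos(theta) + B*sin(theta)
s = np.arange(n) - (n - 1) / 2                     # detector coordinate
theta = np.deg2rad(angles)
m1 = sino @ s
design = np.column_stack([np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(design, m1, rcond=None)
residual = np.abs(m1 - design @ coef)
print(np.nonzero(residual > 4 * np.median(residual))[0])   # expected to flag the moved views
```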
Funding: Supported by the National Key Research and Development Program of China (2018YFB0203901), the National Natural Science Foundation of China (Grant No. 61772053), the Hebei Youth Talents Support Project (BJ2019008), and the Natural Science Foundation of Hebei Province (F2020204003).
Abstract: The authors of this paper previously proposed the global virtual data space system (GVDS) to aggregate the scattered and autonomous storage resources of China's national supercomputer grid (the National Supercomputing Centers in Guangzhou, Jinan, and Changsha, the Shanghai Supercomputing Center, and the Computer Network Information Center of the Chinese Academy of Sciences) into a storage system that spans the wide area network (WAN) and realizes unified management of global storage resources in China. The GVDS has been successfully deployed in the China National Grid environment. However, when accessing and sharing remote data over the WAN, the GVDS causes redundant data transmission and wastes a large amount of network bandwidth. In this paper, we propose an edge cache system as a supplementary system of the GVDS to improve the performance of upper-level applications that access and share remote data. Specifically, we first design the architecture of the edge cache system and then study its key technologies: an edge cache index mechanism based on double-layer hashing, an edge cache replacement strategy based on the GDSF algorithm, request routing based on consistent hashing, and cluster membership maintenance based on the SWIM protocol. The experimental results show that, in terms of function, the edge cache system implements the relevant operations (read, write, deletion, modification, etc.) and is compatible with the POSIX interface. In terms of performance, it greatly reduces the amount of data transmitted and increases the data access bandwidth when the accessed file resides in the edge cache, i.e., its performance is close to that of a network file system in a local area network (LAN).
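To make the replacement strategy concrete, here is a small sketch of GDSF (Greedy-Dual Size Frequency) eviction, in which each cached object receives priority L + frequency * cost / size and the inflation value L is raised to the priority of the last evicted object. The capacities, cost function, and file names are assumptions; this is not the edge cache system's actual code.

```python
class GDSFCache:
    """Greedy-Dual Size Frequency replacement for an edge cache (sketch)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                       # inflation value
        self.entries = {}                  # key -> dict(size, freq, priority)

    def _priority(self, size, freq, cost=1.0):
        return self.L + freq * cost / size

    def get(self, key):
        e = self.entries.get(key)
        if e is None:
            return None                    # cache miss; the caller would fetch from the GVDS
        e["freq"] += 1
        e["priority"] = self._priority(e["size"], e["freq"])
        return e

    def put(self, key, size):
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k]["priority"])
            self.L = self.entries[victim]["priority"]    # GDSF inflation step
            self.used -= self.entries.pop(victim)["size"]
        if self.used + size <= self.capacity:
            self.entries[key] = {"size": size, "freq": 1,
                                 "priority": self._priority(size, 1)}
            self.used += size

# toy usage with hypothetical file names
cache = GDSFCache(capacity_bytes=10_000)
cache.put("/gvds/shared/model.bin", 6_000)
cache.put("/gvds/shared/trace.log", 3_000)
cache.get("/gvds/shared/model.bin")           # bump the frequency of the hot file
cache.put("/gvds/shared/result.dat", 4_000)   # evicts the lowest-priority entry
```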
Funding: Supported by the US National Institutes of Health (NIH) (Nos. R01 EB007236 and R21 EB009168) and in part by Siemens Healthcare.
Abstract: We present a theoretically exact and stable computed tomography (CT) reconstruction algorithm that is capable of handling interrupted illumination and therefore of using all measured data at arbitrary pitch. The algorithm is based on a differentiated backprojection (DBP) on M-lines. First, we discuss the problem of interrupted illumination and how it affects the DBP. Then we show that it is possible to take advantage of some properties of the DBP to compensate for the effects of interrupted illumination in a mathematically exact way. From there, we have developed an efficient algorithm that we have successfully implemented. We show encouraging preliminary results using both computer-simulated data and real data, demonstrating that our method achieves a substantial reduction in image noise when the helix pitch is decreased compared with the maximum-pitch case. We conclude that the proposed algorithm defines, for the first time, a theoretically exact and stable reconstruction method that can beneficially use all measured data at arbitrary pitch.