Funding: Funded by University of Transport and Communications (UTC) under grant number T2025-CN-004.
Abstract: Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
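Below is a minimal Python sketch (illustrative names, not the authors' implementation) of how the stated per-block payload arises: 4 bits on the block maximum, 3 more on the second-largest pixel when its gap to the maximum exceeds a threshold, and the same on the minimum side, giving at most 4 + 3 + 4 + 3 = 14 bits per block. The block size and threshold value are assumptions.

```python
import numpy as np

def block_capacity(block, threshold=2):
    """Estimate how many secret bits the described scheme could embed in one block."""
    px = np.sort(block.ravel())           # pixels in ascending order, as in the paper
    bits = 4                              # the maximum pixel carries 4 bits
    if px[-1] - px[-2] > threshold:       # second-largest qualifies for 3 more bits
        bits += 3
    bits += 4                             # minimum-side counterpart: 4 bits
    if px[1] - px[0] > threshold:         # second-smallest qualifies for 3 more bits
        bits += 3
    return bits

def image_capacity(img, block=4, threshold=2):
    """Sum the estimated payload over non-overlapping blocks of the cover image."""
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    total = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            total += block_capacity(img[i:i + block, j:j + block], threshold)
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print("estimated capacity (bits):", image_capacity(cover.astype(int)))
```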
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (Nos. 622RC617 and 624RC485) and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (SKLNST-2023-1-07).
Abstract: The advent of the digital age has consistently provided impetus for facilitating global trade, as evidenced by the numerous customs clearance documents and participants involved in the international trade process, including enterprises, agents, and government departments. However, the urgent issue that requires immediate attention is how to achieve secure and efficient cross-border data sharing among these government departments and enterprises in complex trade processes. To address this need, this paper proposes a data exchange architecture employing Multi-Authority Attribute-Based Encryption (MA-ABE) in combination with blockchain technology. The scheme supports proxy decryption, attribute revocation, and policy update, while allowing each participating entity to manage its keys autonomously, ensuring system security and enhancing trust among participants. To enhance system decentralization, the architecture includes a mechanism in which multiple institutions interact with smart contracts and jointly participate in the generation of public parameters. Integration with the multi-party process execution engine Caterpillar boosts the transparency of cross-border information flow and the cooperation between different organizations. The scheme ensures the auditability of data access control information and the visualization of on-chain data sharing. The MA-ABE scheme is statically secure under the q-Decisional Parallel Bilinear Diffie-Hellman Exponent (q-DPBDHE2) assumption in the random oracle model, and it resists ciphertext rollback attacks to achieve true backward and forward security. Theoretical analysis and experimental results demonstrate the suitability of the scheme for cross-border data collaboration between different institutions.
Funding: Granted by the National Science & Technology Major Projects of China (Grant No. 2016ZX05033).
Abstract: 1 Introduction. Information technology has been playing an ever-increasing role in geoscience. Sophisticated database platforms are essential for geological data storage, analysis, and the exchange of Big Data (Feblowitz, 2013; Zhang et al., 2016; Teng et al., 2016; Tian and Li, 2018). The United States has built an information-sharing platform for state-owned scientific data as a national strategy.
Funding: Supported by the National Natural Science Foundation of China (60075015) and the Key Project of the Scientific and Technological Department in Anhui.
Abstract: This paper first puts forward a case-based system framework built on data mining techniques. It then examines the possibility of using neural networks as the retrieval method in such a case-based system. Within this system, we propose data mining algorithms to discover case knowledge, together with other supporting algorithms.
Abstract: Since web-based GIS processes large volumes of spatial geographic information over the Internet, the efficiency of spatial data query processing and transmission needs to be improved. This paper presents two efficient methods for this purpose: the division transmission and progressive transmission methods. In the division transmission method, a map is divided into several parts, called “tiles”, and only the requested tiles are transmitted to a client. In the progressive transmission method, a map is split into several phase views based on the significance of vertices, and the server produces a target object and then transmits it progressively when this spatial object is requested by a client. To realize these methods, the “tile division” and “priority order estimation” algorithms and the corresponding data transmission strategies are proposed in this paper. Compared with such traditional methods as “map total transmission” and “layer transmission”, the web-based GIS data transmission proposed in this paper improves data transmission efficiency by a large margin.
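A rough sketch of the “tile division” idea under assumed names and parameters: the map extent is cut into a grid of tiles and only the tiles overlapping the client's requested view are selected for transmission. This illustrates the concept, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    row: int
    col: int
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def divide_into_tiles(xmin, ymin, xmax, ymax, rows, cols):
    """Partition the map extent into rows x cols tiles."""
    dx, dy = (xmax - xmin) / cols, (ymax - ymin) / rows
    return [Tile(r, c,
                 xmin + c * dx, ymin + r * dy,
                 xmin + (c + 1) * dx, ymin + (r + 1) * dy)
            for r in range(rows) for c in range(cols)]

def tiles_for_view(tiles, view):
    """Return only the tiles overlapping the client's requested view rectangle."""
    vxmin, vymin, vxmax, vymax = view
    return [t for t in tiles
            if not (t.xmax <= vxmin or t.xmin >= vxmax or
                    t.ymax <= vymin or t.ymin >= vymax)]

if __name__ == "__main__":
    tiles = divide_into_tiles(0, 0, 100, 100, rows=4, cols=4)
    needed = tiles_for_view(tiles, view=(10, 10, 40, 30))
    print(len(needed), "of", len(tiles), "tiles transmitted")
```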
Funding: Supported by the National High Technology Research and Development Program of China under Grant No. 2015AA016902, the National Natural Science Foundation of China under Grant Nos. 61435013 and 61405188, and the K. C. Wong Education Foundation.
Abstract: An 8×10 GHz receiver optical sub-assembly (ROSA) consisting of an 8-channel arrayed waveguide grating (AWG) and an 8-channel PIN photodetector (PD) array is designed and fabricated based on silica hybrid integration technology. Multimode output waveguides in the silica AWG with a 2% refractive index difference are used to obtain flat-top spectra. The output waveguide facet is polished to a 45° bevel to change the light propagation direction into the mesa-type PIN PD, which simplifies the packaging process. The experimental results show that the single-channel 1 dB bandwidth of the AWG ranges from 2.12 nm to 3.06 nm, the ROSA responsivity ranges from 0.097 A/W to 0.158 A/W, and the 3 dB bandwidth is up to 11 GHz. The device is promising for application in eight-lane WDM transmission systems for data center interconnection.
Abstract: This paper considers the problem of applying data mining techniques to the aeronautical field. The truncation method, one of the techniques in aeronautical data mining, can be used to efficiently handle air-combat behavior data. A technique for air-combat behavior data mining based on the truncation method is proposed to discover air-combat rules or patterns. A simulation platform for air-combat behavior data mining that supports two fighters is implemented. The simulation results show that the proposed technique is feasible in terms of both efficiency and effectiveness.
Funding: Supported by the NSC under Grant Nos. NSC-101-2221-E-239-032 and NSC-102-2221-E-239-020.
Abstract: Sensor nodes in a wireless sensor network (WSN) are typically powered by batteries, so their energy is constrained. Our design goal is to efficiently utilize the energy of each sensor node to extend its lifetime, and thereby prolong the lifetime of the whole WSN. In this paper, we propose a path-based data aggregation scheme (PBDAS) for grid-based wireless sensor networks. To extend the lifetime of a WSN, we construct a grid infrastructure by partitioning the whole sensor field into a grid of cells. Each cell has a head responsible for aggregating its own data with the data sensed by the other nodes in the same cell and then transmitting the result. To efficiently and rapidly transmit the data to the base station (BS), we link the cell heads to form a chain. Each cell head on the chain takes turns becoming the chain leader responsible for transmitting data to the BS. Aggregated data moves from head to head along the chain, and finally the chain leader transmits it to the BS. In PBDAS, only the cell heads need to transmit data toward the BS, so the number of transmissions to the BS decreases substantially. Besides, the cell heads and chain leader are designated in turn according to their energy levels, so that the energy depletion of nodes is evenly distributed. Simulation results show that the proposed PBDAS extends the lifetime of sensor nodes, thereby prolonging the lifetime of the whole network.
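The following is a simplified, hedged sketch of one PBDAS round as described above: cell heads hold their cells' aggregated readings, the head with the most residual energy acts as chain leader, data is fused head to head along the chain, and only the leader pays the long-haul cost to the base station. The aggregation operator, energy costs, and class names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CellHead:
    cell_id: int
    energy: float
    readings: list = field(default_factory=list)

def aggregate(values):
    # placeholder fusion operator; the paper treats aggregation abstractly
    return sum(values) / len(values) if values else 0.0

def pbdas_round(chain):
    """One data-gathering round along the chain of cell heads."""
    li = max(range(len(chain)), key=lambda i: chain[i].energy)  # leader = most residual energy
    leader = chain[li]
    # fuse each head's cell-level aggregate as data moves head to head toward the leader
    left = [aggregate(h.readings) for h in chain[:li]]
    right = [aggregate(h.readings) for h in chain[li + 1:]]
    payload = aggregate(left + [aggregate(leader.readings)] + right)
    leader.energy -= 1.0                 # long-haul transmission to the base station
    for i, h in enumerate(chain):
        if i != li:
            h.energy -= 0.1              # short hop to the neighboring head
    return leader.cell_id, payload

if __name__ == "__main__":
    chain = [CellHead(i, energy=10.0, readings=[float(i), float(i) + 1]) for i in range(5)]
    for _ in range(3):                   # leadership rotates as energy is consumed
        print(pbdas_round(chain))
```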
Funding: Financial support extended for this work by the Beijing Natural Science Foundation (Grant 2232066) and the Open Project Foundation of the State Key Laboratory of Solid Lubrication (Grant LSL-2212).
Abstract: The composition of base oils affects the performance of lubricants made from them. This paper proposes a hybrid model based on the gradient-boosted decision tree (GBDT) to analyze the effect of different ratios of the KN4010, PAO40, and PriEco3000 components in a composite base oil system on the performance of lubricants. The study was conducted under small laboratory sample conditions, and a data expansion method using the Gaussian Copula function was proposed to improve the prediction ability of the hybrid model. The study also compared four optimization algorithms, the slime mould algorithm (SMA), genetic algorithm (GA), whale optimization algorithm (WOA), and seagull optimization algorithm (SOA), for predicting the kinematic viscosity at 40℃, kinematic viscosity at 100℃, viscosity index, and oxidation induction time of the lubricant. The results showed that the Gaussian Copula data expansion method improved the prediction ability of the hybrid model in the case of small samples. The SOA-GBDT hybrid model had the fastest convergence speed and the best prediction effect, with determination coefficients (R^2) for the four lubricant indicators reaching 0.98, 0.99, 0.96, and 0.96, respectively. Thus, the model significantly reduces prediction error and has good predictive ability.
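A minimal sketch, under assumptions, of Gaussian-Copula data expansion for small samples: each feature is mapped to normal scores through its empirical ranks, the correlation of those scores is estimated, correlated normal samples are drawn, and the samples are mapped back through each feature's empirical quantiles. The function names and the back-transformation via empirical quantiles are assumptions, not the paper's exact procedure.

```python
import numpy as np
from statistics import NormalDist

def copula_expand(X, n_new, rng=None):
    """X: (n_samples, n_features) array; returns n_new synthetic rows."""
    rng = np.random.default_rng(rng)
    nd = NormalDist()
    n, d = X.shape
    # 1) empirical CDF values (ranks scaled into (0, 1)) per feature
    u = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (n + 1)
    # 2) transform to standard normal scores and estimate their correlation
    z = np.vectorize(nd.inv_cdf)(u)
    corr = np.corrcoef(z, rowvar=False)
    # 3) sample correlated normals from the fitted Gaussian copula
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = np.vectorize(nd.cdf)(z_new)
    # 4) map back to the data scale through each feature's empirical quantiles
    X_new = np.column_stack([np.quantile(X[:, j], u_new[:, j]) for j in range(d)])
    return X_new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=(20, 3))          # stand-in for a small laboratory dataset
    print(copula_expand(base, n_new=100, rng=2).shape)
```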
Abstract: Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts of anytime and anyplace to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage capability (e.g., cache memory) of smart devices using the concept called Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge, especially when a node's internal memory exceeds its limit and data with the highest degree of freshness cannot be accommodated, so the entire scenario behaves like a traditional network. In such cases, data caching is not performed by intermediate nodes to guarantee the highest degree of freshness. For the periodic updates sent by data producers, data consumers are expected to receive up-to-date information at the least energy cost. Consequently, there is a challenge in maintaining the tradeoff between freshness and energy consumption during Publisher-Subscriber interaction. In our work, we propose an architecture that overcomes the cache strategy issue with a Smart Caching Algorithm to improve memory management and data freshness. The smart caching strategy updates data at precise intervals while taking garbage data into consideration. It is also observed from experiments that data redundancy can be easily eliminated by ignoring/dropping data packets carrying information that is not of interest to other participating nodes in the network, ultimately optimizing the tradeoff between freshness and the energy required.
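An illustrative sketch (all names assumed) of a freshness-aware content store in the spirit of the described smart caching strategy: entries expire after a per-content freshness period, stale (“garbage”) entries are purged first, and when the store is full the entry closest to expiry is evicted.

```python
import time

class FreshnessCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = {}                      # name -> (data, expiry_time)

    def _purge(self, now):
        stale = [k for k, (_, exp) in self.store.items() if exp <= now]
        for k in stale:                      # drop expired ("garbage") data first
            del self.store[k]

    def put(self, name, data, freshness_s):
        now = time.monotonic()
        self._purge(now)
        if len(self.store) >= self.capacity: # evict the entry closest to expiry
            victim = min(self.store, key=lambda k: self.store[k][1])
            del self.store[victim]
        self.store[name] = (data, now + freshness_s)

    def get(self, name):
        now = time.monotonic()
        entry = self.store.get(name)
        if entry and entry[1] > now:
            return entry[0]                  # cache hit with fresh data
        self.store.pop(name, None)           # stale entry: treat as a miss
        return None

if __name__ == "__main__":
    cs = FreshnessCache(capacity=2)
    cs.put("/sensor/temp", 21.5, freshness_s=5)
    print(cs.get("/sensor/temp"))
```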
Abstract: Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information shoplifting poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, ensuring businesses can focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The proposed HD2C framework addresses breaches of the privacy information of mixed participants of the Internet of Things (IoT) in the cloud. HD2C is developed by combining Federated Learning (FL) with a blockchain consensus algorithm to connect smart contracts with Proof of Authority. The “Data Island” problem can be solved by FL's emphasis on privacy and lightning-fast processing, while blockchain provides a decentralized incentive structure that is impervious to poisoning. FL with blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis results derived from IIoT datasets confirm the superiority of HD2C. When compared to other consensus algorithms, the foundational cost of the blockchain PoA is significant. The accuracy and memory utilization evaluation results predict the total benefits of the system. In comparison to the values 0.004 and 0.04, the value of 0.4 achieves good accuracy. According to the experimental results, the number of transactions per second has minimal impact on memory requirements. The findings of this study resulted in the development of a brand-new IIoT framework based on blockchain technology.
Funding: Supported by the National Natural Science Foundation of China (61304079, 61125306, 61034002), the Open Research Project from SKLMCCS (20120106), the Fundamental Research Funds for the Central Universities (FRF-TP-13-018A), and the China Postdoctoral Science Foundation (2013M530527).
Funding: Primarily supported by the National 973 Fundamental Research Program of China (Grant No. 2013CB430103) and the Department of Transportation Federal Aviation Administration (Grant No. NA17RJ1227) through the National Oceanic and Atmospheric Administration; also supported by the National Science Foundation of China (Grant No. 41405100) and the Fundamental Research Funds for the Central Universities (Grant No. 20620140343).
Abstract: The traditional threat score based on fixed thresholds for precipitation verification is sensitive to intensity forecast bias. In this study, the neighborhood precipitation threat score is modified by defining the thresholds in terms of the percentiles of overall precipitation instead of fixed threshold values, which reduces the impact of intensity forecast bias on the calculated threat score. The method is tested with the forecasts of a tropical storm that re-intensified after making landfall and caused heavy flooding. The forecasts are produced with and without radar data assimilation. The forecast with assimilation of both radial velocity and reflectivity produces precipitation patterns that better match observations but have a large positive intensity bias. When using fixed thresholds, the neighborhood threat scores fail to yield high scores for forecasts that have a good pattern match with observations, due to the large intensity bias. In contrast, the percentile-based neighborhood method yields the highest score for the forecast with the best pattern match and the smallest position error. The percentile-based method also yields scores that are more consistent with object-based verifications, which are less sensitive to intensity bias, demonstrating the potential value of percentile-based verification.
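A bare-bones sketch of the percentile-thresholding idea, without the neighborhood component: the event threshold is taken as the same percentile of the forecast field and of the observed field separately, so a uniform intensity bias no longer changes which grid points count as events. The synthetic fields and the 90th-percentile choice are illustrative assumptions.

```python
import numpy as np

def threat_score(forecast, observed, event_fcst, event_obs):
    """Classic threat score (CSI): hits / (hits + misses + false alarms)."""
    hits = np.sum(event_fcst & event_obs)
    misses = np.sum(~event_fcst & event_obs)
    false_alarms = np.sum(event_fcst & ~event_obs)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

def percentile_based_ts(forecast, observed, pct=90.0):
    thr_f = np.percentile(forecast, pct)   # threshold from the forecast's own distribution
    thr_o = np.percentile(observed, pct)   # threshold from the observations
    return threat_score(forecast, observed, forecast >= thr_f, observed >= thr_o)

def fixed_threshold_ts(forecast, observed, thr=10.0):
    return threat_score(forecast, observed, forecast >= thr, observed >= thr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, size=(100, 100))
    fcst = 1.5 * obs + rng.normal(0, 1, size=obs.shape)   # good pattern, large positive bias
    print("fixed-threshold TS:  ", round(fixed_threshold_ts(fcst, obs), 3))
    print("percentile-based TS: ", round(percentile_based_ts(fcst, obs), 3))
```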
Funding: Supported by the Fundamental Research Program of Shanxi Province (Nos. 202203021211088, 202403021212254, 202403021221109) and the Graduate Research Innovation Project in Shanxi Province (No. 2024KY616).
Abstract: Data collected in fields such as cybersecurity and biomedicine often encounter high dimensionality and class imbalance. To address the problem of low classification accuracy for minority class samples arising from numerous irrelevant and redundant features in high-dimensional imbalanced data, we proposed a novel feature selection method named AMF-SGSK based on adaptive multi-filter and subspace-based gaining sharing knowledge. Firstly, a balanced dataset was obtained by random under-sampling. Secondly, combining the feature importance score with the AUC score for each filter method, we proposed a concept called feature hardness to judge the importance of a feature, which could adaptively select the essential features. Finally, the optimal feature subset was obtained by gaining sharing knowledge in multiple subspaces. This approach effectively achieves dimensionality reduction for high-dimensional imbalanced data. The experimental results on 30 benchmark imbalanced datasets showed that AMF-SGSK performed better than eight other commonly used algorithms, including BGWO and IG-SSO, in terms of F1-score, AUC, and G-mean. The mean values of F1-score, AUC, and G-mean for AMF-SGSK are 0.950, 0.967, and 0.965, respectively, the highest among all algorithms. The mean value of G-mean is higher than those of IG-PSO, ReliefF-GWO, and BGOA by 3.72%, 11.12%, and 20.06%, respectively. Furthermore, the selected feature ratio is below 0.01 across the ten selected datasets, further demonstrating the proposed method's overall superiority over competing approaches. AMF-SGSK can adaptively remove irrelevant and redundant features and effectively improve the classification accuracy of high-dimensional imbalanced data, providing scientific and technological references for practical applications.
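The sketch below is one interpretation of the "feature hardness" idea: for each filter, a feature's normalized importance score is combined with its single-feature AUC, and the results are averaged across filters. The specific filters, the equal weighting, and all function names are assumptions rather than the paper's definition.

```python
import numpy as np

def single_feature_auc(x, y):
    """AUC of one feature used as a score for the positive class (Mann-Whitney form)."""
    pos, neg = x[y == 1], x[y == 0]
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1
    u = ranks[: len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    auc = u / (len(pos) * len(neg))
    return max(auc, 1 - auc)                 # direction-agnostic

def variance_filter(X, y):
    return X.var(axis=0)                     # simple stand-in filter

def correlation_filter(X, y):
    return np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

def feature_hardness(X, y, filters=(variance_filter, correlation_filter)):
    scores = []
    for f in filters:
        imp = np.asarray(f(X, y), dtype=float)
        imp = (imp - imp.min()) / (imp.max() - imp.min() + 1e-12)   # normalize importance
        auc = np.array([single_feature_auc(X[:, j], y) for j in range(X.shape[1])])
        scores.append(0.5 * imp + 0.5 * auc)                        # assumed equal weighting
    return np.mean(scores, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    X = rng.normal(size=(200, 10))
    X[:, 0] += 1.5 * y                        # make feature 0 informative
    print(np.round(feature_hardness(X, y), 3))
```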
Funding: Supported by grants from the National Key R&D Program of China (2017YFA0106700), the National Natural Science Foundation of China (81772614, U1611261, 81772586 and 81602461), the China Postdoctoral Science Foundation (2017M610573), the Young Elite Scientists Sponsorship Program by CAST (2017QNRC001), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2017, to J. Zheng), and the Fundamental Research Funds for the Central Universities (SYSU:17ykzd32).
Abstract: Long noncoding RNAs (lncRNAs) have been increasingly implicated in a variety of human diseases, including autoimmune disease (Wu et al., 2015), neurodegenerative diseases (Wapinski and Chang, 2011) and cancer (Huarte, 2015). Due to recent advances in next-generation sequencing technologies, tens of thousands of lncRNAs have been identified and annotated, and a number of them have been proven to be functional in diverse biological processes through various mechanisms.
Abstract: Power transmission lines are a critical component of the entire power system, and ice accretion incidents on various types of power systems can result in immeasurable harm. Currently, network models used for ice detection on power transmission lines require a substantial amount of sample data to support their training, and their detection accuracy is significantly affected by inaccurate annotations in the training dataset. Therefore, we propose a transformer-based detection model, structured into two stages to collectively address the impact of inaccurate datasets on model training. In the first stage, a spatial similarity enhancement (SSE) module is designed to leverage spatial information to enhance the construction of the detection framework, thereby improving the accuracy of the detector. In the second stage, a target similarity enhancement (TSE) module is introduced to enhance object-related features, reducing the impact of inaccurate data on model training and thereby expanding global correlation. Additionally, by incorporating a multi-head adaptive attention window (MAAW), spatial information is combined with category information to achieve information interaction. Simultaneously, a quasi-wavelet structure, compatible with deep learning, is employed to highlight subtle features at different scales. Experimental results indicate that the proposed model outperforms existing mainstream detection models, demonstrating superior performance and stability.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62433005, 62272036, 62132003, and 62173167).
Abstract: The consultation intention of emergency decision-makers in urban rail transit (URT) is input into the emergency knowledge base in the form of domain questions to obtain emergency decision support services. This approach facilitates the rapid collection of complete knowledge and rules to form effective decisions. However, the current degree of structuring of the URT emergency knowledge base remains low, and the domain questions lack labeled datasets, resulting in a large deviation between the consultation outcomes and the intended objectives. To address this issue, this paper proposes a question intention recognition model for the URT emergency domain, leveraging knowledge graph (KG) and data enhancement technology. First, structured storage of emergency cases and emergency plans is realized based on the KG. Subsequently, a comprehensive question template is developed, and the labeled dataset of emergency domain questions in URT is generated through the KG. Lastly, data enhancement is applied through prompt learning and the NLP Chinese Data Augmentation (NLPCDA) tool, and an intention recognition model combining Generalized Auto-regression Pre-training for Language Understanding (XLNet) and the Recurrent Convolutional Neural Network for Text Classification (TextRCNN) is constructed. Word embeddings are generated by XLNet, context information is further captured using a Bidirectional Long Short-Term Memory network (BiLSTM), and salient features are extracted with a Convolutional Neural Network (CNN). Experimental results demonstrate that the proposed model can enhance the clarity of classification and the identification of domain questions, thereby providing supportive knowledge for emergency decision-making in URT.
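A hedged PyTorch sketch of the described recognition pipeline: sequence embeddings (random tensors standing in for XLNet outputs) pass through a BiLSTM for context, a 1-D convolution with max-pooling extracts salient features, and a linear layer produces the intention-class logits. All layer sizes, the number of classes, and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntentTextRCNN(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, conv_channels=64,
                 kernel_size=3, num_classes=8):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, conv_channels, kernel_size, padding=1)
        self.classifier = nn.Linear(conv_channels, num_classes)

    def forward(self, emb):                  # emb: (batch, seq_len, emb_dim)
        ctx, _ = self.bilstm(emb)            # contextualized: (batch, seq_len, 2*hidden)
        feats = torch.relu(self.conv(ctx.transpose(1, 2)))   # (batch, C, seq_len)
        pooled = feats.max(dim=2).values     # max-pool salient features over the sequence
        return self.classifier(pooled)       # (batch, num_classes) logits

if __name__ == "__main__":
    # stand-in for XLNet token embeddings of 4 questions, 32 tokens each
    fake_xlnet_output = torch.randn(4, 32, 768)
    logits = IntentTextRCNN()(fake_xlnet_output)
    print(logits.shape)                      # torch.Size([4, 8])
```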
Funding: Supported by the Applied Basic and Advanced Technology Research Programs of Tianjin (15JCYBJC15900) and the National Natural Science Foundation of China (51378350).
Abstract: Cloud storage service reduces the burden of data users by storing users' data files in the cloud. However, the files might be modified in the cloud, so data users hope to check data file integrity periodically. In a public auditing protocol, there is a trusted auditor with certain capabilities to help users check the integrity of their data files. With the advantage of requiring no public key management and verification, researchers have recently focused on public auditing protocols in ID-based cryptography. However, some existing protocols are vulnerable to forgery attack. In this paper, based on ID-based signature technology, and by strengthening information authentication and the computing power of the auditor, we propose an ID-based public auditing protocol for cloud data integrity checking. We also prove that the proposed protocol is secure in the random oracle model under the assumption that the Diffie-Hellman problem is hard. Furthermore, we compare the proposed protocol with two other ID-based auditing protocols in terms of security features, communication efficiency, and computation cost. The comparisons show that the proposed protocol satisfies more security features with lower computation cost.
Abstract: The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be completed on Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mechanism and physical properties of these celestial objects after comparing with a physical model. In order to explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.