Abstract: Random pixel selection is one of the image steganography methods that has achieved significant success in enhancing the robustness of hidden data. This property makes it difficult for steganalysts' powerful data extraction tools to detect the hidden data and ensures high-quality stego image generation. However, using a seed key to generate non-repeating sequential numbers takes a long time because it requires specific mathematical equations. In addition, these numbers may cluster in certain ranges. Data hidden in these clustered pixels reduces the image quality, which steganalysis tools can detect. Therefore, this paper proposes a data structure that safeguards the steganographic model data and maintains the quality of the stego image. This paper employs the Adelson-Velsky and Landis (AVL) tree data structure to implement the randomized pixel selection technique for data concealment. The AVL tree algorithm provides several advantages for image steganography. First, it ensures balanced tree structures, which leads to efficient data retrieval and insertion operations. Second, the self-balancing nature of AVL trees minimizes clustering by maintaining an even distribution of pixels, thereby preserving stego image quality. The data structure employs the pixel indicator technique for Red, Green, and Blue (RGB) channel extraction. The green channel serves as the foundation for building a balanced binary tree. First, the sender identifies the colored cover image and the secret data. The sender uses the two least significant bits (2-LSB) of the RGB channels to conceal the data's size and associated information. The next step is to create a balanced binary tree based on the green channel. Using the channel pixel indicator on the LSB of the green channel, bits can be concealed in the 2-LSB of the red or blue channel. The first four levels of the data structure tree mask the data size, while subsequent levels conceal the remaining bits of secret data. After embedding the bits in the binary tree level by level, the model restores the AVL tree to create the stego image. Ultimately, the receiver receives this stego image through the public channel, enabling secret data recovery without stego or crypto keys. This method ensures that the stego image appears unsuspicious to potential attackers. Without the extraction algorithm, a third party cannot extract the original secret information from an intercepted stego image. Experimental results showed high levels of imperceptibility and security.
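To make the tree-based ordering concrete, here is a minimal Python sketch of an AVL tree whose level-order traversal yields a pixel embedding order. Keying pixels by (green value, pixel index) and reading the traversal as the embedding order are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: AVL tree built from green-channel values; its level-order
# traversal defines a non-clustered pixel visiting order. Helper names and
# the (green_value, index) keying are assumptions for illustration.
from collections import deque

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    y.height = 1 + max(height(y.left), height(y.right))
    x.height = 1 + max(height(x.left), height(x.right))
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    x.height = 1 + max(height(x.left), height(x.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    root.height = 1 + max(height(root.left), height(root.right))
    balance = height(root.left) - height(root.right)
    if balance > 1 and key < root.left.key:      # left-left case
        return rotate_right(root)
    if balance < -1 and key >= root.right.key:   # right-right case
        return rotate_left(root)
    if balance > 1:                              # left-right case
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if balance < -1:                             # right-left case
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

def level_order_pixel_indices(green_channel):
    """Build an AVL tree keyed by (green value, pixel index) and return
    pixel indices level by level -- the hypothetical embedding order."""
    root = None
    for idx, g in enumerate(green_channel):
        root = insert(root, (g, idx))
    order, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        order.append(node.key[1])
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order

print(level_order_pixel_indices([37, 200, 37, 5, 128, 64]))
```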
Funding: Fully supported by the National Natural Science Foundation of China (61871422), the Natural Science Foundation of Sichuan Province (2023NSFSC1422), and the Central Universities Fund of Southwest Minzu University (ZYN2022032).
Abstract: All kinds of unknown and known signals exist in the actual electromagnetic environment, which hinders the development of practical cognitive radio applications. However, most existing signal recognition models struggle to discover unknown signals while recognizing known ones. In this paper, a compact manifold mixup feature-based open-set recognition approach (OR-CMMF) is proposed to address this problem. First, the proposed approach uses the center loss to constrain decision boundaries so that it obtains compact latent signal feature representations and extends the low-confidence feature space. Second, the latent signal feature representations are used to construct synthetic representations as substitutes for unknown signal categories. These constructed representations then occupy the extended low-confidence space. Finally, the proposed approach applies a distillation loss to adjust the decision boundaries between the known-category signals and the constructed unknown-category substitutes so that it accurately discovers unknown signals. The OR-CMMF approach outperformed other state-of-the-art open-set recognition methods in comprehensive recognition performance and running time, as demonstrated by simulation experiments on two public datasets, RML2016.10a and ORACLE.
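The center-loss constraint in the first step can be illustrated with a short PyTorch sketch; the class count, feature dimension, and weighting below are assumptions for illustration, not the OR-CMMF settings.

```python
# Hedged sketch of a center loss: one learnable center per known class.
# Pulling features toward their class center yields the compact latent
# representations the abstract describes. Hyperparameters are assumed.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared distance of each feature to its labeled class center.
        diff = features - self.centers[labels]
        return 0.5 * (diff ** 2).sum(dim=1).mean()

# Usage: add to the usual classification loss with a small weight.
feats = torch.randn(8, 64)            # batch of latent features (toy)
labels = torch.randint(0, 10, (8,))   # 10 known signal classes (assumed)
center_loss = CenterLoss(num_classes=10, feat_dim=64)
loss = center_loss(feats, labels)
print(loss.item())
```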
Funding: Financially supported by the National Natural Science Foundation of China (Nos. 42050102, 42050101).
Abstract: THE USE OF KNOWLEDGE GRAPH IN NATURAL SCIENCE. The knowledge graph is a field of Artificial Intelligence (AI) that aims to represent knowledge in the form of graphs consisting of nodes and edges, which represent entities and the relationships between them, respectively (Aidan et al., 2022). Although the knowledge graph was popularized recently due to the use of this idea in Google's search engine in 2012 (Amit, 2012), its roots can be traced back to the emergence of the Semantic Web as well as earlier work in ontology (Aggarwal, 2021).
Funding: Partially supported by the National Natural Science Foundation of China under Grant 61701064, the Chongqing Natural Science Foundation under Grant cstc2019jcyj-msxmX0264, and the Sichuan Science and Technology Program under Grant 2022YFQ0017.
Abstract: Massive connectivity and limited energy pose significant challenges to deploying enormous numbers of devices in an energy-efficient and environmentally friendly way in the Internet of Things (IoT). Motivated by these challenges, this paper investigates the energy efficiency (EE) maximization problem for downlink cooperative non-orthogonal multiple access (C-NOMA) systems with hardware impairments (HIs). The base station (BS) communicates with several users via a half-duplex (HD) amplify-and-forward (AF) relay. First, we formulate the EE maximization problem of the system under HIs by jointly optimizing the transmit power and the power allocation coefficient (PAC) at the BS, and the transmit power at the relay. The original EE maximization problem is non-convex, and an optimal solution is difficult to obtain directly. We therefore use fractional programming to convert the EE maximization problem into a series of subproblems in subtractive form. Then, variable substitution and the block coordinate descent (BCD) method are used to handle the subproblems. Next, a resource allocation algorithm is proposed to maximize the EE of the system. Finally, simulation results show that the proposed algorithm outperforms the downlink cooperative orthogonal multiple access (C-OMA) scheme.
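The fractional-programming step is typically realized with a Dinkelbach-style iteration that turns the ratio EE = R(p)/P(p) into subtractive subproblems R(p) - λP(p). The sketch below illustrates that generic iteration on a toy one-variable problem; the rate and power models are stand-ins, not the paper's system model.

```python
# Hedged sketch of Dinkelbach's method for fractional programming:
# maximize R(p)/P(p) by repeatedly solving max_p R(p) - lam * P(p).
# Toy rate/power models; the paper's subproblems are more involved.
import numpy as np

def rate(p):            # toy achievable-rate model (assumed)
    return np.log2(1.0 + 5.0 * p)

def power(p):           # toy total power: transmit plus circuit power
    return p + 0.1

def dinkelbach(p_grid, tol=1e-6, max_iter=50):
    lam = 0.0
    for _ in range(max_iter):
        # Inner subtractive subproblem, solved here by grid search.
        p_star = p_grid[np.argmax(rate(p_grid) - lam * power(p_grid))]
        f = rate(p_star) - lam * power(p_star)
        if abs(f) < tol:          # converged: lam equals the optimal EE
            break
        lam = rate(p_star) / power(p_star)
    return p_star, lam

p_grid = np.linspace(1e-3, 1.0, 1000)
p_opt, ee_opt = dinkelbach(p_grid)
print(f"optimal power {p_opt:.3f}, energy efficiency {ee_opt:.3f} bit/J")
```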
Funding: Supported by the National Natural Science Foundation of China (Nos. 42488201, 42172137, 42050104, and 42050102), the National Key R&D Program of China (No. 2023YFF0804000), and the Sichuan Provincial Youth Science & Technology Innovative Research Group Fund (No. 2022JDTD0004).
Abstract: Geological reports are a significant accomplishment of geologists engaged in geological investigations and scientific research, as they contain rich data and textual information. With the rapid development of science and technology, a large number of textual reports have accumulated in the field of geology. However, many non-hot topics and non-English-speaking regions are neglected in mainstream geoscience databases for geological information mining, making it more challenging for some researchers to extract the necessary information from these texts. Natural Language Processing (NLP) has obvious advantages in processing large amounts of textual data. The objective of this paper is to identify geological named entities in Chinese geological texts using NLP techniques. We propose the RoBERTa-Prompt-Tuning-NER method, which leverages the concept of prompt learning and requires only a small amount of annotated data to train superior models for recognizing geological named entities in low-resource dataset configurations. The RoBERTa layer captures context-based information and longer-distance dependencies through dynamic word vectors. Finally, we conducted experiments on the constructed Geological Named Entity Recognition (GNER) dataset. The experimental results show that the proposed model achieves the highest F1 score, 80.64%, compared with the four baseline algorithms, demonstrating the reliability and robustness of using the model for named entity recognition on geological texts.
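The prompt-learning formulation can be sketched as scoring label words at a masked position in a template. The checkpoint name, template, and label words below are illustrative assumptions, and scoring only the first character of each label word is a deliberate simplification, not the paper's method.

```python
# Hedged sketch of prompt-based entity typing with a masked LM:
# fill "<span> is a [MASK] entity" and compare label-word logits.
# Checkpoint, template, and label words are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "hfl/chinese-roberta-wwm-ext"   # assumed Chinese RoBERTa checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

span = "花岗岩"                                   # candidate span: "granite"
prompt = f"{span}是一种{tok.mask_token}实体。"     # "... is a [MASK] entity."
label_words = {"岩石": "ROCK", "矿物": "MINERAL", "地层": "STRATUM"}

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs["input_ids"] == tok.mask_token_id).nonzero()[0, 1]

# Crude scoring: logit of each label word's first character at [MASK].
scores = {lbl: logits[0, mask_pos, tok.convert_tokens_to_ids(w[0])].item()
          for w, lbl in label_words.items()}
print(max(scores, key=scores.get))
```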
Funding: Supported by the National Natural Science Foundation of China (62371098), the Natural Science Foundation of Sichuan Province (2023NSFSC1422), the National Key Research and Development Program of China (2021YFB2900404), and the Central Universities Fund of Southwest Minzu University (ZYN2022032).
Abstract: In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and incurs high costs. This paper proposes a support data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by exploiting the label information during the early stages of model training, as some training samples are frequently labeled as support data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is at most 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
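As a rough illustration of the idea, the sketch below flags samples whose top-two class probabilities stay close during early epochs (a stand-in border criterion, not the paper's selector) and keeps the most frequently flagged fraction.

```python
# Hedged sketch of a core-set selector in the spirit of the SD method:
# count how often each sample sits near the decision border during early
# epochs, then keep the most frequently flagged fraction. The margin
# criterion and all sizes below are illustrative assumptions.
import numpy as np

def select_coreset(prob_history, keep_fraction=0.3, margin=0.2):
    """prob_history: (epochs, n_samples, n_classes) softmax outputs
    recorded over the early training epochs (assumed available)."""
    sorted_p = np.sort(prob_history, axis=-1)
    top2_margin = sorted_p[..., -1] - sorted_p[..., -2]   # per epoch/sample
    support_counts = (top2_margin < margin).sum(axis=0)   # border frequency
    n_keep = int(keep_fraction * prob_history.shape[1])
    return np.argsort(-support_counts)[:n_keep]           # most-flagged samples

# Toy usage: random "recorded" probabilities, 5 epochs, 1000 samples.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(11), size=(5, 1000))        # 11 signal classes
coreset_idx = select_coreset(probs, keep_fraction=0.3)
print(coreset_idx.shape)                                  # (300,)
```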
Funding: Supported by the National Natural Science Foundation of China (Nos. 62172341, 12204386) and the Sichuan Natural Science Foundation (Nos. 2024NSFSC1365, 2024NSFSC1375, and 2023NSFSC0447).
Abstract: The Greenberger–Horne–Zeilinger (GHZ) paradox shows that it is possible to create a multipartite state involving three or more particles in which the measurement outcomes of the particles are correlated in a way that cannot be explained by classical physics. We extend it to witness quantum networks. We first extend the GHZ paradox to simultaneously verify the GHZ state and Einstein–Podolsky–Rosen states on triangle networks. We then extend the GHZ paradox to witness the entanglement of chain networks consisting of multiple GHZ states. All the present results are robust against noise.
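For reference, the original three-qubit form of the paradox that these extensions build on reads as follows (textbook material, not the paper's network construction):

```latex
% The textbook three-qubit GHZ paradox. For
% |GHZ> = (|000> + |111>)/sqrt(2), the Pauli products satisfy
\begin{align}
  X \otimes Y \otimes Y \,|\mathrm{GHZ}\rangle &= -|\mathrm{GHZ}\rangle, \\
  Y \otimes X \otimes Y \,|\mathrm{GHZ}\rangle &= -|\mathrm{GHZ}\rangle, \\
  Y \otimes Y \otimes X \,|\mathrm{GHZ}\rangle &= -|\mathrm{GHZ}\rangle, \\
  X \otimes X \otimes X \,|\mathrm{GHZ}\rangle &= +|\mathrm{GHZ}\rangle.
\end{align}
% A local hidden-variable model assigns predetermined values
% x_i, y_i = ±1. Since y_i^2 = 1, multiplying the first three
% assignments forces
\begin{equation}
  (x_1 y_2 y_3)(y_1 x_2 y_3)(y_1 y_2 x_3) = x_1 x_2 x_3 = -1 \neq +1,
\end{equation}
% contradicting the fourth line -- no classical assignment works.
```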
Funding: Supported by the Sichuan Science and Technology Program (Grant 2021YFQ0003, acquired by Wenfeng Zheng).
Abstract: Research in the field of medical imaging is an important part of enabling medical robots to operate on human organs. A medical robot lies at the intersection of multiple research fields, in which medical imaging is an important direction and has achieved fruitful results. In this paper, a method of soft tissue surface feature tracking based on a depth matching network is proposed. The method builds on the triangular matching algorithm. First, we construct a self-made sample set for training the depth matching network from the first N frames of speckle matching data obtained by the triangular matching algorithm. The depth matching network is pre-trained on the ORL face dataset and then trained on the self-made training set. After training, speckle matching is carried out in the subsequent frames to obtain the speckle matching matrix between each subsequent frame and the first frame. From this matrix, the inter-frame feature matching results can be obtained, completing the inter-frame speckle tracking. On this basis, the results of this method are compared with matching results based on a convolutional neural network. The experimental results show that the proposed method has higher matching accuracy. In particular, the accuracy on the MNIST handwritten digit dataset reached more than 90%.
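As a point of comparison for the learned matcher, the speckle/patch matching step can be sketched with classical normalized cross-correlation; this is a stand-in baseline under toy assumptions, not the paper's depth matching network.

```python
# Hedged sketch of speckle/patch matching between two frames, using
# normalized cross-correlation in place of a trained matching network.
# Patch size, frame size, and the synthetic shift are toy assumptions.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-8
    return (a * b).sum() / denom

def match_patch(frame, template, stride=1):
    """Slide `template` over `frame`; return the best-matching corner."""
    h, w = template.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(0, frame.shape[0] - h + 1, stride):
        for j in range(0, frame.shape[1] - w + 1, stride):
            score = ncc(frame[i:i + h, j:j + w], template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))                    # "first frame" speckle image
template = frame0[20:28, 30:38]                  # speckle feature from frame 0
frame1 = np.roll(frame0, (2, -3), axis=(0, 1))   # later frame, shifted tissue
print(match_patch(frame1, template))             # recovers displaced location
```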
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: With the development of Internet technology, the explosive growth of online information has made it difficult to filter out effective information. Finding a high-accuracy model for text classification has become a critical problem for text filtering, especially for Chinese texts. This paper selected manually calibrated comment data from the Douban movie website for research. First, a text filtering model based on a BP neural network was built. Second, based on the Term Frequency-Inverse Document Frequency (TF-IDF) vector space model and the doc2vec method, the text word frequency vector and the text semantic vector were obtained respectively, and the dimensionality of the text word frequency vector was linearly reduced by Principal Component Analysis (PCA). Third, the reduced text word frequency vector and the text semantic vector were combined, the text value degree was added, and the text synthesis vector was constructed. Experiments show that the model combining the dimensionality-reduced text word frequency vector, the text semantic vector, and the text value degree reached the highest accuracy of 84.67%.
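A minimal sketch of the described feature pipeline follows, with sklearn's MLPClassifier standing in for the BP neural network and without the text value degree; the corpus, dimensions, and labels are toy assumptions.

```python
# Hedged sketch: TF-IDF features reduced by PCA, concatenated with
# doc2vec semantic vectors, classified by an MLP (stand-in for the
# paper's BP network). All data and sizes below are toy assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = ["great film worth watching", "boring plot and bad acting",
        "wonderful soundtrack and story", "terrible pacing very dull"]
labels = np.array([1, 0, 1, 0])          # 1 = positive, 0 = negative (toy)

# Word-frequency features, linearly reduced with PCA.
tfidf = TfidfVectorizer().fit_transform(docs).toarray()
tfidf_reduced = PCA(n_components=2).fit_transform(tfidf)

# Semantic features from doc2vec.
tagged = [TaggedDocument(d.split(), [i]) for i, d in enumerate(docs)]
d2v = Doc2Vec(tagged, vector_size=8, min_count=1, epochs=40)
semantic = np.array([d2v.infer_vector(d.split()) for d in docs])

# Concatenate into the synthesis vector and train the classifier.
features = np.hstack([tfidf_reduced, semantic])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))
```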