Funding: Supported in part by the National Natural Science Foundation of China under Grant 62273272, Grant 62303375, and Grant 61873277; in part by the Key Research and Development Program of Shaanxi Province under Grant 2024CY2-GJHX-49 and Grant 2024CY2-GJHX-43; in part by the Youth Innovation Team of Shaanxi Universities; and in part by the Key Scientific Research Program of the Education Department of Shaanxi Province under Grant 24JR111.
Abstract: Devices in the Industrial Internet of Things (IIoT) are vulnerable to voice adversarial attacks, so studying adversarial speech samples is crucial for enhancing the security of automatic speech recognition (ASR) systems in IIoT devices. Current black-box attack methods often suffer from complex search processes and excessive perturbation. To address these issues, this paper proposes a black-box voice adversarial attack method based on enhanced neural predictors. The method searches the perturbation space for minimal perturbations, with the optimization guided by a self-attention neural predictor that identifies the optimal perturbation direction; this direction is then applied to the original sample to generate adversarial samples. To improve search efficiency, a pruning strategy discards samples that fall below a threshold in the early search stages, reducing the number of searches. Additionally, a dynamic factor based on feedback from querying the ASR system adaptively adjusts the search step size, further accelerating the search. Experiments on the LibriSpeech dataset validate the method: compared with mainstream methods, it improves the signal-to-noise ratio by 0.8 dB, increases sample similarity by 0.43%, and reduces the average number of queries by 7%. The results demonstrate that the proposed method offers better attack effectiveness and stealthiness.
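The abstract does not give the search procedure in detail, so the following is a minimal, hypothetical sketch of the loop it describes: a cheap surrogate predictor scores candidate perturbation directions, low-scoring candidates are pruned before the expensive ASR query, and a dynamic factor grows or shrinks the step size based on query feedback. The callables `query_asr` and `predictor`, the scoring convention (higher is better), and all constants are assumptions, not the paper's values.

```python
import numpy as np

def black_box_attack(x, query_asr, predictor, n_dirs=32, step=0.01,
                     prune_thresh=0.2, max_queries=1000):
    """Hypothetical search loop. query_asr(sample) is assumed to return an
    attack objective score (higher = closer to misrecognition);
    predictor(dirs) returns a cheap surrogate score per candidate direction."""
    best, best_score, queries = x.copy(), query_asr(x), 1
    while queries < max_queries:
        # Sample candidate perturbation directions and score them cheaply.
        dirs = np.random.randn(n_dirs, *x.shape)
        keep = dirs[predictor(dirs) > prune_thresh]  # pruning strategy
        improved = False
        for d in keep:
            cand = best + step * d / (np.linalg.norm(d) + 1e-12)
            score = query_asr(cand)                  # expensive ASR query
            queries += 1
            if score > best_score:
                best, best_score, improved = cand, score, True
            if queries >= max_queries:
                break
        # Dynamic factor: widen the step after progress, shrink it otherwise.
        step *= 1.5 if improved else 0.6
    return best
```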
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 62273272, 62303375, and 61873277; in part by the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243; in part by the Natural Science Foundation of Shaanxi Province under Grants 2022JQ-606 and 2020JQ-758; in part by the Research Plan of the Department of Education of Shaanxi Province under Grant 21JK0752; and in part by the Youth Innovation Team of Shaanxi Universities.
Abstract: Social robot accounts, controlled by artificial intelligence or humans, are active in social networks and bring negative impacts to network security and social life. Existing social robot detection methods based on graph neural networks struggle with the large number of social network nodes and their complex relationships, which makes it difficult to accurately characterize the differences between the topological relations of nodes and results in low detection accuracy. This paper proposes a social robot detection method based on an improved neural network. First, social relationship subgraphs are constructed from the user's social network to effectively disentangle intricate social relationships. Then, a linearly modulated graph attention residual network model is devised to extract the node and topology features of each social relationship subgraph and generate comprehensive subgraph features; the model's feature-wise linear modulation module better learns the differences between nodes. Next, user text content and behavioral gene sequences are extracted to construct social behavioral features, which are combined with the subgraph features. Finally, social robots are identified more accurately by combining user behavioral and relationship features. In experiments on the publicly available TwiBot-20 and Cresci-15 datasets, the proposed method achieves detection accuracies of 86.73% and 97.86%, respectively, which are 2.2% and 1.35% higher than existing mainstream approaches. The results show that the proposed method can effectively detect social robots and help maintain a healthy ecology in social networks.
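Feature-wise linear modulation (FiLM) is a standard construction, and the sketch below shows its general form in PyTorch: a conditioning vector produces a per-feature scale and shift applied to node embeddings from the attention layer. Layer names and sizes here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Generic feature-wise linear modulation: gamma * h + beta, where
    gamma and beta are predicted from a conditioning vector."""
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, feat_dim)
        self.to_beta = nn.Linear(cond_dim, feat_dim)

    def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # h:    (num_nodes, feat_dim) node features from a graph attention layer
        # cond: (num_nodes, cond_dim) per-node conditioning signal
        return self.to_gamma(cond) * h + self.to_beta(cond)
```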
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61873277; in part by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2020JQ-758; and in part by the Chinese Postdoctoral Science Foundation under Grant 2020M673446.
Abstract: In video captioning methods based on an encoder-decoder, limited visual features are extracted by an encoder and a natural-language sentence describing the video content is generated by a decoder. However, such methods depend on a single video input source and few visual labels, and they suffer from a semantic-alignment problem between video contents and the generated sentences, making them unsuitable for accurately comprehending and describing video contents. To address this issue, this paper proposes a video captioning method with semantic topic-guided generation. First, a 3D convolutional neural network extracts the spatiotemporal features of videos during encoding. Then, the semantic topics of video data are extracted using visual labels retrieved from similar video data. In decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 (GPT-2) deep neural network, which decreases the influence of "deviation" in the semantic mapping between videos and texts by jointly decoding a baseline and the semantic topics of video contents. During this process, the designed Enhance-TopK sampling algorithm alleviates the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on two public datasets, Microsoft Research Video Description (MSVD) and Microsoft Research-Video to Text (MSR-VTT). The results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the performance indicators Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Recall-Oriented Understudy for Gisting Evaluation-longest common subsequence (ROUGE-L), and Consensus-based Image Description Evaluation (CIDEr) are improved by 1.2%, 0.1%, 0.3%, and 2.4% on the MSVD dataset and by 0.1%, 1.0%, 0.1%, and 2.8% on the MSR-VTT dataset, respectively, compared with existing video captioning methods. As a result, the proposed method generates video captions more closely aligned with human natural-language expression habits.
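The paper's Enhance-TopK rule for dynamically reshaping the predicted-word distribution is not spelled out in the abstract, so the sketch below shows only the plain top-k sampling it builds on, with a fixed exponent `alpha` standing in for the dynamic adjustment; `alpha` and its placement are assumptions, not the published rule.

```python
import torch

def topk_sample(logits: torch.Tensor, k: int = 40,
                alpha: float = 1.0) -> torch.Tensor:
    """Plain top-k sampling with a reweighting exponent. alpha < 1 flattens
    the distribution (giving tail words more mass), alpha > 1 sharpens it;
    Enhance-TopK adjusts the distribution dynamically rather than fixing it."""
    top_logits, top_idx = torch.topk(logits, k)
    probs = torch.softmax(top_logits, dim=-1).pow(alpha)
    probs = probs / probs.sum()                       # renormalize
    return top_idx[torch.multinomial(probs, num_samples=1)]
```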
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62273272 and 61873277; in part by the Chinese Postdoctoral Science Foundation under Grant 2020M673446; in part by the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243; and in part by the Youth Innovation Team of Shaanxi Universities.
Abstract: Current encoder-decoder video captioning models mainly rely on a single video input source, and the contents of generated captions are limited because few studies employ external corpus information to guide caption generation, which hinders accurate description and understanding of video content. To address this issue, a novel video captioning method guided by a sentence retrieval generation network (ED-SRG) is proposed in this paper. First, a ResNeXt network model, an efficient convolutional network for online video understanding (ECO) model, and a long short-term memory (LSTM) network model are integrated to construct an encoder-decoder that extracts the 2D features, 3D features, and object features of video data, respectively. These features are decoded into textual sentences that conform to the video content and serve as queries for sentence retrieval. Then, a sentence-transformer network model retrieves sentences in an external corpus that are semantically similar to these textual sentences, and candidate sentences are screened by similarity measurement. Finally, a novel model is constructed based on the GPT-2 network structure; it introduces a designed random selector that randomly selects high-probability predicted words from the corpus to guide the generation of textual sentences that better match human natural-language expression. The proposed method is compared experimentally with several existing works. The results show that the indicators BLEU-4, CIDEr, ROUGE_L, and METEOR are improved by 3.1%, 1.3%, 0.3%, and 1.5% on the public MSVD dataset and by 1.3%, 0.5%, 0.2%, and 1.9% on the public MSR-VTT dataset, respectively. The proposed method thus generates video captions with richer semantics than several state-of-the-art approaches.
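The retrieval step maps directly onto the sentence-transformers library; the sketch below assumes a generic pretrained encoder (the checkpoint name is illustrative, since the abstract does not name one) and uses cosine-similarity search to screen candidate sentences from the external corpus.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

def retrieve_similar(draft: str, corpus: list[str], top_k: int = 5):
    """Return the top_k corpus sentences most similar to the decoder's
    draft caption, paired with their cosine-similarity scores."""
    draft_emb = model.encode([draft], convert_to_tensor=True)
    corpus_emb = model.encode(corpus, convert_to_tensor=True)
    hits = util.semantic_search(draft_emb, corpus_emb, top_k=top_k)[0]
    return [(corpus[h["corpus_id"]], h["score"]) for h in hits]
```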