The rapid advancement of large language models (LLMs) has driven the pervasive adoption of AI-generated content (AIGC), while also raising concerns about misinformation, academic misconduct, biased or harmful content, and other risks. Detecting AI-generated text has thus become essential to safeguard the authenticity and reliability of digital information. This survey reviews recent progress in detection methods, categorizing approaches as passive or active based on their reliance on intrinsic textual features or embedded signals. Passive detection is further divided into surface linguistic feature-based and language model-based methods, whereas active detection encompasses watermarking-based and semantic retrieval-based approaches. This taxonomy enables systematic comparison of methodological differences in model dependency, applicability, and robustness. A key challenge for AI-generated text detection is that existing detectors are highly vulnerable to adversarial attacks, particularly paraphrasing, which substantially compromises their effectiveness. Addressing this gap highlights the need for future research on enhancing robustness and cross-domain generalization. By synthesizing current advances and limitations, this survey provides a structured reference for the field and outlines pathways toward more reliable and scalable detection solutions.
Synthetic speech detection is an essential task in the field of voice security, aimed at identifying deceptive voice attacks generated by text-to-speech (TTS) or voice conversion (VC) systems. In this paper, we propose a synthetic speech detection model called TFTransformer, which integrates both local and global features to enhance detection capability by effectively modeling local and global dependencies. Structurally, the model is divided into two main components: a front-end and a back-end. The front-end uses a combination of a SincLayer and two-dimensional (2D) convolution to extract high-level feature maps (HFM) capturing the local dependencies of the input speech signals. The back-end uses a time-frequency Transformer module to process these feature maps and further capture global dependencies. Furthermore, we propose TFTransformer-SE, which incorporates a channel attention mechanism within the 2D convolutional blocks. This enhancement aims to capture local dependencies more effectively, thereby improving the model’s performance. The experiments were conducted on the ASVspoof 2021 LA dataset, and the results showed that the model achieved an equal error rate (EER) of 3.37% without data augmentation. Additionally, we evaluated the model on the ASVspoof 2019 LA dataset, achieving an EER of 0.84%, also without data augmentation. This demonstrates that combining local and global dependencies in the time-frequency domain can significantly improve detection accuracy.
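The equal error rate (EER) reported in the abstract above is the operating point where the false acceptance rate on spoofed speech equals the false rejection rate on bona fide speech. A minimal threshold-sweep sketch of the metric, for illustration only (this is not the ASVspoof evaluation toolkit, and the coarse sweep over observed scores is an assumption):

```python
def compute_eer(bonafide_scores, spoof_scores):
    """Coarse EER estimate: sweep thresholds over all observed scores.

    FAR = fraction of spoof scores accepted (>= threshold),
    FRR = fraction of bona fide scores rejected (< threshold);
    the EER is reported where |FAR - FRR| is smallest.
    """
    thresholds = sorted(set(bonafide_scores) | set(spoof_scores))
    best_gap, eer = float("inf"), None
    for t in thresholds:
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

On perfectly separable scores the sweep finds a threshold with zero errors, so the EER is 0; overlapping score distributions push the EER up.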
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain) that quantify the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66% depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%–39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and point to the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing, raising significant concerns about the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings in detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed, a single-layer approach and a dual-layer design, each tested under static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe embeddings, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM Word2Vec pipeline. Some runs over-fitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin’s AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work is misread as AI-generated or engineered text evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
In order to recognize people’s annoyance emotions in the working environment and evaluate emotional well-being, emotional speech in a work environment is induced to obtain adequate samples, and a Mandarin database with two thousand samples is built. In searching for annoyance-type emotion features, the prosodic and voice-quality feature parameters of the emotional utterances are extracted first. Then an improved back propagation (BP) neural network based on the shuffled frog leaping algorithm (SFLA) is proposed to recognize the emotion. The recognition capabilities of the BP, radial basis function (RBF), and SFLA neural networks are compared experimentally. The results show that the recognition ratio of the SFLA neural network is 4.7% better than that of the BP neural network and 4.3% better than that of the RBF neural network. The experimental results demonstrate that training the random initial data with the SFLA can optimize the connection weights and thresholds of the neural network, speed up convergence, and improve the recognition rate.
Accurate endpoint detection is a necessary capability for speech recognition. A new energy measure method based on the empirical mode decomposition (EMD) algorithm and the Teager energy operator (TEO) is proposed to locate the endpoint intervals of a speech signal embedded in noise. With the EMD, the noisy signal can be decomposed into a number of sub-signals called intrinsic mode functions (IMFs), each of which is a zero-mean AM-FM component. The TEO can then be used to extract the modulation energy of each IMF component as the desired feature. To show the effectiveness of the proposed method, examples are presented demonstrating that the new measure is more effective than traditional measures. The experimental results show that the measure can be used to improve the performance of endpoint detection algorithms, and the accuracy of the algorithm is quite satisfactory and acceptable.
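The discrete Teager energy operator referenced above has a simple closed form, ψ[x(n)] = x(n)² − x(n−1)·x(n+1), and for a pure sinusoid A·sin(ωn) it returns the constant A²·sin²(ω), which is why it tracks the modulation energy of an AM-FM component. A minimal sketch (an illustrative helper, not the authors' implementation):

```python
import math

def teager_energy(x):
    # Discrete Teager energy operator:
    # psi[n] = x[n]^2 - x[n-1] * x[n+1], defined for 1 <= n <= len(x) - 2.
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

For x(n) = sin(0.3n), every output sample equals sin²(0.3) up to rounding, a product-to-sum identity that makes the operator's sensitivity to instantaneous amplitude and frequency easy to verify.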
In this work, a novel voice activity detection (VAD) algorithm that uses speech absence probability (SAP) based on Teager energy (TE) is proposed for speech enhancement. The proposed method employs local SAP (LSAP) based on the TE of noisy speech, rather than conventional LSAP, as the feature parameter for VAD in each frequency subband. Results show that the TE operator can enhance the ability to discriminate speech from noise and further suppress noise components. TE-based LSAP therefore provides a better representation of LSAP, resulting in improved VAD for estimating noise power in a speech enhancement algorithm. In addition, the presented method utilizes TE-based global SAP (GSAP), derived in each frame, as the weighting parameter for modifying the adopted TE operator and improving its performance. The proposed algorithm was evaluated by objective and subjective quality tests under various environments and was shown to produce better results than the conventional method.
Wireless multimedia sensor networks (WMSN) are emerging to serve the collection of acoustic and image information. In a WMSN, microphones are usually employed as sensor nodes for the acquisition of acoustic data. However, these microphone sensors need to be placed close to the sound source and cannot detect sound signals through certain obstacles. To overcome the shortcomings of microphone sensors, we develop a new type of bioradar sensor to achieve non-contact speech detection and theoretically investigate the mechanism of bioradar speech detection. Results show that the system can successfully detect speech at some distance and even through non-metallic objects of certain thickness. In addition, to suppress noise and improve the quality of the detected speech, we use spectral subtraction and Wiener filtering algorithms, respectively, to enhance the bioradar speech, and we evaluate the performance of the two methods using spectrograms.
In order to apply speech recognition systems to actual circumstances where handwriting is difficult, such as inspection and maintenance operations in industrial factories or recording and reporting routines at construction sites, countermeasures for surrounding noise are indispensable. In this study, a signal detection method to remove noise from actual speech signals is proposed, using Bayesian estimation with the aid of bone-conducted speech. More specifically, by introducing Bayes’ theorem based on the observation of air-conducted speech contaminated by surrounding background noise, a new type of algorithm for noise removal is theoretically derived. In the proposed speech detection method, bone-conducted speech is utilized to obtain a precise estimate of the speech signal. The effectiveness of the proposed method is experimentally confirmed by applying it to air- and bone-conducted speech measured in a real environment with surrounding background noise.
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: receiving instructions in natural language or task demonstrations and generating the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs to detect such content, with approaches ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of the MLPs, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then adjusts the weights as fine-tuning to optimize the performance of the MLPs. In this approach, two separate MLP models are employed: one dedicated to predicting degrees of truth membership and the other to predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
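As an illustration of the PSO fine-tuning stage described above, the sketch below minimizes a generic loss with the standard velocity update (inertia plus cognitive and social terms). It is a schematic stand-in under conventional default coefficients, not the paper's WOA+PSO hybrid, and the sphere function stands in for an MLP training loss over a flattened weight vector:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Standard particle swarm optimization over R^dim (minimization)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity mixes inertia with pulls toward the personal
                # and global bests, then positions are updated in place.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting the WOA exploration phase would precede this, seeding the swarm instead of the uniform random initialization used here.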
A method of robust speech endpoint detection against airplane cockpit voice background is presented. Based on an analysis of the background noise characteristics, a complex Laplacian distribution model aimed directly at the noisy speech is established. A likelihood ratio test based on a binary hypothesis test is then carried out. The decision criterion of the conventional maximum a posteriori rule, incorporating inter-frame correlation, leads to two separate thresholds. The speech endpoint detection decision is finally made depending on the previous frame and the observed spectrum, and the speech endpoint is searched based on this decision. Compared with typical algorithms, the proposed method operates robustly against the airplane cockpit voice background.
In recent years, the usage of social networking sites has considerably increased in the Arab world, empowering individuals to express their opinions, especially in politics. Furthermore, various organizations operating in Arab countries have embraced social media in their day-to-day business activities at different scales, which is attributed to business owners’ understanding of social media’s importance for business development. However, Arabic morphology is highly complex, with nearly 10,000 roots and more than 900 patterns that act as the basis for verbs and nouns. Hate speech on online social networking sites has turned out to be a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal. To attain this, the CEHOML-HSD model follows several sub-processes, as discussed herewith. At the initial stage, the model undergoes data pre-processing with the help of a TF-IDF vectorizer. Secondly, a Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts in the Arabic language. Lastly, the CEHO approach, developed by combining chaotic functions with the classical EHO algorithm, is employed for fine-tuning the parameters of the SVM. The design of the CEHO algorithm for parameter tuning shows the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study outcomes established its supremacy over other approaches.
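The TF-IDF pre-processing stage described above can be sketched in a few lines. This is a simplified vectorizer (raw-count term frequency, log(N/df) inverse document frequency, whitespace tokenization), not the paper's pipeline; the SVM and CEHO tuning stages are omitted:

```python
import math
from collections import Counter

def tfidf_vectorize(docs):
    """Minimal TF-IDF: tf = count / doc length, idf = log(N / df)."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    vocab = sorted(df)
    idf = {t: math.log(N / df[t]) for t in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[t] / len(toks) * idf[t] for t in vocab])
    return vocab, vectors
```

A term appearing in every document (df = N) gets idf = 0 and is effectively ignored, while terms concentrated in one class of documents receive the largest weights, which is what makes the representation useful as SVM input.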
Automatic identification of cyberbullying is a problem gaining traction, especially in the Machine Learning community. Not only is it complicated, but it has also become a pressing necessity, considering how social media has become an integral part of adolescents’ lives and how serious the impacts of cyberbullying and online harassment can be, particularly among teenagers. This paper contains a systematic literature review of modern strategies, machine learning methods, and technical means for detecting cyberbullying and aggressive behavior by individuals in online information spaces. We undertake an in-depth review of 13 papers from four scientific databases, providing an overview of the scientific literature on cyberbullying detection from the point of view of machine learning and natural language processing. In this review, we consider a cyberbullying detection framework for social media platforms that includes data collection, data processing, feature selection, feature extraction, and the application of machine learning to classify whether texts contain cyberbullying. This article seeks to guide future research on this topic toward a perspective more consistent with the phenomenon’s description and depiction, allowing future solutions to be more practical and effective.
Diagnosing a baby’s feelings poses a challenge for both doctors and parents because babies cannot explain their feelings through expression or speech. Understanding the emotions of babies and their associated expressions during different sensations, such as hunger and pain, is a complicated task. In infancy, all communication and feelings are conveyed through cry-speech, a natural phenomenon. Several clinical methods can be used to diagnose a baby’s diseases, but non-clinical methods of diagnosing a baby’s feelings are lacking. As such, in this study, we aimed to identify babies’ feelings and emotions from their cries using a non-clinical method. Changes in the cry sound can be identified with our method and used to assess the baby’s feelings. We derived the frequency of the cries from the energy of the sound, and the sensations expressed by the child are judged from the optimal frequency recognized in the real-world audio signal. We used machine learning and artificial intelligence to distinguish cry tones in real time through feature analysis. The experimental group consisted of 50% male and 50% female babies, and we determined the relevancy of the results against different parameters. The application produces real-time results after recognizing a child’s cry sounds. The novelty of our work is that we, for the first time, successfully derived the feelings of young children from the child’s cry-speech, showing promise for end-user applications.
Class Title: Radiological imaging methods, a comprehensive overview. Purpose: This GPT paper provides an overview of the different forms of radiological imaging, the potential diagnostic capabilities they offer, and recent advances in the field. Materials and Methods: This paper provides an overview of conventional radiography, digital radiography, panoramic radiography, computed tomography, and cone-beam computed tomography. Additionally, recent advances in radiological imaging are discussed, such as imaging diagnosis and modern computer-aided diagnosis systems. Results: This paper details the differences between the imaging techniques, the benefits of each, and the current advances in the field to aid in the diagnosis of medical conditions. Conclusion: Radiological imaging is an extremely important tool in modern medicine to assist in medical diagnosis. This work provides an overview of the types of imaging techniques used, the recent advances made, and their potential applications.
A detection system for the American English glides /w y r l/ in a knowledge-based automatic speech recognition system is presented. The method detects dips in band-limited-energy-to-total-energy ratios instead of detecting dips along the unmodified band-limited energy contours. By using the band-limited energy ratio, dip detection is applicable not only in intervocalic regions but also in non-intervocalic regions. A Gaussian mixture model (GMM) based classifier is then used to separate the detected vowels and nasals. This approach is tested on the TIMIT corpus and results in an overall detection rate of 69.5%, a 4.7% absolute increase in detection rate compared with a hidden Markov model (HMM) based phone recognizer.
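The dip detection described above reduces, per band, to locating local minima in an energy-ratio contour that fall below some threshold. A schematic sketch of that step only (the threshold value and the contour are illustrative, not the paper's settings):

```python
def detect_dips(contour, threshold):
    """Return frame indices of dips in an energy-ratio contour.

    A dip is a local minimum (no strictly smaller neighbor) whose
    value also falls below `threshold`.
    """
    dips = []
    for n in range(1, len(contour) - 1):
        if (contour[n] < threshold
                and contour[n] <= contour[n - 1]
                and contour[n] <= contour[n + 1]):
            dips.append(n)
    return dips
```

Each detected index would then be handed to the GMM classifier as a candidate glide region; frames whose ratio stays above the threshold produce no candidates.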
The recognition of pathological voice is considered a difficult task in speech analysis. Otolaryngologists have needed to rely on oral communication with patients to discover traces of voice pathologies such as dysphonia, which is caused by alteration of the vocal folds, and their accuracy is between 60%–70%. To enhance detection accuracy and reduce the processing time of dysphonia detection, a novel approach is proposed in this paper. We leverage Linear Discriminant Analysis (LDA) to train multiple Machine Learning (ML) models for dysphonia detection. Several ML models are utilized, including Support Vector Machine (SVM), Logistic Regression, and K-Nearest Neighbor (K-NN), to predict voice pathologies based on features such as Mel-Frequency Cepstral Coefficients (MFCC), Fundamental Frequency (F0), Shimmer (%), Jitter (%), and Harmonic-to-Noise Ratio (HNR). The experiments were performed using the Saarbruecken Voice Database (SVD) and a privately collected dataset. The K-fold cross-validation approach was incorporated to increase the robustness and stability of the ML models. According to the experimental results, our proposed approach yields a 70% increase in processing speed over Principal Component Analysis (PCA) and performs remarkably well, with a recognition accuracy of 95.24% on the SVD dataset, surpassing the previous best accuracy of 82.37%. On the private dataset, our proposed method achieved an accuracy of 93.37%. It can serve as an effective non-invasive method to detect dysphonia.
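The K-fold cross-validation used above to stabilize the ML models partitions the data so that every sample is validated exactly once. A minimal index-splitting sketch (in practice scikit-learn's KFold does the same job; this version is illustrative and uses contiguous, unshuffled folds):

```python
def kfold_indices(n_samples, k):
    """Split sample indices 0..n_samples-1 into k contiguous folds.

    Each fold serves once as the validation set; the remaining
    indices form the corresponding training set.
    """
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        splits.append((train, val))
        start += size
    return splits
```

Training and scoring a model on each (train, val) pair and averaging the k scores gives the cross-validated estimate that the abstract relies on for robustness.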
During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beeps and single syllables. However, little is known about how children perceive synchrony in natural and continuous speech. This study investigated young children’s gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker’s lip movements matched the voices; the other either led or lagged behind the soundtrack by 600 ms. Children aged 3–6 years (n = 94, 52.13% male) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that focusing more attention on the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with increased viewing time of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was found to correlate with worse performance in speech synchrony processing. These findings extend previous research by showing the development of speech synchrony perception in young children and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
Funding: Supported in part by the Science and Technology Innovation Program of Hunan Province under Grant 2025RC3166, the National Natural Science Foundation of China under Grant 62572176, and the National Key R&D Program of China under Grant 2024YFF0618800.
Funding: Supported by project ZR2022MF330 of the Shandong Provincial Natural Science Foundation and by the National Natural Science Foundation of China under Grant No. 61701286.
Funding: The research project LaTe4PoliticES (PID2022-138099OB-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF), a way to make Europe. Tomás Bernal-Beltrán is supported by the University of Murcia through its predoctoral programme.
Abstract: The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches to the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models pre-trained on Spanish and (2) in-context learning techniques (zero- and few-shot learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain) that assess the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range from approximately 0% to 39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation, and we discuss the potential of both paradigms as components of AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
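The abstract names Zero-to-Few Shot Retention and Zero-to-Few Shot Gain but gives no formulas. Under assumed definitions (retention as the capped ratio of few-shot to zero-shot macro F1, gain as the absolute difference), the metrics reduce to:

```python
def zero_to_few_retention(zero_f1, few_f1, eps=1e-9):
    # Assumed definition: fraction of zero-shot performance retained, capped at 1.
    return min(few_f1 / (zero_f1 + eps), 1.0)

def zero_to_few_gain(zero_f1, few_f1):
    # Assumed definition: absolute F1 improvement from zero- to few-shot prompting.
    return few_f1 - zero_f1
```

These are plausible readings only; the paper's exact formulations may differ.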
Abstract: The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges our ability to distinguish between human-authored and AI-generated academic writing. This situation raises significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings in detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer design, each tested with static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe embeddings, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM-Word2Vec pipeline. Some runs over-fitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work is misread as AI-generated or engineered text evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
Funding: The National Natural Science Foundation of China (Nos. 61375028, 61301219), the China Postdoctoral Science Foundation (No. 2012M520973), and the Scientific Research Funds of the Nanjing Institute of Technology (No. ZKJ201202).
Abstract: In order to recognize people's annoyance emotions in the working environment and evaluate emotional well-being, emotional speech in a work environment is induced to obtain adequate samples, and a Mandarin database with two thousand samples is built. In searching for annoyance-type emotion features, the prosodic and voice quality feature parameters of the emotional utterances are extracted first. Then an improved back propagation (BP) neural network based on the shuffled frog leaping algorithm (SFLA) is proposed to recognize the emotion. The recognition capabilities of the BP, radial basis function (RBF), and SFLA neural networks are compared experimentally. The results show that the recognition ratio of the SFLA neural network is 4.7% better than that of the BP neural network and 4.3% better than that of the RBF neural network. The experimental results demonstrate that training the random initial data with the SFLA can optimize the connection weights and thresholds of the neural network, speed up convergence, and improve the recognition rate.
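The abstract does not detail its SFLA variant. As a self-contained sketch of the shuffled-frog-leaping idea (sort frogs by fitness, partition into memeplexes, leap each memeplex's worst frog toward its local best, then toward the global best, otherwise reset it randomly), here applied to a toy continuous minimization rather than neural-network weights (population sizes and bounds are illustrative):

```python
import random

def sfla_minimize(f, dim, n_frogs=20, n_memeplexes=4, iters=50,
                  bounds=(-5.0, 5.0), seed=0):
    """Minimal Shuffled Frog Leaping Algorithm sketch for continuous minimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    frogs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=f)                      # rank all frogs by fitness
        global_best = frogs[0]
        for m in range(n_memeplexes):          # shuffle frogs into memeplexes
            memeplex = frogs[m::n_memeplexes]
            best, worst = min(memeplex, key=f), max(memeplex, key=f)
            step = [rng.random() * (b - w) for b, w in zip(best, worst)]
            cand = [max(lo, min(hi, w + s)) for w, s in zip(worst, step)]
            if f(cand) >= f(worst):            # no gain: leap toward the global best
                step = [rng.random() * (b - w) for b, w in zip(global_best, worst)]
                cand = [max(lo, min(hi, w + s)) for w, s in zip(worst, step)]
            if f(cand) >= f(worst):            # still no gain: random reset
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            frogs[frogs.index(worst)] = cand
    return min(frogs, key=f)
```

In the paper's setting, `f` would evaluate network error as a function of the BP network's weights and thresholds.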
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60771033.
Abstract: Accurate endpoint detection is a necessary capability for speech recognition. A new energy measure based on the empirical mode decomposition (EMD) algorithm and the Teager energy operator (TEO) is proposed to locate the endpoint intervals of a speech signal embedded in noise. With the EMD, noisy signals can be decomposed into a number of sub-signals called intrinsic mode functions (IMFs), each of which is a zero-mean AM-FM component. The TEO can then be used to extract the modulation energy of the IMF components as the desired feature. To show the effectiveness of the proposed method, examples are presented demonstrating that the new measure is more effective than traditional measures. The experimental results show that the measure can improve the performance of endpoint detection algorithms, and the accuracy of the algorithm is quite satisfactory and acceptable.
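The discrete Teager energy operator has the standard form ψ[x(n)] = x²(n) − x(n−1)x(n+1). A short sketch of computing it and a frame-level energy measure of the kind an endpoint detector would threshold (frame length and hop are illustrative; the EMD step is omitted):

```python
def teager_energy(x):
    # Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def frame_energies(x, frame_len=160, hop=80):
    # Mean absolute Teager energy per frame, usable as an endpoint-detection measure.
    teo = teager_energy(x)
    return [sum(abs(v) for v in teo[i:i + frame_len]) / frame_len
            for i in range(0, max(len(teo) - frame_len + 1, 1), hop)]
```

For a pure tone the TEO output is approximately constant (proportional to squared amplitude and frequency), while for a constant signal it is zero, which is why it tracks modulation energy well.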
Funding: Supported by an Inha University Research Grant and by Project (10031764) of the Strategic Technology Development Program of the Ministry of Knowledge Economy, Korea.
Abstract: In this work, a novel voice activity detection (VAD) algorithm that uses speech absence probability (SAP) based on Teager energy (TE) is proposed for speech enhancement. The proposed method employs local SAP (LSAP) based on the TE of noisy speech, rather than conventional LSAP, as the feature parameter for VAD in each frequency subband. Results show that the TE operator can enhance the ability to discriminate between speech and noise and further suppress noise components. TE-based LSAP therefore provides a better representation of LSAP, resulting in improved VAD for estimating noise power in a speech enhancement algorithm. In addition, the presented method utilizes TE-based global SAP (GSAP), derived in each frame, as the weighting parameter for modifying the adopted TE operator and improving its performance. The proposed algorithm was evaluated by objective and subjective quality tests under various environments and was shown to produce better results than the conventional method.
Abstract: Wireless multimedia sensor networks (WMSN) are emerging to serve the collection of acoustic and image information. In a WMSN, microphones usually function as the sensor nodes that acquire acoustic data. However, microphone sensors must be placed close to the sound source and cannot detect sound signals through certain obstacles. To overcome these shortcomings, we develop a new type of bioradar sensor to achieve non-contact speech detection and investigate theoretically the mechanism of bioradar-based speech detection. Results show that the system can successfully detect speech at some distance and even through non-metallic objects of a certain thickness. In addition, in order to suppress noise and improve the quality of the detected speech, we use spectral subtraction and Wiener filtering, respectively, to enhance the bioradar speech, and we evaluate the performance of the two methods using spectrograms.
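Both enhancement methods compared here operate per frequency bin on short-time spectra. A minimal per-frame sketch of each (the over-subtraction factor, spectral floor, and the assumption that magnitude/PSD estimates have already been computed are ours, not the paper's):

```python
def spectral_subtraction(noisy_mag, noise_mag, alpha=1.0, beta=0.01):
    """Magnitude-domain spectral subtraction with a spectral floor.
    noisy_mag / noise_mag: per-bin magnitude estimates for one frame."""
    return [max(y - alpha * n, beta * y) for y, n in zip(noisy_mag, noise_mag)]

def wiener_gain(speech_psd, noise_psd, eps=1e-12):
    """Wiener filter gain per bin: H = S / (S + N), applied to the noisy spectrum."""
    return [s / (s + n + eps) for s, n in zip(speech_psd, noise_psd)]
```

The floor `beta * y` prevents negative magnitudes (and the "musical noise" of hard zeroing) when the noise estimate exceeds the observation.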
Abstract: In order to apply speech recognition systems to actual circumstances where handwriting is difficult, ranging from inspection and maintenance operations in industrial factories to recording and reporting routines at construction sites, countermeasures against surrounding noise are indispensable. In this study, a signal detection method that removes the noise from actual speech signals is proposed, using Bayesian estimation with the aid of bone-conducted speech. More specifically, by introducing Bayes' theorem based on the observation of air-conducted speech contaminated by surrounding background noise, a new type of algorithm for noise removal is theoretically derived. In the proposed speech detection method, bone-conducted speech is utilized to obtain a precise estimate of the speech signal. The effectiveness of the proposed method is experimentally confirmed by applying it to air- and bone-conducted speech measured in a real environment in the presence of surrounding background noise.
Funding: This work is part of the research projects LaTe4PoliticES (PID2022-138099OB-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF), A Way of Making Europe, and LT-SWM (TED2021-131167B-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy, and the European Union-NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)".
Abstract: Large language models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: receiving instructions in natural language or task demonstrations and generating the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under approaches ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, we confirm that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
Abstract: Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During training, the WOA is employed to explore and determine an optimal set of weights, and the PSO algorithm then adjusts the weights to fine-tune the MLPs' performance. In this approach, two separate MLP models are employed: one dedicated to predicting degrees of truth membership, the other to predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
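The abstract states that the gap between the truth-MLP and false-MLP outputs quantifies indeterminacy. Taking that literally (our assumption, since no formula is given), the decision stage after the two networks have produced their membership degrees reduces to:

```python
def neutrosophic_decision(truth_membership, false_membership):
    # Literal reading of the paper's idea (an assumption): indeterminacy is the
    # absolute difference between the two predicted membership degrees.
    indeterminacy = abs(truth_membership - false_membership)
    label = "hate" if truth_membership > false_membership else "not_hate"
    return label, indeterminacy
```

In practice the indeterminacy score could drive abstention or human review for low-confidence predictions.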
Abstract: A method of robust speech endpoint detection against airplane cockpit voice background is presented. Based on an analysis of the background noise characteristics, a complex Laplacian distribution model of the noisy speech is established, and a likelihood ratio test based on a binary hypothesis test is carried out. The decision criterion of the conventional maximum a posteriori rule, when it incorporates inter-frame correlation, leads to two separate thresholds. The speech endpoint detection decision thus depends on both the previous frame and the observed spectrum, and the speech endpoint is located based on this decision. Compared with typical algorithms, the proposed method operates robustly in the airplane cockpit voice background.
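The two-threshold decision driven by the previous frame can be sketched as a hysteresis rule on per-frame likelihood ratios. The single-coefficient Laplacian likelihood ratio and the threshold values below are simplifications of the paper's complex-Laplacian spectral model, not its exact derivation:

```python
import math

def laplacian_lr(x, b_noise, b_speech):
    # Likelihood ratio of one Laplacian-modeled coefficient: H1 (speech) vs H0 (noise),
    # with scale parameters b_speech and b_noise respectively.
    return (b_noise / b_speech) * math.exp(abs(x) * (1.0 / b_noise - 1.0 / b_speech))

def lrt_vad(likelihood_ratios, th_enter=1.5, th_stay=0.8):
    # Two thresholds from the inter-frame-correlated MAP rule: a frame that follows
    # speech only needs to clear the lower threshold to stay in the speech state.
    decisions, prev = [], False
    for lam in likelihood_ratios:
        prev = lam > (th_stay if prev else th_enter)
        decisions.append(prev)
    return decisions
```

The hysteresis mimics the hangover effect: once speech is detected, brief low-energy frames (e.g., between syllables) are not immediately cut off.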
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study is also supported via funding from Prince Sattam bin Abdulaziz University, Project Number (PSAU/2024/R/1445).
Abstract: In recent years, the usage of social networking sites has increased considerably in the Arab world, empowering individuals to express their opinions, especially in politics. Furthermore, various organizations operating in Arab countries have embraced social media in their day-to-day business activities at different scales, which is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is difficult to process automatically because of its nearly 10,000 roots and more than 900 patterns that act as the basis for verbs and nouns. Hate speech on online social networking sites has become a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal, through the following sub-processes. At the initial stage, the model performs data pre-processing with a TF-IDF vectorizer. Secondly, a Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in Arabic. Lastly, the CEHO approach, developed by combining chaotic functions with the classical EHO algorithm, is employed to fine-tune the SVM parameters; this design of the CEHO algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study established its superiority over other approaches.
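The abstract says CEHO blends chaotic functions into EHO but does not specify which map or where it enters. A common choice in chaotic metaheuristics is the logistic map, used to generate initialization or perturbation sequences; the map, r = 4, and the seeding role below are our assumptions:

```python
def logistic_map_sequence(n, x0=0.7, r=4.0):
    # Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k); fully chaotic at r = 4.
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_init(n_agents, dim, lo, hi, x0=0.7):
    # Seed a metaheuristic population from the chaotic sequence instead of a PRNG.
    values = iter(logistic_map_sequence(n_agents * dim, x0))
    return [[lo + (hi - lo) * next(values) for _ in range(dim)]
            for _ in range(n_agents)]
```

Chaotic sequences are deterministic yet non-repeating, which is the usual motivation for swapping them in for uniform random initialization.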
Abstract: Automatic identification of cyberbullying is a problem that is gaining traction, especially in machine learning research. Not only is it complicated, but it has also become a pressing necessity, considering how social media has become an integral part of adolescents' lives and how serious the impacts of cyberbullying and online harassment can be, particularly among teenagers. This paper contains a systematic literature review of modern strategies, machine learning methods, and technical means for detecting cyberbullying and aggressive conduct by individuals in the online information space. We undertake an in-depth review of 13 papers from four scientific databases. The article provides an overview of the scientific literature on cyberbullying detection from the point of view of machine learning and natural language processing. In this review, we consider a cyberbullying detection framework for social media platforms, which includes data collection, data processing, feature selection, feature extraction, and the application of machine learning to classify whether texts contain cyberbullying. This article seeks to guide future research on this topic toward a perspective more consistent with the phenomenon's description and depiction, allowing future solutions to be more practical and effective.
Funding: This research was funded by the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, grant number NU/RC/SERC/11/5.
Abstract: Diagnosing a baby's feelings poses a challenge for both doctors and parents because babies cannot explain their feelings through expression or speech. Understanding babies' emotions and their associated expressions during different sensations, such as hunger and pain, is a complicated task. In infancy, all communication and feelings are conveyed through cry speech, a natural phenomenon. Several clinical methods can be used to diagnose a baby's diseases, but non-clinical methods of diagnosing a baby's feelings are lacking. As such, in this study, we aimed to identify babies' feelings and emotions through their cries using a non-clinical method. Changes in the cry sound can be identified with our method and used to assess the baby's feelings. We derived the frequency of the cries from the energy of the sound, and the sensations expressed by the child are judged from the optimal frequency recognized in the real-world audio signal. We used machine learning and artificial intelligence to distinguish cry tones in real time through feature analysis. The experimental group consisted of 50% male and 50% female babies, and we assessed the relevance of the results against different parameters. The application produces real-time results after recognizing a child's cry sounds. The novelty of our work is that we, for the first time, successfully derived the feelings of young children from the child's cry speech, showing promise for end-user applications.
Abstract: Class Title: Radiological imaging methods, a comprehensive overview. Purpose: This GPT paper provides an overview of the different forms of radiological imaging, the diagnostic capabilities they offer, and recent advances in the field. Materials and Methods: The paper reviews conventional radiography, digital radiography, panoramic radiography, computed tomography, and cone-beam computed tomography. Additionally, recent advances in radiological imaging are discussed, such as imaging diagnosis and modern computer-aided diagnosis systems. Results: The paper details the differences between the imaging techniques, the benefits of each, and current advances in the field that aid in the diagnosis of medical conditions. Conclusion: Radiological imaging is an extremely important tool in modern medicine for assisting medical diagnosis. This work provides an overview of the imaging techniques in use, recent advances, and their potential applications.
Funding: The Ministry of Knowledge Economy, Korea, under the Information Technology Research Center support program supervised by the National IT Industry Promotion Agency (NIPA-2012-H0301-12-2006).
Abstract: A detection system for the American English glides [w, y, r, l] in a knowledge-based automatic speech recognition system is presented. The method detects dips in band-limited-energy-to-total-energy ratios, instead of detecting dips along the unmodified band-limited energy contours. By using the band-limited energy ratio, dip detection is applicable not only in intervocalic regions but also in non-intervocalic regions. A Gaussian mixture model (GMM) based classifier is then used to separate the detected vowels and nasals. This approach is tested on the TIMIT corpus and yields an overall detection rate of 69.5%, a 4.7% absolute increase in detection rate compared with a hidden Markov model (HMM) based phone recognizer.
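The dip-detection idea can be sketched as locating local minima in the band-limited-to-total energy ratio contour. The single-neighbor comparison and fixed depth below are simplifications of whatever smoothing and peak-anchoring the full system uses:

```python
def energy_ratio(band_energy, total_energy, eps=1e-12):
    # Per-frame ratio of band-limited energy to total energy.
    return [b / (t + eps) for b, t in zip(band_energy, total_energy)]

def find_dips(ratio, depth=0.2):
    # Frames whose ratio sits at least `depth` below both neighbors: glide candidates.
    return [i for i in range(1, len(ratio) - 1)
            if ratio[i] < ratio[i - 1] - depth and ratio[i] < ratio[i + 1] - depth]
```

Normalizing by total energy is what makes the measure usable outside intervocalic regions, where absolute band energy alone would be unreliable.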
Abstract: The recognition of pathological voice is considered a difficult task in speech analysis. Otolaryngologists have had to rely on oral communication with patients to discover traces of voice pathologies such as dysphonia, which is caused by alteration of the vocal folds, and their accuracy is between 60% and 70%. To enhance detection accuracy and reduce the processing time of dysphonia detection, a novel approach is proposed in this paper. We leverage Linear Discriminant Analysis (LDA) to train multiple machine learning (ML) models for dysphonia detection. Several ML models, such as Support Vector Machine (SVM), Logistic Regression, and K-Nearest Neighbor (K-NN), are utilized to predict voice pathologies based on features including Mel-Frequency Cepstral Coefficients (MFCC), Fundamental Frequency (F0), Shimmer (%), Jitter (%), and Harmonics-to-Noise Ratio (HNR). The experiments were performed using the Saarbruecken Voice Database (SVD) and a privately collected dataset, and the K-fold cross-validation approach was incorporated to increase the robustness and stability of the ML models. According to the experimental results, our proposed approach offers a 70% increase in processing speed over Principal Component Analysis (PCA) and performs remarkably well, with a recognition accuracy of 95.24% on the SVD dataset, surpassing the previous best accuracy of 82.37%. On the private dataset, our method achieved an accuracy of 93.37%. It can thus serve as an effective non-invasive method to detect dysphonia.
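Among the listed features, Jitter (%) and Shimmer (%) are cycle-to-cycle perturbation measures. Minimal local-variant implementations, assuming pitch periods and peak amplitudes have already been extracted (the exact variant the paper uses is not specified):

```python
def jitter_percent(periods):
    # Local jitter: mean absolute difference of consecutive pitch periods,
    # divided by the mean period, expressed in percent.
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_percent(amplitudes):
    # Local shimmer: the same perturbation measure applied to peak amplitudes.
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

Elevated jitter and shimmer reflect irregular vocal-fold vibration, which is why they discriminate dysphonic from healthy voices.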
Funding: Supported by a grant from the National Natural Science Foundation of China (31970997), the CAS Key Laboratory of Mental Health, Institute of Psychology, and the Philip K. H. Wong Foundation.
Abstract: During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beeps and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voices; the other either led or lagged behind the soundtrack by 600 ms. Children aged 3–6 years (n = 94, 52.13% male) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that focusing more attention on the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with increased viewing time of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was found to be correlated with worse performance in speech synchrony processing. These findings extend previous research by showing the development of speech synchrony perception in young children and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.