Abstract: Deep learning is an effective and useful technique that has been widely applied in a variety of fields, including computer vision, machine vision, and natural language processing. Deepfakes use deep learning technology to manipulate images and videos of a person so that humans cannot differentiate them from the real ones. In recent years, many studies have been conducted to understand how deepfakes work, and many deep learning approaches have been introduced to detect deepfake videos or images. In this paper, we conduct a comprehensive review of deepfake creation and detection technologies based on deep learning. In addition, we give a thorough analysis of the various technologies and their application to deepfake detection. Our study will benefit researchers in this field as it covers the recent state-of-the-art methods for detecting deepfake videos or images in social content. It will also facilitate comparison with existing work thanks to its detailed description of the latest methods and datasets used in this domain.
Abstract: Some case studies are presented, ranging from geological fakes and frauds to homicides and one environmental forensic case. Fakes may be true geological materials such as created fossils or gems and precious stones, or cases where geological methods are used to analyse fakes, such as the stones or ceramics used in making archaeological or art forgeries (e.g., mineral pigments in paintings). Fakes have also been created for reasons of academic rivalry, career advancement and religious belief. Fraud commonly involves over-stated claims of ore content associated with mining and the oil and gas industry. The range of geological fakes, the uses of geological methods in detecting fakes, and the extent of fraud in the mining sector are all extensive and sometimes incredible. The homicide case is presented to demonstrate how the types of geological investigation described in the rest of this volume may be applied. We include an environmental forensic case for similar reasons, to show that forensic geology may be applied to more than homicides and fakery.
Abstract: The emergence of deepfake videos in recent years has made image falsification a real danger. A person's face and emotions are deep-faked in a video or speech and substituted with a different face or voice, employing deep learning to analyse speech or emotional content. Because these videos are frequently so convincing, the manipulation is challenging to spot. Social media are the most frequent and dangerous targets, since they are vulnerable outlets open to extortion or to slandering a person. In earlier times it was not easy to alter videos; doing so required domain expertise and time. Nowadays, generating fake videos has become easier, and the level of realism is high. Deepfakes are forgeries and altered visual data that appear in still photos or video footage. Numerous automatic identification systems have been developed to address this issue; however, they are constrained to certain datasets and perform poorly when applied to different datasets. This study aims to develop an ensemble learning model utilizing a convolutional neural network (CNN) to handle deepfakes or Face2Face. We employed ensemble learning, a technique that combines many classifiers to achieve higher prediction performance than a single classifier, boosting the model's accuracy. The performance of the generated model is evaluated on FaceForensics. This work builds a new, powerful model for automatically identifying deepfake videos with the DeepFake Detection Challenge (DFDC) dataset. We test our model using the DFDC, one of the most difficult datasets, and obtain an accuracy of 96%.
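The abstract does not spell out how the CNN ensemble is combined; the sketch below shows one common choice, soft voting over the per-frame fake probabilities of several independently trained CNNs. The probability values are purely illustrative placeholders, not outputs of the paper's models.

    import numpy as np

    def soft_vote(probas, threshold=0.5):
        # probas: list of arrays, one per CNN, each holding per-frame fake probabilities
        mean_p = np.mean(np.stack(probas, axis=0), axis=0)   # average the ensemble members
        return (mean_p >= threshold).astype(int), mean_p

    # Hypothetical outputs of three CNNs on three frames of one video
    p1 = np.array([0.91, 0.12, 0.77])
    p2 = np.array([0.85, 0.30, 0.66])
    p3 = np.array([0.95, 0.05, 0.58])
    labels, confidence = soft_vote([p1, p2, p3])
    print(labels, confidence)   # per-frame decisions plus averaged confidences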
Abstract: Imagine my surprise on buying a copy of Pink Floyd's album The Wall when I entered China six weeks ago, to find that the song name Comfortably Numb had somehow been translated into Come Partably Numb. And my amusement turned to despair once I discovered that only one of the two discs actually worked. Twelve yuan not so well spent. This was my introduction to the cheap but dubious world of China's CD pirates. Since arriving here, the problem of ille-
Funding: Supported in part by the National Natural Science Foundation of China (62271096, U20A20157); the Natural Science Foundation of Chongqing, China (cstc2020jcyj-zdxmX0024, CSTB2022NSCQMSX0600); the University Innovation Research Group of Chongqing (CXQT20017); the Program for Innovation Team Building at Institutions of Higher Education in Chongqing (CXTDX201601020); the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202000626); the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04); the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant KJQN202000626; and the Chongqing Municipal Technology Innovation and Application Development Special Key Project (cstc2020jscx-dxwtBX0053).
Abstract: Cyber-Physical Networks (CPN) are comprehensive systems that integrate information and physical domains, and are widely used in various fields such as online social networking, smart grids, and the Internet of Vehicles (IoV). With the increasing popularity of digital photography and Internet technology, more and more users are sharing images on CPN. However, many images are shared without any privacy processing, exposing hidden privacy risks and making sensitive content easily accessible to Artificial Intelligence (AI) algorithms. Existing image sharing methods lack fine-grained image sharing policies and cannot protect user privacy. To address this issue, we propose a social relationship-driven privacy customization protection model for publishers and co-photographers. We construct a heterogeneous social information network centered on social relationships, introduce a user intimacy evaluation method with time decay, and evaluate privacy levels considering user interest similarity. To protect user privacy while maintaining image appreciation, we design a lightweight face-swapping algorithm based on a Generative Adversarial Network (GAN) to swap faces that need to be protected. Our proposed method minimizes the loss of image utility while satisfying privacy requirements, as shown by extensive theoretical and simulation analyses.
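The abstract only names a "user intimacy evaluation method with time decay"; the sketch below is an assumed form (exponential decay parameterised by a half-life), with the interaction weights and half-life invented for illustration, not taken from the paper.

    import math, time

    def intimacy(interactions, now=None, half_life_days=30.0):
        # interactions: list of (timestamp_in_seconds, weight) pairs between two users
        now = time.time() if now is None else now
        lam = math.log(2) / (half_life_days * 86400)   # decay rate derived from the half-life
        return sum(w * math.exp(-lam * (now - t)) for t, w in interactions)

    now = time.time()
    # a comment 3 days ago (weight 1.0) counts for more than a share 60 days ago (weight 2.0)
    print(intimacy([(now - 3 * 86400, 1.0), (now - 60 * 86400, 2.0)], now=now))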
Abstract: NCPA Children's Opera commission: Effendi & His Double. Date: May 23-June 1, 2025. Venue: National Center for the Performing Arts. The opera tells a story about a figure named Effendi waging a battle of wits against his double in Sun City. He maintains justice by punishing the fake Effendi, who cheats the citizens of Sun City.
Abstract: The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, complicate text analysis, and traditional rule-based analysis methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOC course reviews and ratings while protecting user privacy. The framework leverages the advantages of Convolutional Neural Networks (CNN) in text feature extraction and combines them with differential privacy techniques. It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an accuracy of over 95% in identifying fake reviews on the dataset; this outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores their potential for handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis for striking an optimal balance between privacy protection and framework efficiency.
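The "controlled random noise" added for differential privacy is not specified in the abstract; below is a minimal sketch of the standard Laplace mechanism, with epsilon, sensitivity and the perturbed quantity chosen only for illustration rather than taken from the paper.

    import numpy as np

    def laplace_mechanism(values, sensitivity, epsilon, rng=None):
        # Adds Laplace noise with scale sensitivity/epsilon (epsilon-DP for a single release)
        rng = rng or np.random.default_rng(0)
        return values + rng.laplace(0.0, sensitivity / epsilon, size=np.shape(values))

    # e.g. perturb hypothetical per-course rating counts before they enter training
    counts = np.array([12.0, 3.0, 45.0])
    print(laplace_mechanism(counts, sensitivity=1.0, epsilon=0.5))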
Abstract: This article is part of a larger study called The Abstract Truth of Media, focused on fictional media content and opinions presented and perceived as truth. It explores the abstract nature of truth in online media and its different forms. These media truths are types of fictional stories with certain effects on the public rather than a truthful presentation of the facts. Thus, the end goal of mass media today is not to tell the truth but to create moral communities based on common experience and beliefs. Articles, opinions, and news in the media are seen as a narrative strategy that can be understood only through storytelling analysis. Here the focus is on the understanding of Truth and Untruth in online media, as well as the connection between Internet media technology and the increase of disinformation online. Instead of generating consent for the nation-state, the new online media model creates hostile groups through pseudo-communication, manipulation, delusion, lies, propaganda, and the deliberate provocation of moral anger. "The end of the truth" means that truth on the Internet is lost among the vast amount of information and the lack of regulation regarding the correctness of published data. Instead of truth, media researchers formally talk about post-truth, fake news, and "alternative facts". Truth on the Internet is more like "truthiness", a belief that a statement is true based on the intuition or understanding of individuals, regardless of evidence, logic or facts. The subject of research is the connection between every new technology in mass media, the truth of the information, and the effects on consensus in society. Since the beginning of the 21st century, misinformation on the Internet has increased with the development of online media and social networks, and it is a problem for social peace and consent in every country.
Funding: Funded by Umm Al-Qura University, Saudi Arabia, under grant number 25UQU4300346GSSR05.
Abstract: These days, social media has grown to be an integral part of people's lives. However, it carries the risk of exposure to "fake news", which may contain intentionally or inaccurately false information intended to promote particular political or economic interests. The main objective of this work is to use a co-attention mechanism in a Combined Graph neural network model (CMCG) to capture the relationship between user profile features and user preferences in order to detect fake news, and to examine the influence of various social media features on fake news detection. The proposed approach includes three modules. The first creates a Graph Neural Network (GNN)-based model to learn user profile properties, while the second encodes news content, user historical posts, and news-sharing cascades on social media as a user-preference GNN-based model. The inter-dependencies between user profiles and user preferences are handled by the third module, which uses a co-attention mechanism to capture the relationship between the two GNN-based models. We conducted several experiments on two commonly used fake news datasets, Politifact and Gossipcop, where our approach achieved 98.53% accuracy on the Gossipcop dataset and 96.77% accuracy on the Politifact dataset. These results illustrate the effectiveness of the CMCG approach for fake news detection, as it combines information from different modalities to achieve relatively high performance.
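The paper's co-attention formulation is not given in the abstract; the toy function below shows the general pattern (a shared affinity matrix attended in both directions), with scaled dot-product affinities and random embeddings as assumptions.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def co_attention(P, Q):
        # P: (m, d) profile-side embeddings, Q: (n, d) preference-side embeddings
        A = P @ Q.T / np.sqrt(P.shape[1])     # (m, n) affinity matrix shared by both sides
        P_ctx = softmax(A, axis=1) @ Q        # each profile row summarises the preference side
        Q_ctx = softmax(A.T, axis=1) @ P      # each preference row summarises the profile side
        return P_ctx, Q_ctx

    rng = np.random.default_rng(0)
    P_ctx, Q_ctx = co_attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
    print(P_ctx.shape, Q_ctx.shape)           # (4, 8) (6, 8)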
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFB3104601) and the Big Data Computing Center of Southeast University.
Abstract: Social media has significantly accelerated the rapid dissemination of information, but it also boosts the propagation of fake news, posing serious challenges to public awareness and social stability. In real-world contexts, the volume of trustworthy information far exceeds that of rumors, resulting in a class imbalance that leads models to prioritize the majority class during training. This focus diminishes a model's ability to recognize minority-class samples. Furthermore, models may overfit these minority samples, further compromising their generalization capabilities. Unlike node-level classification tasks, fake news detection in social networks operates on graph-level samples, where traditional interpolation and oversampling methods struggle to generate high-quality graph-level samples. This challenge complicates the identification of new instances of false information. To address this issue, this paper introduces the FHGraph (Fake News Hunting Graph) framework, which employs a generative data augmentation approach and a latent diffusion model to create graph structures that align with news communication patterns. Using the few-shot learning capabilities of large language models (LLMs), the framework generates diverse texts for minority-class nodes. FHGraph comprises a hierarchical multi-view graph contrastive learning module, in which two horizontal views and three vertical levels are utilized for self-supervised learning, resulting in more optimized representations. Experimental results show that FHGraph significantly outperforms state-of-the-art (SOTA) graph-level class-imbalance methods and SOTA graph-level contrastive learning methods. Specifically, FHGraph achieves a 2% increase in F1 Micro and a 2.5% increase in F1 Macro on the PHEME dataset, as well as a 3.5% improvement in F1 Micro and a 4.3% improvement in F1 Macro on the RumorEval dataset.
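The hierarchical multi-view contrastive module is described only at a high level; as a generic stand-in, the InfoNCE objective below shows how agreement between two views of the same graph is typically scored in graph contrastive learning. The temperature and the random embeddings are illustrative assumptions, not FHGraph's actual loss.

    import numpy as np

    def logsumexp(x, axis):
        m = x.max(axis=axis, keepdims=True)
        return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

    def info_nce(z1, z2, tau=0.5):
        # z1, z2: (n, d) L2-normalised embeddings of two views; row i of each is a positive pair
        sim = z1 @ z2.T / tau
        log_probs = sim - logsumexp(sim, axis=1)
        return -np.mean(np.diag(log_probs))   # low when matched views agree more than mismatched ones

    rng = np.random.default_rng(0)
    z = rng.normal(size=(16, 32)); z /= np.linalg.norm(z, axis=1, keepdims=True)
    z2 = z + 0.05 * rng.normal(size=z.shape); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
    print(info_nce(z, z2))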
Funding: Supported by Communication University of China (HG23035) and partly supported by the Fundamental Research Funds for the Central Universities (CUC230A013).
Abstract: With the rapid growth of social media, the spread of fake news has become a growing problem, misleading the public and causing significant harm. As social media content is often composed of both images and text, multimodal approaches to fake news detection have gained significant attention. To solve the problems of previous multimodal fake news detection algorithms, such as insufficient feature extraction and insufficient use of the semantic relations between modalities, this paper proposes the MFFFND-Co (Multimodal Feature Fusion Fake News Detection with Co-Attention Block) model. First, the model deeply explores textual content, image content, and frequency-domain features. Then, it employs a co-attention mechanism for cross-modal fusion. Additionally, a semantic consistency detection module is designed to quantify semantic deviations, thereby enhancing fake news detection performance. Experimentally verified on two commonly used datasets, Twitter and Weibo, the model achieved F1 scores of 90.0% and 94.0%, respectively, significantly outperforming the pre-modified MFFFND (Multimodal Feature Fusion Fake News Detection with Attention Block) model and surpassing other baseline models. This improves the accuracy of detecting fake information in artificial intelligence detection and engineering software detection.
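The abstract mentions frequency-domain features and a semantic consistency module without giving formulas; the sketch below pairs a simple low-frequency 2-D FFT descriptor with cosine similarity as a crude consistency score. Both are assumed stand-ins, not the MFFFND-Co implementation.

    import numpy as np

    def freq_features(gray_img, k=16):
        # log-magnitude of the central (low-frequency) k x k block of the 2-D spectrum
        spec = np.fft.fftshift(np.fft.fft2(gray_img))
        c0, c1 = spec.shape[0] // 2, spec.shape[1] // 2
        block = spec[c0 - k // 2:c0 + k // 2, c1 - k // 2:c1 + k // 2]
        return np.log1p(np.abs(block)).ravel()

    def consistency(text_vec, image_vec):
        # cosine similarity between modality embeddings as a semantic-consistency score
        return float(text_vec @ image_vec /
                     (np.linalg.norm(text_vec) * np.linalg.norm(image_vec)))

    rng = np.random.default_rng(0)
    print(freq_features(rng.random((64, 64))).shape)      # (256,)
    print(consistency(rng.normal(size=32), rng.normal(size=32)))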
Abstract: The search for mechanical properties of materials reached a highly acclaimed level when indentations could be analysed on the basis of elastic theory for hardness and elastic modulus. The mathematical formulas proved to be very complicated, and various trials were published between the 1900s and 2000s. The development of indentation instruments and the wish to make the multi-step application easier led in 1992 to trials with iterations using relative values instead of absolute ones. Excessive computer iterations with 3 + 8 free parameters of the loading and unloading curves became possible and were implemented into the instruments and worldwide standards. The physical formula for hardness was defined as force over area. For the conical, pyramidal, and spherical indenters, one simply calculated the projected area from the indentation depth, adjusted it later by the iterations with respect to fused quartz or aluminium as standard materials, and called it “contact height”. Continuously measured indentation loading curves were formulated as loading force over depth squared. The unloading curves after release of the indenter used the initial steepness of the pressure relief for the calculation of what was (and is) incorrectly called “Young’s modulus”. But it is not unidirectional. And for the spherical indentations’ loading curve, they defined the indentation force over depth raised to 3/2 (but without R/h correction). They till now (2025) violate the energy law, because they use all applied force for the indenter depth and ignore the obvious sidewise force upon indentation (cf., e.g., wood cleaving). The various refinements led to more and more complicated formulas with which one could not reasonably calculate. One decided to use 3 + 8 free-parameter iterations for fitting to the (poor) standards of fused quartz or aluminium. The mechanical values of these were considered to be “true”. This is till now the worldwide standard of DIN-ISO-ASTM-14577, avoiding overcomplicated formulas with their complexity. Some of these are shown in the Introduction Section. By doing so, one avoided the understanding of indentation results on a physical basis. However, we open a simple way to obtain absolute values (though still on the blackbox instrument’s unsuitable force calibration). We do not iterate but calculate algebraically on the basis of the correct, physically deduced exponent of the loading force parabolas, with h^(3/2) instead of the false “h^2” (for the spherical indentation, there is a calotte-radius over depth correction), and we reveal the physical errors taken up in the official worldwide “14577-Standard”. Importantly, we reveal the hitherto fully overlooked phase transitions under load that are not detectable with the false exponent. Phase-transition twinning is even present and falsifies the iteration standards. Instead of elasticity theory, we use the well-defined geometry of these indentations. By doing so, we reach simple algebraically calculable formulas and find the physical indentation hardness of materials with their onset depth, onset force and energy, as well as their phase-transition energy (and, from its temperature dependence, also its activation energy). The most important phase transitions are our absolute algebraically calculated results. The phase transitions under load, now most easily obtained, are very dangerous because they produce polymorph interfaces between the changed and the unchanged material.
It was found and published, by high-enlargement microscopy (5000-fold), that these trouble spots are the sites for the development of stable, 1 to 2 µm long micro-cracks (stable for months). If, however, a force higher than the one that formed them acts on them, they grow and lead to a catastrophic crash. The same applies to turbulence at the pickle forks of airliners. After the publication of these facts, and after three fatal crashes had occurred in short succession, the FAA (Federal Aviation Administration) reacted by rechecking all airplanes for such micro-cracks. These were now found in a new fleet of airliners, from which the three crashed ones came; they had previously been overlooked. The FAA became aware of that risk and grounded 290 of them (certainly all), because their material did not have a higher phase-transition onset and energy than that of other airplanes with better material. They did so despite the 14577-Standard, which does not find (and thus formally forbids) phase transitions under indenter load, owing to the false exponent on the indentation parabola. However, this “Standard” will, despite the present author’s well-founded petition, not be corrected for the next 5 years.
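As a compact restatement of the loading-curve claim above (normal force F, penetration depth h, material- and indenter-dependent prefactor k), the exponent-3/2 relation and a two-regime reading of the phase-transition onset can be written in LaTeX as follows; the piecewise form is only one way to express the "onset depth and onset force" mentioned in the abstract, not the author's exact formalism.

    F = k\,h^{3/2} \qquad \text{(rather than the standard's } F = k\,h^{2}\text{)}

    F(h) =
    \begin{cases}
    k_{1}\,h^{3/2}, & h < h_{\mathrm{onset}} \\
    k_{2}\,h^{3/2} + \Delta F, & h \ge h_{\mathrm{onset}}
    \end{cases}

Plotting F against h^(3/2) therefore gives straight-line branches; the kink where the slope changes from k_1 to k_2 marks the onset depth and onset force of the phase transition under load.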
Abstract: Deep learning is a practical and efficient technique that has been used extensively in many domains. Using deep learning technology, deepfakes create fake images of a person that people cannot distinguish from the real ones. Recently, many researchers have focused on understanding how deepfakes work and on detecting them using deep learning approaches. This paper introduces an explainable deepfake framework for image creation and classification. The framework consists of three main parts: the first, called Instant ID, is used to create deepfake images from the original ones; the second, called Xception, classifies the real and deepfake images; the third, called Local Interpretable Model-agnostic Explanations (LIME), provides a method for interpreting the predictions of any machine learning model in a local and interpretable manner. Our study proposes a deepfake approach that achieves 100% precision and 100% accuracy for deepfake creation and classification. Furthermore, the results highlight the superior performance of the proposed model in deepfake creation and classification.
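The abstract names LIME as the interpretability component; the snippet below shows how the standard lime package is typically invoked on an image classifier. The random image and the placeholder predict_fn stand in for the paper's Xception model and data, so treat this as an assumed usage sketch rather than the paper's code.

    import numpy as np
    from lime import lime_image        # pip install lime

    def predict_fn(images):
        # Placeholder for the real/fake classifier: must return (n_images, n_classes) probabilities
        p_fake = np.random.default_rng(0).random(images.shape[0])
        return np.stack([1.0 - p_fake, p_fake], axis=1)

    explainer = lime_image.LimeImageExplainer()
    image = np.random.default_rng(1).random((96, 96, 3))   # stand-in for a face crop
    explanation = explainer.explain_instance(image, predict_fn, top_labels=1,
                                             hide_color=0, num_samples=200)
    overlay, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                                   positive_only=True, num_features=5)
    # `mask` marks the superpixels that pushed the prediction towards the top label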
Abstract: Fake reviews, also known as deceptive opinions, are used to mislead people and have recently gained more importance. This is due to the rapid increase in online marketing transactions, such as selling and purchasing. E-commerce provides a facility for customers to post reviews and comments about a product or service once purchased. New customers usually go through the posted reviews or comments on the website before making a purchase decision. However, the current challenge is how new individuals can distinguish truthful reviews from fake ones, which later deceive customers, inflict losses, and tarnish the reputation of companies. The present paper attempts to develop an intelligent system that can detect fake reviews on e-commerce platforms using n-grams of the review text and sentiment scores given by the reviewer. The proposed methodology used a standard fake hotel review dataset for experimentation, data preprocessing methods, and a term frequency-inverse document frequency (TF-IDF) approach for extracting features and their representation. For detection and classification, n-grams of review texts were input into the constructed models to be classified as fake or truthful. The experiments were carried out using four different supervised machine-learning techniques, trained and tested on a dataset collected from the Trip Advisor website. The classification results of these experiments showed that naïve Bayes (NB), support vector machine (SVM), adaptive boosting (AB), and random forest (RF) achieved 88%, 93%, 94%, and 95%, respectively, in terms of testing accuracy and the F1-score. The obtained results were compared with existing works that used the same dataset, and the proposed methods outperformed the comparable methods in terms of accuracy.
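A compact scikit-learn sketch of the TF-IDF n-gram pipeline described above; the four toy reviews and their labels are invented placeholders for the Trip Advisor fake-hotel-review data, and LinearSVC stands in for the SVM variant the paper used.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    reviews = ["great stay, friendly staff and clean rooms",
               "room was noisy but the breakfast was decent",
               "absolutely perfect, best hotel ever, must visit now!!!",
               "perfect perfect perfect, unbelievable luxury, book immediately"]
    labels = [0, 0, 1, 1]        # 0 = truthful, 1 = fake (illustrative only)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams
                          LinearSVC())
    model.fit(reviews, labels)
    print(model.predict(["best hotel ever, perfect rooms, book now"]))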
Abstract: Social media is a platform for expressing one's views and opinions freely and has made communication easier than it was before. This also opens up an opportunity for people to spread fake news intentionally. The ease of access to a variety of news sources on the web also brings the problem of people being exposed to fake news and possibly believing such news. This makes it important to detect and flag such content on social media. With the current rate of news generation on social media, it is difficult to differentiate between genuine news and hoaxes without knowing the source of the news. This paper discusses approaches to the detection of fake news using only the features of the news text, without using any other related metadata. We observe that a combination of stylometric features and text-based word vector representations, combined through ensemble methods, can predict fake news with an accuracy of up to 95.49%.
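The stylometric feature set is not enumerated in the abstract; the helper below computes a few typical examples (sentence length, lexical diversity, punctuation and capitalisation rates) that could be concatenated with word-vector features before an ensemble classifier. The exact features are an assumption, not the paper's list.

    import re

    def stylometric_features(text):
        words = re.findall(r"[A-Za-z']+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        n_words = max(len(words), 1)
        return {
            "avg_sentence_len": n_words / max(len(sentences), 1),
            "type_token_ratio": len({w.lower() for w in words}) / n_words,
            "exclamation_rate": text.count("!") / max(len(text), 1),
            "all_caps_rate": sum(w.isupper() for w in words) / n_words,
        }

    print(stylometric_features("SHOCKING!!! You won't believe this. Share it now!"))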
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under Grant Number (120/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4331004DSR32).
Abstract: Nowadays, the usage of social media platforms is rapidly increasing, and rumours or false information are also rising, especially among Arab nations. This false information is harmful to society and individuals, so blocking and detecting the spread of fake news in Arabic becomes critical. Several artificial intelligence (AI) methods, including contemporary transformer techniques such as BERT, have been used to detect fake news; thus, fake news in Arabic is identified by utilizing AI approaches. This article develops a new hunter-prey optimization with hybrid deep learning-based fake news detection (HPOHDL-FND) model for the Arabic corpus. The HPOHDL-FND technique undergoes extensive data pre-processing steps to transform the input data into a useful format. Besides, the HPOHDL-FND technique utilizes a long short-term memory recurrent neural network (LSTM-RNN) model for fake news detection and classification. Finally, the hunter-prey optimization (HPO) algorithm is exploited for optimal modification of the hyperparameters of the LSTM-RNN model. The performance of the HPOHDL-FND technique is validated using two Arabic datasets. The outcomes exemplified better performance than other existing techniques, with maximum accuracies of 96.57% and 93.53% on the Covid19Fakes and satirical datasets, respectively.
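The hunter-prey optimizer is a metaheuristic whose update rules are not given in the abstract, so the loop below substitutes a plain random search over an assumed LSTM-RNN hyperparameter space to illustrate the tuning step; evaluate() is a placeholder that would normally train the network and return validation accuracy.

    import random

    search_space = {"units": [64, 128, 256], "dropout": [0.2, 0.3, 0.5],
                    "learning_rate": [1e-2, 1e-3, 1e-4], "batch_size": [16, 32, 64]}

    def evaluate(config):
        # Placeholder: train the LSTM-RNN with `config` and return validation accuracy
        random.seed(str(sorted(config.items())))   # deterministic dummy score for the demo
        return random.uniform(0.85, 0.97)

    best_cfg, best_acc = None, -1.0
    for _ in range(20):                            # a small trial budget
        cfg = {k: random.choice(v) for k, v in search_space.items()}
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    print(best_cfg, round(best_acc, 4))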
Funding: This work is funded by the National Natural Science Foundation of China (No. 62302540), with author F.F.S. For more information, please visit their website at https://www.nsfc.gov.cn/. Additionally, it is funded by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020), where F.F.S is an author. Further details can be found at http://xt.hnkjt.gov.cn/data/pingtai/. The research is also supported by the Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422); for more information, you can visit https://kjt.henan.gov.cn/2022/09-02/2599082.html. Lastly, it receives funding from the Natural Science Foundation of Zhongyuan University of Technology (No. K2023QN018), where F.F.S is an author. You can find more information at https://www.zut.edu.cn/.
Abstract: As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection mechanisms, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news, which relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) for extracting textual features, while utilizing the pre-trained Visual Geometry Group 19-layer (VGG-19) network to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that our proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. The overall better performance of our model on the Weibo dataset surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks in multimodal fake news detection significantly enhances detection effectiveness in this paper. However, current research is limited to the fusion of only text and image modalities. Future research should aim to further integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.
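The adversarial component that ties features to events is described only in outline; in the event-adversarial line of work it is usually realised with a gradient-reversal layer, so the PyTorch sketch below shows that single building block. It is an assumed illustration of the general technique, not the paper's actual network.

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; negates (and scales) gradients in the backward pass,
        # so shared features are pushed to become uninformative for an event discriminator.
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None

    features = torch.randn(4, 32, requires_grad=True)   # toy fused text+image features
    reversed_feats = GradReverse.apply(features, 1.0)
    reversed_feats.sum().backward()
    print(features.grad[0, :3])                          # all -1: gradients arrive negated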
Abstract: In recent years, social media platforms have gained immense popularity. As a result, there has been a tremendous increase in content on social media platforms. This content can relate to an individual's sentiments, thoughts, stories, advertisements, and news, among many other content types. With the recent increase in online content, the importance of identifying fake and real news has increased. Although there is a lot of work on detecting fake news, a Fuzzy CRNN had not been explored in this direction. In this work, a system is designed to classify fake and real news using fuzzy logic. The initial feature extraction is done using a convolutional recurrent neural network (CRNN). After the extraction of features, word indexing is done with high dimensionality. Then, based on the indexing measures, the ranking process identifies whether news is fake or real. The fuzzy CRNN model is trained to yield outstanding results, with 99.99±0.01% accuracy. This work utilizes three different datasets (LIAR, LIAR-PLUS, and ISOT) to find the most accurate model.
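The fuzzy-logic stage is not detailed in the abstract; as a toy illustration of how a CRNN "fakeness" score could be mapped to fuzzy memberships and then to a crisp label, the sketch below uses triangular membership functions with arbitrarily chosen breakpoints.

    def tri(x, a, b, c):
        # triangular membership function rising from a, peaking at b, falling to c
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_label(score):
        memberships = {"real": tri(score, -0.5, 0.0, 0.6),
                       "uncertain": tri(score, 0.3, 0.5, 0.7),
                       "fake": tri(score, 0.4, 1.0, 1.5)}
        return max(memberships, key=memberships.get), memberships

    print(fuzzy_label(0.82))    # ('fake', {...}) for a high fakeness score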