Abstract: In response to the problem of traditional methods ignoring audio-modality tampering, this study explores an effective deepfake video detection technique that improves detection precision and reliability by fusing lip images and audio signals. The main method is lip-audio matching detection based on a Siamese neural network, combined with band-pass-filtered MFCC (Mel-Frequency Cepstral Coefficient) feature extraction, an improved dual-branch Siamese network structure, and a two-stream network design. First, the video stream is preprocessed to extract lip images, and the audio stream is preprocessed to extract MFCC features. These features are then processed separately through the two branches of the Siamese network. Finally, the model is trained and optimized through fully connected layers and loss functions. Experimental results show that the model's testing accuracy on the LRW (Lip Reading in the Wild) dataset reaches 92.3%, the recall rate 94.3%, and the F1 score 93.3%, significantly better than the results of CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) baselines. In the validation of multi-resolution image streams, the dual-resolution image stream reaches a peak accuracy of 94%. Band-pass filters effectively improve the signal-to-noise ratio of deepfake video detection when processing different types of audio signals. The model's real-time processing performance is also strong, and it achieved an average rating of up to 5 in user research. These data demonstrate that the proposed method can effectively fuse visual and audio information in deepfake video detection and accurately identify inconsistencies between video and audio, verifying the effectiveness of lip-audio modality fusion in improving detection performance.
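The band-pass-filtered MFCC front end described in this abstract can be sketched compactly. The following is a minimal, illustrative implementation, not the authors' code; the sampling rate, frame sizes, filter count, and the 300–3400 Hz band-pass range are assumed values chosen for illustration. Restricting the mel filterbank to [fmin, fmax] is what plays the band-pass role here.

```python
import numpy as np

def dct2_ortho(x):
    """Orthonormal DCT-II along the last axis (the final cepstral step)."""
    N = x.shape[-1]
    n = np.arange(N)
    basis = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)  # (k, n)
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return (x @ basis.T) * scale

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13, fmin=300.0, fmax=3400.0):
    """MFCCs whose mel filterbank is restricted to [fmin, fmax],
    acting as the band-pass stage described in the abstract."""
    # Pre-emphasis boosts high frequencies before framing
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    n_frames = 1 + (len(sig) - frame_len) // hop
    frames = np.stack([sig[i * hop: i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft  # power spectrum
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz2mel(fmin), hz2mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):  # triangular mel filters
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    energies = np.maximum(power @ fbank.T, 1e-10)  # floor avoids log(0)
    return dct2_ortho(np.log(energies))[:, :n_ceps]
```

One second of 16 kHz audio with these settings yields a (98, 13) feature matrix, which would feed the audio branch of the Siamese network.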
Funding: Supported by the National Natural Science Foundation of China (Grant No. 72334003), the National Key Research and Development Program of China (Grant No. 2022YFB2702804), the Shandong Key Research and Development Program (Grant No. 2020ZLYS09), and the Jinan Program (Grant No. 2021GXRC084-2).
Abstract: With the continuous advancement of unmanned technology across application domains, the development and deployment of blind-spot-free panoramic video systems have gained increasing importance. Such systems are particularly critical in battlefield environments, where advanced panoramic video processing and wireless communication technologies are essential to enable remote control and autonomous operation of unmanned ground vehicles (UGVs). However, conventional video surveillance systems suffer from several limitations, including a limited field of view, high processing latency, low reliability, excessive resource consumption, and significant transmission delays. These shortcomings impede the widespread adoption of UGVs in battlefield settings. To overcome these challenges, this paper proposes a novel multi-channel video capture and stitching system designed for real-time video processing. The system integrates the Speeded-Up Robust Features (SURF) algorithm and the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm to execute essential operations such as feature detection, descriptor computation, image matching, homography estimation, and seamless image fusion. The fused panoramic video is then encoded and assembled to produce a seamless output free of stitching artifacts and shadows. Furthermore, H.264 video compression is employed to reduce the data size of the video stream without sacrificing visual quality. Using the Real-Time Streaming Protocol (RTSP), the compressed stream is transmitted efficiently, supporting real-time remote monitoring and control of UGVs in dynamic battlefield environments. Experimental results indicate that the proposed system achieves high stability, flexibility, and low latency. With a wireless link latency of 30 ms, the end-to-end video transmission latency remains around 140 ms, enabling smooth video communication. The system can tolerate packet loss rates (PLR) of up to 20% while maintaining usable video quality (with latency around 200 ms). These properties make it well suited for mobile communication scenarios demanding high real-time video performance.
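The homography-estimation step at the heart of the stitching pipeline can be illustrated without OpenCV. Below is a minimal Direct Linear Transform (DLT) sketch, assuming noise-free correspondences of the kind SURF + FLANN matching would supply; a production stitcher would wrap this in RANSAC to reject mismatches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: solve for the 3x3 H mapping src -> dst
    from at least four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)   # null-space vector = flattened H
    return H / H[2, 2]         # fix the projective scale

def warp(H, pts):
    """Apply H to an (N, 2) array of points via homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

Given matched keypoints from two camera channels, `warp` then maps one image's coordinates into the other's frame, which is the basis for the seamless fusion step.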
Funding: Supported by the Cultivation Program for Major Scientific Research Projects of Harbin Institute of Technology (ZDXMPY20180109).
Abstract: Scalable simulation leveraging real-world data plays an essential role in advancing autonomous driving, owing to its efficiency and applicability in both training and evaluating algorithms. Consequently, there has been increasing attention on generating highly realistic and consistent driving videos, particularly those involving viewpoint changes guided by the control commands or trajectories of ego vehicles. However, current reconstruction approaches, such as Neural Radiance Fields and 3D Gaussian Splatting, frequently suffer from limited generalization and depend on substantial input data. Meanwhile, 2D generative models, though capable of producing unseen scenes, still leave room for improvement in coherence and visual realism. To overcome these challenges, we introduce GenScene, a world model that synthesizes front-view driving videos conditioned on trajectories. A new temporal module improves video consistency by extracting the global context of each frame, computing relationships among frames from these global representations, and fusing frame contexts accordingly. Moreover, we propose an attention mechanism that relates the pixels within each frame to the pixels in the corresponding window of the initial frame. Extensive experiments show that our approach surpasses various state-of-the-art models in driving video generation, and the introduced modules contribute significantly to model performance. This work establishes a new paradigm for goal-oriented video synthesis in autonomous driving, facilitating on-demand simulation to expedite algorithm development.
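The temporal module described here, per-frame global context, frame-to-frame affinities, then context fusion, can be sketched as plain scaled dot-product attention over frame-level vectors. This is one interpretation of the abstract's description, not the GenScene implementation; the tensor shapes and mean-pooling choice are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_fuse(frames):
    """frames: (T, N, C) token maps for T frames of N tokens each.
    1) pool a global context vector per frame,
    2) score frame-to-frame affinities from those vectors,
    3) fuse contexts across frames and add them back to every token."""
    T, N, C = frames.shape
    g = frames.mean(axis=1)                        # (T, C) global contexts
    attn = softmax(g @ g.T / np.sqrt(C), axis=-1)  # (T, T) frame affinities
    fused = attn @ g                               # (T, C) fused contexts
    return frames + fused[:, None, :]              # broadcast back to tokens
```

Because attention is computed over T global vectors rather than all T·N tokens, this style of module keeps the cross-frame consistency cost small relative to full spatio-temporal attention.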
Abstract: Background: This study investigates the mechanisms linking parental marital conflict and adolescent short-video dependence by constructing a chain mediation model, focusing on the mediating roles of experiential avoidance and emotional disturbance (anxiety, depression, and stress). Methods: Conducted in January 2025, the research recruited 4125 adolescents from multiple Chinese provinces through convenience sampling; after data cleaning, 3957 valid participants (1959 males, 1998 females) were included. Using a cross-sectional design, measures included parental marital conflict, experiential avoidance, anxiety, depression, stress, and short-video dependence. Results: Pearson correlation analysis revealed significant positive correlations among all variables. Mediation analysis using the SPSS PROCESS macro showed that parental marital conflict directly predicted short-video dependence (β = 0.269, p < 0.001), and also significantly predicted experiential avoidance (β = 0.519, p < 0.001), anxiety (β = 0.072, p < 0.001), depression (β = 0.067, p < 0.001), and stress (β = 0.048, p < 0.05). Experiential avoidance further predicted anxiety (β = 0.521, p < 0.001), depression (β = 0.489, p < 0.001), stress (β = 0.408, p < 0.001), and short-video dependence (β = 0.244, p < 0.001). While both anxiety (β = 0.050, p < 0.05) and depression (β = 0.116, p < 0.001) positively predicted short-video dependence, stress did not (β = 0.019, p = 0.257). Overall, experiential avoidance, anxiety, depression, and stress significantly mediated the relationship between parental marital conflict and short-video dependence. Conclusion: These findings confirm that parental marital conflict not only directly influences adolescent short-video dependence but also operates through a chain mediation pathway involving experiential avoidance and emotional disturbance, highlighting central psychological mechanisms and providing theoretical support for integrated mental health and behavioral interventions.
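The mediated effects reported above are products of regression path coefficients (a × b), typically tested with percentile bootstrap confidence intervals as in the SPSS PROCESS macro. A minimal sketch of that logic on synthetic data follows; the variable names and effect sizes are invented for illustration and are not the study's data.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients for y ~ X (X already includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """a*b: a is the x -> m slope; b is the m -> y slope controlling for x."""
    ones = np.ones_like(x)
    a = ols(np.column_stack([ones, x]), m)[1]
    b = ols(np.column_stack([ones, x, m]), y)[2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        stats[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

If the bootstrap interval excludes zero, the indirect path (e.g., conflict → avoidance → dependence) is judged significant, which is the criterion behind the mediation claims above.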
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false-alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrain. To address these challenges, this study proposes a novel forest fire detection model using audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally affected by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4 ± 1.6% accuracy, 91.2 ± 1.8% F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared with traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
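For reference, precision and recall determine F1 as their harmonic mean; a quick sketch of the standard confusion-matrix metrics (an illustrative helper, not the study's evaluation code):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 (their harmonic mean)
    from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Plugging the reported 98.44% precision and 88.32% recall into the harmonic mean gives an F1 near 93.1%, in line with the reported 93.04% up to rounding and averaging convention.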
Funding: Funded by the Guangxi Philosophy and Social Science Research Project, Grant Number 24XWC002.
Abstract: Background: In the Chinese context, the impact of short-video applications on the psychological well-being of older adults is contested. While often examined through a pathological lens of addiction, this perspective may overlook paradoxical, context-dependent positive outcomes. The main objective of this study is therefore to challenge the traditional Compensatory Internet Use Theory by proposing and testing a chained mediation model that explores a paradoxical pathway from social support to life satisfaction via problematic social media use. Methods: Data were collected between July and August 2025 via the Credamo online survey platform, yielding 384 valid responses from Chinese older adults aged 60 and above. Key constructs were assessed using the Social Support Rating Scale (SSRS), the Bergen Social Media Addiction Scale (BSMAS), the Simplified UCLA Loneliness Scale, and the Satisfaction with Life Scale (SWLS). A chained mediation model was tested using stepwise regression and non-parametric bootstrapping (5000 resamples), controlling for age, gender, household income, and health status. Results: The analysis revealed a paradoxical pathway, clarified by a key statistical suppression effect. Social support significantly and positively predicted problematic usage (β = 0.157, p = 0.002). After controlling for the suppressor effect of social support, problematic usage in turn negatively predicted social connectedness (β = −0.177, p < 0.001). Finally, social connectedness negatively predicted life satisfaction (β = −0.227, p < 0.001); that is, reduced connectedness, reflecting a state of solitude, was associated with higher life satisfaction. Conclusion: The findings suggest that for older adults with sufficient offline social support, these resources may serve a "social empowerment" function. This empowerment allows behaviors measured as "problematic usage" to be theoretically reframed as a form of "deep immersive entertainment". This immersion appears to occur alongside a state of "high-quality solitude", which ultimately is associated with higher life satisfaction. This study provides a novel, non-pathological theoretical perspective on the consequences of high engagement with emerging social media, offering empirical grounds for non-abstinence-based intervention strategies.
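The suppression effect invoked above, where a predictor's apparent effect changes sign once the suppressor is controlled, can be demonstrated on synthetic data. This is a generic illustration of statistical suppression, not a reanalysis of the study's data; every coefficient below is invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

def slope_of(y, *predictors):
    """OLS coefficient of the FIRST predictor, with an intercept."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Synthetic structure: support boosts usage; usage lowers the outcome,
# but support raises it, so support acts as a suppressor of usage's effect.
support = rng.standard_normal(n)
usage = 0.7 * support + rng.standard_normal(n)
outcome = 0.8 * support - 0.3 * usage + 0.7 * rng.standard_normal(n)

naive = slope_of(outcome, usage)             # usage alone: appears positive
partial = slope_of(outcome, usage, support)  # controlling the suppressor: negative
```

The naive regression of the outcome on usage yields a positive slope, while adding the suppressor flips it negative, mirroring how the study's "paradoxical pathway" only emerges after social support is controlled.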