Abstract: In response to the problem of traditional methods ignoring audio modality tampering, this study aims to explore an effective deep forgery video detection technique that improves detection precision and reliability by fusing lip images and audio signals. The main method is lip-audio matching detection based on a Siamese neural network, combined with band-pass-filtered MFCC (Mel-Frequency Cepstral Coefficient) feature extraction, an improved dual-branch Siamese network structure, and a two-stream network design. First, the video stream is preprocessed to extract lip images, and the audio stream is preprocessed to extract MFCC features. These features are then processed separately by the two branches of the Siamese network. Finally, the model is trained and optimized through fully connected layers and loss functions. The experimental results show that the testing accuracy of the model on the LRW (Lip Reading in the Wild) dataset reaches 92.3%, the recall rate 94.3%, and the F1 score 93.3%, significantly better than the results of CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) models. In the validation of multi-resolution image streams, the dual-resolution image stream reaches the highest accuracy of 94%. Band-pass filters effectively improve the signal-to-noise ratio in deep forgery video detection when processing different types of audio signals. The model also shows strong real-time processing performance and achieves an average score of up to 5 in the user study. These data demonstrate that the proposed method can effectively fuse visual and audio information in deep forgery video detection and accurately identify inconsistencies between video and audio, verifying the effectiveness of lip-audio modality fusion in improving detection performance.
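Below is a minimal sketch of the kind of dual-branch lip-audio matching model the abstract describes, written with PyTorch, librosa, and SciPy. The band-pass cutoffs, MFCC settings, layer sizes, and the GRU audio branch are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch of a dual-branch lip-audio matching model
# (hypothetical hyperparameters; not the paper's exact architecture).
import librosa
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

def bandpass_mfcc(wav_path, low_hz=300.0, high_hz=3400.0, n_mfcc=13):
    """Band-pass filter the waveform (assumed speech band), then compute MFCCs."""
    y, sr = librosa.load(wav_path, sr=16000)
    b, a = butter(4, [low_hz / (sr / 2), high_hz / (sr / 2)], btype="band")
    y = filtfilt(b, a, y)
    return librosa.feature.mfcc(y=y.astype(np.float32), sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, T)

class LipAudioSiamese(nn.Module):
    def __init__(self, n_mfcc=13, embed_dim=128):
        super().__init__()
        # Visual branch: small CNN over grayscale lip crops (1 x 64 x 64 per frame).
        self.visual = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
        )
        # Audio branch: GRU over the MFCC sequence.
        self.audio = nn.GRU(input_size=n_mfcc, hidden_size=embed_dim, batch_first=True)
        # Fusion head: decide whether the lip and audio streams match.
        self.head = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, lip_frames, mfcc_seq):
        # lip_frames: (B, T, 1, 64, 64) -> average frame embeddings over time.
        B, T = lip_frames.shape[:2]
        v = self.visual(lip_frames.flatten(0, 1)).view(B, T, -1).mean(dim=1)
        _, h = self.audio(mfcc_seq)            # mfcc_seq: (B, T_audio, n_mfcc)
        a = h[-1]
        return self.head(torch.cat([v, a], dim=-1))   # logit: match vs. mismatch

# Training would pair each video's lip frames with matching and mismatched audio
# and optimize nn.BCEWithLogitsLoss on the logits.
```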
Funding: Supported by UKRI (EP/Z000025/1) and the Horizon Europe Programme under the MSCA grant for the ACMod project (101130271).
Abstract: The exponential growth of video content has driven significant advancements in video summarization techniques in recent years. Breakthroughs in deep learning have been particularly transformative, enabling more effective detection of key information and creating new possibilities for video synopsis. To summarize recent progress and accelerate research in this field, this paper provides a comprehensive review of deep learning-based video summarization methods developed over the past decade. We begin by examining the research landscape of video abstraction technologies and identifying core challenges in video summarization. Subsequently, we systematically analyze prevailing deep learning frameworks and methodologies employed in current video summarization systems, offering researchers a clear roadmap of the field's evolution. Unlike previous review works, we first classify research papers based on the structural hierarchy of the video (from frame-level to shot-level to video-level), then further categorize them according to the summary backbone model (feature extraction and spatiotemporal modeling). This approach provides a more systematic and hierarchical organization of the literature. Following this comprehensive review, we summarize the benchmark datasets and evaluation metrics commonly employed in the field. Finally, we analyze persistent challenges and propose insightful directions for future research, providing a forward-looking perspective on video summarization technologies. This systematic literature review is of great reference value to new researchers exploring the fields of deep learning and video summarization.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 72334003), the National Key Research and Development Program of China (Grant No. 2022YFB2702804), the Shandong Key Research and Development Program (Grant No. 2020ZLYS09), and the Jinan Program (Grant No. 2021GXRC084-2).
Abstract: With the continuous advancement of unmanned technology in various application domains, the development and deployment of blind-spot-free panoramic video systems have gained increasing importance. Such systems are particularly critical in battlefield environments, where advanced panoramic video processing and wireless communication technologies are essential to enable remote control and autonomous operation of unmanned ground vehicles (UGVs). However, conventional video surveillance systems suffer from several limitations, including limited field of view, high processing latency, low reliability, excessive resource consumption, and significant transmission delays. These shortcomings impede the widespread adoption of UGVs in battlefield settings. To overcome these challenges, this paper proposes a novel multi-channel video capture and stitching system designed for real-time video processing. The system integrates the Speeded-Up Robust Features (SURF) algorithm and the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm to execute essential operations such as feature detection, descriptor computation, image matching, homography estimation, and seamless image fusion. The fused panoramic video is then encoded and assembled to produce a seamless output devoid of stitching artifacts and shadows. Furthermore, H.264 video compression is employed to reduce the data size of the video stream without sacrificing visual quality. Using the Real-Time Streaming Protocol (RTSP), the compressed stream is transmitted efficiently, supporting real-time remote monitoring and control of UGVs in dynamic battlefield environments. Experimental results indicate that the proposed system achieves high stability, flexibility, and low latency. With a wireless link latency of 30 ms, the end-to-end video transmission latency remains around 140 ms, enabling smooth video communication. The system can tolerate packet loss rates (PLR) of up to 20% while maintaining usable video quality (with latency around 200 ms). These properties make it well-suited for mobile communication scenarios demanding high real-time video performance.
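The feature-matching and homography step of such a stitching pipeline can be sketched with OpenCV as below. Note that SURF ships in opencv-contrib and requires a build with the nonfree module enabled; the Hessian threshold, ratio-test value, and the naive overwrite blending are illustrative assumptions, not the system's actual settings.

```python
# Sketch of the feature-matching and homography step of a two-camera stitch
# (OpenCV; parameter values are illustrative). SURF lives in opencv-contrib
# and requires a build with the nonfree module; ORB is a drop-in fallback.
import cv2
import numpy as np

def estimate_homography(img_left, img_right, hessian_threshold=400, ratio=0.7):
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    surf = cv2.xfeatures2d.SURF_create(hessian_threshold)
    kp_l, des_l = surf.detectAndCompute(gray_l, None)
    kp_r, des_r = surf.detectAndCompute(gray_r, None)

    # FLANN with KD-trees for float descriptors, plus Lowe's ratio test.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des_r, des_l, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # maps right-camera pixels into the left-camera frame

def stitch(img_left, img_right, H):
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[:h, :w] = img_left  # naive overwrite; real systems blend the seam
    return canvas
```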
Funding: Supported by the Cultivation Program for Major Scientific Research Projects of Harbin Institute of Technology (ZDXMPY20180109).
Abstract: Scalable simulation leveraging real-world data plays an essential role in advancing autonomous driving, owing to its efficiency and applicability in both training and evaluating algorithms. Consequently, there has been increasing attention on generating highly realistic and consistent driving videos, particularly those involving viewpoint changes guided by the control commands or trajectories of ego vehicles. However, current reconstruction approaches, such as Neural Radiance Fields and 3D Gaussian Splatting, frequently suffer from limited generalization and depend on substantial input data. Meanwhile, 2D generative models, though capable of producing unknown scenes, still have room for improvement in terms of coherence and visual realism. To overcome these challenges, we introduce GenScene, a world model that synthesizes front-view driving videos conditioned on trajectories. A new temporal module is presented to improve video consistency by extracting the global context of each frame, computing relationships among frames using these global representations, and fusing frame contexts accordingly. Moreover, we propose an innovative attention mechanism that computes relations between pixels within each frame and pixels in the corresponding window of the initial frame. Extensive experiments show that our approach surpasses various state-of-the-art models in driving video generation, and the introduced modules contribute significantly to model performance. This work establishes a new paradigm for goal-oriented video synthesis in autonomous driving, which facilitates on-demand simulation to expedite algorithm development.
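A minimal sketch of a temporal module in the spirit described above is given below: each frame's feature map is pooled into a global context token, attention relates the frames through those tokens, and the fused context is broadcast back onto the per-frame features. The shapes and layer choices are assumptions for illustration, not GenScene's actual design.

```python
# Minimal sketch of a frame-context temporal module: pool each frame to a
# global token, attend across frames, and broadcast the fused context back.
# Shapes and layers are assumptions, not the model's actual design.
import torch
import torch.nn as nn

class FrameContextTemporalModule(nn.Module):
    def __init__(self, channels=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (B, T, C, H, W) per-frame feature maps from the video backbone.
        B, T, C, H, W = x.shape
        tokens = x.mean(dim=(-2, -1))                 # (B, T, C) global context per frame
        ctx, _ = self.attn(tokens, tokens, tokens)    # relate frames to each other
        ctx = self.norm(tokens + ctx)                 # fused per-frame context
        # Broadcast the fused context back onto each frame's feature map.
        return x + ctx.view(B, T, C, 1, 1)

# Example: fuse temporal context into the features of an 8-frame clip.
feats = torch.randn(2, 8, 256, 32, 32)
out = FrameContextTemporalModule()(feats)             # same shape as feats
```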
Funding: Supported, in part, by the National Natural Science Foundation of China under Grants 62272236 and 62376128, and, in part, by the Natural Science Foundation of Jiangsu Province under Grants BK20201136 and BK20191401.
Abstract: Video emotion recognition is widely used because it aligns with the temporal characteristics of human emotional expression, but existing models have significant shortcomings. On the one hand, Transformer multi-head self-attention modeling of global temporal dependency suffers from high computational overhead and feature similarity. On the other hand, fixed-size convolution kernels are often used, which have weak perception ability for emotional regions of different scales. Therefore, this paper proposes a video emotion recognition model that combines multi-scale region-aware convolution with temporal interactive sampling. In the spatial dimension, multi-branch large-kernel stripe convolution is used to perceive emotional region features at different scales, and attention weights are generated for each scale's features. In the temporal dimension, multi-layer odd-even down-sampling is applied to the time series, and odd-even sub-sequence interaction is performed to alleviate feature similarity while reducing computational cost, owing to the linear relationship between sampling and convolution overhead. The model was evaluated on CMU-MOSI, CMU-MOSEI, and Hume-Reaction, where Acc-2 reached 83.4%, 85.2%, and 81.2%, respectively. The experimental results show that the model can significantly improve the accuracy of emotion recognition.
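The odd-even temporal sampling idea can be sketched as follows: the feature sequence is split into odd and even sub-sequences, each sub-sequence is updated using the other, and stacking the step halves the temporal length per level. The convolutional interaction used here is an assumption for illustration, not the paper's exact design.

```python
# Rough sketch of odd-even temporal sampling with sub-sequence interaction.
# Layer choices are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class OddEvenInteraction(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # 1D convolutions that let each sub-sequence modulate the other.
        self.even_from_odd = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.odd_from_even = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (B, T, C) temporal features; T assumed even for simplicity.
        even, odd = x[:, 0::2], x[:, 1::2]                         # (B, T/2, C) each
        even_t, odd_t = even.transpose(1, 2), odd.transpose(1, 2)  # (B, C, T/2)
        even = even + self.even_from_odd(odd_t).transpose(1, 2)    # interact
        odd = odd + self.odd_from_even(even_t).transpose(1, 2)
        return even, odd                                           # halved-length sub-sequences

# Stacking the module halves the temporal length at every level, which is what
# keeps the cost roughly linear in the sequence length.
feats = torch.randn(4, 32, 128)
even, odd = OddEvenInteraction(128)(feats)                         # each (4, 16, 128)
```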
Funding: Supported by the National Natural Science Foundation of China (62403345), the Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology (2024B1212010006), and the Shanxi Provincial Department of Science and Technology Basic Research Project (202403021212174, 202403021221074).
Abstract: Audio-visual speaker tracking aims to determine the locations of multiple speakers in a scene by leveraging signals captured from multisensor platforms. Multimodal fusion methods can improve both the accuracy and robustness of speaker tracking. However, in complex multispeaker tracking scenarios, critical challenges such as cross-modal feature discrepancy, ambiguity in weak sound source localisation, and frequent identity switch errors remain unresolved, which severely hinder the modelling of speaker identity consistency and consequently lead to degraded tracking accuracy and unstable tracking trajectories. To this end, this paper proposes a multimodal multispeaker tracking network using audio-visual contrastive learning (AVCLNet). AVCLNet integrates heterogeneous modal representations into a unified space through audio-visual contrastive learning, which facilitates cross-modal feature alignment, mitigates cross-modal feature bias, and enhances identity-consistent representations. In the audio-visual measurement stage, we design a vision-guided weighted enhancement method for weak sound sources, which leverages visual cues to establish cross-modal mappings and employs a spatiotemporal dynamic weighting mechanism to improve the detectability of weak sound sources. Furthermore, in the data association phase, a dual geometric constraint strategy is introduced by combining 2D and 3D spatial geometric information, reducing frequent identity switch errors. Experiments on the AV16.3 and CAV3D datasets show that AVCLNet outperforms state-of-the-art methods, demonstrating superior robustness in multispeaker scenarios.
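The audio-visual contrastive learning objective named in the abstract can be illustrated with a generic InfoNCE-style loss that pulls matched audio and visual embeddings of the same speaker together in a shared space and pushes mismatched pairs apart; the embedding dimensionality and temperature below are assumptions, and AVCLNet's actual formulation may differ.

```python
# Generic InfoNCE-style audio-visual contrastive loss: matched audio/visual
# pairs are pulled together in a shared space, unmatched pairs pushed apart.
# The temperature and embedding size are assumptions for illustration.
import torch
import torch.nn.functional as F

def audio_visual_contrastive_loss(audio_emb, visual_emb, temperature=0.07):
    """audio_emb, visual_emb: (B, D) embeddings for B matched audio-visual pairs."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature          # (B, B) cross-modal similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: audio->visual and visual->audio retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for the two modality encoders.
loss = audio_visual_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```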
Abstract: Background: This study aims to investigate the underlying mechanisms between parental marital conflict and adolescent short video dependence by constructing a chain mediation model, focusing on the mediating roles of experiential avoidance and emotional disturbance (anxiety, depression, and stress). Methods: Conducted in January 2025, the research recruited 4125 adolescents from multiple Chinese provinces through convenience sampling; after data cleaning, 3957 valid participants (1959 males, 1998 females) were included. Using a cross-sectional design, measures included parental marital conflict, experiential avoidance, anxiety, depression, stress, and short video dependence. Results: Pearson correlation analysis revealed significant positive correlations among all variables. Mediation analysis using the SPSS PROCESS macro showed that parental marital conflict directly predicted short video dependence (β = 0.269, p < 0.001), and also significantly predicted experiential avoidance (β = 0.519, p < 0.001), anxiety (β = 0.072, p < 0.001), depression (β = 0.067, p < 0.001), and stress (β = 0.048, p < 0.05). Experiential avoidance further predicted anxiety (β = 0.521, p < 0.001), depression (β = 0.489, p < 0.001), stress (β = 0.408, p < 0.001), and short video dependence (β = 0.244, p < 0.001). While both anxiety (β = 0.050, p < 0.05) and depression (β = 0.116, p < 0.001) positively predicted short video dependence, stress did not (β = 0.019, p = 0.257). Overall, experiential avoidance, anxiety, depression, and stress significantly mediated the relationship between parental marital conflict and short video dependence. Conclusion: These findings confirm that parental marital conflict not only directly influences adolescent short video dependence but also operates through a chain mediation pathway involving experiential avoidance and emotional disturbance, highlighting central psychological mechanisms and providing theoretical support for integrated mental health and behavioral interventions.
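For readers unfamiliar with the PROCESS-style test, the sketch below illustrates how a single indirect effect (for example, conflict through experiential avoidance to dependence) is estimated with two OLS regressions per bootstrap resample and a percentile confidence interval; the full chain model adds further equations, and the column names here are hypothetical.

```python
# Simplified illustration of a bootstrap test of one indirect effect:
# two OLS regressions per resample, indirect effect = a * b, percentile CI.
# Column names are hypothetical; df is a pandas DataFrame of scale scores.
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect(df, x="conflict", m="avoidance", y="dependence",
                       n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        s = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        a = sm.OLS(s[m], sm.add_constant(s[x])).fit().params[x]       # X -> M
        b = sm.OLS(s[y], sm.add_constant(s[[x, m]])).fit().params[m]  # M -> Y given X
        effects[i] = a * b                                            # indirect effect
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return effects.mean(), (lo, hi)   # the effect is significant if the 95% CI excludes zero
```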
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
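A minimal sketch of the described audio pipeline, using librosa for log-Mel spectrograms and a small PyTorch CNN for fire/no-fire classification, is shown below; the sample rate, Mel-band count, clip length, and layer sizes are assumptions, not the study's actual hyperparameters.

```python
# Minimal sketch of an audio fire-detection pipeline: log-Mel spectrogram
# extraction followed by a small CNN classifier. Hyperparameters are assumed.
import librosa
import numpy as np
import torch
import torch.nn as nn

def log_mel(wav_path, sr=22050, n_mels=64, duration=5.0):
    y, sr = librosa.load(wav_path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)      # (n_mels, frames)

class FireSoundCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spec):
        # spec: (B, 1, n_mels, frames) log-Mel spectrograms
        return self.classifier(self.features(spec))

# Usage: stack log-Mel spectrograms into a batch and train with CrossEntropyLoss.
spec = torch.randn(4, 1, 64, 216)      # e.g., four 5-second clips
logits = FireSoundCNN()(spec)          # (4, 2)
```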
Funding: Funded by the Guangxi Philosophy and Social Science Research Project, grant number 24XWC002.
Abstract: Background: In the Chinese context, the impact of short video applications on the psychological well-being of older adults is contested. While often examined through a pathological lens of addiction, this perspective may overlook paradoxical, context-dependent positive outcomes. Therefore, the main objective of this study is to challenge the traditional Compensatory Internet Use Theory by proposing and testing a chained mediation model that explores a paradoxical pathway from social support to life satisfaction via problematic social media use. Methods: Data were collected between July and August 2025 via the Credamo online survey platform, yielding 384 valid responses from Chinese older adults aged 60 and above. Key constructs were assessed using the Social Support Rating Scale (SSRS), Bergen Social Media Addiction Scale (BSMAS), Simplified UCLA Loneliness Scale, and Satisfaction with Life Scale (SWLS). A chained mediation model was tested using stepwise regression and non-parametric bootstrapping (5000 resamples), controlling for age, gender, household income, and health status. Results: The analysis revealed a paradoxical pathway, which was clarified by a key statistical suppression effect. Social support significantly and positively predicted problematic usage (β = 0.157, p = 0.002). After controlling for the suppressor effect of social support, problematic usage in turn negatively predicted social connectedness (β = −0.177, p < 0.001). Finally, reduced social connectedness, reflecting a state of solitude, predicted higher life satisfaction (β = −0.227, p < 0.001). Conclusion: The findings suggest that for older adults with sufficient offline social support, these resources may serve a "social empowerment" function. This empowerment allows behaviors measured as "problematic usage" to be theoretically reframed as a form of "deep immersive entertainment". This immersion appears to occur alongside a state of "high-quality solitude", which ultimately is associated with higher life satisfaction. This study provides a novel, non-pathological theoretical perspective on the consequences of high engagement with emerging social media, offering empirical grounds for non-abstinence-based intervention strategies.