Funding: Supported by the State Key Program of the National Natural Science Foundation of China (No. 61233003) and the National Natural Science Foundation of China (No. 61503358).
Abstract: The rapid progress of cloud technology has attracted a growing number of video providers to consider deploying their streaming services onto cloud platforms for more cost-effective, scalable, and reliable performance. In this paper, we use a Markov decision process (MDP) model to formulate the dynamic deployment of cloud-based video services over multiple geographically distributed datacenters. We focus on maximizing the video service provider's long-run average profit and introduce an average performance criterion that jointly reflects cost and user experience. We develop an optimal algorithm based on sensitivity analysis and sample-based policy iteration to obtain the optimal video placement and request dispatching strategy. We prove the optimality of our algorithm and discuss its practical feasibility. Simulation results show that our strategy effectively reduces the total cost while guaranteeing users' quality of experience (QoE).
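As a rough illustration of the average-reward policy iteration idea described above, the sketch below runs gain-bias policy iteration on a tiny, made-up placement MDP. The states, actions, transition probabilities, and rewards are invented for illustration only; they are not the paper's model, which additionally relies on sensitivity analysis and sample-based evaluation.

```python
import numpy as np

# Toy MDP: 3 placement states x 2 dispatch actions, invented numbers.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]],
    [[0.5, 0.4, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]],
])
R = np.array([  # R[a, s] expected one-step profit (revenue minus cost)
    [4.0, 2.0, 1.0],
    [3.0, 5.0, 2.5],
])
n_states = 3
policy = np.zeros(n_states, dtype=int)

for _ in range(50):
    # Policy evaluation: solve h(s) + g = R(s, pi(s)) + sum_s' P(s'|s) h(s'),
    # with the bias anchored at h(0) = 0 so the linear system is well posed.
    Ppi = P[policy, np.arange(n_states)]
    Rpi = R[policy, np.arange(n_states)]
    A = np.hstack([np.eye(n_states) - Ppi, np.ones((n_states, 1))])
    A = np.vstack([A, [[1.0] + [0.0] * n_states]])          # anchor h(0) = 0
    sol = np.linalg.lstsq(A, np.append(Rpi, 0.0), rcond=None)[0]
    h, g = sol[:n_states], sol[n_states]
    # Policy improvement: greedy w.r.t. R(s, a) + sum_s' P(s'|s, a) h(s').
    q = R + P @ h
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("placement/dispatch policy:", policy, "average profit:", round(float(g), 3))
```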
Funding: Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676); Shanxi Soft Science Program Research Fund Project (2016041008-6).
Abstract: To improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model was proposed. The model first used mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm was established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and 8.38 times faster than the genetic algorithm.
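The abstract gives no algorithmic detail of the light-induced variant. As general background only, the sketch below shows the kind of growth-point selection step that plant growth simulation algorithms are commonly built around: candidate growth points receive probabilities derived from how much they improve the objective, and one point is sampled by roulette wheel. All names, the candidate encoding, and the toy cost are assumptions for illustration, not the authors' model.

```python
import random

def select_growth_point(candidates, objective):
    """Generic PGSA-style step: favour candidate growth points whose
    objective (e.g., an estimated schedule cost) beats the current worst."""
    costs = [objective(c) for c in candidates]
    worst = max(costs)
    # 'Morphactin concentration': how much each point improves on the worst.
    conc = [worst - c for c in costs]
    total = sum(conc)
    if total == 0:                      # all candidates equally good
        return random.choice(candidates)
    probs = [c / total for c in conc]   # normalise to a probability distribution
    return random.choices(candidates, weights=probs, k=1)[0]

# Illustrative use: candidate task-to-VM assignments scored by a toy cost.
candidates = [(0, 1, 2), (2, 0, 1), (1, 1, 0)]
toy_cost = lambda assign: sum((vm + 1) * 1.5 for vm in assign)
print(select_growth_point(candidates, toy_cost))
```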
Funding: Supported by a National Science Foundation grant (ECCS-1405594).
Abstract: Cloud-based video communication and networking has emerged as a promising research paradigm for significantly improving the quality of experience of video consumers. An architectural overview of this research area was presented. The overview began with an end-to-end partition of the cloud-based video system into major blocks according to their locations, from the center of the cloud to its edge. Following this partition, existing research efforts were examined on how the principles of cloud computing can provide unprecedented support to 1) video servers, 2) content delivery networks, and 3) edge networks within the global cloud video ecosystem. Moreover, a case study of edge-cloud-assisted HTTP adaptive video streaming was presented to demonstrate the effectiveness of cloud computing support. Finally, a conclusion was drawn by envisioning a list of future research topics in cloud-based video communication and networking.
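The case study mentioned above concerns HTTP adaptive streaming. Purely as background, the sketch below shows the throughput-based bitrate-selection rule that adaptive streaming clients commonly use: pick the highest rendition whose bitrate fits under a safety-scaled throughput estimate. The renditions and measurements are made up, and this is not the paper's edge-cloud scheme.

```python
# Renditions available from the manifest, in kbit/s (illustrative values).
RENDITIONS_KBPS = [400, 800, 1600, 3200, 6400]

def pick_bitrate(throughput_samples_kbps, safety=0.8):
    """Throughput-based ABR: choose the highest bitrate that stays under
    a conservative estimate of recent download throughput."""
    # Harmonic mean is robust to short throughput spikes.
    n = len(throughput_samples_kbps)
    harmonic = n / sum(1.0 / max(t, 1e-9) for t in throughput_samples_kbps)
    budget = safety * harmonic
    feasible = [r for r in RENDITIONS_KBPS if r <= budget]
    return feasible[-1] if feasible else RENDITIONS_KBPS[0]

print(pick_bitrate([2500, 3100, 2800]))  # -> 1600 with these sample values
```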
Funding: Supported by the National Natural Science Foundation of China (62072416), the Key Research and Development Special Project of Henan Province (221111210500), and the Key Technologies R&D Program of Henan Province (232102211053, 242102211071).
Abstract: The rapid development of short video platforms poses new challenges for traditional recommendation systems. Recommender systems typically depend on two types of user behavior feedback to construct user interest profiles: explicit feedback (interactive behavior), which significantly influences users' short-term interests, and implicit feedback (viewing time), which substantially affects their long-term interests. However, previous models fail to distinguish between these two kinds of feedback and therefore predict only the overall preferences of users from extensive historical behavior sequences. Consequently, they cannot differentiate between users' long-term and short-term interests, resulting in low accuracy in describing users' interest states and predicting the evolution of their interests. This paper introduces a video recommendation model called CAT-MF Rec (Cross-Attention Transformer-Mixed Feedback Recommendation), designed to differentiate between explicit and implicit user feedback within the DIEN (Deep Interest Evolution Network) framework. The study emphasizes separate learning of the two types of behavioral feedback and integrates them effectively through a cross-attention mechanism. Additionally, it leverages the long-sequence dependency modeling of Transformers to accurately construct user interest profiles and predict the evolution of user interests. Experimental results indicate that CAT-MF Rec significantly outperforms existing recommendation methods across various performance indicators. This advancement offers new theoretical and practical insights for the development of video recommendation, particularly in addressing complex and dynamic user behavior patterns.
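As a minimal sketch of the fusion idea described above, the snippet below cross-attends an explicit-feedback sequence against an implicit-feedback sequence using a standard multi-head attention layer. The dimensions and the way the two streams are produced are assumptions for illustration, not CAT-MF Rec's actual architecture.

```python
import torch
import torch.nn as nn

class MixedFeedbackCrossAttention(nn.Module):
    """Let explicit-feedback embeddings query implicit-feedback embeddings."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, explicit_seq: torch.Tensor, implicit_seq: torch.Tensor):
        # explicit_seq: (batch, Le, dim)  e.g. like/comment/share events
        # implicit_seq: (batch, Li, dim)  e.g. per-video watch-time behaviour
        fused, _ = self.cross_attn(query=explicit_seq,
                                   key=implicit_seq,
                                   value=implicit_seq)
        return self.norm(explicit_seq + fused)   # residual + layer norm

# Toy usage with random embeddings standing in for learned behaviour features.
explicit = torch.randn(2, 10, 64)
implicit = torch.randn(2, 30, 64)
print(MixedFeedbackCrossAttention()(explicit, implicit).shape)  # (2, 10, 64)
```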
Funding: Shenzhen Science and Technology Programme (Grant No. JCYJ20230807120800001); 2023 Shenzhen sustainable supporting funds for colleges and universities (Grant No. 20231121165240001); Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology (Grant No. 2024B1212010006).
Abstract: Internal learning-based video inpainting methods have shown promising results by exploiting the intrinsic properties of the video to fill in missing regions without external dataset supervision. However, existing internal learning-based video inpainting methods can produce inconsistent structures or blurry textures because they make insufficient use of the motion priors within the video sequence. In this paper, the authors propose a new internal learning-based video inpainting model, the appearance consistency and motion coherence network (ACMC-Net), which not only learns a recurrent appearance prior but also captures a motion coherence prior to improve the quality of the inpainting results. In ACMC-Net, a transformer-based appearance network is developed to capture global context information within the video frame so that appearance consistency is represented accurately. Additionally, a novel motion coherence learning scheme is proposed to learn the motion prior in a video sequence effectively. Finally, the learnt internal appearance consistency and motion coherence are implicitly propagated to the missing regions to achieve high-quality inpainting. Extensive experiments on the DAVIS dataset show that the proposed model achieves superior performance in terms of quantitative measurements and produces more visually plausible results than state-of-the-art methods.
Abstract: Airway management plays a crucial role in providing adequate oxygenation and ventilation to patients during various medical procedures and emergencies. When patients have a limited mouth opening due to factors such as trauma, inflammation, or anatomical abnormalities, airway management becomes challenging. A commonly utilized method to overcome this challenge is video laryngoscopy (VL), which employs a specialized device equipped with a camera and a light source to allow a clear view of the larynx and vocal cords. VL overcomes the limitations of direct laryngoscopy in patients with limited mouth opening, enabling better visualization and successful intubation. Various types of VL blades are available. We devised a novel flangeless video laryngoscope for use in patients with a limited mouth opening and then tested it on a manikin.
Abstract: Semantic segmentation is a core task in computer vision that allows AI models to interact with and understand their surrounding environment. Similar to how humans subconsciously segment scenes, this ability is crucial for scene understanding. However, a challenge many semantic learning models face is the lack of data. Existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. Thus, one of our key contributions is a customized semantic segmentation version of the Walking Tours Dataset that features hour-long, high-resolution, real-world data from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic segmentation model OpenSeeD on our custom dataset and discuss future implications.
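The evaluation mentioned above is a standard semantic segmentation benchmark. As an illustration only, the sketch below computes the usual mean intersection-over-union (mIoU) between predicted and ground-truth label maps; the class count and arrays are made up, and this is not the authors' evaluation code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy 4x4 label maps with 3 classes (illustrative values only).
gt   = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 1, 1]])
pred = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 2, 1], [2, 2, 1, 1]])
print(round(mean_iou(pred, gt, num_classes=3), 3))
```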
Abstract: Objective: The purpose of this study was to evaluate health education using videos and leaflets for preconception care (PCC) awareness among adolescent females up to six months after the health education. Methods: The subjects were female university students living in the Kinki area. A longitudinal survey was conducted on 67 members of the intervention group, who received the health education, and 52 members of the control group, who did not. The primary outcome measures were knowledge of PCC and the subscales of the Health Promotion Lifestyle Profile. Surveys were conducted before, immediately after, and six months after the intervention in the intervention group, and an initial survey and a survey six months later were conducted in the control group. Cochran's Q test, Bonferroni's multiple comparison test, and McNemar's test were used to analyze the knowledge of PCC data. The Health Awareness, Nutrition, and Stress Management subscales of the Health Promotion Lifestyle Profile were analyzed by paired t-test, and comparisons between the intervention and control groups were performed using two-way repeated measures analysis of variance. Results: In the intervention group of 67 people, the number of subjects who answered "correct" for five of the nine items concerning knowledge of PCC increased immediately after the health education (P = 0.006) but decreased for five items from immediately after the health education to six months later (P = 0.043). In addition, the number of respondents who answered "correct" for "low birth weight infants and future lifestyle-related diseases" increased after six months compared with before the health education (P = 0.016). For the 52 subjects in the control group, there was no change in the number of subjects who answered "correct" for eight of the nine items after six months. There was also no increase in Health Promotion Lifestyle Profile scores after six months in either the intervention or the control group. Conclusion: Providing health education about PCC using videos and leaflets to adolescent females enhanced their knowledge of PCC immediately after the education.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2021YFC2802501), the National Natural Science Foundation of China (Grant Nos. 42175154 and 42305084), the Hunan Provincial Natural Science Foundation of China (Grant No. 2024JJ2058), and a Research Project of the National University of Defense Technology (Grant No. 202401-YJRC-XX-030).
Abstract: Antarctic clouds and their vertical structures play a significant role in the regional radiation budget and ice mass balance; however, substantial uncertainties persist. Continuous monitoring and research are essential for improving our understanding of these clouds. This study presents the first analysis of cloud occurrence frequency and cloud-base heights (CBHs) at Zhongshan Station in East Antarctica, utilizing data from a C12 ceilometer covering the period from January 2022 to December 2023. The findings indicate that low clouds dominate at Zhongshan Station, with an average cloud occurrence frequency of 75%. Both the cloud occurrence frequency and the CBH distribution exhibit distinct seasonal variations. Specifically, the cloud occurrence frequency in winter is higher than in summer, while winter clouds can develop to greater heights. Over the Southern Ocean, the summer cloud occurrence frequency surpasses that at Zhongshan Station, with clouds featuring lower CBHs and larger extinction coefficients. Furthermore, CBHs derived from the ceilometer are broadly consistent with those obtained from radiosondes. Importantly, ERA5 performs well in retrieving CBHs at Zhongshan Station when compared with the ceilometer measurements.
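As a small illustration of the kind of statistic reported above, the sketch below computes a seasonal cloud occurrence frequency from a time series of ceilometer cloud-base detections. The synthetic data, column names, and the definition of "occurrence" (any valid CBH in a sample) are assumptions, not the study's processing chain.

```python
import numpy as np
import pandas as pd

# Synthetic ceilometer record: one sample every 6 hours for two years,
# CBH in metres, NaN meaning "no cloud base detected".
rng = np.random.default_rng(0)
times = pd.date_range("2022-01-01", "2023-12-31 18:00", freq="6h")
cbh = np.where(rng.random(len(times)) < 0.75,
               rng.uniform(200, 3000, len(times)), np.nan)
df = pd.DataFrame({"time": times, "cbh_m": cbh})

# Occurrence frequency per season = share of samples with a detected cloud base.
season = df["time"].dt.month.map(
    {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
     6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"})
freq = df["cbh_m"].notna().groupby(season).mean()
print(freq.round(2))
```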
Funding: Supported by the National Natural Science Foundation of China (Nos. 61925105, 62322109, 62171257, and U22B2001), the Xplorer Prize in Information and Electronics Technologies, and the Tsinghua University (Department of Electronic Engineering)-Nantong Research Institute for Advanced Communication Technologies Joint Research Center for Space, Air, Ground and Sea Cooperative Communication Network Technology.
Abstract: Multimedia semantic communication has been receiving increasing attention due to its significant enhancement of communication efficiency. Semantic coding, which extracts and encodes the key semantics of video for transmission, is a key component of the multimedia semantic communication framework. In this paper, we propose a low-bitrate facial video semantic coding method based on the temporal continuity of video semantics. At the sender, we selectively transmit facial keypoints and deformation information, allocating distinct bitrates to different keypoints across frames. Compression techniques involving sampling and quantization are employed to reduce the bitrate while retaining key facial semantic information. At the receiver, a GAN-based generative network is used for reconstruction, effectively mitigating the block artifacts and buffering problems of traditional codecs at low bitrates. The performance of the proposed approach is validated on multiple datasets, such as VoxCeleb and TalkingHead-1kH, using metrics such as LPIPS, DISTS, and AKD. Experimental results demonstrate significant advantages over traditional codec methods, achieving up to approximately 10-fold bitrate reduction in prolonged, stable head-pose scenarios across diverse conversational video settings.
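To make the "sampling and quantization" step above concrete, the sketch below uniformly quantizes per-frame keypoint displacements with a different bit depth per keypoint, which is one simple way to allocate distinct bitrates. The keypoint count, value ranges, and bit allocation are illustrative assumptions rather than the paper's scheme.

```python
import numpy as np

def quantize_displacements(disp: np.ndarray, bits: np.ndarray, lo=-1.0, hi=1.0):
    """Uniformly quantize keypoint displacements.

    disp : (K, 2) per-keypoint (dx, dy) in normalized coordinates.
    bits : (K,)  integer bit depth per keypoint (more bits = finer levels).
    Returns the integer codes to transmit and their dequantized values.
    """
    levels = (2 ** bits)[:, None]                       # (K, 1) level counts
    step = (hi - lo) / (levels - 1)
    codes = np.clip(np.round((disp - lo) / step), 0, levels - 1).astype(int)
    recon = lo + codes * step
    return codes, recon

# 5 keypoints; mouth/eye keypoints get more bits than cheek keypoints (assumed).
disp = np.array([[0.12, -0.03], [0.40, 0.10], [-0.25, 0.02],
                 [0.05, 0.05], [-0.60, 0.30]])
bits = np.array([6, 6, 4, 3, 3])
codes, recon = quantize_displacements(disp, bits)
print(codes)
print(np.abs(recon - disp).max())   # worst-case quantization error
```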
Abstract: The application of short videos in agricultural scenarios has become a new productive force driving agricultural development, injecting new vitality and opportunities into traditional agriculture. These videos leverage the platform's unique expressive logic by adopting a small entry point and prioritizing dissemination rate. They are strategically planned in terms of content, visuals, and interaction to cater to users' needs for relaxation, knowledge acquisition, social sharing, agricultural product marketing, and talent display. Through careful design, full creativity, rich emotion, and the creation of distinct character personalities, these videos deliver positive, entertaining, informative, and opinion-driven agricultural content. The production and operation of agricultural short videos can be effectively optimized by analyzing the characteristics of both popular and less popular videos and by utilizing smart tools and trending topics.
Abstract: Objectives: Medical students often rely on recreational internet media to relieve the stress caused by immense academic and life pressures, and among these media, short-form videos, an emerging digital medium, have gradually become students' mainstream choice for relieving stress. However, the addiction caused by their usage has attracted widespread attention from both academia and society. The purpose of this study is therefore to systematically explore the underlying mechanisms that link perceived stress, entertainment gratification, emotional gratification, short-form video usage intensity, and short-form video addiction, based on multiple theoretical frameworks, including the Compensatory Internet Use Model (CIU), the Interaction of Person-Affect-Cognition-Execution Model (I-PACE), and the Use and Gratification Theory (UGT). Methods: A hypothetical model with 9 research hypotheses was constructed. Taking medical students from Chinese universities as the research subjects, 1057 valid responses were collected through an online questionnaire survey, including 358 males and 658 females. Structural equation modelling (SEM) was performed using the AMOS software to test the research hypotheses. Results: (1) Perceived stress positively predicted entertainment gratification and emotional gratification (β=0.72, p<0.001; β=0.61, p<0.001); (2) entertainment gratification and emotional gratification positively influenced short-form video usage intensity (β=0.35, p<0.001; β=0.19, p<0.001); (3) entertainment gratification and emotional gratification positively predicted short-form video addiction (β=0.40, p<0.001; β=0.17, p<0.001); (4) short-form video usage intensity positively influenced short-form video addiction (β=0.36, p<0.001); and (5) perceived stress exerted an indirect but positive effect on both short-form video usage intensity and short-form video addiction, mediated by entertainment and emotional gratification (β=0.37, p<0.001; β=0.52, p<0.001). Conclusion: This study reveals the mechanisms underlying medical students' short-form video addiction in stressful situations. Stress enhances medical students' need for entertainment-based and emotional online compensation, prompting more frequent short-form video usage and ultimately leading to addiction. These results underscore the need to address the stressors faced by medical students. Effective interventions should prioritise stress management strategies and promote healthier alternative coping mechanisms to mitigate the risk of addiction.
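The mediation result in (5) above rests on indirect effects, i.e., products of path coefficients. Purely as an illustration of that idea, the sketch below estimates an indirect effect with two ordinary least-squares regressions and a bootstrap confidence interval on synthetic data; it is not the authors' AMOS structural equation model, and the simulated effect sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
stress = rng.normal(size=n)                                 # X: perceived stress
gratif = 0.6 * stress + rng.normal(size=n)                  # M: gratification (mediator)
usage  = 0.4 * gratif + 0.1 * stress + rng.normal(size=n)   # Y: usage intensity

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a * b from two OLS fits."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]            # X -> M path
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]            # M -> Y path (X controlled)
    return a * b

boot = []
for _ in range(2000):                                       # nonparametric bootstrap
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(stress[idx], gratif[idx], usage[idx]))
est = indirect_effect(stress, gratif, usage)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {est:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```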
Funding: Supported by the International Joint Research Project of Huiyan International College, Faculty of Education, Beijing Normal University (Grant Number: ICER202102).
Abstract: Objectives: Short video addiction has emerged as a significant public health issue in recent years, with a growing trend toward severity. However, research on the causes and impacts of short video addiction remains limited, and understanding of the variable "TikTok brain" is still in its infancy. Therefore, based on the Stimulus-Organism-Behavior-Consequence (SOBC) framework, we proposed six research hypotheses and constructed a model to explore the relationships between short video usage intensity, TikTok brain, short video addiction, and decreased attention control. Methods: Given that students are considered a high-risk group for excessive short video use, we collected valid responses from 1086 Chinese student users, including 609 males (56.1%) and 477 females (43.9%), with an average age of 19.84 years, to test the hypotheses. Results: (1) Short video usage intensity was positively related to short video addiction, TikTok brain, and decreased attention control; (2) TikTok brain was positively related to short video addiction and decreased attention control; and (3) short video addiction was positively related to decreased attention control. Conclusions: These findings suggest that although excessive use of short video applications brings negative consequences, users still spend significant amounts of time on these platforms, indicating a need for strict self-regulation of usage time.
Funding: Supported by the Zhejiang Provincial Natural Science Foundation of China (No. LQ23F030001), the National Natural Science Foundation of China (No. 62406280), the Autism Research Special Fund of the Zhejiang Foundation for Disabled Persons (No. 2023008), the Liaoning Province Higher Education Innovative Talents Program Support Project (No. LR2019058), the Liaoning Province Joint Open Fund for Key Scientific and Technological Innovation Bases (No. 2021-KF-12-05), the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (No. 2023JH6/100100066), and the Key Laboratory for Biomedical Engineering of the Ministry of Education, Zhejiang University, China; and in part by the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning.
Abstract: Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level approaches based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of "Factor" to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
Abstract: Objective: The objective of this study is to determine the effect of a nurse-led instructional video (NLIV) on anxiety, satisfaction, and recovery among mothers admitted for cesarean section (CS). Materials and Methods: A quasi-experimental design was carried out on mothers scheduled for CS. Eighty participants were selected by a purposive sampling technique and divided (40 participants in each group) into an experimental group and a control group. The NLIV was shown to the experimental group, and routine care was provided to the control group. The modified hospital anxiety scale (HADS), a scale for measuring maternal satisfaction in cesarean birth, and the obstetric quality of recovery following cesarean delivery were used to assess anxiety, satisfaction, and recovery. Results: Both the experimental and control groups showed significant reductions in anxiety by the first postintervention day (P<0.001), with the experimental group experiencing a greater mean reduction (mean difference [MD]=4.37) than the control group (MD=3.35), but the intergroup difference was not statistically significant (P>0.05). The experimental group reported significantly higher satisfaction scores (175.55±9.42) on the 3rd postoperative day compared to the control group (151.93±14.89; P<0.001). Similarly, the experimental group's recovery scores (79.90±6.24) were considerably higher than those of the control group (62.45±15.18; P<0.001). On the 3rd postintervention day, satisfaction was significantly associated with age (P<0.001), and recovery with gravidity (P<0.05). Conclusions: The NLIV can be used in the preoperative period to reduce anxiety related to CS and to improve satisfaction and recovery after CS.
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232021A-10); National Natural Science Foundation of China (No. 61903078); Shanghai Sailing Program, China (No. 22YF1401300); Natural Science Foundation of Shanghai, China (No. 20ZR1400400).
Abstract: Video classification is an important task in video understanding and plays a pivotal role in the intelligent monitoring of information content. Most existing methods do not consider the multimodal nature of video, and their modality fusion tends to be too simple, often neglecting modality alignment before fusion. This research introduces a dual-stream multimodal alignment and fusion network named DMAFNet for classifying short videos. The network uses two unimodal encoder modules to extract features within modalities and a multimodal encoder module to learn interactions between modalities. To solve the modality alignment problem, contrastive learning is introduced between the two unimodal encoder modules. Additionally, masked language modeling (MLM) and video-text matching (VTM) auxiliary tasks are introduced to improve the interaction between video frames and text through backpropagation of their loss functions. Diverse experiments demonstrate the effectiveness of DMAFNet on multimodal video classification tasks. Compared with two mainstream baselines, DMAFNet achieves the best results on the 2022 WeChat Big Data Challenge dataset.
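As a minimal sketch of the "contrastive learning between the two unimodal encoder modules" mentioned above, the snippet below computes a symmetric InfoNCE loss between batched video and text embeddings. The embedding dimension, temperature, and the encoders producing the embeddings are assumptions for illustration, not DMAFNet's implementation.

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matched (video, text) pairs sit on the diagonal."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature            # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage: random features standing in for the two unimodal encoders' outputs.
video_emb = torch.randn(8, 256)
text_emb = torch.randn(8, 256)
print(video_text_contrastive_loss(video_emb, text_emb).item())
```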
Abstract: The Double Take column looks at a single topic from an African and a Chinese perspective. This month, we explore how we can cope with the influence of short videos.
Funding: Financial support from the China Scholarship Council (Grant No. 202106370009) and the Alberta Innovates Graduate Student Scholarship.
Abstract: Objective: While there is consensus regarding a positive effect of video gaming on dexterity, little is known regarding how much traditional laparoscopic practice can or should be substituted with video gaming. This study was designed to assess the effects of varying the amount of traditional practice in a lap box trainer and video gaming on performance in two Fundamentals of Laparoscopic Surgery core tasks. Methods: Undergraduate and medical students were recruited and randomized into one of four groups: a control group, a lap box group, a video game group, and a combined group with 50% of the time allocated to each modality. Performance in the peg transfer and precision cutting tasks was assessed both prior to and following the six training sessions. Results: Peg transfer performance improved significantly in the lap box group (168.4±70.6 s vs. 332.9±178.2 s, p<0.001), video game group (176.7±53.3 s vs. 300.0±101.2 s, p<0.001), and combined group (214.2±86.9 s vs. 406.8±239.5 s, p=0.002) after training. Similar improvements were observed in precision cutting performance in the lap box group (413.1±138.4 s vs. 614.3±211.4 s, p=0.002), video game group (434.1±150.8 s vs. 609.2±233.2 s, p=0.007), and combined group (469.2±185.3 s vs. 663.8±296.3 s, p=0.020). When comparing improvements across the three training groups with the control group, both the lap box group (p<0.001) and the combined group (p<0.001) showed greater improvement in both tasks, and the video game group had significantly better outcomes in the precision cutting task (p=0.003). Conclusion: Traditional lap box training remains the most effective method for improving performance in simulated laparoscopic surgery. Video games can be encouraged to enhance skills retention and supplement simulated practice outside of a formal training curriculum.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62305184); the Basic and Applied Basic Research Foundation of Guangdong Province (Grant No. 2023A1515012932); the Science, Technology, and Innovation Commission of Shenzhen Municipality (Grant No. JCYJ20241202123919027); the Major Key Project of Pengcheng Laboratory (Grant No. PCL2024A1); the Science Fund for Distinguished Young Scholars of Zhejiang Province (Grant No. LR23F010001); the Research Center for Industries of the Future (RCIF) at Westlake University and the Key Project of the Westlake Institute for Optoelectronics (Grant No. 2023GD007); the Zhejiang "Pioneer" and "Leading Goose" R&D Program (Grant Nos. 2024SDXHDX0006 and 2024C03182); the Ningbo Science and Technology Bureau "Science and Technology Yongjiang 2035" Key Technology Breakthrough Program (Grant No. 2024Z126); the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant Nos. C5031-22G, CityU11310522, and CityU11300123); and the City University of Hong Kong (Grant No. 9610628).
Abstract: High-speed imaging is crucial for understanding the transient dynamics of the world, but conventional frame-by-frame video acquisition is limited by specialized hardware and substantial data storage requirements. We introduce "SpeedShot," a computational imaging framework for efficient high-speed video imaging. SpeedShot features a low-speed dual-camera setup that simultaneously captures two temporally coded snapshots. Cross-referencing these two snapshots extracts a multiplexed temporal gradient image, producing a compact, multiframe motion representation for video reconstruction. Recognizing the unique temporal-only modulation model, we propose an explicable motion-guided scale-recurrent transformer for video decoding. It exploits cross-scale error maps to bolster the cycle consistency between predicted and observed data. Evaluations on both simulated datasets and real imaging setups demonstrate SpeedShot's effectiveness in video-rate up-conversion, with pronounced improvement over video frame interpolation and deblurring methods. The proposed framework is compatible with commercial low-speed cameras, offering a versatile low-bandwidth alternative for video-related applications such as video surveillance and sports analysis.
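To give intuition for the "cross-referencing two temporally coded snapshots" idea above, the toy simulation below integrates a synthetic high-speed sequence with two interleaved (even/odd) temporal codes; differencing the two exposures yields a sum of successive frame differences, i.e., a multiplexed temporal-gradient image. The interleaved code and the synthetic scene are assumptions for illustration only; SpeedShot's actual modulation and reconstruction are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(42)
T, H, W = 16, 64, 64

# Synthetic high-speed sequence: a bright square drifting over a static scene.
frames = np.tile(rng.uniform(0.2, 0.4, (H, W)), (T, 1, 1))
for t in range(T):
    frames[t, 20:30, 5 + 3 * t:15 + 3 * t] += 0.6

# Two low-speed exposures with complementary temporal codes (even/odd frames).
code_a = (np.arange(T) % 2 == 0).astype(float)       # 1,0,1,0,...
code_b = 1.0 - code_a                                 # 0,1,0,1,...
snap_a = np.tensordot(code_a, frames, axes=1)         # integral of even frames
snap_b = np.tensordot(code_b, frames, axes=1)         # integral of odd frames

# Cross-referencing the snapshots: snap_b - snap_a = sum_k (f[2k+1] - f[2k]),
# a single image that multiplexes the temporal gradients of successive pairs.
grad_multiplexed = snap_b - snap_a
direct = sum(frames[2 * k + 1] - frames[2 * k] for k in range(T // 2))
print(np.allclose(grad_multiplexed, direct))          # True
print(float(np.abs(grad_multiplexed).max()))          # motion leaves a strong trace
```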