This study examines the differences between human-generated and AI-generated texts in IELTS Writing Task 2, focusing on lexical resourcefulness, grammatical accuracy, and contextual appropriateness. We analyzed 20 essays, including 10 human-written ones by Chinese university students who had achieved an IELTS writing score of 5.5 to 6.0 and 10 generated by ChatGPT-4 Turbo, using a mixed-methods approach that combined corpus-based tools (NLTK, SpaCy, AntConc) with qualitative content analysis. Results showed that AI texts exhibited superior grammatical accuracy (0.4%–3% error rates for AI vs. 20%–26% for university students) but higher lexical repetition (17.2%–23.25% for AI vs. 17.68% for university students) and weaker contextual adaptability (3.33/10–3.69/10 for AI vs. 3.23/10–4.14/10 for university students). While AI's grammatical precision supports its utility as a corrective tool, human writers outperformed AI in lexical diversity and task-specific nuance. The findings advocate a hybrid pedagogical model that leverages AI's strengths in error detection while retaining human instruction for advanced lexical and contextual skills. Limitations include the small corpus and the single-AI-model focus, suggesting future research with diverse datasets and longitudinal designs.
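The lexical-repetition percentages quoted above can be computed in several ways; as a rough illustration (a simplified stand-in, not the study's exact NLTK/SpaCy/AntConc pipeline, whose precise metric is not specified here), one simple proxy counts the share of word tokens that repeat an earlier token:

```python
from collections import Counter

def repetition_rate(text: str) -> float:
    """Share of word tokens that repeat an earlier token (case-insensitive).

    A simple proxy for the lexical-repetition percentages in the abstract;
    the paper's actual metric may differ.
    """
    tokens = [w.strip(".,;:!?()").lower() for w in text.split() if w.strip(".,;:!?()")]
    counts = Counter(tokens)
    repeated = sum(c - 1 for c in counts.values())  # tokens beyond each first occurrence
    return repeated / len(tokens) if tokens else 0.0

print(repetition_rate("the cat sat on the mat and the dog sat"))  # 0.3
```

On real essays one would tokenize with NLTK rather than `str.split`, but the counting logic is the same.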
The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing, raising significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings at detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer design, each tested with static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM-Word2Vec pipeline. Some runs overfitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work is misread as AI-generated or engineered text evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one that weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
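The detector architecture described above can be sketched roughly as follows. This PyTorch version is an assumption-laden illustration, not the study's implementation: the embedding weights are random placeholders (the paper would load pre-trained GloVe vectors), and all hyperparameters are invented.

```python
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    """Single-layer BiLSTM over word embeddings for binary human-vs-AI
    classification. Hyperparameters are illustrative, not the paper's."""

    def __init__(self, vocab_size=5000, embed_dim=100, hidden=128, freeze=True):
        super().__init__()
        # In the study these weights would come from pre-trained GloVe vectors;
        # here they are random placeholders.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.embed.weight.requires_grad = not freeze  # static vs adaptive embeddings
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # logit: AI-generated vs human

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.head(out[:, -1])  # logit from the final timestep

model = BiLSTMDetector()
logits = model(torch.randint(0, 5000, (4, 32)))  # batch of 4 abstracts, 32 tokens each
print(logits.shape)  # torch.Size([4, 1])
```

Pooling the final timestep is one common choice; the paper may pool differently.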
This conceptual study proposes a pedagogical framework that integrates generative artificial intelligence (AIGC) tools and Chain-of-Thought (CoT) reasoning, grounded in the cognitive apprenticeship model, for the Pragmatics and Translation course within Master of Translation and Interpreting (MTI) programs. A key feature involves CoT reasoning exercises, which require students to articulate their step-by-step translation reasoning. This makes cognitive processes explicit and enhances pragmatic awareness, translation strategy development, and critical reflection on linguistic choices and context. Hypothetical activities exemplify its application, including comparative analysis of AI and human translations to examine pragmatic nuances, and guided exercises in which students analyze or critique the reasoning traces generated by Large Language Models (LLMs). Ethically grounded, the framework positions AI as a supportive tool, thereby ensuring that human translators retain the central decision-making role and promoting critical evaluation of machine-generated suggestions. Potential challenges, such as AI biases, ethical concerns, and overreliance, are addressed through strategies including bias-awareness discussions, rigorous accuracy verification, and a strong emphasis on human accountability. Future research will involve piloting the framework to empirically evaluate its impact on learners' pragmatic competence and translation skills, followed by iterative refinements to advance evidence-based translation pedagogy.
AI-generated images are a prime example of AI-generated content, and this paper discusses the controversy over their copyrightability. Starting with the general technical principles behind AI's deep learning for model training and the generation and correction of AI-generated images according to an AI user's prompt instructions and parameter settings, the paper analyzes the initial legal viewpoint that, because AI-generated images lack a human creator, they cannot qualify for copyright. It goes on to examine the rapid development of AI-generated image technology and the gradual adoption of more open attitudes towards the copyrightability of AI-generated images under the influence of the approach of promoting technological advancement. On this basis, the paper further analyzes the criteria for assessing the copyrightability of AI-generated images, using measures such as originality, human authorship, and intellectual achievement, aiming to clarify the legal basis for the copyrightability of AI-generated images and to enhance the copyright protection system.
Class Title: Radiological Imaging Methods: A Comprehensive Overview. Purpose: This GPT paper provides an overview of the different forms of radiological imaging, the diagnostic capabilities they offer, and recent advances in the field. Materials and Methods: This paper surveys conventional radiography, digital radiography, panoramic radiography, computed tomography, and cone-beam computed tomography. Additionally, recent advances in radiological imaging are discussed, such as imaging diagnosis and modern computer-aided diagnosis systems. Results: This paper details the differences between the imaging techniques, the benefits of each, and the current advances in the field that aid in the diagnosis of medical conditions. Conclusion: Radiological imaging is an extremely important tool in modern medicine for assisting in medical diagnosis. This work provides an overview of the types of imaging techniques used, the recent advances made, and their potential applications.
The rapid development of short video platforms poses new challenges for traditional recommendation systems. Recommender systems typically depend on two types of user behavior feedback to construct user interest profiles: explicit feedback (interactive behavior), which significantly influences users' short-term interests, and implicit feedback (viewing time), which substantially affects their long-term interests. However, previous models fail to distinguish between these two feedback types, predicting only the overall preferences of users from extensive historical behavior sequences. Consequently, they cannot differentiate between users' long-term and short-term interests, resulting in low accuracy in describing users' interest states and predicting the evolution of their interests. This paper introduces a video recommendation model called CAT-MFRec (Cross-Attention Transformer-Mixed Feedback Recommendation) designed to differentiate between explicit and implicit user feedback within the DIEN (Deep Interest Evolution Network) framework. This study emphasizes the separate learning of the two types of behavioral feedback, effectively integrating them through a cross-attention mechanism. Additionally, it leverages the long-sequence dependency-modeling capabilities of the Transformer to accurately construct user interest profiles and predict the evolution of user interests. Experimental results indicate that CAT-MFRec significantly outperforms existing recommendation methods across various performance indicators. This advancement offers new theoretical and practical insights for the development of video recommendation, particularly in addressing complex and dynamic user behavior patterns.
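The cross-attention fusion step the abstract describes can be sketched generically: one feedback sequence attends to the other via scaled dot-product attention. This NumPy toy shows the general mechanism only; it is not CAT-MFRec's code, and the dimensions and inputs are invented.

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: each query row attends over
    all rows of keys_values and returns a weighted mixture of them."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)           # (Lq, Lkv) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over key positions
    return weights @ keys_values                            # (Lq, d) fused features

rng = np.random.default_rng(0)
explicit = rng.normal(size=(5, 16))   # e.g. 5 interaction (explicit-feedback) embeddings
implicit = rng.normal(size=(8, 16))   # e.g. 8 viewing-time (implicit-feedback) embeddings
fused = cross_attention(explicit, implicit)
print(fused.shape)  # (5, 16)
```

A full model would use learned query/key/value projections per head; the softmax-weighted mixture above is the core of the operation.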
Internal learning-based video inpainting methods have shown promising results by exploiting the intrinsic properties of the video to fill in the missing region without external dataset supervision. However, existing internal learning-based video inpainting methods can produce inconsistent structures or blurry textures due to insufficient utilisation of motion priors within the video sequence. In this paper, the authors propose a new internal learning-based video inpainting model called the appearance consistency and motion coherence network (ACMC-Net), which can not only learn the recurrence of the appearance prior but also capture the motion coherence prior to improve the quality of the inpainting results. In ACMC-Net, a transformer-based appearance network is developed to capture global context information within the video frame so as to represent appearance consistency accurately. Additionally, a novel motion coherence learning scheme is proposed to learn the motion prior in a video sequence effectively. Finally, the learnt internal appearance consistency and motion coherence are implicitly propagated to the missing regions to achieve high-quality inpainting. Extensive experiments conducted on the DAVIS dataset show that the proposed model obtains superior performance in terms of quantitative measurements and produces more visually plausible results compared with state-of-the-art methods.
Airway management plays a crucial role in providing adequate oxygenation and ventilation to patients during various medical procedures and emergencies. When patients have a limited mouth opening due to factors such as trauma, inflammation, or anatomical abnormalities, airway management becomes challenging. A commonly utilized method to overcome this challenge is video laryngoscopy (VL), which employs a specialized device equipped with a camera and a light source to allow a clear view of the larynx and vocal cords. VL overcomes the limitations of direct laryngoscopy in patients with limited mouth opening, enabling better visualization and successful intubation. Various types of VL blades are available. We devised a novel flangeless video laryngoscope for use in patients with a limited mouth opening and then tested it on a manikin.
Semantic segmentation is a core task in computer vision that allows AI models to interact with and understand their surrounding environment. Similarly to how humans subconsciously segment scenes, this ability is crucial for scene understanding. However, a challenge many semantic learning models face is the lack of data. Existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. Thus, one of our key contributions is a customized semantic segmentation version of the Walking Tours Dataset that features hour-long, high-resolution, real-world data from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic model OpenSeeD on our custom dataset and discuss future implications.
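Segmentation evaluations of the kind described above typically report mean intersection-over-union (mIoU). The snippet below is a minimal sketch of that standard metric on toy masks, not the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes, skipping classes
    absent from both the prediction and the ground-truth mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:  # class present in at least one mask
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with 3 classes
pred = np.array([[0, 0, 1], [1, 2, 2]])
gt   = np.array([[0, 0, 1], [1, 1, 2]])
print(round(mean_iou(pred, gt, 3), 3))  # 0.722
```

Per-class IoUs here are 1.0, 2/3, and 0.5, averaging to 13/18.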
Objective: The purpose of this study was to evaluate health education using videos and leaflets for preconception care (PCC) awareness among adolescent females up to six months after the health education. Methods: The...Objective: The purpose of this study was to evaluate health education using videos and leaflets for preconception care (PCC) awareness among adolescent females up to six months after the health education. Methods: The subjects were female university students living in the Kinki area. A longitudinal survey was conducted on 67 members in the intervention group, who received the health education, and 52 members in the control group, who did not receive the health education. The primary outcome measures were knowledge of PCC and the subscales of the Health Promotion Lifestyle Profile. Surveys were conducted before, after, and six months after the intervention in the intervention group, and an initial survey and survey six months later were conducted in the control group. Cochran’s Q test, Bonferroni’s multiple comparison test, and McNemar’s test were used to analyze the knowledge of PCC data. The Health Awareness, Nutrition, and Stress Management subscales of the Health Promotion Lifestyle Profile were analyzed by paired t-test, and comparisons between the intervention and control groups were performed using the two-way repeated measures analysis of variance. Results: In the intervention group of 67 people, the number of subjects who answered “correct” for five of the nine items concerning knowledge of PCC increased immediately after the health education (P = 0.006) but decreased for five items from immediately after the health education to six months later (P = 0.043). In addition, the number of respondents who answered “correct” for “low birth weight infants and future lifestyle-related diseases” (P = 0.016) increased after six months compared with before the health education. 
For the 52 subjects in the control group, there was no change in the number of subjects who answered “correct” for eight out of the nine items after six months. There was also no increase in scores for the Health Promotion Lifestyle Profile after six months for either the intervention or control group. Conclusion: Providing health education about PCC using videos and leaflets to adolescent females was shown to enhance the knowledge of PCC immediately after the education.
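McNemar's test, used above for the paired before/after "correct" counts, compares only the discordant pairs. The sketch below implements the exact (binomial) two-sided version on hypothetical counts; the numbers are invented for illustration and are not the study's data.

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value from discordant pairs:
    b participants changed incorrect -> correct, c changed correct -> incorrect.
    Under H0, min(b, c) ~ Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    # One-sided tail probability of a split at least as extreme as k
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical item: 9 students improved, 2 got it newly wrong after six months
print(round(mcnemar_exact_p(9, 2), 4))  # 0.0654
```

With only 11 discordant pairs the change does not reach the conventional 0.05 threshold, which is the kind of judgment the test supports.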
Multimedia semantic communication has been receiving increasing attention due to its significant enhancement of communication efficiency. Semantic coding, which is oriented towards extracting and encoding the key semantics of video for transmission, is a key aspect of the multimedia semantic communication framework. In this paper, we propose a low-bitrate facial video semantic coding method based on the temporal continuity of video semantics. At the sender's end, we selectively transmit facial keypoints and deformation information, allocating distinct bitrates to different keypoints across frames. Compression techniques involving sampling and quantization are employed to reduce the bitrate while retaining key facial semantic information. At the receiver's end, a GAN-based generative network is utilized for reconstruction, effectively mitigating the block artifacts and buffering problems present in traditional codec algorithms at low bitrates. The performance of the proposed approach is validated on multiple datasets, such as VoxCeleb and TalkingHead-1kH, using metrics such as LPIPS, DISTS, and AKD for assessment. Experimental results demonstrate significant advantages over traditional codec methods, achieving up to approximately 10-fold bitrate reduction in prolonged, stable head-pose scenarios across diverse conversational video settings.
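The sampling-and-quantization step at the sender's end can be illustrated with a uniform quantizer for normalized keypoint coordinates. The bit depth and inputs below are illustrative assumptions; the paper's per-keypoint bit allocation may differ.

```python
import numpy as np

def quantize_keypoints(kp, bits=8):
    """Uniformly quantize keypoint coordinates in [-1, 1] to `bits` bits
    per coordinate; returns the integer codes (what would be transmitted)
    and the dequantized reconstruction."""
    levels = 2 ** bits - 1
    codes = np.round((kp + 1) / 2 * levels).astype(np.uint16)  # encode
    rec = codes / levels * 2 - 1                                # decode
    return codes, rec

kp = np.array([[0.25, -0.5], [0.75, 1.0]])  # two (x, y) keypoints, normalized
codes, rec = quantize_keypoints(kp)
print(np.abs(rec - kp).max() <= 1 / (2 ** 8 - 1))  # True: error within one step
```

At 8 bits per coordinate, a face described by tens of keypoints costs a few hundred bits per frame, which is how keypoint coding reaches bitrates far below pixel codecs.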
The application of short videos in agricultural scenarios has become a new form of productive force driving agricultural development, injecting new vitality and opportunities into traditional agriculture. These videos leverage the unique expressive logic of the platform by adopting a small entry point and prioritizing dissemination rate. They are strategically planned in terms of content, visuals, and interaction to cater to users' needs for relaxation, knowledge acquisition, social sharing, agricultural product marketing, and talent display. Through careful design, full creativity, rich emotion, and the creation of distinct character personalities, these videos deliver positive, entertaining, informative, and opinion-driven agricultural content. The production and operation of agricultural short videos can be effectively optimized by analyzing the characteristics of both popular and less popular videos and by utilizing smart tools and trending topics.
Objectives: Medical students often rely on recreational internet media to relieve the stress caused by immense academic and life pressures. Among these media, short-form videos, an emerging digital medium, have gradually become the mainstream choice of students for relieving stress. However, the addiction caused by their usage has attracted widespread attention from both academia and society. The purpose of this study is therefore to systematically explore the underlying mechanisms that link perceived stress, entertainment gratification, emotional gratification, short-form video usage intensity, and short-form video addiction, based on multiple theoretical frameworks, including the Compensatory Internet Use Model (CIU), the Interaction of Person-Affect-Cognition-Execution Model (I-PACE), and the Use and Gratification Theory (UGT). Methods: A hypothetical model with 9 research hypotheses was constructed. Taking medical students from Chinese universities as the research subjects, 1057 valid responses were collected through an online questionnaire survey, including 358 males and 658 females. Structural equation modelling (SEM) was performed using the AMOS software to test the research hypotheses. Results: (1) Perceived stress positively predicted entertainment gratification and emotional gratification (β = 0.72, p < 0.001; β = 0.61, p < 0.001); (2) entertainment gratification and emotional gratification positively influenced short-form video usage intensity (β = 0.35, p < 0.001; β = 0.19, p < 0.001); (3) entertainment gratification and emotional gratification positively predicted short-form video addiction (β = 0.40, p < 0.001; β = 0.17, p < 0.001); (4) short-form video usage intensity positively influenced short-form video addiction (β = 0.36, p < 0.001); and (5) perceived stress exerted an indirect but positive effect on both short-form video usage intensity and short-form video addiction, mediated by entertainment and emotional gratification (β = 0.37, p < 0.001; β = 0.52, p < 0.001). Conclusion: This study revealed the mechanisms that underlie medical students' short-form video addiction in stressful situations. Stress was found to enhance medical students' need for entertainment and emotional online compensation, prompting more frequent short-form video usage and ultimately leading to addiction. These results underscore the need to address the stressors faced by medical students. Effective interventions should prioritise stress management strategies and promote healthier alternative coping mechanisms to mitigate the risk of addiction.
Objectives: Short video addiction has emerged as a significant public health issue in recent years, with a growing trend toward severity. However, research on the causes and impacts of short video addiction remains limited, and understanding of the variable “TikTok brain” is still in its infancy. Therefore, based on the Stimulus-Organism-Behavior-Consequence (SOBC) framework, we proposed six research hypotheses and constructed a model to explore the relationships between short video usage intensity, TikTok brain, short video addiction, and decreased attention control. Methods: Given that students are considered a high-risk group for excessive short video use, we collected 1086 valid responses from Chinese student users, including 609 males (56.1%) and 477 females (43.9%), with an average age of 19.84 years, to test the hypotheses. Results: (1) Short video usage intensity was positively related to short video addiction, TikTok brain, and decreased attention control; (2) TikTok brain was positively related to short video addiction and decreased attention control; and (3) short video addiction was positively related to decreased attention control. Conclusions: These findings suggest that although excessive use of short video applications brings negative consequences, users still spend significant amounts of time on these platforms, indicating a need for strict self-regulation of usage time.
Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level approaches based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of the “Factor” to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
Objective: The objective of this study is to determine the effect of a nurse-led instructional video (NLIV) on anxiety, satisfaction, and recovery among mothers admitted for cesarean section (CS). Materials and Methods: A quasi-experimental design was carried out on mothers scheduled for CS. Eighty participants were selected by a purposive sampling technique and divided (40 participants in each group) into an experimental group and a control group. The nurse-led instructional video was shown to the experimental group, and routine care was provided for the control group. The modified hospital anxiety scale (HADS), a scale for measuring maternal satisfaction in cesarean birth, and an obstetric quality-of-recovery scale following cesarean delivery were used to assess anxiety, satisfaction, and recovery. Results: Both the experimental and control groups showed significant reductions in anxiety by the first postintervention day (P < 0.001), with the experimental group experiencing a greater mean reduction (mean difference [MD] = 4.37) than the control group (MD = 3.35), but the intergroup difference was not statistically significant (P > 0.05). The experimental group reported significantly higher satisfaction scores (175.55 ± 9.42) on the 3rd postoperative day compared to the control group (151.93 ± 14.89; P < 0.001). Similarly, the experimental group's recovery scores (79.90 ± 6.24) were considerably higher than those of the control group (62.45 ± 15.18; P < 0.001). On the 3rd postintervention day, satisfaction was significantly associated with age (P < 0.001), and recovery with gravidity (P < 0.05). Conclusions: The NLIV can be used in the preoperative period to reduce anxiety related to CS and to improve satisfaction and recovery after CS.
Video classification is an important task in video understanding and plays a pivotal role in the intelligent monitoring of information content. Most existing methods do not consider the multimodal nature of video, and their modality fusion approaches tend to be too simple, often neglecting modality alignment before fusion. This research introduces a novel dual-stream multimodal alignment and fusion network named DMAFNet for classifying short videos. The network uses two unimodal encoder modules to extract features within modalities and exploits a multimodal encoder module to learn interactions between modalities. To solve the modality alignment problem, contrastive learning is introduced between the two unimodal encoder modules. Additionally, masked language modeling (MLM) and video-text matching (VTM) auxiliary tasks are introduced to improve the interaction between video frames and text modalities through backpropagation of the loss functions. Diverse experiments prove the efficiency of DMAFNet in multimodal video classification tasks. Compared with two mainstream baselines, DMAFNet achieves the best results on the 2022 WeChat Big Data Challenge dataset.
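Contrastive alignment between two unimodal encoders is commonly implemented with an InfoNCE objective over matched pairs: the i-th video embedding should score highest against the i-th text embedding. The NumPy sketch below shows the general technique (one direction, video-to-text); it is not DMAFNet's loss, and the embeddings and temperature are invented.

```python
import numpy as np

def info_nce_loss(video_emb, text_emb, temperature=0.07):
    """InfoNCE over a batch of matched (video, text) embedding pairs:
    cross-entropy of each video against all texts, correct text on the diagonal."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature                       # (B, B) cosine similarities
    m = logits.max(axis=1, keepdims=True)                # stabilize the log-sum-exp
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    labels = np.arange(len(v))
    return float(-log_probs[labels, labels].mean())      # penalize off-diagonal mass

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
aligned = info_nce_loss(emb, emb)               # identical pairs: near-zero loss
shuffled = info_nce_loss(emb, emb[::-1].copy())  # mismatched pairs: large loss
print(aligned < shuffled)  # True
```

A symmetric version adds the text-to-video direction and averages the two losses.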
The Double Take column looks at a single topic from an African and a Chinese perspective. This month, we explore how we can cope with the influence of short videos.
Objective: While there is consensus regarding a positive effect of video gaming on dexterity, little is known regarding how much traditional laparoscopic practice can or should be substituted with video gaming. This study was designed to assess the effects of varying the amounts of traditional practice in a lap box trainer and video gaming on performance in two Fundamentals of Laparoscopic Surgery core tasks. Methods: Undergraduate and medical students were recruited and randomized into one of four groups: a control group, a lap box group, a video game group, and a combined group with 50% of the time allocated to each modality. Performance in the peg transfer and precision cutting tasks was assessed both prior to and following the 6 training sessions. Results: Peg transfer performance significantly improved in the lap box group (168.4 ± 70.6 s vs. 332.9 ± 178.2 s, p < 0.001), video game group (176.7 ± 53.3 s vs. 300.0 ± 101.2 s, p < 0.001), and combined group (214.2 ± 86.9 s vs. 406.8 ± 239.5 s, p = 0.002) after training. Similar improvements were observed in precision cutting performance in the lap box group (413.1 ± 138.4 s vs. 614.3 ± 211.4 s, p = 0.002), video game group (434.1 ± 150.8 s vs. 609.2 ± 233.2 s, p = 0.007), and combined group (469.2 ± 185.3 s vs. 663.8 ± 296.3 s, p = 0.020). When comparing improvements in performance across the three training groups with the control group, we found that both the lap box group (p < 0.001) and the combined group (p < 0.001) showed better improvement in both tasks, and the video game group had significantly better outcomes in the precision cutting task (p = 0.003). Conclusion: Traditional lap box training remains the most effective method for improving performance in simulated laparoscopic surgery. Video gaming can be encouraged to enhance skills retention and supplement simulated practice outside of a formal training curriculum.
Funding: Supported by the Macao Science and Technology Development Fund (FDCT) (No. 0071/2023/RIB3) and the Joint Research Funding Program between the Macao Science and Technology Development Fund (FDCT) and the Department of Science and Technology of Guangdong Province (FDCT-GDST) (No. 0003-2024-AGJ).
文摘In this study,it aims at examining the differences between humangenerated and AI-generated texts in IELTS Writing Task 2.It especially focuses on lexical resourcefulness,grammatical accuracy,and contextual appropriateness.We analyzed 20 essays,including 10 human written ones by Chinese university students who have achieved an IELTS writing score ranging from 5.5 to 6.0,and 10 ChatGPT-4 Turbo-generated ones,using a mixed-methods approach,through corpus-based tools(NLTK,SpaCy,AntConc)and qualitative content analysis.Results showed that AI texts exhibited superior grammatical accuracy(0.4%–3%error rates for AI vs.20–26%for university students)but higher lexical repetition(17.2%to 23.25%for AI vs.17.68%for university students)and weaker contextual adaptability(3.33/10–3.69/10 for AI vs.3.23/10 to 4.14/10 for university students).While AI’s grammatical precision supports its utility as a corrective tool,human writers outperformed AI in lexical diversity and task-specific nuance.The findings advocate for a hybrid pedagogical model that leverages AI’s strengths in error detection while retaining human instruction for advanced lexical and contextual skills.Limitations include the small corpus and single-AI-model focus,suggesting future research with diverse datasets and longitudinal designs.
文摘The increasing fluency of advanced language models,such as GPT-3.5,GPT-4,and the recently introduced DeepSeek,challenges the ability to distinguish between human-authored and AI-generated academic writing.This situation is raising significant concerns regarding the integrity and authenticity of academic work.In light of the above,the current research evaluates the effectiveness of Bidirectional Long Short-TermMemory(BiLSTM)networks enhanced with pre-trained GloVe(Global Vectors for Word Representation)embeddings to detect AIgenerated scientific Abstracts drawn from the AI-GA(Artificial Intelligence Generated Abstracts)dataset.Two core BiLSTM variants were assessed:a single-layer approach and a dual-layer design,each tested under static or adaptive embeddings.The single-layer model achieved nearly 97%accuracy with trainable GloVe,occasionally surpassing the deeper model.Despite these gains,neither configuration fully matched the 98.7%benchmark set by an earlier LSTMWord2Vec pipeline.Some runs were over-fitted when embeddings were fine-tuned,whereas static embeddings offered a slightly lower yet stable accuracy of around 96%.This lingering gap reinforces a key ethical and procedural concern:relying solely on automated tools,such as Turnitin’s AI-detection features,to penalize individuals’risks and unjust outcomes.Misclassifications,whether legitimate work is misread as AI-generated or engineered text,evade detection,demonstrating that these classifiers should not stand as the sole arbiters of authenticity.Amore comprehensive approach is warranted,one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
Abstract: This conceptual study proposes a pedagogical framework that integrates Generative Artificial Intelligence tools (AIGC) and Chain-of-Thought (CoT) reasoning, grounded in the cognitive apprenticeship model, for the Pragmatics and Translation course within Master of Translation and Interpreting (MTI) programs. A key feature involves CoT reasoning exercises, which require students to articulate their step-by-step translation reasoning. This explicates cognitive processes and enhances pragmatic awareness, translation strategy development, and critical reflection on linguistic choices and context. Hypothetical activities exemplify its application, including comparative analysis of AI and human translations to examine pragmatic nuances, and guided exercises in which students analyze or critique the reasoning traces generated by Large Language Models (LLMs). Ethically grounded, the framework positions AI as a supportive tool, thereby ensuring that human translators retain the central decision-making role and promoting critical evaluation of machine-generated suggestions. Potential challenges, such as AI biases, ethical concerns, and overreliance, are addressed through strategies including bias-awareness discussions, rigorous accuracy verification, and a strong emphasis on human accountability. Future research will involve piloting the framework to empirically evaluate its impact on learners’ pragmatic competence and translation skills, followed by iterative refinements to advance evidence-based translation pedagogy.
Abstract: AI-generated images are a prime example of AI-generated content, and this paper discusses the controversy over their copyrightability. Starting with the general technical principles behind AI’s deep learning for model training and the generation and correction of AI-generated images according to users’ prompt instructions and parameter settings, the paper analyzes the initial legal viewpoint that, because AI-generated images do not have a human creator, they cannot qualify for copyright. It goes on to examine the rapid development of AI-generated image technology and the gradual adoption of more open attitudes toward the copyrightability of AI-generated images under the influence of approaches that promote technological advancement. On this basis, the paper further analyzes the criteria for assessing the copyrightability of AI-generated images, using measures such as originality, human authorship, and intellectual achievement, aiming to clarify the legal basis for the copyrightability of AI-generated images and to enhance the copyright protection system.
Abstract: Class Title: Radiological imaging methods, a comprehensive overview. Purpose: This GPT paper provides an overview of the different forms of radiological imaging and the diagnostic capabilities they offer, as well as recent advances in the field. Materials and Methods: This paper reviews conventional radiography, digital radiography, panoramic radiography, computed tomography, and cone-beam computed tomography. Additionally, recent advances in radiological imaging are discussed, such as imaging diagnosis and modern computer-aided diagnosis systems. Results: This paper details the differences between the imaging techniques, the benefits of each, and the current advances in the field to aid in the diagnosis of medical conditions. Conclusion: Radiological imaging is an extremely important tool in modern medicine for assisting medical diagnosis. This work provides an overview of the types of imaging techniques used, the recent advances made, and their potential applications.
Funding: supported by the National Natural Science Foundation of China (62072416), the Key Research and Development Special Project of Henan Province (221111210500), and the Key Technologies R&D Program of Henan Province (232102211053, 242102211071).
Abstract: The rapid development of short video platforms poses new challenges for traditional recommendation systems. Recommender systems typically depend on two types of user behavior feedback to construct user interest profiles: explicit feedback (interactive behavior), which significantly influences users’ short-term interests, and implicit feedback (viewing time), which substantially affects their long-term interests. However, previous models fail to distinguish between these two feedback types, so they predict only the overall preferences of users from extensive historical behavior sequences. Consequently, they cannot differentiate between users’ long-term and short-term interests, resulting in low accuracy in describing users’ interest states and predicting the evolution of their interests. This paper introduces a video recommendation model called CAT-MF Rec (Cross-Attention Transformer-Mixed Feedback Recommendation), designed to differentiate between explicit and implicit user feedback within the DIEN (Deep Interest Evolution Network) framework. This study emphasizes the separate learning of the two types of behavioral feedback, effectively integrating them through a cross-attention mechanism. Additionally, it leverages the long-sequence dependence capabilities of Transformer technology to accurately construct user interest profiles and predict the evolution of user interests. Experimental results indicate that CAT-MF Rec significantly outperforms existing recommendation methods across various performance indicators. This advancement offers new theoretical and practical insights for the development of video recommendation, particularly in addressing complex and dynamic user behavior patterns.
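The cross-attention mechanism used above to fuse the two feedback streams can be sketched in plain Python. This is a generic scaled dot-product attention, not the paper’s actual CAT-MF Rec implementation, and the feature vectors are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Each query row attends over all key/value rows (scaled dot-product)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Hypothetical 2-dim features: the explicit-feedback sequence (queries)
# attends over the implicit-feedback sequence (keys/values).
explicit = [[1.0, 0.0], [0.0, 1.0]]
implicit = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
fused = cross_attention(explicit, implicit, implicit)
```

Each fused row is a convex combination of the implicit-feedback rows, weighted by similarity to the explicit-feedback query, which is the basic idea behind letting one behavior stream condition the other.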
Funding: Shenzhen Science and Technology Programme, Grant/Award Number: JCYJ20230807120800001; 2023 Shenzhen sustainable supporting funds for colleges and universities, Grant/Award Number: 20231121165240001; Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, Grant/Award Number: 2024B1212010006.
Abstract: Internal learning-based video inpainting methods have shown promising results by exploiting the intrinsic properties of the video to fill in the missing region without external dataset supervision. However, existing internal learning-based video inpainting methods can produce inconsistent structures or blurry textures due to insufficient utilisation of motion priors within the video sequence. In this paper, the authors propose a new internal learning-based video inpainting model called the appearance consistency and motion coherence network (ACMC-Net), which not only learns the recurrence of the appearance prior but also captures the motion coherence prior to improve the quality of the inpainting results. In ACMC-Net, a transformer-based appearance network is developed to capture global context information within the video frame for representing appearance consistency accurately. Additionally, a novel motion coherence learning scheme is proposed to learn the motion prior in a video sequence effectively. Finally, the learnt internal appearance consistency and motion coherence are implicitly propagated to the missing regions to achieve good inpainting. Extensive experiments conducted on the DAVIS dataset show that the proposed model achieves superior performance in terms of quantitative measurements and produces more visually plausible results compared with state-of-the-art methods.
Abstract: Airway management plays a crucial role in providing adequate oxygenation and ventilation to patients during various medical procedures and emergencies. When patients have a limited mouth opening due to factors such as trauma, inflammation, or anatomical abnormalities, airway management becomes challenging. A commonly utilized method to overcome this challenge is video laryngoscopy (VL), which employs a specialized device equipped with a camera and a light source to allow a clear view of the larynx and vocal cords. VL overcomes the limitations of direct laryngoscopy in patients with limited mouth opening, enabling better visualization and successful intubation. Various types of VL blades are available. We devised a novel flangeless video laryngoscope for use in patients with a limited mouth opening and then tested it on a manikin.
Abstract: Semantic segmentation is a core task in computer vision that allows AI models to interact with and understand their surrounding environment. Similarly to how humans subconsciously segment scenes, this ability is crucial for scene understanding. However, a challenge many semantic learning models face is the lack of data. Existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. Thus, one of our key contributions is a customized semantic segmentation version of the Walking Tours Dataset that features hour-long, high-resolution, real-world data from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic model OpenSeeD on our custom dataset and discuss future implications.
Abstract: Objective: The purpose of this study was to evaluate health education using videos and leaflets for preconception care (PCC) awareness among adolescent females up to six months after the health education. Methods: The subjects were female university students living in the Kinki area. A longitudinal survey was conducted on 67 members in the intervention group, who received the health education, and 52 members in the control group, who did not receive the health education. The primary outcome measures were knowledge of PCC and the subscales of the Health Promotion Lifestyle Profile. Surveys were conducted before, after, and six months after the intervention in the intervention group, and an initial survey and survey six months later were conducted in the control group. Cochran’s Q test, Bonferroni’s multiple comparison test, and McNemar’s test were used to analyze the knowledge of PCC data. The Health Awareness, Nutrition, and Stress Management subscales of the Health Promotion Lifestyle Profile were analyzed by paired t-test, and comparisons between the intervention and control groups were performed using the two-way repeated measures analysis of variance. Results: In the intervention group of 67 people, the number of subjects who answered “correct” for five of the nine items concerning knowledge of PCC increased immediately after the health education (P = 0.006) but decreased for five items from immediately after the health education to six months later (P = 0.043). In addition, the number of respondents who answered “correct” for “low birth weight infants and future lifestyle-related diseases” (P = 0.016) increased after six months compared with before the health education. For the 52 subjects in the control group, there was no change in the number of subjects who answered “correct” for eight out of the nine items after six months. 
There was also no increase in scores for the Health Promotion Lifestyle Profile after six months for either the intervention or control group. Conclusion: Providing health education about PCC using videos and leaflets to adolescent females was shown to enhance the knowledge of PCC immediately after the education.
Funding: supported by the National Natural Science Foundation of China (Nos. NSFC 61925105, 62322109, 62171257 and U22B2001), the Xplorer Prize in Information and Electronics Technologies, and the Tsinghua University (Department of Electronic Engineering)-Nantong Research Institute for Advanced Communication Technologies Joint Research Center for Space, Air, Ground and Sea Cooperative Communication Network Technology.
Abstract: Multimedia semantic communication has been receiving increasing attention due to its significant enhancement of communication efficiency. Semantic coding, which is oriented towards extracting and encoding the key semantics of video for transmission, is a key aspect of the multimedia semantic communication framework. In this paper, we propose a low-bitrate facial video semantic coding method based on the temporal continuity of video semantics. At the sender’s end, we selectively transmit facial keypoints and deformation information, allocating distinct bitrates to different keypoints across frames. Compressive techniques involving sampling and quantization are employed to reduce the bitrate while retaining facial key semantic information. At the receiver’s end, a GAN-based generative network is utilized for reconstruction, effectively mitigating the block artifacts and buffering problems present in traditional codec algorithms at low bitrates. The performance of the proposed approach is validated on multiple datasets, such as VoxCeleb and TalkingHead-1kH, employing metrics such as LPIPS, DISTS, and AKD for assessment. Experimental results demonstrate significant advantages over traditional codec methods, achieving up to approximately 10-fold bitrate reduction in prolonged, stable head-pose scenarios across diverse conversational video settings.
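The sampling-and-quantization step described above can be illustrated with a uniform quantizer over normalized keypoint coordinates. The bit depths and coordinate values below are hypothetical, not the paper’s actual per-keypoint bitrate allocation.

```python
def quantize(value: float, bits: int) -> int:
    """Map a coordinate in [0, 1] to one of 2**bits discrete levels."""
    levels = (1 << bits) - 1
    return round(min(max(value, 0.0), 1.0) * levels)

def dequantize(code: int, bits: int) -> float:
    """Recover an approximate coordinate from its quantization code."""
    levels = (1 << bits) - 1
    return code / levels

# Hypothetical facial keypoint (normalized x, y). A stable keypoint might get
# fewer bits per axis than a fast-moving one; here both axes use 6 bits.
x, y = 0.4213, 0.7781
coded = (quantize(x, 6), quantize(y, 6))
recovered = (dequantize(coded[0], 6), dequantize(coded[1], 6))
```

With 6 bits per axis the worst-case rounding error is half a quantization step (0.5/63 of the coordinate range), which is the basic trade-off a bitrate allocation scheme tunes per keypoint.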
Abstract: The application of short videos in agricultural scenarios has become a new form of productive force driving agricultural development, injecting new vitality and opportunities into traditional agriculture. These videos leverage the unique expressive logic of the platform by adopting a small entry point and prioritizing dissemination rate. They are strategically planned in terms of content, visuals, and interaction to cater to users’ needs for relaxation, knowledge acquisition, social sharing, agricultural product marketing, and talent display. Through careful design, full creativity, rich emotion, and the creation of distinct character personalities, these videos deliver positive, entertaining, informative, and opinion-driven agricultural content. The production and operation of agricultural short videos can be effectively optimized by analyzing the characteristics of both popular and less popular videos and by utilizing smart tools and trending topics.
Abstract: Objectives: Medical students often rely on recreational internet media to relieve the stress caused by immense academic and life pressures, and among these media, short-form videos, an emerging digital medium, have gradually become the mainstream choice of students for relieving stress. However, the addiction caused by their usage has attracted widespread attention from both academia and society. The purpose of this study is therefore to systematically explore the underlying mechanisms that link perceived stress, entertainment gratification, emotional gratification, short-form video usage intensity, and short-form video addiction, based on multiple theoretical frameworks, including the Compensatory Internet Use model (CIU), the Interaction of Person-Affect-Cognition-Execution model (I-PACE), and Use and Gratification Theory (UGT). Methods: A hypothetical model with 9 research hypotheses was constructed. Taking medical students from Chinese universities as the research subjects, 1057 valid responses were collected through an online questionnaire survey, including 358 males and 658 females. Structural equation modelling (SEM) was performed using the AMOS software to test the research hypotheses. Results: (1) Perceived stress positively predicted entertainment gratification and emotional gratification (β=0.72, p<0.001; β=0.61, p<0.001); (2) entertainment gratification and emotional gratification positively influenced short-form video usage intensity (β=0.35, p<0.001; β=0.19, p<0.001); (3) entertainment gratification and emotional gratification positively predicted short-form video addiction (β=0.40, p<0.001; β=0.17, p<0.001); (4) short-form video usage intensity positively influenced short-form video addiction (β=0.36, p<0.001); and (5) perceived stress exerted an indirect but positive effect on both short-form video usage intensity and short-form video addiction, mediated by entertainment and emotional gratification (β=0.37, p<0.001; β=0.52, p<0.001). Conclusion: This study revealed the mechanisms that underlie medical students’ short-form video addiction in stressful situations. It was found that stress enhances medical students’ need for entertainment and emotional online compensation, prompting more frequent short-form video usage and ultimately leading to addiction. These results underscore the need to address the stressors faced by medical students. Effective interventions should prioritise stress management strategies and promote healthier alternative coping mechanisms to mitigate the risk of addiction.
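The mediated effects in result (5) follow the standard product-of-coefficients logic of SEM mediation analysis. The sketch below combines the reported path estimates only to illustrate that logic; the paper’s own indirect effects would typically be estimated with bootstrapping in AMOS rather than this simple arithmetic.

```python
def indirect_effect(path_a: float, path_b: float) -> float:
    """Product-of-coefficients estimate of a simple mediated path (a * b)."""
    return path_a * path_b

# Reported direct paths (stress -> gratification -> usage intensity):
stress_to_entertainment = 0.72
entertainment_to_usage = 0.35
stress_to_emotion = 0.61
emotion_to_usage = 0.19

# Total indirect effect of stress on usage intensity through both parallel mediators.
total = (indirect_effect(stress_to_entertainment, entertainment_to_usage)
         + indirect_effect(stress_to_emotion, emotion_to_usage))
```

The sum 0.72×0.35 + 0.61×0.19 ≈ 0.37 lines up with the reported indirect effect of perceived stress on usage intensity (β=0.37), showing how the parallel-mediator paths compose.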
Funding: supported by the International Joint Research Project of Huiyan International College, Faculty of Education, Beijing Normal University (Grant Number: ICER202102).
Abstract: Objectives: Short video addiction has emerged as a significant public health issue in recent years, with a growing trend toward severity. However, research on the causes and impacts of short video addiction remains limited, and understanding of the variable “TikTok brain” is still in its infancy. Therefore, based on the Stimulus-Organism-Behavior-Consequence (SOBC) framework, we proposed six research hypotheses and constructed a model to explore the relationships between short video usage intensity, TikTok brain, short video addiction, and decreased attention control. Methods: Given that students are considered a high-risk group for excessive short video use, we collected data from 1086 valid participants among Chinese student users, including 609 males (56.1%) and 477 females (43.9%), with an average age of 19.84 years, to test the hypotheses. Results: (1) Short video usage intensity was positively related to short video addiction, TikTok brain, and decreased attention control; (2) TikTok brain was positively related to short video addiction and decreased attention control; and (3) short video addiction was positively related to decreased attention control. Conclusions: These findings suggest that although excessive use of short video applications brings negative consequences, users still spend significant amounts of time on these platforms, indicating a need for strict self-regulation of usage time.
Funding: supported by the Zhejiang Provincial Natural Science Foundation of China (No. LQ23F030001), the National Natural Science Foundation of China (No. 62406280), the Autism Research Special Fund of Zhejiang Foundation for Disabled Persons (No. 2023008), the Liaoning Province Higher Education Innovative Talents Program Support Project (No. LR2019058), the Liaoning Province Joint Open Fund for Key Scientific and Technological Innovation Bases (No. 2021-KF-12-05), the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (No. 2023JH6/100100066), and the Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, China, and in part by the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning.
Abstract: Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level ones based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of “Factor” to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
Abstract: Objective: The objective of this study is to determine the effect of a nurse-led instructional video (NLIV) on anxiety, satisfaction, and recovery among mothers admitted for cesarean section (CS). Materials and Methods: A quasi-experimental design was carried out with mothers scheduled for CS. Eighty participants were selected by a purposive sampling technique and divided (40 participants in each group) into an experimental group and a control group. The nurse-led instructional video (NLIV) was shown to the experimental group, and routine care was provided for the control group. The modified Hospital Anxiety and Depression Scale (HADS), a scale for measuring maternal satisfaction in cesarean birth, and an obstetric quality-of-recovery measure following cesarean delivery were used to assess anxiety, satisfaction, and recovery. Results: Both the experimental and control groups showed significant reductions in anxiety by the first postintervention day (P<0.001), with the experimental group experiencing a greater mean reduction (mean difference [MD]=4.37) than the control group (MD=3.35), but the intergroup difference was not statistically significant (P>0.05). The experimental group reported significantly higher satisfaction scores (175.55±9.42) on the 3rd postoperative day compared to the control group (151.93±14.89; P<0.001). Similarly, the experimental group’s recovery scores (79.90±6.24) were considerably higher than those of the control group (62.45±15.18; P<0.001). On the 3rd postintervention day, satisfaction was significantly associated with age (P<0.001), and recovery with gravidity (P<0.05). Conclusions: NLIV can be used in the preoperative period to reduce anxiety related to CS and to improve satisfaction and recovery after CS.
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232021A-10); National Natural Science Foundation of China (No. 61903078); Shanghai Sailing Program, China (No. 22YF1401300); Natural Science Foundation of Shanghai, China (No. 20ZR1400400).
Abstract: Video classification is an important task in video understanding and plays a pivotal role in the intelligent monitoring of information content. Most existing methods do not consider the multimodal nature of video, and their modality fusion approaches tend to be too simple, often neglecting modality alignment before fusion. This research introduces a novel dual-stream multimodal alignment and fusion network named DMAFNet for classifying short videos. The network uses two unimodal encoder modules to extract features within modalities and exploits a multimodal encoder module to learn interactions between modalities. To solve the modality alignment problem, contrastive learning is introduced between the two unimodal encoder modules. Additionally, masked language modeling (MLM) and video-text matching (VTM) auxiliary tasks are introduced to improve the interaction between video frames and text modalities through backpropagation of the loss functions. Diverse experiments prove the efficiency of DMAFNet in multimodal video classification tasks. Compared with two other mainstream baselines, DMAFNet achieves the best results on the 2022 WeChat Big Data Challenge dataset.
Abstract: The Double Take column looks at a single topic from an African and Chinese perspective. This month, we explore how we can cope with the influence of short videos.
Funding: the financial support from the China Scholarship Council (Grant No. 202106370009) and the Alberta Innovates Graduate Student Scholarship.
Abstract: Objective: While there is consensus regarding a positive effect of video gaming on dexterity, little is known regarding how much traditional laparoscopic practice can or should be substituted with video gaming. This study was designed to assess the effects of varying the amounts of traditional practice in a lap box trainer and video gaming on performance in two Fundamentals of Laparoscopic Surgery core tasks. Methods: Undergraduate and medical students were recruited and randomized into one of four groups: a control group, a lap box group, a video game group, and a combined group with 50% of the time allocated to each modality. Performance in the peg transfer and precision cutting tasks was assessed both prior to and following the six training sessions. Results: Peg transfer performance significantly improved in the lap box group (168.4±70.6 s vs. 332.9±178.2 s, p<0.001), the video game group (176.7±53.3 s vs. 300.0±101.2 s, p<0.001), and the combined group (214.2±86.9 s vs. 406.8±239.5 s, p=0.002) after training. Similar improvements were also observed in precision cutting performance in the lap box group (413.1±138.4 s vs. 614.3±211.4 s, p=0.002), the video game group (434.1±150.8 s vs. 609.2±233.2 s, p=0.007), and the combined group (469.2±185.3 s vs. 663.8±296.3 s, p=0.020). When analyzing improvements in performance across the three training groups compared with the control group, we found that both the lap box group (p<0.001) and the combined group (p<0.001) showed better improvement in both tasks, and the video game group had significantly better outcomes in the precision cutting task (p=0.003). Conclusion: Traditional lap box training remains the most effective method for improving performance in simulated laparoscopic surgery. Video games can be encouraged to enhance skills retention and supplement simulated practice outside of a formal training curriculum.