Objective: To comparatively analyze the readability and information quality of Chinese- and English-language education materials for patients undergoing thoracoscopic lobectomy generated by three mainstream large language models (LLMs), namely DeepSeek, Grok-3, and ChatGPT, and to provide an evidence-based basis for the clinical selection of AI-assisted education tools. Method: A cross-sectional study design was adopted, with "education for patients undergoing thoracoscopic lobectomy" as the core requirement. Standardized Chinese and English prompts drove each of the three models to generate three independent educational materials (18 in total: 9 in Chinese and 9 in English). Readability was evaluated with internationally recognized tools (English: Flesch-Kincaid Grade Level, FKGL, and Flesch Reading Ease, FRE; Chinese: average sentence length), and information quality was evaluated with the DISCERN instrument. Differences among the three models were compared with the Kruskal-Wallis H test, differences between the Chinese and English versions were analyzed with the paired-sample t-test, and inter-rater reliability was tested with the intraclass correlation coefficient (ICC). Result: 1. Readability: Among the English versions, DeepSeek V3 had the highest FRE score (80.36±1.18) and the lowest FKGL score (4.83±0.12), significantly better than ChatGPT-o3 (FRE: 67.36±0.74; FKGL: 6.56±0.36) and Grok-3 (FRE: 45.67±1.65; FKGL: 11.93±0.17) (P<0.05). Among the Chinese versions, Grok-3 had the shortest average sentence length (17.74±1.02 characters), significantly better than ChatGPT-o3 (27.81±1.47 characters) and DeepSeek V3 (26.75±1.18 characters) (P<0.05). 2. Information quality: Inter-rater reliability was excellent (ICC=0.92, 95% CI: 0.925-0.998, P<0.001). The DISCERN total scores of the Chinese and English versions of all three models were at the "good to excellent" level (59.00-71.17 points). ChatGPT-o3 scored highest in both languages (English: 71.17±1.17; Chinese: 70.50±0.55) and Grok-3 lowest (English: 63.17±0.94; Chinese: 59.00±0.89), with statistically significant differences between groups (P<0.05). Conclusion: Among the thoracoscopic-lobectomy education materials generated by the three LLMs, the English output of DeepSeek V3 had the best readability, the Chinese output of Grok-3 showed outstanding reading fluency, and ChatGPT-o3 performed in a balanced way across both languages. The Chinese versions still need optimization in terminology consistency and information detail. In clinical application, the model should be selected according to language requirements, and AI-generated content should be professionally reviewed.
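The English-language metrics reported above follow standard formulas: FRE = 206.835 − 1.015·(words/sentence) − 84.6·(syllables/word), and FKGL = 0.39·(words/sentence) + 11.8·(syllables/word) − 15.59; the Chinese metric is simply mean characters per sentence. A minimal sketch of how such scores can be computed, using a rough vowel-group syllable heuristic (a simplification; production readability tools use dictionary-based syllabification):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per vowel group, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_en(text: str) -> tuple[float, float]:
    """Return (FRE, FKGL) for an English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

def avg_sentence_len_zh(text: str) -> float:
    """Mean characters per sentence, split on Chinese sentence-final punctuation."""
    sentences = [s for s in re.split(r"[。！？]", text) if s.strip()]
    return sum(len(s) for s in sentences) / len(sentences)
```

Higher FRE means easier text, while FKGL approximates a US school grade, which is why DeepSeek V3's FKGL of 4.83 sits at the commonly cited fifth-grade target for patient materials.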
The integration of visual elements, such as emojis, into educational content represents a promising approach to enhancing student engagement and comprehension. However, existing efforts in emoji integration often lack systematic frameworks capable of addressing the contextual and pedagogical nuances required for effective implementation. This paper introduces a novel framework that combines Data-Driven Error-Correcting Output Codes (DECOC), Long Short-Term Memory (LSTM) networks, and Multi-Layer Deep Neural Networks (ML-DNN) to identify optimal emoji placements within computer science course materials. The originality of the proposed system lies in its ability to leverage sentiment analysis techniques and contextual embeddings to align emoji recommendations with both the emotional tone and learning objectives of course content. A meticulously annotated dataset, comprising diverse topics in computer science, was developed to train and validate the model, ensuring its applicability across a wide range of educational contexts. Comprehensive validation demonstrated the system's superior performance, achieving an accuracy of 92.4%, precision of 90.7%, recall of 89.3%, and an F1-score of 90.0%. Comparative analysis with baseline models and related works confirms the model's ability to outperform existing approaches in balancing accuracy, relevance, and contextual appropriateness. Beyond its technical advancements, this framework offers practical benefits for educators by providing an Artificial Intelligence-assisted (AI-assisted) tool that facilitates personalized content adaptation based on student sentiment and engagement patterns. By automating the identification of appropriate emoji placements, teachers can enhance digital course materials with minimal effort, improving the clarity of complex concepts and fostering an emotionally supportive learning environment. This paper contributes to the emerging field of AI-enhanced education by addressing critical gaps in personalized content delivery and pedagogical support. Its findings highlight the transformative potential of integrating AI-driven emoji placement systems into educational materials, offering an innovative tool for fostering student engagement and enhancing learning outcomes. The proposed framework establishes a foundation for future advancements in the visual augmentation of educational resources, emphasizing scalability and adaptability for broader applications in e-learning.
<strong>Aim:</strong> The aim of this study was to explore patients’ preferences for forms of patient education material, including leaflets, podcasts, and videos; that is, to determine what forms of information, besides that provided verbally by healthcare personnel, patients prefer following visits to hospital. <strong>Methods: </strong>The study was a mixed-methods study, using a survey design with primarily quantitative items but with a qualitative component. A survey was distributed to patients over 18 years between May and July 2020, and 480 patients chose to respond.<strong> Results:</strong> Text-based patient education materials (leaflets) are the form that patients have the most experience with and were preferred by 86.46% of respondents; however, 50.21% and 31.67% of respondents would also like to receive patient education material in video and podcast formats, respectively. Furthermore, several respondents wrote about the need for different forms of patient education material, depending on the subject of the supplementary information. <strong>Conclusion: </strong>This study provides an overview of patient preferences regarding forms of patient education material. The results show that the majority of respondents prefer combinations of written, audio, and video material, thus applying and co-constructing a multimodal communication system from which they select and apply different modes of communication from different sources simultaneously.
Background: The quality of online Arabic educational materials for diabetic foot syndrome (DFS) is unknown. This study evaluated Arabic websites as patients’ sources of information on DFS. Methods: The study assessed patient-oriented websites about DFS using a modified Ensuring Quality of Information for Patients (EQIP) tool (score 0-35). Specific terms were searched in Google to identify DFS websites; eligibility criteria were applied to 20 pages of search results to select the included websites. Data on country of origin, source types and subtypes, and website traffic were extracted. Additional therapeutic information regarding prevention and conservative, pharmacological, and surgical treatments was also recorded and analyzed. Results: Among 559 websites, 157 were eligible for inclusion. The median EQIP score was 16 out of 35, indicating poor quality in one of three domains (content, identification, or structure). Most sources originated from Arab countries (75.8%), were non-governmental (94.9%), and were medical information websites (46.5%). High-scoring websites were significantly more likely than low-scoring websites to describe information on prevention (30.9% vs. 2.9%, p = 0.001), conservative treatment (34.1% vs. 13%, p = 0.002), or pharmacological treatment (32.5% vs. 16.8%, p = 0.024). A website's odds of scoring high increased if it provided information on prevention (OR = 12.9, 95% CI [1.68-98.57], p = 0.014). Conclusion: Most Arabic online patient information on DFS is of poor quality. Quality control measures are needed to ensure accurate health information for the public.
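The reported odds ratio (OR = 12.9, 95% CI [1.68-98.57]) is the kind of estimate that comes from a 2×2 table with a Wald (log-scale) confidence interval. A minimal sketch under that assumption (the cell counts in the usage example are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = high-scoring with the feature,  b = high-scoring without,
    c = low-scoring with the feature,   d = low-scoring without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 5, 2, 13)` gives OR = 13.0 with an interval of roughly [2.1, 81.5]. The very wide interval in the study reflects small cell counts, since the Wald standard error is the square root of the summed reciprocal counts.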
In The Catcher in the Rye, lonely Holden wanders around New York for three days after leaving school, and he feels even lonelier. What he sees and whom he meets stimulate his criticism and protest.
Dear Editor, The use of artificial intelligence (AI) to generate patient information has become increasingly common in medicine. In radiology, AI has demonstrated potential for improving patient education and care [1]. With the increased use of AI and large language models (LLMs) in medicine, it is important that their readability be evaluated. The Joint Commission states that the reading level of patient education material should be at or below a 5th-grade reading level; thus, LLM outputs must meet established readability standards to support effective patient education [2]. In this study, we evaluated the readability of information generated by the LLMs ChatGPT, Gemini, and Copilot on chronic diseases such as diabetes, cancer, and heart disease.
Objective: Patients are increasingly turning to the Internet as a source of healthcare information. Given that neck dissection is a common procedure within the field of Otolaryngology-Head and Neck Surgery, the aim of this study was to evaluate the quality and readability of online patient education materials on neck dissection. Methods: The first 10 pages of a Google search using the term "neck dissection" were analyzed. The DISCERN instrument was used to assess quality of information. Readability was calculated using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning-Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook (SMOG) Index. Results: Thirty-one online patient education materials (PEMs) were included. Fifty-five percent (n=17) of results originated from academic institutions or hospitals. The mean Flesch Reading Ease score was 61.2±11.9; fifty-two percent (n=16) of PEMs had Flesch Reading Ease scores above the recommended score of 65. The average reading grade level was 10.5±2.1. The average total DISCERN score was 43.6±10.1, and only 26% of PEMs had DISCERN scores corresponding to a "good quality" rating. There was a significant positive correlation between DISCERN scores and both Flesch Reading Ease scores and average reading grade level. Conclusions: The majority of PEMs were written above the recommended sixth-grade reading level, and the quality of online information pertaining to neck dissection was found to be suboptimal. This research highlights the need for patient education materials on neck dissection that are high quality and easily understandable by patients.
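Of the five indices used in such studies, the Coleman-Liau Index is the easiest to reproduce exactly, because it relies on letter counts rather than syllables: CLI = 0.0588·L − 0.296·S − 15.8, where L is letters per 100 words and S is sentences per 100 words. A minimal sketch (naive sentence splitting; real tools handle abbreviations and other edge cases):

```python
import re

def coleman_liau(text: str) -> float:
    """Coleman-Liau Index: 0.0588*L - 0.296*S - 15.8,
    where L = letters per 100 words, S = sentences per 100 words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    L = 100 * letters / len(words)
    S = 100 * len(sentences) / len(words)
    return 0.0588 * L - 0.296 * S - 15.8
```

Like FKGL, the result approximates a US school grade level, so the study's average grade level of 10.5 can be read directly against the recommended sixth-grade target.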
Funding: supported by the National High Level Hospital Clinical Research Funding (80102022501).
Funding: funded by the Deanship of Postgraduate Studies and Scientific Research at Majmaah University, grant number [R-2025-1637].