Funding: the National Natural Science Foundation of China (No. 62001197).
Abstract: OpenAI and ChatGPT, as state-of-the-art language models driven by cutting-edge artificial intelligence technology, have gained widespread adoption across diverse industries. In the realm of computer vision, these models have been employed for intricate tasks including object recognition, image generation, and image processing, leveraging their advanced capabilities to fuel transformative breakthroughs. Within the gaming industry, they have found utility in crafting virtual characters and generating plots and dialogues, thereby enabling immersive and interactive player experiences. Furthermore, these models have been harnessed in medical diagnosis, providing invaluable insights and support to healthcare professionals in disease detection. The principal objective of this paper is to offer a comprehensive overview of OpenAI, OpenAI Gym, ChatGPT, DALL·E, Stable Diffusion, the pre-trained CLIP model, and other pertinent models across various domains, encompassing CLIP text-to-image generation, education, medical imaging, computer vision, social influence, natural language processing, software development, coding assistance, and chatbots, among others. Particular emphasis is placed on comparative analysis and examination of popular text-to-image and text-to-video models under diverse stimuli, shedding light on the current research landscape, emerging trends, and existing challenges within the domains of OpenAI and ChatGPT. Through a rigorous literature review, this paper aims to deliver a professional and insightful overview of the advancements, potentials, and limitations of these pioneering language models.
Funding: supported by the Global STEM Professorship Scheme (P0046113) and the Start-up Fund for RAPs under the Strategic Hiring Scheme (P0048623) from HKSAR.
Abstract: Purpose: To evaluate the accuracy and reasoning ability of DeepSeek-R1 and three recently released large language models (LLMs) in bilingual complex ophthalmology cases. Methods: A total of 130 multiple-choice questions (MCQs) related to diagnosis (n=39) and management (n=91) were collected from the Chinese ophthalmology senior professional title examination and categorized into six topics. These MCQs were translated into English. Responses from DeepSeek-R1, Gemini 2.0 Pro, OpenAI o1, and OpenAI o3-mini were generated under default configurations between February 15 and February 20, 2025. Accuracy was calculated as the proportion of correctly answered questions, with omissions and extra answers counted as incorrect. Reasoning ability was evaluated by analyzing reasoning logic and the causes of reasoning errors. Results: DeepSeek-R1 demonstrated the highest overall accuracy, achieving 0.862 on Chinese MCQs and 0.808 on English MCQs. Gemini 2.0 Pro, OpenAI o1, and OpenAI o3-mini attained accuracies of 0.715, 0.685, and 0.692 on Chinese MCQs (all P<0.001 compared with DeepSeek-R1), and 0.746 (P=0.115), 0.723 (P=0.027), and 0.577 (P<0.001) on English MCQs, respectively. DeepSeek-R1 achieved the highest accuracy across five topics in both Chinese and English MCQs, and it also excelled in management questions conducted in Chinese (all P<0.05). Reasoning analysis showed that the four LLMs shared similar reasoning logic. Ignoring key positive history, ignoring key positive signs, misinterpreting medical data, and overusing non-first-line interventions were the most common causes of reasoning errors. Conclusions: DeepSeek-R1 demonstrated superior performance on bilingual complex ophthalmology reasoning tasks compared with three state-of-the-art LLMs. These findings highlight the potential of advanced LLMs to assist in clinical decision-making and suggest a framework for evaluating reasoning capabilities.
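The scoring rule described in the Methods above (accuracy as the proportion of exactly correct answers, with omissions and extra answers both counted as incorrect) can be sketched as follows. This is a minimal illustration, not the authors' code; all names are hypothetical.

```python
# Illustrative sketch of the abstract's scoring rule: a multiple-choice
# response counts as correct only when the selected options exactly match
# the answer key, so an omitted option or an extra option both make the
# whole response incorrect.

def mcq_accuracy(responses, answer_key):
    """Return the proportion of responses whose option set exactly matches the key."""
    correct = sum(
        1 for resp, key in zip(responses, answer_key)
        if set(resp) == set(key)
    )
    return correct / len(answer_key)

# Example: the third response omits the required option, the fourth adds an extra.
key = [{"A"}, {"B", "C"}, {"D"}, {"A"}]
resp = [{"A"}, {"B", "C"}, set(), {"A", "B"}]
print(mcq_accuracy(resp, key))  # 0.5
```

Under this rule, partial credit is never awarded, which matches how the abstract treats omissions and extra answers.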