Abstract: This paper presents a novel multiclass system designed to detect pleural effusion and pulmonary edema in chest X-ray images, addressing the critical need for early detection in healthcare. A new comprehensive dataset was formed by combining 28,309 samples from the ChestX-ray14, PadChest, and CheXpert databases, with 10,287, 6,022, and 12,000 samples representing Pleural Effusion, Pulmonary Edema, and Normal cases, respectively. The preprocessing step applies the Contrast Limited Adaptive Histogram Equalization (CLAHE) method to boost the local contrast of the X-ray samples, then resizes the images to 380×380 pixels, followed by data augmentation. The classification task employs a deep learning model based on the EfficientNet-V1-B4 architecture, trained with the AdamW optimizer. The proposed multiclass system achieved an accuracy (ACC) of 98.3%, recall of 98.3%, precision of 98.7%, and F1-score of 98.7%. Moreover, the robustness of the model was revealed by Receiver Operating Characteristic (ROC) analysis, which demonstrated an Area Under the Curve (AUC) of 1.00 for edema and normal cases and 0.99 for effusion. The experimental results demonstrate the superiority of the proposed multiclass system, which has the potential to assist clinicians in timely and accurate diagnosis, leading to improved patient outcomes. Notably, Ablation-CAM visualization at the last convolutional layer produced heat maps on the X-ray images, further enhancing diagnostic capability and helping clinicians interpret and localize abnormalities more effectively.
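The abstract describes a preprocessing pipeline of CLAHE contrast enhancement followed by resizing to 380×380. The paper's exact implementation is not given; in practice CLAHE is usually applied via `cv2.createCLAHE`. As a dependency-free sketch of the same idea, the pure-Python code below performs global histogram equalization (CLAHE adds tiling and clip limiting on top of this) and nearest-neighbor resizing; the function names and the 256-level assumption are illustrative, not from the paper.

```python
def equalize(img, levels=256):
    """Global histogram equalization for a 2D list of ints in [0, levels).

    CLAHE refines this by equalizing per-tile with a clip limit; the
    remapping formula below is the shared core of both methods.
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of pixel intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return [row[:] for row in img]
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize, e.g. to the 380x380 input the paper uses."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]
```

A real pipeline would chain these per sample (`resize_nearest(equalize(img), 380, 380)`) before augmentation and batching; with OpenCV the equalization step becomes `cv2.createCLAHE(clipLimit=..., tileGridSize=...).apply(img)`.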
Abstract: Large Language Models (LLMs) perform well on general language instructions, but their ability to handle specialized questions in the printing domain still needs improvement. This study fine-tunes an open-source LLM on a purpose-built, high-quality printing-domain dataset, using clear fine-tuning prompts to guide the model toward the expected answers. On this basis, a service-oriented LLM for printing-domain application scenarios was designed, with customized training improving its performance in that domain. The process involves two key tasks: constructing a printing-domain fine-tuning dataset through data collection, cleaning, annotation, and augmentation; and performing supervised fine-tuning with Qwen-7B-Chat as the base model, combining the LoRA method for parameter-efficient task adaptation with the AdamW optimizer to drive the fine-tuning process. Validation results show that, compared with the original model, the fine-tuned Qwen-7B-Chat produces answers approximately 302.92% longer and maintains a higher satisfaction rate in the answer-quality evaluation.
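The abstract's second key task combines LoRA with supervised fine-tuning for parameter-efficient adaptation. LoRA freezes the base weight W and trains only a low-rank update, so the effective weight is W + (α/r)·BA with B of shape d_out×r and A of shape r×d_in. The paper's ranks and target modules are not stated; the dimensions and helper names below are hypothetical, purely to illustrate the math and the parameter savings.

```python
def matmul(A, B):
    """Naive (m x k) @ (k x n) matrix product as nested lists."""
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)]
            for i in range(m)]

def lora_effective_weight(W, A, B, alpha):
    """Merged weight W + (alpha/r) * B @ A.

    W: d_out x d_in frozen base weight (not updated during fine-tuning)
    A: r x d_in and B: d_out x r are the only trainable matrices.
    """
    r = len(A)
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

def trainable_params(d_out, d_in, r):
    """LoRA trains r*(d_out + d_in) values instead of d_out*d_in."""
    return r * (d_out + d_in)
```

For a hypothetical 4096×4096 projection at rank r=8, LoRA trains 8×(4096+4096) ≈ 65K parameters versus ~16.8M for full fine-tuning, which is what makes adapting a 7B-parameter model like Qwen-7B-Chat tractable; in practice this is configured through a library such as Hugging Face PEFT rather than implemented by hand.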