Abstract: To predict flight turnaround time more accurately, airports across China are classified according to how differences in airport scale, together with the differences in passenger flow and weather caused by different geographic locations, affect turnaround time. Based on the flight data of each airport class, a hybrid Light Gradient Boosting Machine (LightGBM) model is built to predict flight turnaround time per class. An adaptive robust loss function (ARLF) is introduced to improve the LightGBM loss function and reduce the influence of outliers in the flight data; an improved sparrow search algorithm is then used to tune the parameters of the modified model, yielding the hybrid LightGBM model. The method is validated on nationwide flight data for the whole of 2019, and the experimental results confirm its feasibility.
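The core of the method above is swapping LightGBM's default squared-error objective for a robust one. Below is a minimal Python sketch of such a custom objective, assuming a Barron-style general robust loss with a fixed shape ALPHA and scale C; the paper adapts these (ARLF) and tunes the model with an improved sparrow search algorithm, both of which are omitted here. X_train and y_train are placeholders for the turnaround-time features and labels.

```python
import numpy as np
import lightgbm as lgb

# Fixed shape/scale for illustration; the paper adapts them (ARLF) and tunes the
# model with an improved sparrow search algorithm, both omitted in this sketch.
ALPHA, C = 1.0, 1.0   # 0 < ALPHA < 2 keeps the loss robust; ALPHA = 0 or 2 are special cases

def arlf_objective(y_true, y_pred):
    """Barron-style general robust loss as a LightGBM custom objective (grad, hess)."""
    x = y_pred - y_true                               # residual
    b = abs(ALPHA - 2.0)
    u = 1.0 + (x / C) ** 2 / b                        # inner term of the robust loss
    grad = (x / C ** 2) * u ** (ALPHA / 2.0 - 1.0)
    hess = (1.0 / C ** 2) * u ** (ALPHA / 2.0 - 2.0) * (u + np.sign(ALPHA - 2.0) * (x / C) ** 2)
    return grad, np.maximum(hess, 1e-6)               # keep the hessian positive for the solver

# Placeholder training call; the other hyperparameters are the ones the improved
# sparrow search algorithm would tune in the full pipeline.
model = lgb.LGBMRegressor(objective=arlf_objective, n_estimators=300, learning_rate=0.05)
# model.fit(X_train, y_train)
```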
Funding: Supported by the Zhejiang Provincial Key Research and Development Program (Grant No. 2021C04015).
Abstract: Learning from demonstration is widely regarded as a promising paradigm for robots to acquire diverse skills. Unlike machines, which learn artificially from observation-action pairs, humans can imitate in a more versatile and effective manner, acquiring skills through mere "observation". The Video to Command task is widely seen as a promising approach to task-based learning, yet it faces two key challenges: (1) the high redundancy and low frame rate of fine-grained action sequences make it difficult to manipulate objects robustly and accurately; (2) Video to Command models often prioritize the accuracy and richness of output commands over physical capabilities, leading to impractical or unsafe instructions for robots. This article presents a novel Video to Command framework that employs multiple data associations and physical constraints. First, we introduce an object-level, appearance-contrasting multiple data association strategy to effectively associate manipulated objects in visually complex environments and capture dynamic changes in video content. Then, we propose a multi-task Video to Command model that uses object-level video content changes to compile expert demonstrations into manipulation commands. Finally, a multi-task hybrid loss function is proposed to train a Video to Command model that adheres to the constraints of the physical world and the manipulation tasks. Our method achieves improvements of more than 10% in BLEU_N, METEOR, ROUGE_L, and CIDEr over state-of-the-art methods. A dual-arm robot prototype was built to demonstrate the whole process of learning multiple skills from expert demonstrations and then executing the tasks on the robot.
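To illustrate how a multi-task hybrid loss of the kind described above can be assembled, here is a hedged PyTorch sketch that combines a command-captioning cross-entropy, an auxiliary recognition loss, and a penalty for predicted end-effector poses outside a reachable workspace. The individual terms, the weights W_CMD/W_AUX/W_PHYS, and the tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical task weights; the paper does not specify its weighting scheme.
W_CMD, W_AUX, W_PHYS = 1.0, 0.5, 0.2

def hybrid_v2c_loss(cmd_logits, cmd_targets, aux_logits, aux_targets,
                    pred_poses, ws_lo, ws_hi):
    """Multi-task hybrid loss: command captioning + auxiliary recognition + physical penalty.

    cmd_logits: (B, T, V) token logits, cmd_targets: (B, T) token ids (0 = padding, assumed)
    aux_logits: (B, C) auxiliary-task logits, aux_targets: (B,) class ids
    pred_poses: (B, D) predicted end-effector poses, ws_lo / ws_hi: reachable-workspace bounds
    """
    # (1) token-level cross-entropy over the generated command sequence
    loss_cmd = F.cross_entropy(cmd_logits.flatten(0, 1), cmd_targets.flatten(), ignore_index=0)
    # (2) auxiliary task, e.g. manipulated-object or action recognition
    loss_aux = F.cross_entropy(aux_logits, aux_targets)
    # (3) penalize poses that leave the reachable workspace (a simple physical constraint)
    violation = F.relu(ws_lo - pred_poses) + F.relu(pred_poses - ws_hi)
    loss_phys = violation.pow(2).mean()
    return W_CMD * loss_cmd + W_AUX * loss_aux + W_PHYS * loss_phys
```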
Funding: Supported by the National Natural Science Foundation of China (No. 62266025).
Abstract: Segmentation of the retinal vessels in the fundus is crucial for diagnosing ocular diseases. Retinal vessel images often suffer from category imbalance and large variations in vessel scale, which ultimately results in incomplete vessel segmentation and poor continuity. In this study, we propose CT-MFENet to address these issues. First, a context transformer (CT) integrates contextual feature information, which helps establish connections between pixels and addresses incomplete vessel continuity. Second, multi-scale dense residual networks are used instead of a traditional CNN to address inadequate local feature extraction when the model encounters vessels at multiple scales. In the decoding stage, we introduce a local-global fusion module, which enhances the localization of vascular information and reduces the semantic gap between high- and low-level features. To address the class imbalance in retinal images, we propose a hybrid loss function that enhances the model's ability to segment topological structures. We conducted experiments on the publicly available DRIVE, CHASEDB1, STARE, and IOSTAR datasets. The results show that CT-MFENet outperforms most existing methods, including the baseline U-Net.
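The abstract does not give the exact form of CT-MFENet's hybrid loss, but a common construction for class-imbalanced vessel masks combines a weighted binary cross-entropy with a soft Dice term. The PyTorch sketch below shows that generic form only; dice_weight and pos_weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hybrid_vessel_loss(logits, target, dice_weight=0.5, pos_weight=4.0, eps=1e-6):
    """Weighted BCE + soft Dice on (B, 1, H, W) logits and binary vessel masks."""
    # weighted BCE counteracts the small foreground (vessel) fraction
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight, device=logits.device))
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)   # soft Dice loss per image
    return (1.0 - dice_weight) * bce + dice_weight * dice.mean()
```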
Funding: Funded by the National Natural Science Foundation of China under Grant No. 61172167 and the Science Fund Project of Heilongjiang Province (LH2020F035).
Abstract: Nuclear magnetic resonance imaging of the breast often presents complex backgrounds. Breast tumors exhibit varying sizes, uneven intensity, and indistinct boundaries, which can lead to low accuracy and mis-segmentation during tumor segmentation. We therefore propose a two-stage breast tumor segmentation method leveraging multi-scale features and boundary attention mechanisms. Initially, the breast region of interest is extracted to isolate the breast area from surrounding tissues and organs. Subsequently, we devise a fusion network incorporating multi-scale features and boundary attention mechanisms for breast tumor segmentation. We incorporate multi-scale parallel dilated convolution modules into the network, enhancing its capability to segment tumors of various sizes through multi-scale convolution and novel fusion techniques. Additionally, attention and boundary detection modules are included to augment the network's capacity to locate tumors by capturing non-local dependencies in both the spatial and channel domains. Furthermore, a hybrid loss function with boundary weighting is employed to address sample class imbalance and strengthen boundary preservation through an additional loss term. The method was evaluated on breast data from 207 patients at Ruijin Hospital and achieved a 6.64% increase in Dice similarity coefficient over the benchmark U-Net. Experimental results demonstrate the superiority of the method over other segmentation techniques, with fewer model parameters.
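A boundary-weighted hybrid loss of the kind mentioned above can be sketched by deriving a per-pixel weight map from the morphological gradient of the ground-truth mask and combining a weighted BCE with a soft Dice term. The PyTorch code below is an illustrative reconstruction under those assumptions, not the paper's exact loss; the kernel size and boundary_gain are hypothetical.

```python
import torch
import torch.nn.functional as F

def boundary_weight_map(target, kernel=3, boundary_gain=5.0):
    """Per-pixel weights emphasizing the tumor boundary (morphological gradient of the mask)."""
    pad = kernel // 2
    dilated = F.max_pool2d(target, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-target, kernel, stride=1, padding=pad)
    boundary = (dilated - eroded).clamp(0.0, 1.0)      # 1 inside a thin band around the boundary
    return 1.0 + boundary_gain * boundary

def boundary_weighted_hybrid_loss(logits, target, dice_weight=0.5, eps=1e-6):
    """Boundary-weighted BCE + soft Dice on (B, 1, H, W) logits and binary tumor masks."""
    w = boundary_weight_map(target)
    bce = F.binary_cross_entropy_with_logits(logits, target, weight=w)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)
    return (1.0 - dice_weight) * bce + dice_weight * dice.mean()
```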