Introduction: Accurate contouring of thoracic organs at risk (OARs) is essential for minimizing complications in radiation treatment. Manual contouring of thoracic OARs is not only time-consuming but also prone to substantial user variation. To enhance efficiency and consistency, we developed a unified deep learning (DL) OAR contouring model, DeepOAR, trained on multiple partially labeled datasets to segment a comprehensive set of thoracic OARs following the Radiation Therapy Oncology Group (RTOG)-guided OAR atlas. The model supports the segmentation of six required and eight optional OARs defined by the NRG-RTOG 1106 trial, providing precise and reproducible OAR contours ready for use in radiotherapy practice. Materials and methods: Following the OAR contouring recommendations of the NRG-RTOG 1106 trial, we collected and curated three private datasets and two public datasets, comprising a total of 531 patients with partially annotated thoracic OARs. These partially annotated datasets were used to develop DeepOAR, which consists of a shared encoder and 14 separate decoders, each dedicated to one specific OAR. For model training, we used all patients from the two public datasets and 75% of the patients from the private datasets; the remaining 25% of the private datasets were reserved for independent testing. A multi-user study involving 21 radiation oncologists was conducted on 40 randomly selected patients from the independent testing dataset to evaluate the clinical applicability of DeepOAR. The Dice similarity coefficient (DSC) and average surface distance (ASD) were computed to quantify the model's delineation performance. Results: DeepOAR outperformed nnUNet (the benchmark medical segmentation model) across all 14 OARs, achieving mean DSC and ASD values of 88.4% and 1.0 mm, respectively, on the independent testing set. Multi-user validation demonstrated that 89.7% of DeepOAR-generated OARs were clinically acceptable or required only minor revisions. A comparison using two randomly selected patients showed that the delineation variability of DeepOAR was significantly smaller than the inter-user variation among radiation oncologists. Human editing of DeepOAR's predictions further improved delineation accuracy, with an average 3% increase in DSC and 40% reduction in ASD, while reducing the contouring workload of radiation oncologists for the 14 thoracic OARs by an average of 77.0%. Conclusion: We developed DeepOAR, a DL-based unified contouring model trained on multiple partially labeled datasets to delineate a comprehensive set of 14 thoracic OARs following the RTOG-guided OAR atlas. Both qualitative and quantitative results demonstrate the strong clinical applicability of DeepOAR in thoracic cancer radiotherapy workflows, with improved efficiency, comprehensiveness, and quality.
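As context for the reported metrics, the Dice similarity coefficient measures the overlap between a predicted mask and a reference mask. A minimal NumPy sketch follows; the function name and binary-mask representation are our own illustration, not part of DeepOAR:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred ∩ ref| / (|pred| + |ref|), ranging over [0, 1].
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two overlapping 2D masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels
print(round(dice_coefficient(a, b), 2))  # overlap = 4 → 2*4/(4+6) = 0.8
```

A mean DSC of 88.4%, as reported above, indicates that predicted and reference contours share most of their volume across the 14 OARs.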
Semantic segmentation is a core task in computer vision that allows AI models to interact with and understand their surrounding environment. Similar to how humans subconsciously segment scenes, this ability is crucial for scene understanding. However, a challenge many semantic learning models face is the lack of data. Existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. Thus, one of our key contributions is a customized semantic segmentation version of the Walking Tours Dataset, featuring hour-long, high-resolution, real-world footage from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic model OpenSeeD on our custom dataset and discuss future implications.
Unsupervised anomaly detection models for multivariate time series (MTS) face two problems: the autoencoder module easily fits anomalous samples, and latent-space features corresponding to normal MTS samples may be reconstructed as anomalous MTS. To address these problems, we design an MTS anomaly detection model with triple generative adversarial training, T-GAN. An LSTM autoencoder serves as the generator; pseudo-labels are generated from reconstruction errors, and a discriminator distinguishes the pseudo-label-filtered reconstructed MTS from the original MTS. Two further rounds of adversarial training constrain the latent space of the LSTM autoencoder to a uniform distribution, reducing the chance that its latent features are reconstructed into anomalous MTS. Experiments on several public MTS datasets show that T-GAN learns the distribution of normal MTS better on training sets containing contaminated data and achieves strong anomaly detection performance.
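The pseudo-labeling step described above can be sketched as follows. The threshold rule, flagging samples whose reconstruction error exceeds a percentile, is our own simplification for illustration and not necessarily the exact rule used in T-GAN:

```python
import numpy as np

def pseudo_labels(x: np.ndarray, x_rec: np.ndarray, pct: float = 90.0) -> np.ndarray:
    """Label each sample as likely-anomalous (1) or normal (0) from reconstruction error.

    x, x_rec: arrays of shape (n_samples, seq_len, n_features).
    Samples whose mean squared reconstruction error exceeds the
    `pct`-th percentile are flagged, so they can be filtered out
    before the discriminator sees the reconstructions.
    """
    err = ((x - x_rec) ** 2).mean(axis=(1, 2))   # per-sample MSE
    thresh = np.percentile(err, pct)
    return (err > thresh).astype(int)

# Toy example: 10 well-reconstructed samples plus 1 badly reconstructed one
rng = np.random.default_rng(0)
x = rng.normal(size=(11, 20, 3))
x_rec = x + 0.01 * rng.normal(size=x.shape)
x_rec[-1] += 5.0                                 # one badly reconstructed sample
labels = pseudo_labels(x, x_rec)
print(labels[-1], labels[:-1].sum())             # only the bad sample is flagged
```

In the full model, reconstructions flagged this way would be excluded from the discriminator's "real vs. reconstructed" comparison, which is what keeps the autoencoder from being rewarded for fitting anomalies.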
Fine-grained, function-level encrypted traffic classification is an essential approach to maintaining network security. Machine learning and deep learning have become the mainstream methods for analyzing traffic, and labeled dataset construction is their foundation. Android occupies a huge share of the mobile operating system market, and Instant Messaging (IM) applications are important tools for communication. However, such applications have complex functions that switch frequently, so function-level labels are difficult to obtain. The existing function-level public datasets for Android are rare and noisy, which has stalled research. Most labeled samples are collected with WLAN devices, which cannot exclude operating system background traffic. Other datasets require root permission or use scripts to simulate user behavior; these collection methods either compromise the security of the mobile device or ignore real user operation features, yielding only coarse-grained labels. Previous work (Chen et al. in Appl Sci 12(22):11731, 2022) proposed A3C, a one-stop automated system for collecting, constructing, and correlating labeled encrypted traffic samples at the application level in Android. This paper analyzes the display characteristics of IM applications and proposes F3L, a function-level, low-overhead method for constructing labeled encrypted traffic datasets for Android. As a supplement to A3C, the method monitors the UI controls and layouts of the Android system in the foreground and selects feature fields from their attributes to build an in-app function label matching library for target applications and in-app functions. The deviation between the timestamp of function invocation and the completion of label identification is calibrated to cut traffic samples and map them to the corresponding labels. Experiments show that the method matches the correct label within 3 s of the user operation.
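The final step, cutting traffic by calibrated timestamps and mapping segments to labels, can be sketched as below. The data structures, field names, and the fixed-offset calibration are illustrative assumptions, not the exact F3L implementation:

```python
from dataclasses import dataclass

@dataclass
class LabelEvent:
    name: str            # in-app function label, e.g. "send_text" (hypothetical)
    t_identified: float  # time at which label identification completed (s)

def label_packets(packet_times, events, offset=0.5, window=3.0):
    """Map each packet timestamp to the label of the matching calibrated event.

    `offset` calibrates the delay between function invocation and label
    identification; packets more than `window` seconds after the
    calibrated invocation time stay unlabeled (None).
    """
    labels = []
    for t in packet_times:
        best = None
        for ev in events:
            t0 = ev.t_identified - offset  # calibrated invocation time
            if t0 <= t <= t0 + window:
                best = ev.name             # later events override earlier ones
        labels.append(best)
    return labels

events = [LabelEvent("send_text", 10.5), LabelEvent("send_image", 20.5)]
print(label_packets([10.2, 12.0, 20.3, 30.0], events))
```

The key design point mirrored here is that labels are assigned from UI-derived events rather than from scripted actions, so packets outside any calibrated window remain unlabeled instead of being forced onto the nearest function.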
As a classic classification method, the support vector machine (SVM) has been widely applied in many fields. However, the standard SVM faces the following problems in classification decisions: (1) it does not consider the distribution characteristics of the data; (2) it ignores the relative relationships between sample classes; and (3) it cannot handle large-scale classification problems. To address these issues, we propose a rank preservation learning machine based on data distribution fusion (RPLM-DDF). The method characterizes the data distribution by introducing within-class scatter; it preserves the global ordering of samples by keeping the relative positions of the class data centers unchanged; and it solves large-scale classification problems by establishing the equivalence between the dual form of the proposed method and that of the core vector machine. Comparative experiments on artificial datasets, small and medium-sized datasets, and large-scale datasets verify the effectiveness of the proposed method.
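The within-class scatter used above to characterize the data distribution is a standard quantity from discriminant analysis; a minimal NumPy sketch follows (the exact form and weighting used inside RPLM-DDF may differ):

```python
import numpy as np

def within_class_scatter(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Within-class scatter matrix S_w = sum_c sum_{x in c} (x - m_c)(x - m_c)^T,

    where m_c is the mean of class c. Small entries mean each class
    is tightly concentrated around its own center.
    """
    d = X.shape[1]
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)   # deviations from the class mean m_c
        S_w += diff.T @ diff
    return S_w

# Toy example: two tight 2D classes separated along the first axis
X = np.array([[0.0, 0.0], [0.0, 2.0], [5.0, 0.0], [5.0, 2.0]])
y = np.array([0, 0, 1, 1])
print(within_class_scatter(X, y))  # spread only along the second axis
```

Fusing this term into the learning objective is what lets the classifier account for how each class is distributed, rather than relying on margin-boundary samples alone.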
Funding: Xianghua Ye, Zhejiang Provincial Science and Technology Project, 2024-KY1-001-105. This funding supports the training of organ auto-segmentation models.
Funding: Supported by the General Program of the National Natural Science Foundation of China under Grant No. 62172093.