Journal Articles
4 articles found
1. Multimodal Machine Learning Guides Low Carbon Aeration Strategies in Urban Wastewater Treatment (Cited: 4)
Authors: Hong-Cheng Wang, Yu-Qi Wang, Xu Wang, Wan-Xin Yin, Ting-Chao Yu, Chen-Hao Xue, Ai-Jie Wang. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 51–62 (12 pages).
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
Keywords: Wastewater treatment; Multimodal machine learning; Deep learning; Aeration control; Interpretable machine learning
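The abstract above describes an early-fusion setup in which features derived from a visual model are combined with process sensor readings and fed to a random forest that forecasts aeration quantity, evaluated with MAPE and the coefficient of determination. Below is a minimal sketch of that idea on synthetic data; the feature names, the concatenation-based fusion, and all numbers are illustrative assumptions, not the authors' released GitHub code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
sensor_feats = rng.normal(size=(n, 6))    # e.g. influent flow, NH4-N, DO, COD readings (assumed)
image_feats = rng.normal(size=(n, 32))    # embeddings from a visual model of the aeration tank (assumed)
# synthetic target: aeration quantity depending on both modalities
aeration = 50 + sensor_feats @ rng.normal(size=6) + image_feats[:, 0] + rng.normal(scale=0.1, size=n)

X = np.hstack([sensor_feats, image_feats])    # early fusion: concatenate the two modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, aeration, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAPE: {mean_absolute_percentage_error(y_te, pred):.3f}  R2: {r2_score(y_te, pred):.3f}")
```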
2. Label distribution for multimodal machine learning (Cited: 1)
Authors: Yi REN, Ning XU, Miaogen LING, Xin GENG. Frontiers of Computer Science (SCIE, EI, CSCD), 2022, Issue 1, pp. 33–43 (11 pages).
Multimodal machine learning (MML) aims to understand the world from multiple related modalities. It has attracted much attention as multimodal data has become increasingly available in real-world applications. It has been shown that MML can perform better than single-modal machine learning, since multiple modalities contain more information that can complement one another. However, fusing the modalities remains a key challenge in MML. Different from previous work, we further consider the side-information, which reflects the situation and influences the fusion of the modalities. We recover the multimodal label distribution (MLD) by leveraging the side-information, representing the degree to which each modality contributes to describing the instance. Accordingly, a novel framework named multimodal label distribution learning (MLDL) is proposed to recover the MLD and to fuse the modalities under its guidance, learning an in-depth joint feature representation. Moreover, two versions of MLDL are proposed to deal with sequential data. Experiments on multimodal sentiment analysis and disease prediction show that the proposed approaches perform favorably against state-of-the-art methods.
Keywords: Multimodal machine learning; Label distribution learning; Sentiment analysis; Disease prediction
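The central idea in the abstract above is that side-information can be used to recover a per-instance distribution over modalities, which then guides fusion. The sketch below illustrates only that concept; the linear side-information scorer, the softmax weighting, and the random embeddings are assumptions for illustration and do not reproduce the authors' MLDL framework.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Toy fusion module: side-information -> modality weights -> weighted sum of embeddings."""
    def __init__(self, side_dim: int, n_modalities: int, feat_dim: int, n_classes: int = 2):
        super().__init__()
        self.scorer = nn.Linear(side_dim, n_modalities)   # one score per modality (assumed form)
        self.classifier = nn.Linear(feat_dim, n_classes)  # e.g. binary sentiment (assumed)

    def forward(self, modality_feats: torch.Tensor, side_info: torch.Tensor):
        # modality_feats: (batch, n_modalities, feat_dim); side_info: (batch, side_dim)
        weights = torch.softmax(self.scorer(side_info), dim=-1)      # recovered "label distribution"
        fused = (weights.unsqueeze(-1) * modality_feats).sum(dim=1)  # distribution-guided fusion
        return self.classifier(fused), weights

model = WeightedFusion(side_dim=8, n_modalities=3, feat_dim=64)
feats = torch.randn(4, 3, 64)   # text / audio / video embeddings (assumed)
side = torch.randn(4, 8)        # side-information describing the situation (assumed)
logits, dist = model(feats, side)
print(logits.shape, dist.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```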
3. Enhancing 3D Reconstruction Accuracy of FIB Tomography Data Using Multi-voltage Images and Multimodal Machine Learning
Authors: Trushal Sardhara, Alexander Shkurmanov, Yong Li, Lukas Riedel, Shan Shi, Christian J. Cyron, Roland C. Aydin, Martin Ritter. Nanomanufacturing and Metrology (EI), 2024, Issue 1, pp. 48–60 (13 pages).
FIB-SEM tomography is a powerful technique that integrates a focused ion beam (FIB) and a scanning electron microscope (SEM) to capture high-resolution imaging data of nanostructures. This approach involves collecting in-plane SEM images and using the FIB to remove material layers for imaging subsequent planes, thereby producing image stacks. However, these image stacks in FIB-SEM tomography are subject to the shine-through effect, which makes structures from regions behind the current plane visible. This artifact introduces an ambiguity between image intensity and structures in the current plane, making conventional segmentation methods such as thresholding or the k-means algorithm insufficient. In this study, we propose a multimodal machine learning approach that combines intensity information obtained at different electron beam accelerating voltages to improve the three-dimensional (3D) reconstruction of nanostructures. By treating the increased shine-through effect at higher accelerating voltages as a form of additional information, the proposed method significantly improves segmentation accuracy and leads to more precise 3D reconstructions for real FIB tomography data.
Keywords: Multimodal machine learning; Multi-voltage images; FIB-SEM; Overdeterministic systems; 3D reconstruction; FIB tomography
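The key move described above is to treat images of the same plane acquired at different accelerating voltages as separate modalities, so a per-pixel classifier can disentangle true structure from shine-through rather than thresholding a single intensity. A minimal sketch on synthetic per-pixel features follows; the two-voltage setup, the intensity model, and the random forest classifier are assumptions for illustration, not the paper's segmentation pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels = 2000
labels = rng.integers(0, 2, n_pixels)   # 1 = solid material in the current plane, 0 = pore (assumed)

# Synthetic intensities: at higher voltage, shine-through from deeper planes adds noise,
# but the two voltages together make the classes separable.
low_kv = labels * 0.8 + rng.normal(0.3, 0.15, n_pixels)    # low voltage: mostly surface signal
high_kv = labels * 0.5 + rng.normal(0.5, 0.25, n_pixels)   # high voltage: stronger shine-through

X = np.column_stack([low_kv, high_kv])  # one multimodal feature vector per pixel
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:1500], labels[:1500])
print("held-out segmentation accuracy:", clf.score(X[1500:], labels[1500:]))
```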
4. Intelligent Recognition Using Ultralight Multifunctional Nano-Layered Carbon Aerogel Sensors with Human-Like Tactile Perception (Cited: 4)
Authors: Huiqi Zhao, Yizheng Zhang, Lei Han, Weiqi Qian, Jiabin Wang, Heting Wu, Jingchen Li, Yuan Dai, Zhengyou Zhang, Chris R. Bowen, Ya Yang. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 172–186 (15 pages).
Humans can perceive our complex world through multi-sensory fusion. Under limited visual conditions, people can sense a variety of tactile signals to identify objects accurately and rapidly. However, replicating this unique capability in robots remains a significant challenge. Here, we present a new form of ultralight multifunctional tactile nano-layered carbon aerogel sensor that provides pressure, temperature, material recognition and 3D location capabilities, which is combined with multimodal supervised learning algorithms for object recognition. The sensor exhibits human-like pressure (0.04–100 kPa) and temperature (21.5–66.2 °C) detection, millisecond response times (11 ms), a pressure sensitivity of 92.22 kPa⁻¹ and triboelectric durability of over 6000 cycles. The devised algorithm has universality and can accommodate a range of application scenarios. The tactile system can identify common foods in a kitchen scene with 94.63% accuracy and explore the topographic and geomorphic features of a Mars scene with 100% accuracy. This sensing approach empowers robots with versatile tactile perception to advance future society toward heightened sensing, recognition and intelligence.
Keywords: Multifunctional sensor; Tactile perception; Multimodal machine learning algorithms; Universal tactile system; Intelligent object recognition
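Conceptually, the recognition step described above is feature-level fusion of pressure, temperature, and triboelectric signals followed by a supervised classifier. The sketch below illustrates that pattern on synthetic data; the feature layout, the six object classes, and the SVM choice are assumptions, not the authors' universal tactile algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_classes = 600, 6                 # e.g. six kitchen foods (assumed)
y = rng.integers(0, n_classes, n_samples)

# Synthetic per-sample features from the three tactile modalities (assumed dimensions)
pressure = y[:, None] * 0.5 + rng.normal(size=(n_samples, 4))     # pressure-derived features
temperature = y[:, None] * 0.3 + rng.normal(size=(n_samples, 2))  # temperature features
tribo = y[:, None] * 0.4 + rng.normal(size=(n_samples, 3))        # triboelectric features

X = np.hstack([pressure, temperature, tribo])  # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("object recognition accuracy:", clf.score(X_te, y_te))
```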