With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time–frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks—namely detection, localization, and recognition—and highlight representative models across application scenarios. Finally, we examine key challenges including domain generalization, multi-user interference, and limited data availability, and we propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
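Two of the preprocessing steps the review discusses, denoising and segmentation, can be illustrated with a minimal sketch. This is a generic illustration, not any surveyed tool's API: a moving-average filter smooths a 1-D CSI amplitude trace, and a sliding window cuts it into fixed-length segments for a downstream model.

```python
# Illustrative CSI preprocessing sketch: moving-average denoising
# followed by overlapping sliding-window segmentation.

def moving_average(signal, k=3):
    """Denoise by averaging each sample with its neighbours (window size k)."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def segment(signal, win=64, stride=32):
    """Cut the series into fixed-length, overlapping windows."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, stride)]

amplitudes = [float(i % 10) for i in range(256)]   # toy CSI amplitude trace
denoised = moving_average(amplitudes, k=5)
windows = segment(denoised, win=64, stride=32)
print(len(denoised), len(windows))                 # 256 7
```

With a stride of half the window length, adjacent segments overlap by 50%, a common choice that also acts as a simple form of data augmentation.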
Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail, and finds wide applications in climate modeling, numerical weather forecasting, and renewable energy. Although deep-learning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales, supervised methods suffer from insufficient high-resolution data in practice, and unsupervised methods struggle to accurately infer small-scale specifics from limited large-scale inputs due to small-scale uncertainty. This article presents DualDS, a dual-learning framework utilizing a Generative Adversarial Network–based neural network and subgrid-scale auxiliary information for climate downscaling. The learning method is unified in a two-stream framework through paired up- and down-samplers, where the downsampler simulates the information loss process during upscaling, and the upsampler reconstructs the lost details and corrects errors incurred during upscaling. This dual-learning strategy eliminates the dependence on high-resolution ground-truth data in the training process and refines the downscaling results by constraining the mapping process. Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches, both qualitatively and quantitatively. Specifically, for a single surface-temperature downscaling task, our method is comparable with other unsupervised algorithms on the same dataset, achieving a 0.469 dB higher peak signal-to-noise ratio, 0.017 higher structural similarity, 0.08 lower RMSE, and the best correlation coefficient. In summary, this paper presents a novel approach to addressing small-scale uncertainty in unsupervised downscaling processes.
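The two-stream idea above can be sketched with plain numbers. This is a hypothetical toy, not the DualDS architecture: a fixed block-average downsampler stands in for the information-loss simulation, a naive repeat upsampler stands in for the learned reconstruction network, and the cycle loss down(up(x)) ≈ x is the constraint that removes the need for high-resolution ground truth.

```python
# Toy 1-D sketch of the dual-learning cycle constraint in downscaling.

def downsample(field, factor=4):
    """Block-average a fine field to a coarse one (simulated information loss)."""
    return [sum(field[i:i + factor]) / factor
            for i in range(0, len(field), factor)]

def upsample(field, factor=4):
    """Naive upsampler: repeat each coarse cell (a real model learns this map)."""
    return [v for v in field for _ in range(factor)]

def cycle_loss(coarse, factor=4):
    """Consistency term: coarsening the reconstruction should recover the input."""
    recon = downsample(upsample(coarse, factor), factor)
    return sum((a - b) ** 2 for a, b in zip(coarse, recon)) / len(coarse)

coarse = [280.0, 281.5, 283.0, 284.5]   # toy coarse temperature cells (K)
print(cycle_loss(coarse))               # 0.0: this naive pair is exactly consistent
```

In the real framework the upsampler is a trainable network, so this loss is nonzero during training and drives learning without any high-resolution target.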
The high cost of raw materials and insufficient research on alloy systems have severely constrained the development of Cu-Be alloys. The complex coupling between composition and preparation process poses challenges to using machine learning methods for the precise design of Cu-Be alloys. This study develops a novel method for the integrated design of copper alloy composition and processing based on a Long Short-Term Memory model followed by an Encoder model (LSTM-Encoder), and enriches the framework by integrating phase diagram information. This approach not only capitalizes on the patterns of microstructural evolution during heat treatment indicated in phase diagrams to reveal their intrinsic links with alloy performance, but also eliminates cross-interference within sample data, significantly enhancing the model's generalization and predictive accuracy and enabling highly efficient and precise design of low-cost (low Be content), high-performance Cu-Be alloys. Compared with other models, the LSTM-Encoder model incorporating phase diagram information (LSTM-Encoder-Ⅱ) showed significant superiority in prediction accuracy. After two rounds of experimental verification and iteration, the LSTM-Encoder-Ⅱ model attained prediction accuracies of 96% for hardness and 93% for electrical conductivity. Various Cu-Be-X alloys with excellent comprehensive performance and low cost have been designed: the Cu-1.5Be-0.1Ni-0.3Co alloy achieves a tensile strength of 1211 MPa and an electrical conductivity of 30.3% IACS, and the Cu-1.5Be-0.6Ni alloy attains a tensile strength of 1290 MPa and an electrical conductivity of 29.3% IACS, both comparable to the C17200 alloy, with raw material cost reduced by more than 14%.
Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity, and deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under the trace-by-trace inversion strategy, the lack of constraints from geological structural information results in poor lateral continuity of the prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider both lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area, and we demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
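The adaptive constraint coefficient described above can be sketched in a few lines. This is a hedged illustration, not the paper's exact formula: the weight on the forward (physics) loss term is tied to the strength of the porosity–impedance correlation in the study area, here simply its absolute Pearson value.

```python
# Hypothetical sketch: weight the forward-modelling loss by |r(porosity, impedance)|.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def hybrid_loss(inv_loss, fwd_loss, porosity_logs, impedance_logs):
    """Combine inversion and forward losses with an adaptive coefficient."""
    lam = abs(pearson(porosity_logs, impedance_logs))
    return inv_loss + lam * fwd_loss, lam

# Toy well logs: porosity typically falls as impedance rises.
por = [0.30, 0.25, 0.20, 0.15, 0.10]
imp = [4000.0, 4500.0, 5000.0, 5500.0, 6000.0]
loss, lam = hybrid_loss(0.5, 0.2, por, imp)
print(round(lam, 3))   # 1.0: the toy logs are perfectly anticorrelated
```

When porosity and impedance are strongly related in a given area, the forward constraint is trusted more; where the relationship is weak, the data term dominates.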
Research indicates that microbial activity within the human body significantly influences health and is closely linked to various diseases. Accurately predicting microbe-disease interactions (MDIs) offers critical insights for disease intervention and pharmaceutical research. Current advanced AI-based technologies automatically generate robust representations of microbes and diseases, enabling effective MDI predictions. However, these models continue to face significant challenges; a major issue is their reliance on complex feature extractors and classifiers, which substantially diminishes their generalizability. To address this, we introduce a novel graph autoencoder framework that utilizes decoupled representation learning and multi-scale information fusion strategies to efficiently infer potential MDIs. Initially, we randomly mask portions of the input microbe-disease graph according to a Bernoulli distribution to boost self-supervised training and minimize noise-related performance degradation. Secondly, we employ decoupled representation learning, compelling the graph neural network (GNN) to independently learn the weights for each feature subspace, thus enhancing its expressive power. Finally, we implement multi-scale information fusion to amalgamate the multi-layer outputs of the GNN, reducing information loss due to occlusion. Extensive experiments on public datasets demonstrate that our model significantly surpasses existing top MDI prediction models, indicating that it can accurately predict unknown MDIs and is likely to aid disease discovery and precision pharmaceutical research. Code and data are accessible at: https://github.com/shmildsj/MDI-IFDRL.
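The Bernoulli masking step can be shown in isolation. This is a minimal sketch of the idea, not the repository's implementation: each microbe-disease edge is independently dropped with probability p before self-supervised training, and the masked edges then serve as reconstruction targets.

```python
# Sketch of Bernoulli edge masking on a toy microbe-disease graph.
import random

def mask_edges(edges, p=0.3, seed=0):
    """Split edges into a visible set and a masked (reconstruction-target) set."""
    rng = random.Random(seed)
    visible, masked = [], []
    for e in edges:
        (masked if rng.random() < p else visible).append(e)
    return visible, masked

edges = [("m%d" % i, "d%d" % (i % 4)) for i in range(20)]   # toy MDI edge list
visible, masked = mask_edges(edges, p=0.3)
print(len(visible) + len(masked))   # 20: every edge is either kept or masked
```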
With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy particles for comprehensive feature extraction. The Golden Jackal Optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high goodness-of-fit coefficient of determination (R²) of 99.3% and a maximum gap of 12.6% in mean absolute percentage error (MAPE) relative to the benchmark models. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared to traditional models, offering a practical reference for air quality early warning.
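Fuzzy information granulation can be sketched with the commonly used triangular-particle form: each window of the pollutant series is summarised as (low, core, high) = (min, mean, max). This is a generic illustration of the technique; the paper's exact membership-function design may differ.

```python
# Triangular fuzzy granulation of a pollutant concentration series.

def granulate(series, win=4):
    """Summarise each non-overlapping window as a (min, mean, max) fuzzy particle."""
    particles = []
    for i in range(0, len(series) - win + 1, win):
        w = series[i:i + win]
        particles.append((min(w), sum(w) / win, max(w)))
    return particles

pm25 = [35.0, 40.0, 38.0, 31.0, 50.0, 55.0, 48.0, 47.0]   # toy PM2.5 readings
print(granulate(pm25, win=4))
# [(31.0, 36.0, 40.0), (47.0, 50.0, 55.0)]
```

Each particle compresses a window into an interpretable range plus a central tendency, which is what the downstream predictors consume instead of the raw sequence.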
The present study explores the effects of media and distributed information on the performance of remotely located pairs completing a concept-learning task. Sixty pairs performed a concept-learning task using either audio-only or audio-plus-video communication. The distribution of information had three levels: completely identical information, partly identical information, and completely different information. The subjects' primary psychological functions were also considered. The results showed a significant main effect of the amount of information shared by the subjects on the number of negative instances they selected, and a significant main effect of media on the time taken to complete the task.
Partial Differential Equations (PDEs) are model candidates for soft sensing in aero-engine health management units. Existing Physics-Informed Neural Networks (PINNs) have made achievements; however, unmeasurable aero-engine driving sources lead to unknown PDE driving terms, which weaken the feasibility of PINNs. To this end, Physically Informed Hierarchical Learning followed by a Recurrent-Prediction Term (PIHL-RPT) is proposed. First, PIHL is proposed for learning nonhomogeneous PDE solutions, in which two networks, NetU and NetG, are constructed: NetU learns solutions satisfying the PDEs, and NetG learns driving terms to regularize NetU training. Then, we propose a hierarchical learning strategy to optimize and couple NetU and NetG, which are integrated into a data-physics-hybrid loss function. Besides, we prove that PIHL-RPT can iteratively generate a series of networks converging to a function that approximates a solution to a well-posed PDE. Furthermore, RPT is proposed to improve the prediction of PIHL, in which a network NetU-RP is constructed to compensate for the information loss caused by data sampling and the immeasurability of the driving sources. Finally, artificial datasets and practical vibration-process datasets from our wear experiment platform are used to verify the feasibility and effectiveness of PIHL-RPT-based soft sensing. Comparisons with relevant methods, discussions, and a PIHL-RPT-based health monitoring example are also given.
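The data-physics-hybrid loss can be illustrated on a toy problem. In this sketch the roles of NetU and NetG are played by plain callables u and g (the paper trains networks), and the physics residual u'' − g is formed with central finite differences; it only shows how the data term and the learned-driving-term regularizer combine.

```python
# Toy data-physics-hybrid loss: data misfit plus finite-difference PDE residual.

def hybrid_loss(u, g, xs, data, h=1e-3, w=1.0):
    """u: candidate solution (NetU's role); g: candidate driving term (NetG's role)."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    phys_loss = sum(((u(x + h) - 2 * u(x) + u(x - h)) / h ** 2 - g(x)) ** 2
                    for x in xs) / len(xs)
    return data_loss + w * phys_loss

u = lambda x: x * x          # exact solution of the toy PDE u'' = 2
g = lambda x: 2.0            # the matching nonhomogeneous driving term
xs = [0.1 * i for i in range(1, 10)]
data = [(x, u(x)) for x in xs]              # noiseless measurements
print(hybrid_loss(u, g, xs, data) < 1e-6)   # True: both loss terms vanish
```

In the hierarchical strategy, one network is held fixed while the other is updated against this joint loss, alternating until the pair is consistent.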
Collaborative filtering is the most popular and successful information recommendation technique. However, it can suffer from data sparsity in cases where the system does not have sufficient domain information. Transfer learning, which enables information to be transferred from source domains to a target domain, presents an unprecedented opportunity to alleviate this issue. A few recent works focus on transferring user-item rating information from a dense domain to a sparse target domain, but almost all of these methods require each source-domain rating matrix to be complete. To address this issue, in this paper we propose a novel multiple-incomplete-domains transfer learning model for cross-domain collaborative filtering. The transfer learning process consists of two steps. First, the user-item rating information in the incomplete source domains is compressed into multiple informative, compact cluster-level matrices, referred to as codebooks. Second, we reconstruct the target matrix based on the codebooks. Specifically, to maximize the knowledge transfer, we design a new algorithm to learn the rating knowledge efficiently from multiple incomplete domains. Extensive experiments on real datasets demonstrate that our proposed approach significantly outperforms existing methods.
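The codebook step can be sketched directly. This is a simplified illustration of the idea, not the paper's algorithm: ratings from an incomplete source matrix are compressed into a cluster-level matrix of mean ratings per (user-cluster, item-cluster) cell, skipping missing entries; here the cluster assignments are given, whereas the paper learns them.

```python
# Build a cluster-level codebook from an incomplete rating matrix.

def build_codebook(R, user_c, item_c, n_uc, n_ic):
    """R[u][i] is a rating or None; returns the cluster-level mean matrix."""
    s = [[0.0] * n_ic for _ in range(n_uc)]
    c = [[0] * n_ic for _ in range(n_uc)]
    for u, row in enumerate(R):
        for i, r in enumerate(row):
            if r is not None:                  # tolerate missing source ratings
                s[user_c[u]][item_c[i]] += r
                c[user_c[u]][item_c[i]] += 1
    return [[s[a][b] / c[a][b] if c[a][b] else None for b in range(n_ic)]
            for a in range(n_uc)]

R = [[5, 4, None, 1],        # incomplete source-domain ratings
     [None, 5, 2, 1],
     [1, None, 5, 4]]
user_c = [0, 0, 1]           # toy user-cluster assignments
item_c = [0, 0, 1, 1]        # toy item-cluster assignments
B = build_codebook(R, user_c, item_c, 2, 2)
print(B[1])                  # e.g. the second user-cluster row: [1.0, 4.5]
```

A missing target rating for user u and item i would then be filled from the codebook cell B[user_c[u]][item_c[i]].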
The development of precision agriculture demands high accuracy and efficiency in cultivated land information extraction. As a new means of monitoring the ground in recent years, unmanned aerial vehicle (UAV) low-altitude remote sensing, which is flexible, efficient, low-cost, and high-resolution, is widely applied to investigating various resources. On this basis, a novel extraction method for cultivated land information based on a Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads, ridges, etc.) were excluded based on a Deep Convolutional Neural Network (DCNN). Next, the feature extraction method learned by the DCNN was applied to cultivated land information extraction by introducing a transfer learning mechanism. Last, cultivated land information extraction results were produced by DTCLE and by eCognition-based cultivated land information extraction (ECLE). Pengzhou County and Guanghan County, Sichuan Province, were selected as the experimental areas. The experimental results showed that the overall precision for experimental images 1, 2, and 3 with the DTCLE method was 91.7%, 88.1%, and 88.2% respectively, while the overall precision of ECLE was 90.7%, 90.5%, and 87.0%, respectively. The accuracy of DTCLE was equivalent to that of ECLE, and DTCLE also outperformed ECLE in terms of integrity and continuity.
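The transfer mechanism can be shown conceptually. This is not the DTCLE network: a feature extractor learned on a source task is frozen (here a hand-made stand-in), and only a lightweight classifier, a nearest-centroid rule in this sketch, is fitted on the target-domain samples.

```python
# Conceptual transfer-learning sketch: frozen features + lightweight classifier.

def frozen_extractor(pixel):
    """Stand-in for pretrained DCNN features: two greenness contrasts."""
    r, g, b = pixel
    return (g - r, g - b)

def fit_centroids(samples, labels):
    """Fit a nearest-centroid classifier on top of the frozen features."""
    feats = {}
    for x, y in zip(samples, labels):
        feats.setdefault(y, []).append(frozen_extractor(x))
    return {y: tuple(sum(v) / len(v) for v in zip(*fs))
            for y, fs in feats.items()}

def predict(centroids, pixel):
    f = frozen_extractor(pixel)
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(f, centroids[y])))

train = [(60, 120, 50), (70, 130, 60), (120, 110, 100), (130, 115, 110)]
labels = ["cropland", "cropland", "road", "road"]
cents = fit_centroids(train, labels)
print(predict(cents, (65, 125, 55)))   # cropland
```

Only the small classifier is fitted on target data, which is the point of transferring: the expensive feature learning is reused from the source task.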
This paper conducts a survey on iterative learning control (ILC) with incomplete information and the associated control system design, which is a frontier of the ILC field. The incomplete information, including passive and active types, can cause data loss or fragmentation due to various factors. Passive incomplete information refers to incomplete data and information caused by practical system limitations during data collection, storage, transmission, and processing, such as data dropouts, delays, disordering, and limited transmission bandwidth. Active incomplete information refers to incomplete data and information caused by man-made reduction of data quantity and quality on the premise that the given objective is satisfied, such as sampling and quantization. This survey emphasizes two aspects: first, how to guarantee good learning performance and tracking performance with passive incomplete data, and second, how to balance the control performance index and data demand by active means. Promising research directions along this topic are also addressed, with data robustness highly emphasized. This survey is expected to improve quantitative understanding of the restrictive relationship and trade-off between incomplete data and tracking performance, and to promote further developments of ILC theory.
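Learning under passive incomplete data can be shown numerically. This is a toy sketch, under assumptions not from the survey: a P-type ILC update u ← u + γe is applied per trial on a trivial plant, and when the measurement drops out the previous input is simply held, one common compensation scheme for data dropouts.

```python
# P-type ILC on a trivial plant (y = u) with intermittent data dropouts.

def ilc_with_dropouts(ref, gamma, iters, dropout_pattern):
    u = 0.0
    errors = []
    for k in range(iters):
        y = u                       # trivial plant: output equals input
        e = ref - y
        errors.append(abs(e))
        if not dropout_pattern[k]:  # measurement arrived: apply the learning update
            u = u + gamma * e       # on dropout, hold the previous input
    return errors

pattern = [k % 3 == 2 for k in range(12)]   # every third trial loses its data
errs = ilc_with_dropouts(ref=1.0, gamma=0.5, iters=12, dropout_pattern=pattern)
print(errs[0], errs[-1])   # the tracking error still contracts across iterations
```

Dropouts slow convergence (updates are skipped) but, with the hold strategy, do not destroy it, which is the qualitative point the survey makes about data robustness.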
Coronavirus disease 2019 (COVID-19) is continuing to spread globally and still poses a great threat to human health; since its outbreak, it has had catastrophic effects on human society. A visual method of analyzing COVID-19 case information using spatio-temporal objects with multiple granularities is proposed, based on officially provided case information. This analysis reveals the spread of the epidemic from the perspective of spatio-temporal objects, providing references for related research and for the formulation of epidemic prevention and control measures. The case information is abstracted, described, represented, and analyzed in the form of spatio-temporal objects through the construction of spatio-temporal case objects, multi-level visual expressions, and spatial correlation analysis. The rationality of the method is verified through visualization scenarios of case information statistics for China, cases in Henan, and cases related to Shulan. The results show that the proposed method is helpful in researching and judging the development trend of the epidemic, discovering transmission laws, and spatially tracing cases. It has good portability and expansion performance, so it can be used for the visual analysis of case information for other regions and can help users quickly discover the potential knowledge this information contains.
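The multi-granularity idea can be sketched as simple roll-ups: the same case records are aggregated into spatio-temporal objects at different granularities, city/day versus province/week here. The record fields are illustrative, not the paper's data model.

```python
# Roll the same toy case records up to two spatio-temporal granularities.
from collections import Counter

cases = [("Zhengzhou", "Henan", 1), ("Zhengzhou", "Henan", 2),
         ("Luoyang", "Henan", 8), ("Shulan", "Jilin", 9)]   # (city, province, day)

by_city_day = Counter((city, day) for city, _, day in cases)
by_province_week = Counter((prov, (day - 1) // 7) for _, prov, day in cases)
print(by_province_week[("Henan", 0)], by_province_week[("Henan", 1)])   # 2 1
```

Coarser objects (province/week) support trend judgment, while finer ones (city/day) support spatial traceability of individual cases.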
With the development of data science and technology, information security has drawn further concern. To solve privacy problems such as personal privacy being peeped at and copyright being infringed, information hiding algorithms have been developed. Image information hiding makes use of the redundancy of the cover image to hide secret information in it, ensuring that the stego image cannot be distinguished from the cover image so that the secret information reaches the receiver through the transmission of the stego image. At present, models based on deep learning are also widely applied to the field of information hiding. This paper draws an overall conclusion on image information hiding based on deep learning, divided into four parts: steganography algorithms, watermark embedding algorithms, coverless information hiding algorithms, and steganalysis algorithms based on deep learning. From these four aspects, the state-of-the-art information hiding technologies based on deep learning are illustrated and analyzed.
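The redundancy-exploiting principle can be shown with the classic pre-deep-learning baseline, least-significant-bit (LSB) embedding: secret bits replace the lowest bit of each cover pixel, changing each value by at most 1. This is a minimal illustration of the idea the surveyed deep models build on, not one of the surveyed algorithms.

```python
# Classic LSB steganography on a toy grayscale "image" (a flat pixel list).

def embed(cover, bits):
    """Replace the LSB of the first len(bits) pixels with the secret bits."""
    return [(p & ~1) | b for p, b in zip(cover, bits)] + cover[len(bits):]

def extract(stego, n):
    """Read the secret back out of the first n pixel LSBs."""
    return [p & 1 for p in stego[:n]]

cover = [143, 200, 17, 54, 91, 233, 128, 66]   # toy 8-pixel cover image
secret = [1, 0, 1, 1, 0, 1]
stego = embed(cover, secret)
print(extract(stego, len(secret)) == secret)   # True: lossless recovery
```

Each pixel moves by at most one intensity level, which is visually imperceptible; steganalysis methods, the survey's fourth category, try to detect exactly such statistical traces.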
To guarantee the heterogeneous delay requirements of diverse vehicular services, it is necessary to design a fully cooperative policy for both Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) links. This paper investigates the reduction of the delay in edge information sharing for V2V links while satisfying the delay requirements of the V2I links. Specifically, a mean delay minimization problem and a maximum individual delay minimization problem are formulated to improve the global network performance and ensure the fairness of a single user, respectively. A multi-agent reinforcement learning framework is designed to solve these two problems, where a new reward function is proposed to evaluate the utilities of the two optimization objectives in a unified framework. Thereafter, a proximal policy optimization approach is proposed to enable each V2V user to learn its policy using the shared global network reward. The effectiveness of the proposed approach is finally validated by comparing the obtained results with those of other baseline approaches through extensive simulation experiments.
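One way to score both objectives in a single scalar, sketched here as an assumption rather than the paper's exact reward, is a convex combination of the negated mean V2V delay and the negated worst individual delay, with a trade-off knob alpha.

```python
# Hypothetical unified reward over per-user V2V delays.

def reward(delays, alpha=0.5):
    """Trade off global performance (mean) against fairness (worst user)."""
    mean_delay = sum(delays) / len(delays)
    worst_delay = max(delays)
    return -(alpha * mean_delay + (1 - alpha) * worst_delay)

# A fairer delay profile earns a higher (less negative) reward.
print(reward([2.0, 3.0, 7.0]) > reward([2.0, 3.0, 11.0]))   # True
```

Because all agents share this global reward, each V2V user's policy update accounts for both the network average and the straggler, which is the cooperative behaviour the framework targets.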
The trend of distance learning education has increased year by year because of the rapid advancement of information and communication technologies. A distance learning system can be regarded as a ubiquitous computing application, since learners can study anywhere, even in mobile environments. However, the instructor cannot know whether the learners comprehend the lecture, since each learner is physically isolated. Therefore, a framework that detects the learners' concentration condition is required. If a distance learning system obtains the information that many learners are not concentrating on the class due to an incomprehensible lecture style, the instructor can perceive it through the system and change the presentation strategy. This is a context-aware technology, widely used for ubiquitous computing services. In this paper, an efficient distance learning system that accurately detects learners' concentration condition during a class is proposed. The proposed system uses multiple pieces of biological information, namely learners' eye movement metrics—fixation counts, fixation rate, fixation duration, and average saccade length—obtained by an eye tracking system. The learners' concentration condition is classified using machine learning techniques. The proposed system achieved a detection accuracy of 90.7% when a Multilayer Perceptron was used as the classifier. In addition, the effectiveness of the proposed eye metrics has been confirmed. Furthermore, the evaluation experiment clarified that fixation duration is the most important of the four eye metrics.
Information extraction plays a vital role in natural language processing for extracting named entities and events from unstructured data. Due to the exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Though existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield production, it is still difficult to find the semantics between extracted information due to the unswerving effects of weather, soil, pest, and fertilizer data. This paper consists of two parts: an initial phase, which proposes a data preprocessing technique for removing ambiguity in the input corpora, and a second phase, which proposes a novel deep learning-based long short-term memory with rectification in the Adam optimizer and a multilayer perceptron to find agricultural named entity recognition, events, and the relations between them. The proposed algorithm has been trained and tested on four input corpora: agriculture, weather, soil, and pest & fertilizers. The experimental results have been compared with existing techniques, and it was observed that the proposed algorithm outperforms Weighted-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters such as accuracy, sensitivity, and specificity.
Electronic medical records (EMRs) containing rich biomedical information have great potential in disease diagnosis and biomedical research. However, EMR information is usually in the form of unstructured text, which increases the cost of use and hinders its applications. In this work, an effective named entity recognition (NER) method is presented for information extraction on Chinese EMRs, achieved by word-embedding-bootstrapped deep active learning to promote the acquisition of medical information from Chinese EMRs and to release its value. Deep active learning of a bi-directional long short-term memory network followed by a conditional random field (Bi-LSTM+CRF) is used to capture the characteristics of different information from a labeled corpus, and the continuous bag-of-words and skip-gram word embedding models are combined in the above model to capture the text features of Chinese EMRs from an unlabeled corpus. To evaluate the performance of the method, NER tasks on Chinese EMRs with "medical history" content were used. Experimental results show that the word-embedding-bootstrapped deep active learning method using an unlabeled medical corpus can achieve better performance than other models.
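The selection step at the heart of any active learning loop can be shown generically. This sketch is not the paper's pipeline: from the model's class probabilities on unlabeled sentences, the highest-entropy (most uncertain) ones are picked for annotation; the Bi-LSTM+CRF model that produces the probabilities is out of scope here.

```python
# Generic uncertainty (entropy) sampling for an active learning loop.
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool, k):
    """pool: list of (sentence_id, class-probability list); pick the k most uncertain."""
    return sorted(pool, key=lambda s: entropy(s[1]), reverse=True)[:k]

pool = [("s1", [0.98, 0.01, 0.01]),   # model is confident: low value to annotate
        ("s2", [0.34, 0.33, 0.33]),   # near-uniform: most informative to label
        ("s3", [0.70, 0.20, 0.10])]
print([sid for sid, _ in select_for_labeling(pool, 2)])   # ['s2', 's3']
```

Spending annotation budget on the uncertain sentences is what lets deep active learning reach strong NER performance from a small labeled corpus.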
A large number of debris flow disasters (called seismic debris flows) occur after an earthquake and can cause a great amount of damage. UAV low-altitude remote sensing has become a means of quickly obtaining disaster information, as it has the advantages of convenience and timeliness, but the spectral information of the imagery is scarce, making it difficult to accurately detect earthquake debris flow disaster information. To address these problems, a seismic debris flow detection method based on a transfer learning (TL) mechanism is proposed. On the basis of the constructed seismic debris flow disaster database, the features acquired from training a convolutional neural network (CNN) are transferred to the disaster information detection of seismic debris flows. The automatic detection of earthquake debris flow disaster information is then completed, and the object-oriented detection results are compared and analyzed against the detection results supported by transfer learning.
Funding (WiFi-based human sensing review): supported by the National Natural Science Foundation of China (NSFC) under grant U23A20310.
Funding (DualDS climate downscaling): supported by the National Key Research and Development Program of China (Grant No. 2020YFA0608000), the National Science Foundation of China (Grant Nos. 42075142, 42375148, 42125503, and 42130608), FY-APP-2022.0609, the Sichuan Province Key Technology Research and Development project (Grant Nos. 2024ZHCG0168, 2024ZHCG0176, 2023YFG0305, 2023YFG0124, and 23ZDYF0091), and the CUIT Science and Technology Innovation Capacity Enhancement Program project (Grant No. KYQN202305).
Fund: Financially supported by the National Natural Science Foundation of China (Nos. 52371038 and U2202255) and the Science and Technology Innovation Program of Hunan Province (No. 2023RC1019).
Abstract: The high cost of raw materials and insufficient research on alloy systems have severely constrained the development of Cu-Be alloys. The complex coupling between composition and preparation process poses challenges to using machine learning methods for the precise design of Cu-Be alloys. This study develops a novel method for the integrated design of copper alloy composition and processing based on a Long Short-Term Memory model followed by an Encoder model (LSTM-Encoder), and enriches the framework by integrating phase diagram information. This approach not only capitalizes on the patterns of microstructural evolution during heat treatment indicated in phase diagrams to reveal their intrinsic links with alloy performance, but also eliminates cross-interference within the sample data, significantly enhancing the model's generalization and predictive accuracy and enabling highly efficient and precise design of low-cost (low Be content), high-performance Cu-Be alloys. Compared with other models, the LSTM-Encoder model incorporating phase diagram information (LSTM-Encoder-II) showed significant superiority in prediction accuracy. After two rounds of experimental verification and iteration, the LSTM-Encoder-II model attained prediction accuracies of 96% for hardness and 93% for electrical conductivity. Various Cu-Be-X alloys with excellent comprehensive performance and low cost have been designed: the Cu-1.5Be-0.1Ni-0.3Co alloy achieves a tensile strength of 1211 MPa and an electrical conductivity of 30.3% IACS, and the Cu-1.5Be-0.6Ni alloy attains a tensile strength of 1290 MPa and an electrical conductivity of 29.3% IACS, both comparable to the C17200 alloy, with raw material cost reduced by more than 14%.
Fund: The authors acknowledge the support of the Research Program of Fine Exploration and Surrounding Rock Classification Technology for Deep Buried Long Tunnels Driven by Horizontal Directional Drilling and Magnetotelluric Methods Based on Deep Learning under Grant E202408010, and the Sichuan Science and Technology Program under Grants 2024NSFSC1984 and 2024NSFSC1990.
Abstract: Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity, and deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion. However, under a trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of the prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine-grained porosity prediction, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by the inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area, and we demonstrate the effectiveness of this adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with logging porosity, demonstrating its superiority.
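The closed-loop constraint described above can be illustrated with a toy loss: an inversion misfit (impedance to porosity) and a forward misfit (porosity back to impedance) are combined, with the trade-off coefficient set from how strongly porosity and impedance correlate in the well data. The function names and the specific adaptation rule here are assumptions for illustration, not the paper's exact formulation.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def closed_loop_loss(inv_loss, fwd_loss, porosity, impedance):
    """Combine inversion and forward losses; a stronger petrophysical link
    between porosity and impedance gives the forward constraint more weight."""
    lam = abs(pearson(porosity, impedance))
    return inv_loss + lam * fwd_loss

# porosity typically anti-correlates with impedance; this toy pair is perfectly linear
phi = [0.05, 0.10, 0.15, 0.20]
ip = [9000.0, 8000.0, 7000.0, 6000.0]
print(closed_loop_loss(1.0, 0.5, phi, ip))  # 1.5, since |r| = 1 here
```

The point is that the weight on the forward (physics) term is data-driven per study area rather than hand-tuned.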
Fund: Supported by the Natural Science Foundation of Wenzhou University of Technology, China (Grant No. ky202211).
Abstract: Research indicates that microbial activity within the human body significantly influences health and is closely linked to various diseases. Accurately predicting microbe-disease interactions (MDIs) offers critical insights for disease intervention and pharmaceutical research. Current advanced AI-based technologies automatically generate robust representations of microbes and diseases, enabling effective MDI prediction. However, these models continue to face significant challenges; a major issue is their reliance on complex feature extractors and classifiers, which substantially diminishes their generalizability. To address this, we introduce a novel graph autoencoder framework that utilizes decoupled representation learning and multi-scale information fusion strategies to efficiently infer potential MDIs. Initially, we randomly mask portions of the input microbe-disease graph according to a Bernoulli distribution to boost self-supervised training and minimize noise-related performance degradation. Secondly, we employ decoupled representation learning, compelling the graph neural network (GNN) to independently learn the weights for each feature subspace, thus enhancing its expressive power. Finally, we implement multi-scale information fusion to amalgamate the multi-layer outputs of the GNN, reducing information loss due to occlusion. Extensive experiments on public datasets demonstrate that our model significantly surpasses existing top MDI prediction models. This indicates that our model can accurately predict unknown MDIs and is likely to aid disease discovery and precision pharmaceutical research. Code and data are accessible at: https://github.com/shmildsj/MDI-IFDRL.
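The Bernoulli masking step above amounts to independently dropping each edge of the microbe-disease graph with some probability before self-supervised training, with the masked edges serving as reconstruction targets. A minimal sketch, assuming a simple edge-list representation (the function name and interface are illustrative, not the MDI-IFDRL API):

```python
import random

def mask_edges(edges, p=0.3, seed=0):
    """Split an edge list into kept and masked subsets: each edge is
    independently masked with probability p (a Bernoulli trial per edge)."""
    rng = random.Random(seed)  # seeded for reproducible masking
    kept, masked = [], []
    for e in edges:
        (masked if rng.random() < p else kept).append(e)
    return kept, masked

edges = [("m1", "d1"), ("m1", "d2"), ("m2", "d1"), ("m3", "d3")]
kept, masked = mask_edges(edges, p=0.5)
print(len(kept) + len(masked))  # 4: masking partitions the edges, it never loses them
```

The encoder then sees only `kept`, and the reconstruction loss scores how well the decoder recovers `masked`.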
Fund: Supported by the General Scientific Research Funding of the Science and Technology Development Fund (FDCT) in Macao (No. 0150/2022/A) and the Faculty Research Grants of Macao University of Science and Technology (No. FRG-22-074-FIE).
Abstract: With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy particles for comprehensive feature extraction. The Golden Jackal Optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines the training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a coefficient of determination (R²) of 99.3% and a maximum improvement in mean absolute percentage error (MAPE) over the benchmark models of 12.6%. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as more reliable uncertainty analysis than traditional models, offering a practical reference for air quality early warning.
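Fuzzy information granulation, as used in the preprocessing step above, commonly collapses each window of the raw series into a triangular fuzzy particle (Low, R, Up). A minimal sketch using the conventional (min, mean, max) parameterization; the exact granulation used in the paper may differ:

```python
def granulate(series, window=3):
    """Collapse each non-overlapping window into a triangular fuzzy particle
    (Low, R, Up) = (min, mean, max), compressing the series while keeping
    its range and central tendency per window."""
    particles = []
    for i in range(0, len(series) - window + 1, window):
        w = series[i:i + window]
        particles.append((min(w), sum(w) / len(w), max(w)))
    return particles

# toy hourly PM2.5 readings, three windows of three
pm25 = [35, 40, 45, 80, 75, 85, 20, 22, 21]
print(granulate(pm25))  # [(35, 40.0, 45), (75, 80.0, 85), (20, 21.0, 22)]
```

Downstream models then predict the particle parameters rather than raw points, which is what makes interval reasoning natural in such systems.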
Abstract: The present study explores the effects of media and distributed information on the performance of remotely located pairs of people completing a concept-learning task. Sixty pairs performed a concept-learning task using either audio-only or audio-plus-video communication. The distribution of information had three levels: totally identical information, partly identical information, and totally different information. The subjects' primary psychological functions were also considered. The results showed a significant main effect of the amount of information shared by the subjects on the number of negative instances they selected, and a significant main effect of media on the time taken to complete the task.
Fund: Supported in part by the National Science and Technology Major Project of China (No. 2019-I-0019-0018) and the National Natural Science Foundation of China (Nos. 61890920, 61890921, 12302065, and 12172073).
Abstract: Partial Differential Equations (PDEs) are model candidates for soft sensing in aero-engine health management units. Existing Physics-Informed Neural Networks (PINNs) have made achievements; however, unmeasurable aero-engine driving sources lead to unknown PDE driving terms, which weaken the feasibility of PINNs. To this end, Physically Informed Hierarchical Learning followed by a Recurrent-Prediction Term (PIHL-RPT) is proposed. First, PIHL is proposed for learning nonhomogeneous PDE solutions, in which two networks, NetU and NetG, are constructed: NetU learns solutions satisfying the PDEs, and NetG learns the driving terms to regularize NetU training. Then, we propose a hierarchical learning strategy to optimize and couple NetU and NetG, which are integrated into a data-physics-hybrid loss function. Besides, we prove that PIHL-RPT can iteratively generate a series of networks converging to a function that approximates a solution to the well-posed PDE. Furthermore, RPT is proposed to improve the prediction of PIHL, in which the network NetU-RP is constructed to compensate for information loss caused by data sampling and the immeasurability of the driving sources. Finally, artificial datasets and practical vibration process datasets from our wear experiment platform are used to verify the feasibility and effectiveness of PIHL-RPT-based soft sensing. Comparisons with relevant methods, discussions, and a PIHL-RPT-based health monitoring example are also given.
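The data-physics-hybrid loss idea can be shown on a toy 1-D problem u'' = g, where one candidate function plays NetU (the solution) and another plays NetG (the unknown driving term): the loss adds a data misfit to a physics residual evaluated by finite differences. This is an illustrative sketch of the coupling, not the PIHL-RPT networks or training procedure.

```python
def hybrid_loss(u, g, xs, data, h=1e-3, w_phys=1.0):
    """Data misfit + physics residual |u''(x) - g(x)| for the toy PDE u'' = g.
    `u` plays the role of NetU, `g` plays the role of NetG."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    phys_loss = 0.0
    for x in xs:
        # central second difference approximates u''(x)
        upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
        phys_loss += abs(upp - g(x))
    return data_loss + w_phys * phys_loss / len(xs)

u = lambda x: x * x   # candidate solution
g = lambda x: 2.0     # candidate driving term; u'' = 2 holds exactly
loss = hybrid_loss(u, g, xs=[0.5, 1.0, 1.5], data=[(1.0, 1.0), (2.0, 4.0)])
print(loss < 1e-4)  # True: a consistent (u, g) pair drives both terms to zero
```

In PIHL the two terms are minimized alternately over network parameters; here the consistent pair simply makes both terms vanish.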
Fund: Supported by the National Natural Science Foundation of China (Nos. 91546111 and 91646201), the Key Project of Beijing Municipal Education Commission (No. KZ201610005009), and the General Project of Beijing Municipal Education Commission (No. KM201710005023).
Abstract: Collaborative filtering is the most popular and successful information recommendation technique. However, it can suffer from the data sparsity issue when systems do not have sufficient domain information. Transfer learning, which enables information to be transferred from source domains to a target domain, presents an unprecedented opportunity to alleviate this issue. A few recent works focus on transferring user-item rating information from a dense domain to a sparse target domain, yet almost all of these methods require each extracted source-domain rating matrix to be complete. To address this, we propose a novel multiple-incomplete-domain transfer learning model for cross-domain collaborative filtering. The transfer learning process consists of two steps. First, the user-item rating information in the incomplete source domains is compressed into multiple informative, compact cluster-level matrices, referred to as codebooks. Second, we reconstruct the target matrix based on the codebooks. Specifically, to maximize knowledge transfer, we design a new algorithm to learn the rating knowledge efficiently from multiple incomplete domains. Extensive experiments on real datasets demonstrate that our proposed approach significantly outperforms existing methods.
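The codebook step above can be sketched concretely: given user and item cluster assignments, a codebook is the mean observed rating per (user-cluster, item-cluster) cell, and a missing target entry is filled from the corresponding cell. This toy assumes cluster assignments are already known and uses simple averaging; the paper's learning algorithm is more involved.

```python
def build_codebook(ratings, user_cl, item_cl, n_uc, n_ic):
    """Cluster-level summary: mean observed rating per (user-cluster, item-cluster).
    `ratings` maps (user, item) -> rating for observed entries only, so an
    incomplete source matrix is handled naturally."""
    sums = [[0.0] * n_ic for _ in range(n_uc)]
    cnts = [[0] * n_ic for _ in range(n_uc)]
    for (u, i), r in ratings.items():
        sums[user_cl[u]][item_cl[i]] += r
        cnts[user_cl[u]][item_cl[i]] += 1
    return [[s / c if c else 0.0 for s, c in zip(sr, cr)]
            for sr, cr in zip(sums, cnts)]

def fill_missing(codebook, user_cl, item_cl, u, i):
    """Reconstruct a missing target-matrix entry from the codebook cell."""
    return codebook[user_cl[u]][item_cl[i]]

src = {(0, 0): 5, (0, 1): 4, (1, 0): 1, (1, 1): 2}  # incomplete source ratings
ucl, icl = [0, 1], [0, 0]                            # toy cluster assignments
cb = build_codebook(src, ucl, icl, n_uc=2, n_ic=1)
print(fill_missing(cb, ucl, icl, 0, 1))  # 4.5: cluster-level estimate for a target cell
```

Multiple source domains simply contribute multiple codebooks, which the reconstruction step then combines.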
Fund: Supported by the Fundamental Research Funds for the Central Universities of China (Grant No. 2013SCU11006), the Key Laboratory of Digital Mapping and Land Information Application of the National Administration of Surveying, Mapping and Geoinformation of China (Grant No. DM2014SC02), and the Key Laboratory of Geospecial Information Technology, Ministry of Land and Resources of China (Grant No. KLGSIT201504).
Abstract: The development of precision agriculture demands high accuracy and efficiency in cultivated land information extraction. As a new means of ground monitoring in recent years, unmanned aerial vehicle (UAV) low-altitude remote sensing, which is flexible, efficient, low-cost, and high-resolution, is widely applied to investigating various resources. On this basis, a novel extraction method for cultivated land information based on a Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads, ridges, etc.) were excluded based on a Deep Convolutional Neural Network (DCNN). Next, the feature extraction method learned by the DCNN was applied to cultivated land information extraction by introducing a transfer learning mechanism. Last, the extraction results of DTCLE were compared with those of eCognition-based cultivated land information extraction (ECLE). Pengzhou County and Guanghan County, Sichuan Province, were selected as the experimental sites. The experimental results showed that the overall precision of cultivated land extraction for experimental images 1, 2, and 3 with the DTCLE method was 91.7%, 88.1%, and 88.2%, respectively, while the overall precision of ECLE was 90.7%, 90.5%, and 87.0%, respectively. The accuracy of DTCLE was equivalent to that of ECLE, and DTCLE also outperformed ECLE in terms of integrity and continuity.
Fund: Supported by the National Natural Science Foundation of China (61673045) and the Beijing Natural Science Foundation (4152040).
Abstract: This paper conducts a survey on iterative learning control (ILC) with incomplete information and the associated control system design, which is a frontier of the ILC field. Incomplete information, of both passive and active types, can cause data loss or fragmentation due to various factors. Passive incomplete information refers to incomplete data and information caused by practical system limitations during data collection, storage, transmission, and processing, such as data dropouts, delays, disordering, and limited transmission bandwidth. Active incomplete information refers to incomplete data and information caused by man-made reduction of data quantity and quality on the premise that the given objective is still satisfied, such as sampling and quantization. This survey emphasizes two aspects: first, how to guarantee good learning and tracking performance with passive incomplete data; second, how to balance the control performance index and data demand by active means. Promising research directions along this topic are also addressed, with data robustness highly emphasized. This survey is expected to improve quantitative understanding of the restrictive relationship and trade-off between incomplete data and tracking performance, and to promote further development of ILC theory.
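The passive/active distinction above maps onto two simple data models: a dropout mask (passive loss imposed by the channel) versus a uniform quantizer (active, deliberate reduction of precision). A minimal sketch of both, with illustrative parameter values:

```python
def quantize(x, step=0.5):
    """Uniform quantizer: a man-made reduction of data precision
    (active incomplete information)."""
    return round(x / step) * step

def drop(seq, drop_mask):
    """Dropout model: None marks a sample lost in transmission
    (passive incomplete information)."""
    return [None if lost else v for v, lost in zip(seq, drop_mask)]

track = [0.93, 1.48, 2.02]  # toy tracking-error samples
print([quantize(v) for v in track])       # [1.0, 1.5, 2.0]
print(drop(track, [False, True, False]))  # [0.93, None, 2.02]
```

An ILC update law then has to remain convergent when fed the quantized or dropout-corrupted sequence instead of the true one, which is the trade-off the survey quantifies.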
Fund: National Key Research and Development Program of China, No. 2016YFB0502300.
Abstract: Coronavirus disease 2019 (COVID-19) is continuing to spread globally and still poses a great threat to human health. Since its outbreak, it has had catastrophic effects on human society. A visual method of analyzing COVID-19 case information using multi-granularity spatio-temporal objects is proposed, based on officially provided case information. This analysis reveals the spread of the epidemic from the perspective of spatio-temporal objects, providing references for related research and for the formulation of epidemic prevention and control measures. The case information is abstracted, described, represented, and analyzed in the form of spatio-temporal objects through the construction of spatio-temporal case objects, multi-level visual expressions, and spatial correlation analysis. The rationality of the method is verified through visualization scenarios of case information statistics for China, Henan cases, and cases related to Shulan. The results show that the proposed method is helpful in researching and judging the development trend of the epidemic, discovering transmission laws, and spatially tracing cases. The method is portable and extensible, so it can be used for visual analysis of case information in other regions and can help users quickly discover the potential knowledge this information contains.
基金This work is supported by the National Key R&D Program of China under grant 2018YFB1003205by the National Natural Science Foundation of China under grant U1836208,U1536206,U1836110,61602253,61672294+2 种基金by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20181407by the Priority Academic Program Development of Jiangsu Higher Education Institutions(PAP-D)fundby the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology(CICAEET)fund,China。
Abstract: With the development of data science and technology, information security has drawn further concern. To solve privacy problems such as personal privacy being peeped at and copyright being infringed, information hiding algorithms have been developed. Image information hiding exploits the redundancy of a cover image to embed secret information in it, ensuring that the stego image cannot be distinguished from the cover image so that the secret information reaches the receiver through transmission of the stego image. At present, models based on deep learning are also widely applied to the field of information hiding. This paper provides an overall review of image information hiding based on deep learning, divided into four parts: steganography algorithms, watermark embedding algorithms, coverless information hiding algorithms, and steganalysis algorithms based on deep learning. From these four aspects, state-of-the-art deep-learning-based information hiding technologies are illustrated and analyzed.
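To make the "cover redundancy" idea concrete, the classical least-significant-bit (LSB) scheme hides one bit per pixel while changing each pixel value by at most 1. This is the traditional baseline that the deep-learning methods surveyed above improve upon, not one of the surveyed models:

```python
def embed_lsb(pixels, bits):
    """Hide `bits` in the least-significant bit of the first len(bits) pixels."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(stego, n):
    """Recover the first n hidden bits from the stego pixels."""
    return [p & 1 for p in stego[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]  # toy 8-bit grayscale pixels
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 4))                          # [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(cover, stego)))  # 1: at most one gray level of change
```

Deep-learning steganography replaces this fixed embedding rule with learned encoder/decoder networks, while steganalysis networks learn to detect exactly such statistical perturbations.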
Fund: Supported in part by the National Natural Science Foundation of China under Grants 61901078, 61771082, 61871062, and U20A20157; in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant KJQN201900609; in part by the Natural Science Foundation of Chongqing under Grant cstc2020jcyj-zdxmX0024; in part by the University Innovation Research Group of Chongqing under Grant CXQT20017; and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) under Grant 2021FNA04008.
Abstract: To guarantee the heterogeneous delay requirements of diverse vehicular services, it is necessary to design a fully cooperative policy for both Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) links. This paper investigates the reduction of delay in edge information sharing for V2V links while satisfying the delay requirements of the V2I links. Specifically, a mean delay minimization problem and a maximum individual delay minimization problem are formulated to improve global network performance and to ensure the fairness of a single user, respectively. A multi-agent reinforcement learning framework is designed to solve these two problems, where a new reward function is proposed to evaluate the utilities of the two optimization objectives in a unified framework. Thereafter, a proximal policy optimization approach is proposed to enable each V2V user to learn its policy using the shared global network reward. The effectiveness of the proposed approach is validated by comparing the obtained results with those of other baseline approaches through extensive simulation experiments.
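A reward that unifies the two objectives above could weight the mean V2V delay (global performance) against the worst individual delay (fairness), while penalizing any violation of the V2I delay requirement. The weighting scheme and penalty value below are illustrative assumptions, not the paper's exact reward:

```python
def reward(delays, v2i_ok, w=0.5, penalty=-10.0):
    """Unified reward: trade off mean V2V delay against the worst individual
    delay, with a flat penalty if V2I delay requirements are violated."""
    if not v2i_ok:
        return penalty
    mean_d = sum(delays) / len(delays)
    return -(w * mean_d + (1 - w) * max(delays))

print(reward([2.0, 4.0], v2i_ok=True))   # -3.5: mixes mean (3.0) and max (4.0)
print(reward([2.0, 4.0], v2i_ok=False))  # -10.0: V2I constraint dominates
```

Because the reward is shared and global, each V2V agent's PPO update is pushed toward policies that help the network as a whole rather than just its own link.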
Abstract: The trend of distance learning education has increased year by year because of the rapid advancement of information and communication technologies. A distance learning system can be regarded as a ubiquitous computing application, since learners can study anywhere, even in mobile environments. However, the instructor cannot know whether the learners comprehend the lecture, since each learner is physically isolated. Therefore, a framework that detects the learners' concentration condition is required. If a distance learning system obtains the information that many learners are not concentrating on the class due to an incomprehensible lecture style, the instructor can perceive it through the system and change the presentation strategy. This is a context-aware technology widely used in ubiquitous computing services. In this paper, an efficient distance learning system that accurately detects learners' concentration condition during a class is proposed. The proposed system uses multiple items of biological information, namely the learners' eye movement metrics (fixation count, fixation rate, fixation duration, and average saccade length) obtained by an eye tracking system, and classifies the learners' concentration condition using machine learning techniques. The proposed system achieved a detection accuracy of 90.7% when a Multilayer Perceptron was used as the classifier, and the effectiveness of the proposed eye metrics has been confirmed. Furthermore, the evaluation experiment clarified that fixation duration is the most important of the four eye metrics.
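The classification step above maps a four-dimensional eye-metric feature vector to a concentrating / not-concentrating decision. A minimal linear stand-in for the trained MLP, where the weights and threshold are purely illustrative (fixation duration is given the largest weight, echoing the paper's finding that it is the most informative metric):

```python
def concentration_score(fix_count, fix_rate, fix_dur, sacc_len,
                        weights=(0.2, 0.2, 0.4, 0.2)):
    """Weighted score over the four normalized eye metrics; the weights here
    are illustrative assumptions, not the trained MLP's parameters."""
    feats = (fix_count, fix_rate, fix_dur, sacc_len)
    return sum(w * f for w, f in zip(weights, feats))

def is_concentrating(feats_normalized, threshold=0.5):
    """Binary decision on a feature vector normalized to [0, 1]."""
    return concentration_score(*feats_normalized) >= threshold

print(is_concentrating((0.8, 0.7, 0.9, 0.2)))  # True
print(is_concentrating((0.2, 0.3, 0.1, 0.9)))  # False
```

An MLP replaces this fixed linear rule with learned nonlinear boundaries, which is what lifts accuracy to the reported 90.7%.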
基金This work was supported by the Deanship of Scientific Research at King Khalid University through a General Research Project under Grant Number GRP/41/42.
Abstract: Information extraction plays a vital role in natural language processing for extracting named entities and events from unstructured data. Due to exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Though existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield production, it is still difficult to find the semantics between extracted information due to the intertwined effects of weather, soil, pest, and fertilizer data. This paper consists of two parts: an initial phase, which proposes a data preprocessing technique for the removal of ambiguity in the input corpora, and a second phase, which proposes a novel deep learning-based long short-term memory model with a rectified Adam optimizer and a multilayer perceptron to find agricultural named entities, events, and the relations between them. The proposed algorithm has been trained and tested on four input corpora: agriculture, weather, soil, and pest & fertilizers. The experimental results have been compared with existing techniques, and it was observed that the proposed algorithm outperforms Weighted-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters such as accuracy, sensitivity, and specificity.
Fund: Supported by the Artificial Intelligence Innovation and Development Project of the Shanghai Municipal Commission of Economy and Information (No. 2019-RGZN-01081).
Abstract: Electronic medical records (EMRs) containing rich biomedical information have great potential in disease diagnosis and biomedical research. However, EMR information is usually in the form of unstructured text, which increases the cost of use and hinders its applications. In this work, an effective named entity recognition (NER) method is presented for information extraction from Chinese EMRs, achieved by word-embedding-bootstrapped deep active learning, to promote the acquisition of medical information from Chinese EMRs and to release its value. Deep active learning of a bi-directional long short-term memory network followed by a conditional random field (Bi-LSTM+CRF) is used to capture the characteristics of different information from the labeled corpus, and the word embedding models of continuous bag-of-words and skip-gram are combined in the above model to capture the text features of Chinese EMRs from the unlabeled corpus. To evaluate the performance of the method, NER tasks on Chinese EMRs with "medical history" content were used. Experimental results show that the word-embedding-bootstrapped deep active learning method using an unlabeled medical corpus achieves better performance than other models.
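The active learning loop above repeatedly asks annotators to label the sentences the current model is least confident about. A common selection criterion is least confidence over the per-token tag probabilities; the sketch below shows that selection step in isolation (the scoring rule is a standard choice, not necessarily the paper's exact one):

```python
def least_confidence(prob_seqs):
    """Uncertainty of each unlabeled sentence: 1 minus the product of the
    per-token maximum tag probabilities (least-confidence criterion)."""
    scores = []
    for seq in prob_seqs:
        conf = 1.0
        for token_probs in seq:
            conf *= max(token_probs)
        scores.append(1.0 - conf)
    return scores

def select_batch(prob_seqs, k):
    """Pick the indices of the k most uncertain sentences for annotation."""
    scores = least_confidence(prob_seqs)
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# two toy sentences of two tokens each; the second is far less certain
s1 = [[0.9, 0.1], [0.8, 0.2]]
s2 = [[0.5, 0.5], [0.6, 0.4]]
print(select_batch([s1, s2], k=1))  # [1]: the uncertain sentence is labeled first
```

Each round, the Bi-LSTM+CRF is retrained on the growing labeled set, so annotation effort concentrates where the model gains the most.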
Fund: Supported by the National Natural Science Foundation of China (41701499), the Sichuan Science and Technology Program (2018GZ0265), and the Geomatics Technology and Application Key Laboratory of Qinghai Province (QHDX-2018-07).
Abstract: A large number of debris flow disasters (called seismic debris flows) can occur after an earthquake and cause a great amount of damage. UAV low-altitude remote sensing has become a means of quickly obtaining disaster information, having the advantages of convenience and timeliness, but the spectral information of the imagery is scarce, making it difficult to accurately detect seismic debris flow disaster information. To address these problems, a seismic debris flow detection method based on a transfer learning (TL) mechanism is proposed. On the basis of a constructed seismic debris flow disaster database, the features acquired from training a convolutional neural network (CNN) are transferred to the disaster information detection of seismic debris flows. Automatic detection of seismic debris flow disaster information is then completed, and the detection results supported by transfer learning are compared and analyzed against object-oriented detection results.
基金supported the Strategic Priority Research Program of the Chinese Academy of Sciences[grant number XDB42000000]the National Natural Science Foundation of China[grant number U2006211]+1 种基金the Major Scientific and Technological Innovation Projects in Shandong Province[grant number 2019JZZY010102]the Chinese Academy of Sciences program[grant number Y9KY04101L].