Abstract: Diabetes mellitus is a metabolic disease in which blood glucose levels rise as a result of pancreatic insulin production failure. It causes hyperglycemia and chronic multiorgan dysfunction, including blindness, renal failure, and cardiovascular disease, if left untreated. One of the essential checks that must be performed frequently in Type 1 Diabetes Mellitus is a blood test; this procedure involves extracting blood quite frequently, which causes subject discomfort and increases the possibility of infection when the procedure recurs often. Existing methods used for diabetes classification have low classification accuracy and suffer from vanishing gradient problems. To overcome these issues, we propose a stacking ensemble learning-based convolutional gated recurrent neural network (CGRNN) metamodel algorithm. Our proposed method initially performs outlier detection to remove outlier data using the Gaussian distribution method, and the Box-Cox method is used to correctly order the dataset. After outlier detection, the missing values are replaced by the data's mean rather than being eliminated. In the stacking ensemble base model, multiple machine learning algorithms such as Naïve Bayes, Bagging with random forest, and AdaBoost decision tree are employed. The CGRNN metamodel uses two hidden layers, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to calculate the weight matrix for diabetes prediction. Finally, the calculated weight matrix is passed to the softmax function in the output layer to produce the diabetes prediction results. Using the LSTM-based CGRNN, the mean square error (MSE) value is 0.016 and the obtained accuracy is 91.33%.
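The following is a minimal sketch of the pipeline described above, assuming a tabular diabetes dataset, scikit-learn >= 1.2, and PyTorch; the feature dimensions, hidden sizes, and training details are illustrative placeholders rather than the authors' exact configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Level-0 base learners of the stacking ensemble (scikit-learn >= 1.2 keyword names).
base_learners = [
    GaussianNB(),
    BaggingClassifier(estimator=RandomForestClassifier(n_estimators=50)),
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3)),
]

def stack_predictions(X_train, y_train, X):
    """Fit the base learners and return their class-1 probabilities as meta-features.
    A full stacking setup would use out-of-fold predictions; this keeps the sketch short."""
    feats = []
    for clf in base_learners:
        clf.fit(X_train, y_train)
        feats.append(clf.predict_proba(X)[:, 1])
    return torch.tensor(np.stack(feats, axis=1), dtype=torch.float32).unsqueeze(1)

class CGRNNMeta(nn.Module):
    """Metamodel: LSTM and GRU hidden layers followed by a softmax output layer."""
    def __init__(self, n_base=3, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_base, hidden_size=hidden, batch_first=True)
        self.gru = nn.GRU(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, 1, n_base) stacked base predictions
        h, _ = self.lstm(x)
        h, _ = self.gru(h)
        return torch.softmax(self.out(h[:, -1]), dim=-1)
```

In use, the meta-features returned by `stack_predictions` would be fed to `CGRNNMeta`, trained with a cross-entropy loss against the diabetes labels.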
Abstract: Near crash events are often regarded as an excellent surrogate measure for traffic safety research because they include abrupt changes in vehicle kinematics that can lead to deadly accident scenarios. In this paper, we introduce machine learning and deep learning algorithms for predicting near crash events using LiDAR data at a signalized intersection. To predict a near crash occurrence, we used essential vehicle kinematic variables such as lateral and longitudinal velocity, yaw, and LiDAR tracking status. A deep learning hybrid model, the Convolutional Gated Recurrent Neural Network (CNN + GRU), was introduced, and its performance was compared with multiple machine learning classification models such as Logistic Regression, K Nearest Neighbor, Decision Tree, Random Forest, and Adaptive Boost, and with deep learning models such as Long Short-Term Memory (LSTM). As vehicle kinematics change after sudden braking, we considered average deceleration and kinetic energy drop as thresholds to identify near crashes after the vehicle braking time. We used the 3 seconds following this braking time as our prediction horizon. All models performed best in the 1-second prediction horizon after braking time. The results also reveal that our hybrid model captures the most near-crash information while performing flawlessly. In comparison to existing models for near crash prediction, our hybrid Convolutional Gated Recurrent Neural Network model achieves 100% recall, 100% precision, and 100% F1-score, accurately capturing all near crashes. This prediction performance outperforms previous baseline models in forecasting near crash events and provides opportunities for improving traffic safety via Intelligent Transportation Systems (ITS).
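A compact sketch of such a CNN + GRU near-crash classifier over fixed-length kinematic windows is shown below; the window length, channel counts, and feature set (lateral/longitudinal velocity, yaw, tracking status) are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNGRUNearCrash(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        # 1-D convolution extracts local kinematic patterns along the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # GRU models the temporal evolution of the convolutional features.
        self.gru = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # binary: near crash vs. normal braking

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))  # -> (batch, 32, seq_len // 2)
        z, _ = self.gru(z.transpose(1, 2))
        return torch.sigmoid(self.head(z[:, -1]))

# Example: score a batch of 1-second windows sampled after braking onset.
model = CNNGRUNearCrash()
scores = model(torch.randn(8, 30, 4))     # 8 windows, 30 frames, 4 kinematic features
```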
Funding: Supported in part by the Science and Technology Innovation 2030 "New Generation of Artificial Intelligence" Major Project under Grant No. 2021ZD0111000, and by the Henan Province Science and Technology Research Project (232102311232).
Abstract: Recently, many knowledge graph embedding models for knowledge graph completion have been proposed, ranging from the initial translation-based models such as TransE to recent CNN-based models such as ConvE. These models fill in the missing relations between entities by capturing representation features to further complete the existing knowledge graph (KG). However, the above KG-based relation prediction research ignores the interaction information among entities in the KG. To solve this problem, this work proposes a novel model called the Gate Feature Interaction Network (GFINet) with a weighted loss function that takes advantage of interaction information and deep expressive features together. Specifically, the proposed GFINet consists of a gate convolution block and an interaction attention module, which capture deep expressive features and interaction information based on these features, respectively. Our method establishes state-of-the-art experimental results on the standard datasets for knowledge graph completion. In addition, we conduct ablation experiments to verify the effectiveness of the gate convolution block and the interaction attention module.
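As a rough illustration of these two components, the sketch below pairs a gated convolution block over stacked head-entity and relation embeddings with a simple channel-wise attention step that re-weights features by an interaction score. Layer sizes and shapes are hypothetical; this is not the GFINet implementation.

```python
import torch
import torch.nn as nn

class GateConvBlock(nn.Module):
    """Gated convolution: a feature branch modulated by a sigmoid gate branch."""
    def __init__(self, channels=32):
        super().__init__()
        self.feat = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.gate = nn.Conv2d(1, channels, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (batch, 1, 2, dim) head + relation rows
        return torch.tanh(self.feat(x)) * torch.sigmoid(self.gate(x))

class InteractionAttention(nn.Module):
    """Re-weight feature channels by a learned interaction score."""
    def __init__(self, channels=32):
        super().__init__()
        self.score = nn.Linear(channels, channels)

    def forward(self, f):                       # f: (batch, channels, 2, dim)
        g = f.mean(dim=(2, 3))                  # per-channel summary of the feature map
        w = torch.softmax(self.score(g), dim=-1)
        return f * w[:, :, None, None]          # channel-wise re-weighting

h = torch.randn(16, 1, 2, 200)                  # 16 (head, relation) pairs, 200-d embeddings
feats = InteractionAttention()(GateConvBlock()(h))
```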
Funding: Supported in part by the Science and Technology Innovation Foundation (No. JSGG20210802152811033).
Abstract: Vehicle detection in dim light has always been a challenging task. In addition to the unavoidable noise, the uneven spatial distribution of light and dark caused by vehicle lights and street lamps can make the problem even more difficult. Conventional image enhancement methods may produce over-smoothing or over-exposure problems, causing irreversible information loss to the vehicle targets to be subsequently detected. Therefore, we propose a multi-exposure generation and fusion network. In the multi-exposure generation network, we employ a single gated convolutional recurrent network with two-stream progressive exposure input to generate intermediate images with gradually increasing exposure, which are provided to the multi-exposure fusion network after a spatial attention mechanism. Then, a vehicle detection model pre-trained in normal light is used as the basis of the fusion network, and the two models are connected using the convolutional kernel channel dimension expansion technique. This allows the fusion module to provide vehicle detection information, which can be used to guide the generation network to fine-tune its parameters and thus complete end-to-end enhancement and training. By coupling the two parts, we can achieve detail interaction and feature fusion under different lighting conditions. Our experimental results demonstrate that our proposed method outperforms state-of-the-art detection methods after image luminance enhancement on the ODDS dataset.
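A schematic sketch of the fusion stage is given below: a per-pixel spatial attention map is computed over the stack of intermediate exposures and used to blend them into a single enhanced image for the detector. The exposure count, image size, and layer choices are assumptions for illustration only, not the paper's network.

```python
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    def __init__(self, n_exposures=3):
        super().__init__()
        # One attention logit map per exposure, computed from the concatenated exposure stack.
        self.attn = nn.Conv2d(3 * n_exposures, n_exposures, kernel_size=3, padding=1)

    def forward(self, exposures):                   # list of (batch, 3, H, W) RGB images
        stack = torch.cat(exposures, dim=1)         # (batch, 3 * n_exposures, H, W)
        w = torch.softmax(self.attn(stack), dim=1)  # per-pixel weights over exposures
        fused = sum(w[:, i:i + 1] * exposures[i] for i in range(len(exposures)))
        return fused                                # enhanced image passed to the detector

imgs = [torch.rand(2, 3, 256, 256) for _ in range(3)]  # progressively exposed frames
enhanced = SpatialAttentionFusion()(imgs)
```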