Abstract: This study explores the application of Bayesian analysis based on neural networks and deep learning to data visualization. The background is that, as data grow in volume and complexity, traditional data analysis methods can no longer meet practical needs. The research methods include building neural-network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that combining neural networks with Bayesian analysis and deep learning effectively improves the accuracy and efficiency of data visualization and enhances the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big-data environment and helps promote the further development and application of data science.
Abstract: Considering recent developments in deep learning, it has become increasingly important to verify which methods are valid for predicting multivariate time-series data. In this study, we propose a novel method of time-series prediction that employs multiple deep learners combined with a Bayesian network, where the training data are divided into clusters using K-means clustering. The number of clusters is chosen with the Bayesian information criterion, and a separate deep learner is trained on each cluster. We use three types of deep learners: a deep neural network (DNN), a recurrent neural network (RNN), and long short-term memory (LSTM). A naive Bayes classifier then determines which deep learner is in charge of predicting a particular time series. The proposed method is applied to a set of financial time-series data, the Nikkei Stock Average price, to assess the accuracy of the predictions made. Compared with the conventional approach of training a single deep learner on all the data, the proposed method improves both F-measure and accuracy.
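The clustering step described above can be sketched in a few lines. This is a minimal pure-NumPy illustration, not the authors' code: it uses plain Lloyd's K-means and a common BIC-style approximation (n·log(SSE/n) plus a parameter-count penalty) in place of whatever exact criterion the paper uses, and synthetic 3-step windows in place of the Nikkei data.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign to nearest center, recompute means."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def bic_score(X, centers, labels, k):
    """Crude BIC-style score: n*log(SSE/n) + (#params)*log(n); lower is better."""
    n, d = X.shape
    sse = ((X - centers[labels]) ** 2).sum()
    return n * np.log(sse / n + 1e-12) + k * d * np.log(n)

# Two well-separated synthetic clusters of 3-step windows.
X = np.vstack([rng.normal(0.0, 0.1, (60, 3)),
               rng.normal(5.0, 0.1, (60, 3))])
scores = {}
for k in range(1, 5):
    centers, labels = kmeans(X, k)
    scores[k] = bic_score(X, centers, labels, k)
best_k = min(scores, key=scores.get)  # each cluster then gets its own deep learner
```

In the paper's pipeline, one deep learner would then be trained per selected cluster, with the naive Bayes classifier routing each new series to the appropriate learner.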
Abstract: Telehealth uses information and communication technologies to convey medical information for clinical and educational assistance. It helps overcome barriers to health service delivery involving time, distance, and difficult terrain, improving cost-efficiency and access in both developed and developing countries. Telehealth has traditionally been categorized as either real-time electronic communication or store-and-forward communication. In recent years, a third class has emerged: remote healthcare monitoring, or telehealth based on data obtained via the Internet of Things (IoT). Although telehealth data analytics and machine learning have been researched in depth, there is a dearth of studies that concentrate specifically on ML-based techniques for telehealth data analytics in the IoT healthcare sector. Motivated by this fact, this work proposes a method called the Weighted Bayesian and Polynomial Taylor Deep Network (WB-PTDN) to improve health prediction in a computationally efficient and accurate manner. First, the Independent Component Data Arrangement model is designed to normalize the data obtained from the Physionet dataset. Next, with the normalized data as input, Weighted Bayesian Feature Extraction is applied to reduce dimensionality and extract the features relevant to further health risk analysis. Finally, to obtain reliable predictions, First Order Polynomial Taylor DNN-based Feature Homogenization is proposed, which uses a first-order polynomial Taylor function to update new results based on the analysis of old values, providing increased transparency in decision making. A comparison of the proposed and existing methods indicates that WB-PTDN achieves higher accuracy, a higher true-positive rate, and lower response time for IoT-based telehealth data analytics than traditional methods.
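The abstract does not spell out the update rule, but for reference, the first-order Taylor expansion that the method's name refers to approximates a function near a previous operating point $x_0$:

```latex
f(x) \approx f(x_0) + f'(x_0)\,(x - x_0)
```

That is, a new prediction is formed as a linear correction of the result at the old value, which is presumably what "updates the new results based on the result analysis of old values" describes.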
Funding: This work was funded by King Saud University, Riyadh, Saudi Arabia, through the Researchers Supporting Research Funding program (ORF-2025-1268).
Abstract: A brain tumor is a disease in which abnormal cells form a mass in the brain. Brain tumors are rare and can take many forms, making them difficult to treat, and the survival rate of affected patients is low. Magnetic resonance imaging (MRI) is a crucial tool for diagnosing and localizing brain tumors; however, manual interpretation of MRI images is tedious and prone to error. As artificial intelligence advances rapidly, deep learning (DL) techniques are increasingly used in medical imaging to detect and diagnose brain tumors accurately. In this study, we introduce a deep convolutional neural network (DCNN) framework for brain tumor classification that uses EfficientNet-B6 as the backbone architecture with additional layers on top. The model achieved an accuracy of 99.10% on the public Brain Tumor MRI datasets. We performed an ablation study to determine the batch size, optimizer, loss function, and learning rate that maximize the accuracy and robustness of the model, followed by K-fold cross-validation, testing on an independent dataset, and hyperparameter tuning with Bayesian optimization to further enhance performance. When comparing our model to other DL models such as VGG19, MobileNetV2, ResNet50, InceptionV3, and DenseNet201, as well as variants of the EfficientNet model (B1–B7), the results show that the proposed model outperforms all of them. Our experimental results demonstrate superiority in terms of precision, recall/sensitivity, accuracy, specificity, and F1-score. Such innovations can potentially enhance clinical decision-making and patient treatment in neuro-oncological settings.
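The K-fold protocol mentioned above can be sketched generically as follows. This is an illustration of the standard technique, not the authors' pipeline; the fold count, shuffling, and seed are assumptions for the sketch.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, near-equal, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n, k=5):
    """Yield (train_indices, val_indices) pairs, one per fold."""
    folds = k_fold_indices(n, k)
    for i in range(k):
        val = folds[i]
        # training set = all samples not in the held-out fold
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, val

splits = list(cross_validate(100, k=5))  # every sample validated exactly once
```

Each fold would train the DCNN from scratch (or from the same pretrained weights) and the reported metric is the mean over folds, which is what makes the accuracy estimate robust to a lucky split.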
Abstract: Transforming real-world images into cartoon illustrations is a well-known and challenging task in computer vision. Image-to-image translation from the real-world to the cartoon domain suffers from a lack of paired training samples, poor translation quality, weak feature extraction from source-domain images, and low-quality output from traditional generator architectures. Addressing these issues requires a model that does not depend on paired data, a high-quality dataset, a Bayesian feature extractor, and an improved generator. In this study, we propose a high-quality dataset to reduce the effect of the absence of paired training samples on the model's performance. We use a Bayesian Very Deep Convolutional Network (VGG)-based feature extractor to improve on the standard feature extractor, because Bayesian inference regularizes the weights well. The generator from the Cartoon Generative Adversarial Network (GAN) is modified by introducing a depthwise convolution layer and a channel attention mechanism to improve on the original generator. We use the Fréchet inception distance (FID) score and a user preference score to evaluate the model. The FID scores obtained for the generated cartoon and real-world images are 107 and 76 for the TCC style, and 137 and 57 for the Hayao style, respectively. The user preference score is also calculated to evaluate the quality of the generated images, and the proposed model acquired a higher preference score than the other models. These results demonstrate the proposed model's effectiveness in producing high-quality cartoon images and transferring style between real-world and cartoon images.
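For reference, the FID score used above compares the Gaussian statistics $(\mu_1, \Sigma_1)$ and $(\mu_2, \Sigma_2)$ of Inception-network activations computed on the two image sets (lower is better):

```latex
\mathrm{FID} = \lVert \mu_1 - \mu_2 \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_1 + \Sigma_2 - 2\,\bigl(\Sigma_1 \Sigma_2\bigr)^{1/2} \right)
```

This is the squared 2-Wasserstein distance between two Gaussians fitted to the feature distributions, which is why it is sensitive to both the mean and the spread of the generated images' features.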
Funding: This work was supported by Researchers Supporting Project number (RSP2020/87), King Saud University, Riyadh, Saudi Arabia.
Abstract: Smart healthcare integrates an advanced wave of information technology, using smart devices to collect health-related medical data. Such data usually exist in unstructured, noisy, incomplete, and heterogeneous forms, and handling these limitations remains an open challenge for deep learning-based classification of health conditions. In this paper, a long short-term memory (LSTM)-based health condition prediction framework is proposed to rectify imbalanced and noisy data and transform them into a form suitable for accurate prediction. The imbalanced and scarce data are normalized through coding to gain consistency, using the synthetic minority oversampling technique. The proposed model is optimized and fine-tuned end to end, selecting suitable parameters with a tree-structured Parzen estimator to build a probabilistic model. The patient's medication data are categorized to plot the risk factor of the diabetic condition through an algorithm that classifies blood glucose metrics using the modern surveillance error grid method. The proposed model can efficiently train, validate, and test on noisy data, obtaining consistent results of around 90%, outperforming state-of-the-art machine and deep learning techniques, and overcoming the insufficiency of training data through transfer learning. The overall results are further tested on secondary datasets to verify the model's sustainability.
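The oversampling step can be illustrated with a minimal SMOTE-style sketch in NumPy. This is a schematic of the standard technique, not the paper's implementation; the neighbour count, interpolation scheme, and synthetic data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_like(minority, n_new, k=3):
    """Generate synthetic minority samples by interpolating a random
    minority sample with one of its k nearest minority-class neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)  # distances within the class
        d[i] = np.inf                             # exclude the sample itself
        neighbours = np.argsort(d)[:k]
        j = rng.choice(neighbours)
        lam = rng.random()                        # interpolation factor in [0, 1)
        out.append(x + lam * (minority[j] - x))
    return np.array(out)

minority = rng.normal(0.0, 1.0, (20, 4))   # the scarce class
synthetic = smote_like(minority, n_new=30)
balanced = np.vstack([minority, synthetic])
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's region of feature space rather than being naive duplicates, which is what gives the classifier consistent exposure to the rare class.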
Funding: LY's work was supported by the NSF of China (No. 11771081), the Science Challenge Project, China (No. TZ2018001), and the Zhishan Young Scholar Program of SEU, China. TZ's work was supported by the National Key R&D Program of China (No. 2020YFA0712000), the NSF of China (Grant Nos. 11822111, 11688101, and 11731006), the Science Challenge Project (No. TZ2018001), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA25000404), and the Youth Innovation Promotion Association, Chinese Academy of Sciences (CAS), China.
Abstract: Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems because of the repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach that substantially reduces the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximation of the posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform traditional RTO.
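Schematically, for a Gaussian prior $\mathcal{N}(\theta_0, \Gamma)$ and observation noise $\mathcal{N}(0, \Sigma)$, each sample in an RTO-type scheme is obtained by solving a least-squares problem with independently perturbed data (this is the randomized-MAP form of the idea; RTO proper additionally fixes a linearization at the MAP point and reweights or Metropolizes the samples so that they target the exact posterior):

```latex
\theta^{(i)} = \arg\min_{\theta}\;
  \tfrac{1}{2}\bigl\lVert \Sigma^{-1/2}\bigl(F(\theta) - y - \varepsilon^{(i)}\bigr)\bigr\rVert^2
  + \tfrac{1}{2}\bigl\lVert \Gamma^{-1/2}\bigl(\theta - \theta_0 - \xi^{(i)}\bigr)\bigr\rVert^2,
\qquad \varepsilon^{(i)} \sim \mathcal{N}(0, \Sigma),\;\; \xi^{(i)} \sim \mathcal{N}(0, \Gamma)
```

Each such solve requires many evaluations of the forward map $F$ and its gradient, which is exactly the cost the DNN surrogate for $F$ is meant to eliminate.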
Abstract: The attitude control system is a key component of a satellite, and because satellites are extremely costly, its failures can have severe consequences. As aerospace technology advances, satellite attitude control systems have grown increasingly complex, and the probability of faults has grown with them. To address the shortcomings of traditional neural-network fault diagnosis (results without confidence estimates, poor robustness, and a tendency to overfit), a network model combining Bayesian linear layers and Bayesian convolutional layers, a Bayesian LeNet, is proposed on the basis of Bayesian statistics and deep learning theory. Fault data from the flywheel component of a satellite attitude control system are analyzed and processed, the model is applied to simulated faults, and it is compared with a Bayesian fully connected neural network and a traditional LeNet. Experimental results show that, for the three possible flywheel fault types, the proposed model achieves higher accuracy with less overfitting, validating its effectiveness.
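Bayesian layers of the kind referred to above are commonly trained by variational inference: a parametric distribution $q_\phi(w)$ over the layer weights is fitted by maximizing the evidence lower bound (this is the standard formulation; the abstract does not state which variant the authors use):

```latex
\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}\bigl[\log p(\mathcal{D} \mid w)\bigr]
  - \mathrm{KL}\bigl(q_\phi(w) \,\Vert\, p(w)\bigr)
```

Sampling $w \sim q_\phi$ at test time turns each prediction into a distribution, which supplies the confidence estimates that deterministic diagnosis networks lack, while the KL term toward the prior $p(w)$ acts as the regularizer credited with reducing overfitting.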