Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
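As an illustration of what a Bayesian tuning loop over training hyperparameters looks like, here is a minimal sketch using scikit-optimize's Gaussian-process optimizer; the `train_and_eval_map50` stand-in, the three hyperparameters searched, their ranges, and the 25-call budget are assumptions for the example, not the paper's actual BT configuration.

```python
from skopt import gp_minimize
from skopt.space import Real

def train_and_eval_map50(lr0, momentum, weight_decay):
    # Hypothetical stand-in: train a YOLO model on D-Fire with these hyperparameters
    # and return its validation mAP50. A synthetic surrogate keeps the sketch runnable.
    return 0.8 - 50 * (lr0 - 0.01) ** 2 - (momentum - 0.9) ** 2 - 5 * weight_decay

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="lr0"),          # initial learning rate
    Real(0.6, 0.98, name="momentum"),
    Real(1e-5, 1e-2, prior="log-uniform", name="weight_decay"),
]

def objective(params):
    # gp_minimize minimizes, so negate the validation mAP50.
    return -train_and_eval_map50(*params)

result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "best mAP50:", -result.fun)
```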
The selection of hyperparameters in regularized least squares plays an important role in large-scale system identification. Traditional methods for selecting hyperparameters rely on experience or on marginal likelihood maximization, which are either inaccurate or computationally expensive. In this paper, two posterior methods are proposed to select hyperparameters based on different prior knowledge (constraints), which can obtain the optimal hyperparameters using optimization theory. Moreover, we also give the theoretically optimal constraints and verify their effectiveness. Numerical simulation shows that the hyperparameters and parameter vector estimates obtained by the proposed methods are the optimal ones.
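For reference, a standard formulation of the setting described above, assuming the common kernel-based setup in which the parameter vector has a Gaussian prior with hyperparameter-dependent covariance P(eta) and the noise variance is sigma^2; this is textbook notation, not necessarily the paper's exact one.

```latex
% Regularized least-squares estimate with hyperparameter-dependent regularization P(\eta):
\hat{\theta}(\eta) \;=\; \arg\min_{\theta}\; \|y - \Phi\theta\|^{2} + \sigma^{2}\,\theta^{\top}P(\eta)^{-1}\theta
\;=\; \bigl(\Phi^{\top}\Phi + \sigma^{2}P(\eta)^{-1}\bigr)^{-1}\Phi^{\top}y

% Classical (and computationally expensive) marginal-likelihood choice of the hyperparameters:
\hat{\eta} \;=\; \arg\min_{\eta}\; y^{\top}\Sigma(\eta)^{-1}y + \log\det\Sigma(\eta),
\qquad \Sigma(\eta) \;=\; \Phi P(\eta)\Phi^{\top} + \sigma^{2}I
```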
Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping (LSM) studies. However, these algorithms possess distinct computational strategies and hyperparameters, making it challenging to propose an ideal LSM model. To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM, this study constructed a geospatial database comprising 12 conditioning factors, such as elevation, stratum, and annual average rainfall. The XGBoost (XGB), LightGBM (LGBM), and CatBoost (CB) algorithms were employed to construct the LSM model. Furthermore, the Bayesian optimization (BO), particle swarm optimization (PSO), and Hyperband optimization (HO) algorithms were applied to optimize the LSM model. The boosting algorithms exhibited varying performance, with CB demonstrating the highest precision, followed by LGBM, and XGB showing the poorest precision. Additionally, the hyperparameter optimization algorithms displayed different performance, with HO outperforming PSO and BO showing the poorest performance. The HO-CB model achieved the highest precision, with an accuracy of 0.764, an F1-score of 0.777, an area under the curve (AUC) value of 0.837 for the training set, and an AUC value of 0.863 for the test set. The model was interpreted using SHapley Additive exPlanations (SHAP), revealing that slope, curvature, topographic wetness index (TWI), degree of relief, and elevation significantly influenced landslides in the study area. This study offers a scientific reference for LSM and disaster prevention research. It examines the use of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District and proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models. However, limitations remain concerning the generalizability of the model and the data processing, which require further exploration in subsequent studies.
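The study's winning combination was Hyperband with CatBoost; as a hedged illustration of how such an HPO-wrapped boosting model is set up in code, the sketch below instead uses Bayesian optimization (scikit-optimize's BayesSearchCV) around LightGBM, and the random feature matrix, label vector, search ranges, and trial budget are all stand-in assumptions.

```python
import numpy as np
from lightgbm import LGBMClassifier
from skopt import BayesSearchCV
from skopt.space import Integer, Real

# Illustrative stand-in for the 12 conditioning factors and landslide/non-landslide labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 2, size=500)

search = BayesSearchCV(
    LGBMClassifier(random_state=0),
    {
        "num_leaves": Integer(15, 255),
        "learning_rate": Real(1e-3, 0.3, prior="log-uniform"),
        "n_estimators": Integer(100, 1000),
        "min_child_samples": Integer(5, 100),
    },
    n_iter=30,          # number of Bayesian-optimization trials
    cv=5,
    scoring="roc_auc",  # AUC, as reported in the study
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```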
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
Analyzing big data, especially medical data, helps to provide good health care to patients and reduce the risk of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by the choice of their parameters, and hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to efficiently search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective: compared to the baseline models, the optimized machine learning models achieved better results.
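A minimal sketch of a particle swarm searching a two-dimensional hyperparameter space as described above; `evaluate_accuracy` is a hypothetical stand-in for training a model on the COVID-19 data and returning validation accuracy (a smooth synthetic surrogate here, so the sketch runs end to end), and the swarm settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_accuracy(params):
    # Hypothetical stand-in: train e.g. a random forest with these hyperparameters
    # (n_estimators, max_depth) and return validation accuracy.
    n_estimators, max_depth = params
    return 1.0 - ((n_estimators - 300) / 500) ** 2 - ((max_depth - 12) / 20) ** 2

# PSO over [n_estimators, max_depth]
lo, hi = np.array([50.0, 2.0]), np.array([800.0, 30.0])
pos = rng.uniform(lo, hi, size=(20, 2))          # 20 particles
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([evaluate_accuracy(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients
for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([evaluate_accuracy(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best hyperparameters:", gbest, "best accuracy:", pbest_val.max())
```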
Hyperparameters are important for machine learning algorithms since they directly control the behavior of training algorithms and have a significant effect on the performance of machine learning models. Several techniques have been developed and successfully applied in certain application domains. However, this work demands professional knowledge and expert experience, and sometimes it has to resort to brute-force search. Therefore, if an efficient hyperparameter optimization algorithm can be developed to optimize any given machine learning method, it will greatly improve the efficiency of machine learning. In this paper, we consider building the relationship between the performance of the machine learning models and their hyperparameters with Gaussian processes. In this way, the hyperparameter tuning problem can be abstracted as an optimization problem, and Bayesian optimization is used to solve it. Bayesian optimization is based on Bayes' theorem: it sets a prior over the optimization function and gathers information from previous samples to update the posterior of the optimization function, while a utility function selects the next sample point to maximize the optimization function. Several experiments were conducted on standard test datasets. The results show that the proposed method can find the best hyperparameters for widely used machine learning models, such as the random forest algorithm and neural networks, and even multi-grained cascade forest, under the consideration of time cost.
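The loop described above (a Gaussian-process prior over the objective, a posterior updated from previous samples, and a utility function that picks the next sample) can be written out directly; this sketch tunes a single hyperparameter with the expected-improvement utility, and the objective is a synthetic stand-in for a model's validation score.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(log_lr):
    # Synthetic stand-in for "validation score of a model trained with learning rate 10**log_lr".
    return float(np.exp(-(log_lr + 3.0) ** 2))

grid = np.linspace(-6.0, 0.0, 200).reshape(-1, 1)        # candidate log10(learning rate) values
rng = np.random.default_rng(0)
X = list(rng.uniform(-6.0, 0.0, 3).reshape(-1, 1))       # a few initial random samples
y = [objective(x[0]) for x in X]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(X), np.array(y))                     # posterior over the objective
    mu, sigma = gp.predict(grid, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z) # expected improvement (the utility)
    x_next = grid[int(np.argmax(ei))]                    # next sample point maximizes the utility
    X.append(x_next)
    y.append(objective(x_next[0]))

i_best = int(np.argmax(y))
print("best log10(lr):", X[i_best][0], "score:", y[i_best])
```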
Neural networks (NNs), as one of the most robust and efficient machine learning methods, have been commonly used in solving several problems. However, choosing proper hyperparameters (e.g., the numbers of layers and neurons in each layer) has a significant influence on the accuracy of these methods. Therefore, a considerable number of studies have been carried out to optimize the NN hyperparameters. In this study, the genetic algorithm is applied to NNs to find the optimal hyperparameters. The deep energy method, which contains a deep neural network, is applied first to a Timoshenko beam and a plate with a hole. Subsequently, the numbers of hidden layers, integration points, and neurons in each layer are optimized to reach the highest accuracy in predicting the stress distribution through these structures. Applying the proper optimization method to the NN thus leads to a significant increase in prediction accuracy across the various examples.
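A compact sketch of a genetic search over the architecture hyperparameters mentioned above (number of hidden layers and neurons per layer); `fitness` is a hypothetical stand-in for training the deep energy method network and scoring its stress prediction, and the population size, selection, crossover, and mutation settings are illustrative.

```python
import random

random.seed(0)

def fitness(n_layers, n_neurons):
    # Hypothetical stand-in: train the NN with this architecture and return a score
    # (e.g. negative stress-prediction error). A synthetic surrogate keeps it runnable.
    return -abs(n_layers - 4) - abs(n_neurons - 48) / 16

def random_individual():
    return (random.randint(1, 8), random.choice([8, 16, 32, 48, 64, 96, 128]))

population = [random_individual() for _ in range(20)]
for generation in range(30):
    scored = sorted(population, key=lambda ind: fitness(*ind), reverse=True)
    parents = scored[:10]                                  # selection: keep the best half
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                               # crossover: mix layer/neuron genes
        if random.random() < 0.2:                          # mutation: perturb the layer count
            child = (max(1, child[0] + random.choice([-1, 1])), child[1])
        children.append(child)
    population = parents + children

best = max(population, key=lambda ind: fitness(*ind))
print("best architecture (hidden layers, neurons per layer):", best)
```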
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from functioning normally. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threats from liver disease, early detection is critical to help reduce the risks and improve treatment outcomes. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection, diagnosis, and reduction of the risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patient-records. The population was divided into two sets depending on the disease state of the patient. This binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and the classification and regression trees models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more in machine learning research and other health technologies to maximize their potential.
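A hedged sketch of a tuned eXtreme Gradient Boosting classifier on the Indian liver-patient data; the local file name, the "Gender"/"Dataset" column handling, and the searched ranges are assumptions for illustration, not the study's exact pipeline.

```python
import pandas as pd
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

# Assumed local copy of the Kaggle "Indian Liver Patient Records" CSV.
df = pd.read_csv("indian_liver_patient.csv").dropna()
df["Gender"] = (df["Gender"] == "Male").astype(int)    # assumed column name
y = (df["Dataset"] == 1).astype(int)                   # assumed label column (the "is_patient" flag)
X = df.drop(columns=["Dataset"])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=0),
    {
        "n_estimators": [100, 200, 400],
        "max_depth": [3, 4, 6, 8],
        "learning_rate": [0.01, 0.05, 0.1, 0.2],
        "subsample": [0.7, 0.85, 1.0],
    },
    n_iter=20, cv=5, scoring="accuracy", random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_, "test accuracy:", search.score(X_test, y_test))
```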
Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for the NN and proposes a novel light-weight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between different evaluation methods to better balance the search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category (i.e., CIFAR10) and the 100-category (i.e., CIFAR100). The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than the compared algorithms at the same computational cost.
Cyberbullying (CB) is a challenging issue in social media, and it is important to effectively identify the occurrence of CB. Recently developed deep learning (DL) models pave the way to design CB classifier models with maximum performance. At the same time, the optimal hyperparameter tuning process plays a vital role in enhancing overall results. This study introduces a Teacher Learning Genetic Optimization with Deep Learning Enabled Cyberbullying Classification (TLGODL-CBC) model for social media. The proposed TLGODL-CBC model intends to identify the existence and non-existence of CB in the social media context. Initially, the input data is cleaned and pre-processed to make it compatible for further processing. Then, an independent recurrent autoencoder (IRAE) model is utilized for the recognition and classification of CB. Finally, the TLGO algorithm is used to optimally adjust the parameters related to the IRAE model, which shows the novelty of the work. To assure the improved outcomes of the TLGODL-CBC approach, a wide range of simulations is executed and the outcomes are investigated under several aspects. The simulation outcomes confirm the improvements of the TLGODL-CBC model over recent approaches.
Prediction and diagnosis of cardiovascular diseases (CVDs), based, among other things, on medical examinations and patient symptoms, are among the biggest challenges in medicine. About 17.9 million people die from CVDs annually, accounting for 31% of all deaths worldwide. With a timely prognosis and thorough consideration of the patient's medical history and lifestyle, it is possible to predict CVDs and take preventive measures to eliminate or control this life-threatening disease. In this study, we used various patient datasets from a major hospital in the United States as prognostic factors for CVD. The data were obtained by monitoring a total of 918 adult patients aged 28-77 years. We present a data mining modeling approach to analyze the performance, classification accuracy, and number of clusters on cardiovascular disease prognostic datasets in unsupervised machine learning (ML) using the Orange data mining software. Various techniques are then used to classify the model parameters, such as k-nearest neighbors, support vector machine, random forest, artificial neural network (ANN), naïve Bayes, logistic regression, stochastic gradient descent (SGD), and AdaBoost. To determine the number of clusters, various unsupervised ML clustering methods were used, such as k-means, hierarchical, and density-based spatial clustering of applications with noise. The results showed that the best model performance and classification accuracy were achieved by SGD and ANN, both of which scored a high 0.900 on the cardiovascular disease prognostic datasets. Based on the results of most clustering methods, such as k-means and hierarchical clustering, the cardiovascular disease prognostic datasets can be divided into two clusters. The prognostic accuracy of CVD depends on the accuracy of the proposed model in determining the diagnostic model; the more accurate the model, the better it can predict which patients are at risk for CVD.
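A hedged sketch of the cluster-count step outside the Orange GUI, scoring k-means partitions with the silhouette coefficient; the make_blobs matrix is a synthetic stand-in for the 918-patient prognostic data, which is not reproduced here.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in with two latent groups, mimicking the 918-patient feature matrix.
X, _ = make_blobs(n_samples=918, centers=2, n_features=11, random_state=0)
X = StandardScaler().fit_transform(X)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "silhouette:", round(silhouette_score(X, labels), 3))
# The k with the highest silhouette suggests the natural cluster count (two, per the study).
```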
Aiming at training feed-forward threshold neural networks consisting of non-differentiable activation functions, the approach of noise injection forms a stochastic resonance based threshold network that can be optimized by various gradient-based optimizers. The introduction of injected noise extends the noise level into the parameter space of the designed threshold network, but leads to a highly non-convex optimization landscape of the loss function. Thus, the hyperparameter on-line learning procedure with respect to network weights and noise levels becomes challenging. It is shown that the Adam optimizer, as an adaptive variant of stochastic gradient descent, manifests its superior learning ability in training the stochastic resonance based threshold network effectively. Experimental results demonstrate a significant improvement in the performance of the designed threshold network trained by the Adam optimizer for function approximation and image classification.
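A minimal PyTorch sketch of one common way to build such a noise-smoothed threshold unit and train it with Adam: injecting zero-mean Gaussian noise before a hard threshold makes the expected output equal the Gaussian CDF of the pre-activation, which is differentiable in both the weights and the (learnable) noise level. This illustrates the idea only and is not the authors' exact network.

```python
import torch
import torch.nn as nn

class NoisySmoothedThreshold(nn.Module):
    """Threshold unit smoothed by Gaussian noise injection:
    E_xi[ step(z + sigma * xi) ] = Phi(z / sigma), differentiable in z and sigma."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.log_sigma = nn.Parameter(torch.zeros(out_features))  # learnable noise level

    def forward(self, x):
        z = self.linear(x)
        sigma = torch.exp(self.log_sigma)
        return torch.distributions.Normal(0.0, 1.0).cdf(z / sigma)

net = nn.Sequential(NoisySmoothedThreshold(1, 32), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)  # Adam adapts per-parameter step sizes

x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)                                   # toy function-approximation target
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```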
When it comes to decreasing margins and increasing energy efficiency in near-threshold and sub-threshold processors, timing error resilience may be viewed as a potentially lucrative alternative to examine. On the other hand, the currently employed approaches have certain restrictions, including high levels of design complexity, severe time constraints on error consolidation and propagation, and uncontaminated architectural registers (ARs). The design of near-threshold circuits, often known as NT circuits, is becoming the approach of choice for the construction of energy-efficient digital circuits. One of the downsides is the reduction in performance that results from the exponentially decreased driving current. Numerous studies have advised applying NT techniques to chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss. Over the past several years, there has been a clear growth in interest in the development of artificial intelligence (AI) hardware with low energy consumption. This has resulted in both large corporations and start-ups producing items that compete on the basis of varying degrees of performance and energy use. This technology's ultimate goal was to provide levels of efficiency and performance that could not be achieved with graphics processing units or general-purpose CPUs. To achieve this objective, the technology was created to integrate several processing units into a single chip, and the hardware was designed with a number of unique properties. In this study, an Energy Efficient Hyperparameter Tuned Deep Neural Network (EEHPT-DNN) model for a variation-tolerant near-threshold processor was developed. In order to improve the energy efficiency of AI, the EEHPT-DNN model employs several AI techniques. The notion focuses mostly on the repercussions of embedded technologies positioned at the network's edge. The presented model employs a deep stacked sparse autoencoder (DSSAE) model with the objective of creating a variation-tolerant NT processor. The time-consuming method of modifying hyperparameters through trial and error is substituted with the marine predators optimization algorithm (MPO), which is utilised to tune the hyperparameters associated with the DSSAE model. To validate that the proposed EEHPT-DNN model has a higher degree of functionality, a full simulation study is conducted and the results are analysed from a variety of perspectives so that the enhanced performance can be evaluated. According to the results of the study, which compared numerous DL models, the EEHPT-DNN model performed significantly better than the other models.
Hyperparameters have a vital impact on the performance of most machine learning algorithms, and it is a challenge for traditional methods to configure the hyperparameters of the capsule network manually to obtain high performance. Some swarm intelligence and evolutionary computation algorithms have been effectively employed to seek optimal hyperparameters as a combinatorial optimization problem. However, these algorithms are prone to getting trapped in local optimal solutions because random search strategies are adopted. The inspiration for the hybrid rice optimization (HRO) algorithm comes from the breeding technology of three-line hybrid rice in China, which has the advantages of easy implementation, few parameters, and fast convergence. In this paper, genetic search is combined with the hybrid rice optimization algorithm (GHRO) and employed to obtain the optimal hyperparameters of the capsule network automatically; that is, a probability search technique and a hybridization strategy are incorporated into the primary HRO. Thirteen benchmark functions are used to evaluate the performance of GHRO. Furthermore, the MNIST, Chest X-Ray (pneumonia), and Chest X-Ray (COVID-19 & pneumonia) datasets are also utilized to evaluate the capsule network learnt by GHRO. The experimental results show that GHRO is an effective method for optimizing the hyperparameters of the capsule network and is able to boost the performance of the capsule network on image classification.
Hyperparameter optimization is considered one of the greatest challenges in deep learning and largely determines the precision of a model. Recent proposals have tried to solve this issue through particle swarm optimization (PSO), but its native defects may result in trapping in local optima and convergence difficulty. In this paper, genetic operations are introduced into the PSO, which makes it easier to locate the best hyperparameter combination scheme for a specific network architecture. Specifically, to prevent the troubles caused by different data types and value scopes, a mixed coding method is used to ensure the effectiveness of particles. Moreover, crossover and mutation operations are added to the particle-updating process to increase the diversity of particles and avoid local optima in searching. Verified on three benchmark datasets, MNIST, Fashion-MNIST, and CIFAR10, the proposed scheme achieves accuracies of 99.58%, 93.39%, and 78.96%, respectively, improving the accuracy by about 0.1%, 0.5%, and 2%, respectively, compared with that of the plain PSO.
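A sketch of the mixed coding and the genetic operations that the scheme adds on top of the usual PSO velocity/position update; the hyperparameter encoding, choice lists, and mutation rate below are illustrative assumptions.

```python
import random

random.seed(0)
BATCH_CHOICES = [32, 64, 128, 256]
ACT_CHOICES = ["relu", "tanh", "elu"]

def decode(particle):
    # Mixed coding: position 0 is a continuous log10(learning rate);
    # positions 1-2 index discrete/categorical choice lists.
    log_lr, batch_idx, act_idx = particle
    return {
        "lr": 10 ** log_lr,
        "batch_size": BATCH_CHOICES[int(batch_idx) % len(BATCH_CHOICES)],
        "activation": ACT_CHOICES[int(act_idx) % len(ACT_CHOICES)],
    }

def crossover(p1, p2):
    # Single-point crossover between two particles' positions.
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(p, rate=0.2):
    # Perturb each gene with probability `rate` to keep diversity in the swarm.
    out = list(p)
    if random.random() < rate:
        out[0] += random.gauss(0, 0.3)            # jitter the continuous gene
    if random.random() < rate:
        out[1] = random.randrange(len(BATCH_CHOICES))
    if random.random() < rate:
        out[2] = random.randrange(len(ACT_CHOICES))
    return out

p1, p2 = [-2.5, 1, 0], [-3.5, 3, 2]
child = mutate(crossover(p1, p2))                 # applied between PSO position updates
print(decode(child))
```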
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. This study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model, since they considerably affect the summarization efficiency. This phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Currently, the second most devastating form of cancer in people, particularly in women, is breast cancer (BC). In the healthcare industry, machine learning (ML) is commonly employed in fatal disease prediction. Because breast cancer has a favourable prognosis at an early stage, a model is created using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The overarching aim of this model is to compare the effectiveness of five well-known ML classifiers, namely Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), with the conventional method. To improve on the conventional methods, the tactic we utilized was hyperparameter tuning with the grid search method, which improved accuracy, precision, recall, F1 score, and the AUC-ROC curve. With hyperparameter tuning, the accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. According to this investigation, KNN outperformed all other classifiers in terms of accuracy, achieving a score of 98.83%. In conclusion, our study shows that KNN works well with the hyper-tuning method. These analyses show that this prediction approach is useful in prognosticating women with breast cancer, with viable performance and more accurate findings when compared to the conventional approach.
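A hedged sketch of the grid-search step for the KNN classifier on the WDBC data (shipped with scikit-learn as load_breast_cancer); the grid, scaling choice, and split below are illustrative rather than the study's exact configuration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # the WDBC dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
grid = {
    "kneighborsclassifier__n_neighbors": range(1, 31),
    "kneighborsclassifier__weights": ["uniform", "distance"],
    "kneighborsclassifier__p": [1, 2],                 # Manhattan vs. Euclidean distance
}
search = GridSearchCV(pipe, grid, cv=10, scoring="accuracy")
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```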
The rapid evolution of distributed energy resources, particularly photovoltaic systems, poses a formidable challenge in maintaining a delicate balance between energy supply and demand while minimizing costs. The integrated nature of distributed markets, blending centralized and decentralized elements, holds the promise of maximizing social welfare and significantly reducing overall costs, including computational and communication expenses. However, achieving this balance requires careful consideration of various hyperparameter sets, encompassing factors such as the number of communities, community detection methods, and trading mechanisms employed among nodes. To address this challenge, we introduce a groundbreaking neural network-based framework, the Energy Trading-based Artificial Neural Network (ET-ANN), which excels in performance compared to existing algorithms. Our experiments underscore the superiority of ET-ANN in minimizing total energy transaction costs while maximizing social welfare within the realm of photovoltaic networks.
Neural networks (NNs) have been used extensively in surface water prediction tasks due to computing algorithm improvements and data accumulation. An essential step in developing an NN is the hyperparameter selection. In practice, it is common to manually determine hyperparameters in the studies of NNs in water resources tasks. This may result in considerable randomness and require significant computation time; therefore, hyperparameter optimization (HPO) is essential. This study adopted five representatives of the HPO techniques in the surface water quality prediction tasks, including the grid sampling (GS), random search (RS), genetic algorithm (GA), Bayesian optimization (BO) based on the Gaussian process (GP), and the tree Parzen estimator (TPE). For the evaluation of these techniques, this study proposed a method: first, the optimal hyperparameter value sets achieved by GS were regarded as the benchmark; then, the other HPO techniques were evaluated and compared with the benchmark in convergence, optimization orientation, and consistency of the optimized values. The results indicated that the TPE-based BO algorithm was recommended because it yielded stable convergence, reasonable optimization orientation, and the highest consistency rates with the benchmark values. The optimization consistency rates via TPE for the hyperparameters hidden layers, hidden dimension, learning rate, and batch size were 86.7%, 73.3%, 73.3%, and 80.0%, respectively. Unlike the evaluation of HPO techniques directly based on the prediction performance of the optimized NN in a single HPO test, the proposed benchmark-based HPO evaluation approach is feasible and robust.
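A hedged sketch of TPE-based optimization with the hyperopt library over the four hyperparameters named above (hidden layers, hidden dimension, learning rate, batch size); `train_and_score` is a hypothetical stand-in for fitting the water-quality NN and returning a validation error, and the search space is an assumption.

```python
from hyperopt import Trials, fmin, hp, tpe

def train_and_score(params):
    # Hypothetical stand-in: build and train the surface-water-quality NN with these
    # hyperparameters and return a validation error. A synthetic surrogate keeps it runnable.
    return (
        abs(params["hidden_layers"] - 2)
        + abs(params["hidden_dim"] - 64) / 64
        + abs(params["learning_rate"] - 1e-3) * 100
        + abs(params["batch_size"] - 32) / 32
    )

space = {
    "hidden_layers": hp.choice("hidden_layers", [1, 2, 3, 4]),
    "hidden_dim": hp.choice("hidden_dim", [16, 32, 64, 128]),
    "learning_rate": hp.loguniform("learning_rate", -9, -3),   # roughly 1e-4 to 5e-2
    "batch_size": hp.choice("batch_size", [16, 32, 64, 128]),
}

trials = Trials()
best = fmin(fn=train_and_score, space=space, algo=tpe.suggest, max_evals=100, trials=trials)
print("best (hp.choice values are returned as indices):", best)
```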
To advance the circular economy (CE), it is crucial to gain insights into the evolution of public attention, cognitive pathways related to circular products, and key public concerns. To achieve these objectives, we collected data from diverse platforms, including Twitter, Reddit, and The Guardian, and utilised three topic models to analyse the data. Given that the performance of topic modelling may vary depending on hyperparameter settings, we proposed a novel framework that integrates twin (single- and multi-objective) hyperparameter optimisation for CE analysis. Systematic experiments were conducted to determine appropriate hyperparameters under different constraints, providing valuable insights into the correlations between CE and public attention. Our findings reveal that the economic implications of sustainability and circular practices, particularly around recyclable materials and environmentally sustainable technologies, remain a significant public concern. Topics related to sustainable development and environmental protection technologies are particularly prominent on The Guardian, while Twitter discussions are comparatively sparse. These insights highlight the importance of targeted education programmes, business incentives to adopt CE practices, and stringent waste management policies alongside improved recycling processes.
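To illustrate why topic-model hyperparameters matter, the sketch below sweeps one key setting (the number of topics) for an LDA model and scores each fit by perplexity; the 20-newsgroups corpus is only a stand-in, since the Twitter/Reddit/Guardian data is not reproduced in this listing, and this single-objective sweep is far simpler than the study's twin-optimisation framework.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Stand-in corpus; the study's own data came from Twitter, Reddit, and The Guardian.
docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]
X = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)

# Sweep one key hyperparameter (number of topics); lower perplexity is better.
for k in (5, 10, 20, 40):
    lda = LatentDirichletAllocation(n_components=k, learning_method="online", random_state=0)
    lda.fit(X)
    print(k, "perplexity:", round(lda.perplexity(X), 1))
```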
基金supported by the MSIT(Ministry of Science and ICT),Republic of Korea,under the ITRC(Information Technology Research Center)Support Program(IITP-2024-RS-2022-00156354)supervised by the IITP(Institute for Information&Communications Technology Planning&Evaluation)supported by the Technology Development Program(RS-2023-00264489)funded by the Ministry of SMEs and Startups(MSS,Republic of Korea).
文摘Fire can cause significant damage to the environment,economy,and human lives.If fire can be detected early,the damage can be minimized.Advances in technology,particularly in computer vision powered by deep learning,have enabled automated fire detection in images and videos.Several deep learning models have been developed for object detection,including applications in fire and smoke detection.This study focuses on optimizing the training hyperparameters of YOLOv8 andYOLOv10models usingBayesianTuning(BT).Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance.Specifically,the proposed approach improves the mean average precision at an Intersection over Union(IoU)threshold of 0.5(mAP50)of the YOLOv8s,YOLOv10s,YOLOv8l,and YOLOv10lmodels by 0.26,0.21,0.84,and 0.63,respectively,compared tomodels trainedwith the default hyperparameters.The performance gains are more pronounced in larger models,YOLOv8l and YOLOv10l,than in their smaller counterparts,YOLOv8s and YOLOv10s.Furthermore,YOLOv8 models consistently outperform YOLOv10,with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT.These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
文摘The selection of hyperparameters in regularized least squares plays an important role in large-scale system identification. The traditional methods for selecting hyperparameters are based on experience or marginal likelihood maximization method, which are inaccurate or computationally expensive. In this paper, two posterior methods are proposed to select hyperparameters based on different prior knowledge (constraints), which can obtain the optimal hyperparameters using the optimization theory. Moreover, we also give the theoretical optimal constraints, and verify its effectiveness. Numerical simulation shows that the hyperparameters and parameter vector estimate obtained by the proposed methods are the optimal ones.
基金funded by the Natural Science Foundation of Chongqing(Grants No.CSTB2022NSCQ-MSX0594)the Humanities and Social Sciences Research Project of the Ministry of Education(Grants No.16YJCZH061).
文摘Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping(LSM)studies.However,these algorithms possess distinct computational strategies and hyperparameters,making it challenging to propose an ideal LSM model.To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM,this study constructed a geospatial database comprising 12 conditioning factors,such as elevation,stratum,and annual average rainfall.The XGBoost(XGB),LightGBM(LGBM),and CatBoost(CB)algorithms were employed to construct the LSM model.Furthermore,the Bayesian optimization(BO),particle swarm optimization(PSO),and Hyperband optimization(HO)algorithms were applied to optimizing the LSM model.The boosting algorithms exhibited varying performances,with CB demonstrating the highest precision,followed by LGBM,and XGB showing poorer precision.Additionally,the hyperparameter optimization algorithms displayed different performances,with HO outperforming PSO and BO showing poorer performance.The HO-CB model achieved the highest precision,boasting an accuracy of 0.764,an F1-score of 0.777,an area under the curve(AUC)value of 0.837 for the training set,and an AUC value of 0.863 for the test set.The model was interpreted using SHapley Additive exPlanations(SHAP),revealing that slope,curvature,topographic wetness index(TWI),degree of relief,and elevation significantly influenced landslides in the study area.This study offers a scientific reference for LSM and disaster prevention research.This study examines the utilization of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District.It proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models.However,limitations exist concerning the generalizability of the model and the data processing,which require further exploration in subsequent studies.
基金supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU),Grant Number IMSIU-RG23151.
文摘This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST(Internet of Sensing Things)device.Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning.Significant improvements were observed across various models,with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-Score,recall,and precision.The study underscores the critical role of tailored hyperparameter tuning in optimizing these models,revealing diverse outcomes among algorithms.Decision Trees and Random Forests exhibited stable performance throughout the evaluation.While enhancing accuracy,hyperparameter optimization also led to increased execution time.Visual representations and comprehensive results support the findings,confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease.This research contributes to advancing the understanding and application of machine learning in healthcare,particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
文摘Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changing their parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.
基金supported in part by the National Natural Science Foundation of China under Grant No.61503059
文摘Hyperparameters are important for machine learning algorithms since they directly control the behaviors of training algorithms and have a significant effect on the performance of machine learning models. Several techniques have been developed and successfully applied for certain application domains. However, this work demands professional knowledge and expert experience. And sometimes it has to resort to the brute-force search.Therefore, if an efficient hyperparameter optimization algorithm can be developed to optimize any given machine learning method, it will greatly improve the efficiency of machine learning. In this paper, we consider building the relationship between the performance of the machine learning models and their hyperparameters by Gaussian processes. In this way, the hyperparameter tuning problem can be abstracted as an optimization problem and Bayesian optimization is used to solve the problem. Bayesian optimization is based on the Bayesian theorem. It sets a prior over the optimization function and gathers the information from the previous sample to update the posterior of the optimization function. A utility function selects the next sample point to maximize the optimization function.Several experiments were conducted on standard test datasets. Experiment results show that the proposed method can find the best hyperparameters for the widely used machine learning models, such as the random forest algorithm and the neural networks, even multi-grained cascade forest under the consideration of time cost.
文摘Neural networks(NNs),as one of the most robust and efficient machine learning methods,have been commonly used in solving several problems.However,choosing proper hyperparameters(e.g.the numbers of layers and neurons in each layer)has a significant influence on the accuracy of these methods.Therefore,a considerable number of studies have been carried out to optimize the NN hyperpaxameters.In this study,the genetic algorithm is applied to NN to find the optimal hyperpaxameters.Thus,the deep energy method,which contains a deep neural network,is applied first on a Timoshenko beam and a plate with a hole.Subsequently,the numbers of hidden layers,integration points,and neurons in each layer are optimized to reach the highest accuracy to predict the stress distribution through these structures.Thus,applying the proper optimization method on NN leads to significant increase in the NN prediction accuracy after conducting the optimization in various examples.
文摘BACKGROUND Liver disease indicates any pathology that can harm or destroy the liver or prevent it from normal functioning.The global community has recently witnessed an increase in the mortality rate due to liver disease.This could be attributed to many factors,among which are human habits,awareness issues,poor healthcare,and late detection.To curb the growing threats from liver disease,early detection is critical to help reduce the risks and improve treatment outcome.Emerging technologies such as machine learning,as shown in this study,could be deployed to assist in enhancing its prediction and treatment.AIM To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning with a view to assist in early detection,diagnosis,and reduction of risks and mortality associated with the disease.METHODS The dataset used in this study consisted of 416 people with liver problems and 167 with no such history.The data were collected from the state of Andhra Pradesh,India,through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords.The population was divided into two sets depending on the disease state of the patient.This binary information was recorded in the attribute"is_patient".RESULTS The results indicated that the chi-square automated interaction detection and classification and regression trees models achieved an accuracy level of 71.36%and 73.24%,respectively,which was much better than the conventional method.The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis(scarring)and to enhance the survival of patients.The study showed the potential of machine learning in health care,especially as it concerns disease prediction and monitoring.CONCLUSION This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease.However,relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
基金supported in part by the National Key Research and Development Program of China under Grant 2019YFB2102102in part by the National Natural Science Foundations of China under Grant 62176094 and Grant 61873097+2 种基金in part by the Key‐Area Research and Development of Guangdong Province under Grant 2020B010166002in part by the Guangdong Natural Science Foundation Research Team under Grant 2018B030312003in part by the Guangdong‐Hong Kong Joint Innovation Platform under Grant 2018B050502006.
文摘Research into automatically searching for an optimal neural network(NN)by optimi-sation algorithms is a significant research topic in deep learning and artificial intelligence.However,this is still challenging due to two issues:Both the hyperparameter and ar-chitecture should be optimised and the optimisation process is computationally expen-sive.To tackle these two issues,this paper focusses on solving the hyperparameter and architecture optimization problem for the NN and proposes a novel light‐weight scale‐adaptive fitness evaluation‐based particle swarm optimisation(SAFE‐PSO)approach.Firstly,the SAFE‐PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN.Secondly,the computational cost can be reduced by using multi‐scale accuracy evaluation methods to evaluate candidates.Thirdly,a stagnation‐based switch strategy is proposed to adaptively switch different evaluation methods to better balance the search performance and computational cost.The SAFE‐PSO algorithm is tested on two widely used datasets:The 10‐category(i.e.,CIFAR10)and the 100−cate-gory(i.e.,CIFAR100).The experimental results show that SAFE‐PSO is very effective and efficient,which can not only find a promising NN automatically but also find a better NN than compared algorithms at the same computational cost.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number(RGP 2/46/43)Princess Nourah bint Abdulrahman UniversityResearchers Supporting Project number(PNURSP2022R140)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research atUmmAl-Qura University for supporting this work by Grant Code:(22UQU4210118DSR12).
文摘Cyberbullying(CB)is a challenging issue in social media and it becomes important to effectively identify the occurrence of CB.The recently developed deep learning(DL)models pave the way to design CB classifier models with maximum performance.At the same time,optimal hyperparameter tuning process plays a vital role to enhance overall results.This study introduces a Teacher Learning Genetic Optimization with Deep Learning Enabled Cyberbullying Classification(TLGODL-CBC)model in Social Media.The proposed TLGODL-CBC model intends to identify the existence and non-existence of CB in social media context.Initially,the input data is cleaned and pre-processed to make it compatible for further processing.Followed by,independent recurrent autoencoder(IRAE)model is utilized for the recognition and classification of CBs.Finally,the TLGO algorithm is used to optimally adjust the parameters related to the IRAE model and shows the novelty of the work.To assuring the improved outcomes of the TLGODLCBC approach,a wide range of simulations are executed and the outcomes are investigated under several aspects.The simulation outcomes make sure the improvements of the TLGODL-CBC model over recent approaches.
文摘Prediction and diagnosis of cardiovascular diseases(CVDs)based,among other things,on medical examinations and patient symptoms are the biggest challenges in medicine.About 17.9 million people die from CVDs annually,accounting for 31%of all deaths worldwide.With a timely prognosis and thorough consideration of the patient’s medical history and lifestyle,it is possible to predict CVDs and take preventive measures to eliminate or control this life-threatening disease.In this study,we used various patient datasets from a major hospital in the United States as prognostic factors for CVD.The data was obtained by monitoring a total of 918 patients whose criteria for adults were 28-77 years old.In this study,we present a data mining modeling approach to analyze the performance,classification accuracy and number of clusters on Cardiovascular Disease Prognostic datasets in unsupervised machine learning(ML)using the Orange data mining software.Various techniques are then used to classify the model parameters,such as k-nearest neighbors,support vector machine,random forest,artificial neural network(ANN),naïve bayes,logistic regression,stochastic gradient descent(SGD),and AdaBoost.To determine the number of clusters,various unsupervised ML clustering methods were used,such as k-means,hierarchical,and density-based spatial clustering of applications with noise clustering.The results showed that the best model performance analysis and classification accuracy were SGD and ANN,both of which had a high score of 0.900 on Cardiovascular Disease Prognostic datasets.Based on the results of most clustering methods,such as k-means and hierarchical clustering,Cardiovascular Disease Prognostic datasets can be divided into two clusters.The prognostic accuracy of CVD depends on the accuracy of the proposed model in determining the diagnostic model.The more accurate the model,the better it can predict which patients are at risk for CVD.
基金Project supported by the Natural Science Foundation of Shandong Province,China(Grant No.ZR2021MF051)。
文摘Aiming at training the feed-forward threshold neural network consisting of nondifferentiable activation functions, the approach of noise injection forms a stochastic resonance based threshold network that can be optimized by various gradientbased optimizers. The introduction of injected noise extends the noise level into the parameter space of the designed threshold network, but leads to a highly non-convex optimization landscape of the loss function. Thus, the hyperparameter on-line learning procedure with respective to network weights and noise levels becomes of challenge. It is shown that the Adam optimizer, as an adaptive variant of stochastic gradient descent, manifests its superior learning ability in training the stochastic resonance based threshold network effectively. Experimental results demonstrate the significant improvement of performance of the designed threshold network trained by the Adam optimizer for function approximation and image classification.
文摘When it comes to decreasing margins and increasing energy effi-ciency in near-threshold and sub-threshold processors,timing error resilience may be viewed as a potentially lucrative alternative to examine.On the other hand,the currently employed approaches have certain restrictions,including high levels of design complexity,severe time constraints on error consolidation and propagation,and uncontaminated architectural registers(ARs).The design of near-threshold circuits,often known as NT circuits,is becoming the approach of choice for the construction of energy-efficient digital circuits.As a result of the exponentially decreased driving current,there was a reduction in performance,which was one of the downsides.Numerous studies have advised the use of NT techniques to chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss.Over the past several years,there has been a clear growth in interest in the development of artificial intelligence hardware with low energy consumption(AI).This has resulted in both large corporations and start-ups producing items that compete on the basis of varying degrees of performance and energy use.This technology’s ultimate goal was to provide levels of efficiency and performance that could not be achieved with graphics processing units or general-purpose CPUs.To achieve this objective,the technology was created to integrate several processing units into a single chip.To accomplish this purpose,the hardware was designed with a number of unique properties.In this study,an Energy Effi-cient Hyperparameter Tuned Deep Neural Network(EEHPT-DNN)model for Variation-Tolerant Near-Threshold Processor was developed.In order to improve the energy efficiency of artificial intelligence(AI),the EEHPT-DNN model employs several AI techniques.The notion focuses mostly on the repercussions of embedded technologies positioned at the network’s edge.The presented model employs a deep stacked sparse autoencoder(DSSAE)model with the objective of creating a variation-tolerant NT processor.The time-consuming method of modifying hyperparameters through trial and error is substituted with the marine predators optimization algorithm(MPO).This method is utilised to modify the hyperparameters associated with the DSSAE model.To validate that the proposed EEHPT-DNN model has a higher degree of functionality,a full simulation study is conducted,and the results are analysed from a variety of perspectives.This was completed so that the enhanced performance could be evaluated and analysed.According to the results of the study that compared numerous DL models,the EEHPT-DNN model performed significantly better than the other models.
基金supported by National Natural Science Foundation of China (Grant:41901296,62202147).
文摘Hyperparameters play a vital impact in the performance of most machine learning algorithms.It is a challenge for traditional methods to con-figure hyperparameters of the capsule network to obtain high-performance manually.Some swarm intelligence or evolutionary computation algorithms have been effectively employed to seek optimal hyperparameters as a com-binatorial optimization problem.However,these algorithms are prone to get trapped in the local optimal solution as random search strategies are adopted.The inspiration for the hybrid rice optimization(HRO)algorithm is from the breeding technology of three-line hybrid rice in China,which has the advantages of easy implementation,less parameters and fast convergence.In the paper,genetic search is combined with the hybrid rice optimization algorithm(GHRO)and employed to obtain the optimal hyperparameter of the capsule network automatically,that is,a probability search technique and a hybridization strategy belong with the primary HRO.Thirteen benchmark functions are used to evaluate the performance of GHRO.Furthermore,the MNIST,Chest X-Ray(pneumonia),and Chest X-Ray(COVID-19&pneumonia)datasets are also utilized to evaluate the capsule network learnt by GHRO.The experimental results show that GHRO is an effective method for optimizing the hyperparameters of the capsule network,which is able to boost the performance of the capsule network on image classification.
基金the National Key Research and Development Program of China(No.2022ZD0119003)the National Natural Science Foundation of China(No.61834005).
文摘Hyperparameter optimization is considered as one of the most challenges in deep learning and dominates the precision of model in a certain.Recent proposals tried to solve this issue through the particle swarm optimization(PSO),but its native defect may result in the local optima trapped and convergence difficulty.In this paper,the genetic operations are introduced to the PSO,which makes the best hyperparameter combination scheme for specific network architecture be located easier.Spe-cifically,to prevent the troubles caused by the different data types and value scopes,a mixed coding method is used to ensure the effectiveness of particles.Moreover,the crossover and mutation opera-tions are added to the process of particles updating,to increase the diversity of particles and avoid local optima in searching.Verified with three benchmark datasets,MNIST,Fashion-MNIST,and CIFAR10,it is demonstrated that the proposed scheme can achieve accuracies of 99.58%,93.39%,and 78.96%,respectively,improving the accuracy by about 0.1%,0.5%,and 2%,respectively,compared with that of the PSO.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code 22UQU4210118DSR33, and the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding Program (Grant Code NU/RG/SERC/11/7).
Abstract: This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique, which aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is used to generate the summary. The study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters of the DDNN model, since they considerably affect summarization efficiency; this phase constitutes the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison outcomes confirm that the AATS-HTDDNN model performs better than other approaches.
Abstract: Breast Cancer (BC) is currently the second most devastating form of cancer in people, particularly in women. In the healthcare industry, Machine Learning (ML) is commonly employed for fatal disease prediction. Because breast cancer has a favourable prognosis when detected early, a model was created using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The model's overarching aim is to compare the effectiveness of five well-known ML classifiers, Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), against the conventional method. The main tactic used to counterbalance the conventional method was hyperparameter tuning via grid search, which improved accuracy, precision, recall, F1 score, and the AUC-ROC curve. With hyperparameter tuning, accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. In this investigation, KNN outperformed all other classifiers in terms of accuracy, achieving a score of 98.83%. In conclusion, the study shows that KNN works well with hyperparameter tuning, and that this prediction approach is useful for prognosticating breast cancer in women, with viable performance and more accurate findings than the conventional approach.
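As a minimal illustration of the grid-search tuning described above, the sketch below tunes a KNN classifier on the WDBC data shipped with scikit-learn. The parameter grid, cross-validation settings, and train/test split are assumptions for demonstration, not the study's exact configuration.

```python
# Grid-search hyperparameter tuning for KNN on WDBC (illustrative settings).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsClassifier())])
grid = GridSearchCV(
    pipe,
    param_grid={"knn__n_neighbors": [3, 5, 7, 9, 11],
                "knn__weights": ["uniform", "distance"],
                "knn__p": [1, 2]},          # Manhattan vs. Euclidean distance
    scoring="accuracy",
    cv=5,
)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))
```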
Funding: Supported by the National Key R&D Program of China (No. 2022YFE0196100), the National Natural Science Foundation of China (Nos. 12071460 and 72401205), the Special Innovation Projects of Ordinary Colleges and Universities in Guangdong Province (No. 2024KTSCX258), and the Shenzhen Fundamental Research Program Stability Support Program for Higher Education Institutions (No. 20231127142912001).
Abstract: The rapid evolution of distributed energy resources, particularly photovoltaic systems, poses a formidable challenge: maintaining a delicate balance between energy supply and demand while minimizing costs. The integrated nature of distributed markets, blending centralized and decentralized elements, holds the promise of maximizing social welfare and significantly reducing overall costs, including computational and communication expenses. Achieving this balance, however, requires careful consideration of various hyperparameter sets, encompassing factors such as the number of communities, the community detection method, and the trading mechanism employed among nodes. To address this challenge, we introduce a neural network-based framework, the Energy Trading-based Artificial Neural Network (ET-ANN), which outperforms existing algorithms. Our experiments underscore the superiority of ET-ANN in minimizing total energy transaction costs while maximizing social welfare within photovoltaic networks.
Funding: Financially supported by the National Key R&D Project (No. 2022YFC3203203) and the Shaanxi Province Science Fund for Distinguished Young Scholars (No. S2023-JC-JQ-0036).
Abstract: Neural networks (NNs) have been used extensively in surface water prediction tasks thanks to improvements in computing algorithms and the accumulation of data. An essential step in developing an NN is hyperparameter selection. In practice, hyperparameters are commonly determined manually in studies of NNs for water resources tasks, which may introduce considerable randomness and require significant computation time; hyperparameter optimization (HPO) is therefore essential. This study adopted five representative HPO techniques for surface water quality prediction tasks: grid sampling (GS), random search (RS), the genetic algorithm (GA), Bayesian optimization (BO) based on the Gaussian process (GP), and the tree Parzen estimator (TPE). To evaluate these techniques, this study proposed the following method: first, the optimal hyperparameter value sets found by GS were taken as the benchmark; then, the other HPO techniques were evaluated and compared with the benchmark in terms of convergence, optimization orientation, and consistency of the optimized values. The results indicate that the TPE-based BO algorithm is recommended because it yielded stable convergence, reasonable optimization orientation, and the highest consistency rates with the benchmark values. The optimization consistency rates via TPE for the hyperparameters hidden layers, hidden dimension, learning rate, and batch size were 86.7%, 73.3%, 73.3%, and 80.0%, respectively. Unlike evaluating HPO techniques directly on the prediction performance of the optimized NN in a single HPO test, the proposed benchmark-based HPO evaluation approach is feasible and robust.
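The study recommends TPE-based Bayesian optimization; the sketch below shows one common TPE implementation, Optuna's TPESampler (the paper does not name a library, so this is an assumption). The search ranges mirror the tuned settings mentioned above (hidden layers, hidden dimension, learning rate, batch size), while `train_and_score` is a toy stand-in for training the NN and returning a validation error.

```python
# TPE-based hyperparameter search with Optuna (assumed library choice).
import optuna

def train_and_score(hidden_layers, hidden_dim, learning_rate, batch_size):
    # Placeholder for "train the NN and return validation RMSE";
    # a toy surrogate so the sketch runs end to end.
    return ((hidden_layers - 2) ** 2 + abs(learning_rate - 1e-2)
            + 1.0 / hidden_dim + batch_size * 1e-4)

def objective(trial):
    params = {
        "hidden_layers": trial.suggest_int("hidden_layers", 1, 4),
        "hidden_dim": trial.suggest_categorical("hidden_dim", [32, 64, 128, 256]),
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True),
        "batch_size": trial.suggest_categorical("batch_size", [16, 32, 64, 128]),
    }
    return train_and_score(**params)

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=100)
print(study.best_params)
```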
Funding: Funded by the Digital Circular Electrochemical Economy (DCEE) project [EP/V042432/1] and the UK Research and Innovation (UKRI) Interdisciplinary Centre for Circular Chemical Economy [EP/V011863/1 and EP/V011863/2].
Abstract: To advance the circular economy (CE), it is crucial to gain insights into the evolution of public attention, the cognitive pathways related to circular products, and key public concerns. To achieve these objectives, we collected data from diverse platforms, including Twitter, Reddit, and The Guardian, and utilised three topic models to analyse the data. Because the performance of topic modelling may vary with hyperparameter settings, we propose a novel framework that integrates twin (single- and multi-objective) hyperparameter optimisation for CE analysis. Systematic experiments were conducted to determine appropriate hyperparameters under different constraints, providing valuable insights into the correlations between CE and public attention. Our findings reveal that the economic implications of sustainability and circular practices, particularly around recyclable materials and environmentally sustainable technologies, remain a significant public concern. Topics related to sustainable development and environmental protection technologies are particularly prominent on The Guardian, while Twitter discussions are comparatively sparse. These insights highlight the importance of targeted education programmes, business incentives to adopt CE practices, and stringent waste management policies alongside improved recycling processes.
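To ground the idea of tuning topic-model hyperparameters, the sketch below performs a simple single-objective sweep: it varies the number of LDA topics and scores each setting by c_v coherence using gensim. This illustrates only the simplest case; the framework described above also covers multi-objective tuning and other hyperparameters, and the choice of LDA, the candidate values, and the `docs` input (tokenised texts) are assumptions.

```python
# Single-objective topic-model hyperparameter sweep (assumed sketch).
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def tune_num_topics(docs, candidates=(5, 10, 15, 20)):
    """Pick the number of topics that maximises c_v coherence on docs."""
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    scores = {}
    for k in candidates:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                       random_state=0, passes=5)
        scores[k] = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                                   coherence="c_v").get_coherence()
    best_k = max(scores, key=scores.get)
    return best_k, scores
```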