Osteoarthritis (OA) is a degenerative joint disease with significant clinical and societal impact. Traditional diagnostic methods, including subjective clinical assessments and imaging techniques such as X-rays and MRIs, are often limited in their ability to detect early-stage OA or capture subtle joint changes. These limitations result in delayed diagnoses and inconsistent outcomes. Additionally, the analysis of omics data is challenged by the complexity and high dimensionality of biological datasets, making it difficult to identify key molecular mechanisms and biomarkers. Recent advancements in artificial intelligence (AI) offer transformative potential to address these challenges. This review systematically explores the integration of AI into OA research, focusing on applications such as AI-driven early screening and risk prediction from electronic health records (EHR), automated grading and morphological analysis of imaging data, and biomarker discovery through multi-omics integration. By consolidating progress across clinical, imaging, and omics domains, this review provides a comprehensive perspective on how AI is reshaping OA research. The findings have the potential to drive innovations in personalized medicine and targeted interventions, addressing longstanding challenges in OA diagnosis and management.
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede not only performance and generalizability but also interpretability, which is crucial for clinical trust and use. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
Over the past decade, artificial intelligence (AI) has evolved at an unprecedented pace, transforming technology, industry, and society. From diagnosing diseases with remarkable accuracy to powering self-driving cars and revolutionizing personalized learning, AI is reshaping our world in ways once thought impossible. Spanning fields such as machine learning, deep learning, natural language processing, and robotics, as well as generative systems such as ChatGPT, AI continues to push the boundaries of innovation. As AI continues to advance, it is vital to have a platform that not only disseminates cutting-edge research innovations but also fosters broad discussions on its societal impact, ethical considerations, and interdisciplinary applications. With this vision in mind, we proudly introduce Artificial Intelligence Science and Engineering (AISE), a journal dedicated to nurturing the next wave of AI innovation and engineering applications. Our mission is to provide a premier outlet where researchers can share high-quality, impactful studies and collaborate to advance AI across academia, industry, and beyond.
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) databases, which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represents an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models and established DRes-CNN as a reliable solution for addressing missing data.
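The abstract does not reproduce the DRes-CNN architecture itself, but the general idea of residual convolutional imputation can be sketched roughly as follows: treat each record as a one-dimensional signal, mark missing entries with a mask channel, and train a small residual stack to reconstruct the full record. Everything below (layer sizes, depth, the mask-channel encoding) is an illustrative assumption, not the authors' published model.

```python
# Minimal sketch of residual convolutional imputation (hypothetical, not DRes-CNN).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        # Skip connection: the block refines, rather than replaces, its input.
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class ResConvImputer(nn.Module):
    def __init__(self, hidden: int = 32, depth: int = 4):
        super().__init__()
        self.inp = nn.Conv1d(2, hidden, kernel_size=3, padding=1)   # value + mask channels
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(depth)])
        self.out = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, values, mask):
        # values: (batch, n_features) with missing entries zero-filled; mask: 1 = observed.
        x = torch.stack([values, mask], dim=1)                       # (batch, 2, n_features)
        return self.out(self.blocks(self.inp(x))).squeeze(1)        # (batch, n_features)

model = ResConvImputer()
values = torch.rand(8, 20)
mask = (torch.rand(8, 20) > 0.2).float()          # roughly 20% of entries "missing"
imputed = model(values * mask, mask)              # reconstructed records
```

In training, the reconstruction loss would typically be computed on entries that were deliberately masked out, so the network learns to recover values it has never seen.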
Hepatitis is an infection that affects the liver through contaminated foods or blood transfusions, and it has many types, from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and in helping physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on the hepatitis dataset. The dataset contained missing values, which were processed, and outliers, which were removed. The dataset was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). The features of the dataset were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank the percentage contribution of each feature to the diagnosis of hepatitis, followed by selection of important features using the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after the features had been processed with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results for diagnosing hepatitis. The RF with the RFE and PCA methods achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
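As a rough illustration of the workflow described here, the sketch below chains SMOTE balancing, RFE-based feature ranking, PCA projection, and a random forest classifier using scikit-learn and imbalanced-learn. The synthetic data, feature counts, and parameters are placeholders rather than the study's hepatitis dataset or settings.

```python
# Hedged sketch of a SMOTE -> RFE -> PCA -> RF pipeline on synthetic data.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=19, weights=[0.8, 0.2], random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)            # rebalance the classes

ranker = RFE(RandomForestClassifier(random_state=0), n_features_to_select=10)
X_sel = ranker.fit_transform(X_res, y_res)                          # keep top-ranked features
X_pca = PCA(n_components=5).fit_transform(X_sel)                    # compress to 5 components

X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y_res, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```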
Background: In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. Methods: This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. To this end, we introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information, and all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities. Results: Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions: Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in parallax image regions remains a topic for future research.
Multiple Sclerosis (MS) poses significant health risks. Patients may face neurodegeneration, mobility issues, cognitive decline, and a reduced quality of life. Manual diagnosis by neurologists is prone to limitations, so automated classification using Artificial Intelligence (AI) techniques plays a crucial role in enabling early detection and preventing progression of MS to advanced stages. This study developed hybrid systems integrating XGBoost (eXtreme Gradient Boosting) with multi-CNN (Convolutional Neural Network) features based on Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS) algorithms for early classification of MRI (Magnetic Resonance Imaging) images in a multi-class and binary-class MS dataset. All hybrid systems started by enhancing the MRI images using a fusion of a Gaussian filter and Contrast-Limited Adaptive Histogram Equalization (CLAHE). Then, the Gradient Vector Flow (GVF) algorithm was applied to select white matter (the regions of interest) within the brain and segment it from the surrounding brain structures. These regions of interest were processed by CNN models (ResNet101, DenseNet201, and MobileNet) to extract deep feature maps, which were then combined into fused feature vectors of multi-CNN model combinations (ResNet101-DenseNet201, DenseNet201-MobileNet, ResNet101-MobileNet, and ResNet101-DenseNet201-MobileNet). The multi-CNN features underwent dimensionality reduction using the ACO and MESbS algorithms to discard unimportant features and retain important ones. The XGBoost classifier employed the resulting feature vectors for classification. All developed hybrid systems displayed promising outcomes. For multi-class classification, the XGBoost model using ResNet101-DenseNet201-MobileNet features selected by ACO attained 99.4% accuracy, 99.45% precision, and 99.75% specificity, surpassing prior studies (93.76% accuracy). It reached 99.6% accuracy, 99.65% precision, and 99.55% specificity in binary-class classification. These results demonstrate the effectiveness of multi-CNN fusion with feature selection in improving MS classification accuracy.
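The core fusion step, deep features from several pretrained CNN backbones concatenated and passed to XGBoost, can be sketched roughly as below. The GVF segmentation, CLAHE enhancement, and ACO/MESbS selection stages are omitted, and the random tensors stand in for preprocessed MRI regions of interest; none of this reflects the study's actual training configuration.

```python
# Illustrative multi-CNN feature fusion feeding an XGBoost classifier.
import torch
import torchvision.models as models
from xgboost import XGBClassifier

# Pretrained backbones (weights are downloaded on first use).
resnet = models.resnet101(weights="DEFAULT").eval()
densenet = models.densenet201(weights="DEFAULT").eval()
resnet.fc = torch.nn.Identity()            # expose 2048-d pooled features
densenet.classifier = torch.nn.Identity()  # expose 1920-d pooled features

def fused_features(batch):
    # batch: (N, 3, 224, 224) preprocessed MRI regions of interest
    with torch.no_grad():
        return torch.cat([resnet(batch), densenet(batch)], dim=1).numpy()

images = torch.rand(16, 3, 224, 224)        # stand-in for segmented white-matter ROIs
labels = torch.randint(0, 2, (16,)).numpy()
clf = XGBClassifier(n_estimators=100).fit(fused_features(images), labels)
```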
Parkinson’s disease (PD) is a progressive neurodegenerative disorder characterized by tremors, rigidity, and decreased movement. PD poses risks to individuals’ lives and independence. Early detection of PD is essential because it allows timely intervention, which can slow disease progression and improve outcomes. Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose the disease. In addition, subjectivity and the shortage of doctors relative to the number of patients constitute obstacles to early diagnosis. Artificial intelligence (AI) techniques, especially deep learning and automated learning models, provide promising solutions to address these deficiencies in manual diagnosis. This study develops robust systems for PD diagnosis by analyzing handwritten spiral and wave graphical images. The handwritten graphic images of the PD dataset are enhanced using two overlapping filters, the average filter and the Laplacian filter, to improve image quality and highlight essential features. The enhanced images are segmented to isolate regions of interest (ROIs) from the rest of the image using a gradient vector flow (GVF) algorithm, which ensures that features are extracted only from relevant regions. The segmented ROIs are fed into convolutional neural network (CNN) models, namely DenseNet169, MobileNet, and VGG16, to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis. The fine and deep feature maps extracted from the individual CNN models are combined into fused feature vectors for the DenseNet169-MobileNet, MobileNet-VGG16, DenseNet169-VGG16, and DenseNet169-MobileNet-VGG16 models. This fusion technique aims to combine complementary and robust features from several models, which improves the extracted features. Two feature selection algorithms are used to remove redundancy and weak correlations within the combined feature set: Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS). These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated ones, thus optimizing the features to improve system performance. The fused and enhanced feature vectors are fed into two powerful classifiers, XGBoost and random forest (RF), for accurate classification and differentiation between individuals with PD and healthy controls. The proposed hybrid systems show superior performance: the RF classifier using the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method achieved outstanding results, with an area under the curve (AUC) of 99%, sensitivity of 99.6%, accuracy of 99.3%, precision of 99.35%, and specificity of 99.65%.
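As one concrete reading of the enhancement step (smoothing with an average filter, then using the Laplacian to restore edge detail), here is a small OpenCV sketch; the kernel size and the 0.7 weighting are illustrative guesses, not the study's parameters.

```python
# Toy sketch of average-filter smoothing plus Laplacian sharpening for drawing images.
import cv2
import numpy as np

def enhance(gray: np.ndarray) -> np.ndarray:
    smoothed = cv2.blur(gray, (3, 3))                        # average (box) filter
    laplacian = cv2.Laplacian(smoothed, cv2.CV_64F)          # edge/detail response
    sharpened = smoothed.astype(np.float64) - 0.7 * laplacian
    return np.clip(sharpened, 0, 255).astype(np.uint8)

drawing = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in spiral image
enhanced = enhance(drawing)
```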
As mobile edge computing continues to develop, the demand for resource-intensive applications is steadily increasing, placing a significant strain on edge nodes. These nodes are normally subject to various constraints, limited processing capability, limited energy sources, and erratic availability being some of the most common. Correspondingly, these problems require an effective task allocation algorithm to optimize resources while maintaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, known as IPSO, for multi-objective optimization in edge computing to overcome these issues. The IPSO algorithm seeks a trade-off between two important objectives: minimizing energy consumption and reducing task execution time. Thanks to global optimal position mutation and dynamic adjustment of the inertia weight, the proposed optimization algorithm can effectively distribute tasks among edge nodes, reducing both task execution time and energy consumption. In comparative assessments against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO provides better results than these algorithms. For the maximum task size, compared with the benchmark methods, IPSO reduces the execution time by 17.1% and energy consumption by 31.58%. These results support the conclusion that IPSO is an efficient and scalable technique for task allocation in edge environments, providing peak efficiency while handling scarce resources and variable workloads.
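A bare-bones version of the two ingredients highlighted here, a linearly decreasing inertia weight and occasional mutation of the global best, is sketched below on a toy weighted time-plus-energy objective. It illustrates the mechanism only; the paper's IPSO formulation, constraints, and edge-node model are not reproduced.

```python
# Minimal PSO sketch with dynamic inertia weight and global-best mutation.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    time_term = np.sum(x ** 2)              # stand-in for task execution time
    energy_term = np.sum(np.abs(x - 1.0))   # stand-in for energy consumption
    return 0.5 * time_term + 0.5 * energy_term

dim, swarm, iters = 10, 30, 200
pos = rng.uniform(-5, 5, (swarm, dim))
vel = np.zeros((swarm, dim))
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters                                # inertia weight decays over time
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
    if rng.random() < 0.1:                                   # occasional global-best mutation
        candidate = gbest + rng.normal(0, 0.1, dim)
        if cost(candidate) < cost(gbest):
            gbest = candidate

print("best cost:", cost(gbest))
```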
Results of a study on statistical reasoning that six high school teachers developed in a computer environment are presented in this article. A sequence of three activities supported by the software Fathom was presented to the teachers in a course to investigate the reasoning teachers develop about data analysis, particularly about the concept of distribution, which involves important ideas such as averages, variability, and graphical representations. The activities were designed so that the teachers first analyzed quantitative variables separately, and later analyzed a qualitative variable versus a quantitative variable with the objective of comparing distributions using concepts such as averages, variability, shape, and outliers. The instructions in each activity directed the teachers to use all the resources of the software necessary to carry out the complete analysis and to respond to certain questions intended to capture the type of representations they used to answer. The results indicate that, despite the abundance of representations provided by the software, teachers focus on the calculation of averages to describe and compare distributions, rather than on important properties of the data such as variability, shape, and outliers. Many teachers were able to build interesting graphs reflecting important properties of the data but could not use them to support data analysis. Hence, it is necessary to extend teachers' understanding of data analysis so they can take advantage of the cognitive potential that computer tools offer.
Model accuracy and runtime are two key issues for flood warnings in rivers. Traditional hydrodynamic models, which have a rigorous physical mechanism for flood routing, have been widely adopted for water level prediction in river, lake, and urban areas. However, these models require various types of data, in-depth domain knowledge, experience with modeling, and intensive computational time, which hinders short-term or real-time prediction. In this paper, we propose a new framework based on machine learning methods to alleviate these limitations. We develop a wide range of machine learning models, namely linear regression (LR), support vector regression (SVR), random forest regression (RFR), multilayer perceptron regression (MLPR), and light gradient boosting machine regression (LGBMR), to predict the hourly water level at the Le Thuy and Kien Giang stations of the Kien Giang river based on data collected in 2010, 2012, and 2020. Four evaluation metrics, that is, R², Nash-Sutcliffe efficiency, mean absolute error, and root mean square error, are employed to examine the reliability of the proposed models. The results show that the LR model outperforms the SVR, RFR, MLPR, and LGBMR models.
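The comparison loop behind such a study is straightforward to sketch with scikit-learn. Below, synthetic lagged inputs stand in for the Kien Giang records, the Nash-Sutcliffe efficiency is computed directly, and the LightGBM and MLP regressors are left out to keep the example dependency-light.

```python
# Hedged sketch of the multi-model water-level comparison with an NSE metric.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
X = rng.random((500, 4))                              # e.g., lagged levels and rainfall
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

for name, model in [("LR", LinearRegression()), ("SVR", SVR()),
                    ("RFR", RandomForestRegressor(random_state=1))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "NSE =", round(nse(y_te, pred), 3))
```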
Airplanes are a social necessity for the movement of humans, goods, and other cargo. They are generally safe modes of transportation; however, incidents and accidents occasionally occur. To help prevent aviation accidents, it is necessary to develop a machine-learning model that detects and predicts abnormal commercial flights using automatic dependent surveillance–broadcast data. This study combined data-quality detection, anomaly detection, and abnormality-classification-model development. The research methodology involved the following stages: problem statement, data selection and labeling, prediction-model development, deployment, and testing. The data labeling process was based on the rules framed by the International Civil Aviation Organization for commercial, jet-engine flights and validated by expert commercial pilots. The results showed that the best prediction model, quadratic discriminant analysis, was 93% accurate, indicating a “good fit”. Moreover, the model’s area-under-the-curve results for abnormal and normal detection were 0.97 and 0.96, respectively, further confirming its “good fit”.
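The reported classifier, quadratic discriminant analysis, is available off the shelf. The sketch below shows the classification and AUC scoring step on synthetic imbalanced data, standing in for the engineered ADS-B features and ICAO-rule labels used in the study.

```python
# Sketch of QDA classification with accuracy and ROC-AUC scoring on synthetic data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, qda.predict(X_te)))
print("ROC-AUC :", roc_auc_score(y_te, qda.predict_proba(X_te)[:, 1]))
```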
Automatic detection of student engagement levels from videos, which is a spatio-temporal classification problem, is crucial for enhancing the quality of online education. This paper addresses this challenge by proposing four novel hybrid end-to-end deep learning models designed for the automatic detection of student engagement levels in e-learning videos. The evaluation of these models utilizes the DAiSEE dataset, a public repository capturing student affective states in e-learning scenarios. The first model integrates EfficientNetV2-L with a Gated Recurrent Unit (GRU) and attains an accuracy of 61.45%. The second model combines EfficientNetV2-L with a bidirectional GRU (Bi-GRU), yielding an accuracy of 61.56%. The third and fourth models combine EfficientNetV2-L with Long Short-Term Memory (LSTM) and bidirectional LSTM (Bi-LSTM), achieving accuracies of 62.11% and 61.67%, respectively. Our findings demonstrate the viability of these models in effectively discerning student engagement levels, with the EfficientNetV2-L+LSTM model emerging as the most proficient, reaching an accuracy of 62.11%. This study underscores the potential of hybrid spatio-temporal networks in automating the detection of student engagement, thereby contributing to advancements in online education quality.
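The hybrid architecture pattern these models share (a CNN backbone encoding each frame, a recurrent layer aggregating the sequence) can be sketched in PyTorch as below; the smaller EfficientNetV2-S backbone, untrained weights, and layer sizes are illustrative substitutions, not the DAiSEE training setup.

```python
# Sketch of a CNN-per-frame + LSTM-over-frames engagement classifier.
import torch
import torch.nn as nn
from torchvision import models

class EngagementNet(nn.Module):
    def __init__(self, num_classes: int = 4, hidden: int = 256):
        super().__init__()
        backbone = models.efficientnet_v2_s(weights=None)
        backbone.classifier = nn.Identity()            # 1280-d per-frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W) video snippets
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)   # per-frame encoding
        _, (h, _) = self.lstm(feats)                                 # temporal aggregation
        return self.head(h[-1])                                      # logits per engagement level

logits = EngagementNet()(torch.rand(2, 8, 3, 224, 224))              # shape (2, 4)
```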
As the global population continues to expand, the demand for natural resources increases. Unfortunately, human activities account for 23% of greenhouse gas emissions. On a positive note, remote sensing technologies have emerged as a valuable tool in managing our environment. These technologies allow us to monitor land use, plan urban areas, and drive advancements in areas such as agriculture, climate change mitigation, disaster recovery, and environmental monitoring. Recent advances in Artificial Intelligence (AI), computer vision, and earth observation data have enabled unprecedented accuracy in land use mapping. By using transfer learning and fine-tuning with red-green-blue (RGB) bands, we achieved an impressive 99.19% accuracy in land use analysis. Such findings can be used to inform conservation and urban planning policies.
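A minimal transfer-learning-and-fine-tuning sketch of the kind alluded to here is shown below: freeze an ImageNet-pretrained backbone, replace the classification head for the land-use classes, and train only the new layer on RGB patches. The backbone choice (ResNet-50), class count, and hyperparameters are assumptions for illustration.

```python
# Transfer-learning sketch for RGB land-use patches (backbone frozen, new head trained).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10                                       # placeholder land-use categories
model = models.resnet50(weights="DEFAULT")             # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                            # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)                    # stand-in RGB patches
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```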
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the “Orange” machine learning tool is employed to evaluate classifier performance, and automated image processing capabilities are utilized within the tool. The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model’s suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
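The 5-fold cross-validation protocol and the per-fold metrics reported here follow a standard pattern; the sketch below shows that evaluation loop with a simple stand-in classifier, since retraining VGG16 per fold is out of scope for a snippet.

```python
# Sketch of stratified 5-fold evaluation with per-fold precision, recall, and F1.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=64, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (tr, te) in enumerate(skf.split(X, y), start=1):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])   # stand-in for the CNN
    pred = clf.predict(X[te])
    print(f"fold {fold}: precision={precision_score(y[te], pred):.2f} "
          f"recall={recall_score(y[te], pred):.2f} f1={f1_score(y[te], pred):.2f}")
```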
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgments. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate the reliance on historical data, the subjectivity in expert judgment, the inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
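For readers unfamiliar with the base model being hybridized, the COCOMO II post-architecture effort equation can be written out as below. The constants A = 2.94 and B = 0.91 are the standard published calibration values; the nominal scale-factor values are illustrative, and the ANN correction and GSD-specific drivers that the paper adds are not shown.

```python
# Back-of-the-envelope COCOMO II effort calculation:
# effort (person-months) = A * size^E * product(effort multipliers), E = B + 0.01 * sum(scale factors)
def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    E = B + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * (ksloc ** E) * em_product

# Example: a 50 KSLOC project with illustrative nominal scale factors and all
# 17 post-architecture effort multipliers left at their nominal value of 1.0.
effort_pm = cocomo2_effort(50,
                           scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                           effort_multipliers=[1.0] * 17)
print(round(effort_pm, 1), "person-months")
```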
Forecasting river flow is crucial for the optimal planning, management, and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction. Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation compared to standalone approaches. Current researchers have also emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years, summarising data preprocessing, univariate machine learning modelling strategies, the advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and the hybridisation of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches improve the precision of ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics-based, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of cases, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
In this review paper, we present a thorough investigation into the role of pavement technologies in advancing urban sustainability. Our analysis traverses the historical evolution of these technologies, meticulously evaluating their socio-economic and environmental impacts, with a particular emphasis on their role in mitigating the urban heat island effect. At the heart of our research is the evaluation of pavement types and of the variables influencing pavement performance, used within a multi-criteria decision-making (MCDM) framework to choose the optimal pavement application. This framework serves to assess a spectrum of pavement options, revealing insights into the most effective and sustainable practices. By highlighting both the existing challenges and potential innovative solutions within the field, this paper aims to offer a directional compass for future urban planning and infrastructural advancements. This review not only synthesizes the current state of knowledge but also aims to chart a course for future exploration, emphasizing the critical need for innovative and environmentally sensitive pavement technologies in the creation of resilient and sustainable urban environments.
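As a toy illustration of how an MCDM framework turns pavement criteria into a ranking, a simple weighted-sum scoring step is sketched below; the criteria, weights, and scores are invented for illustration and are far simpler than the framework discussed in the review.

```python
# Toy weighted-sum MCDM scoring of candidate pavement options (all values illustrative).
import numpy as np

criteria = ["albedo", "permeability", "cost", "durability"]
weights = np.array([0.3, 0.3, 0.2, 0.2])              # criterion weights, summing to 1
# Rows: candidate pavements; columns: normalized scores in [0, 1] (higher is better).
scores = np.array([[0.2, 0.1, 0.9, 0.8],
                   [0.5, 0.9, 0.5, 0.6],
                   [0.8, 0.2, 0.6, 0.7]])
ranking = scores @ weights
print("best option index:", int(ranking.argmax()), "scores:", ranking.round(2))
```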
Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high pixel intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture- and vessel-based features provides an effective ML model for aiding diagnosis. The ML model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four different pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles of the filtered images. Notably, the study achieves a considerable improvement in categorizing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the methodology implemented.
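The texture-plus-vessel feature idea maps onto standard scikit-image calls. The sketch below computes a few GLCM statistics and a Frangi vesselness summary per image and fits a random forest, omitting the multi-threshold segmentation, the Sato and Meijering filters, and the histogram-based features described above; the random images and labels are placeholders for real CXR data.

```python
# Sketch of GLCM texture + Frangi vesselness features feeding a random forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import frangi
from sklearn.ensemble import RandomForestClassifier

def cxr_features(img: np.ndarray) -> np.ndarray:
    # img: 2-D uint8 chest radiograph
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
    texture = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    vesselness = frangi(img.astype(float) / 255.0)            # vessel-like structure response
    return np.array(texture + [vesselness.mean(), vesselness.std()])

images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = np.random.randint(0, 4, 20)                          # normal, LO, COVID-19, VP
X = np.stack([cxr_features(im) for im in images])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
```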
Depression is a common mental health disorder. With current depression detection methods, specialized physicians often engage in conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and deemed crucial evaluation criteria for depression, have not been effectively utilized. Specialized physicians usually require extensive training and experience to capture changes in these features. Advancements in deep learning technology have provided technical support for capturing non-biological markers. Several researchers have proposed automatic depression estimation (ADE) systems based on sounds and videos to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.
文摘Model accuracy and runtime are two key issues for flood warnings in rivers.Traditional hydrodynamic models,which have a rigorous physical mechanism for flood routine,have been widely adopted for water level prediction in river,lake,and urban areas.However,these models require various types of data,in-depth domain knowledge,experience with modeling,and intensive computational time,which hinders short-term or real-time prediction.In this paper,we propose a new framework based on machine learning methods to alleviate the aforementioned limitation.We develop a wide range of machine learning models such as linear regression(LR),support vector regression(SVR),random forest regression(RFR),multilayer perceptron regression(MLPR),and light gradient boosting machine regression(LGBMR)to predict the hourly water level at Le Thuy and Kien Giang stations of the Kien Giang river based on collected data of 2010,2012,and 2020.Four evaluation metrics,that is,R^(2),Nash-Sutcliffe efficiency,mean absolute error,and root mean square error,are employed to examine the reliability of the proposed models.The results show that the LR model outperforms the SVR,RFR,MLPR,and LGBMR models.
文摘Airplanes are a social necessity for movement of humans,goods,and other.They are generally safe modes of transportation;however,incidents and accidents occasionally occur.To prevent aviation accidents,it is necessary to develop a machine-learning model to detect and predict commercial flights using automatic dependent surveillance–broadcast data.This study combined data-quality detection,anomaly detection,and abnormality-classification-model development.The research methodology involved the following stages:problem statement,data selection and labeling,prediction-model development,deployment,and testing.The data labeling process was based on the rules framed by the international civil aviation organization for commercial,jet-engine flights and validated by expert commercial pilots.The results showed that the best prediction model,the quadratic-discriminant-analysis,was 93%accurate,indicating a“good fit”.Moreover,the model’s area-under-the-curve results for abnormal and normal detection were 0.97 and 0.96,respectively,thus confirming its“good fit”.
Abstract: Automatic detection of student engagement levels from videos, which is a spatio-temporal classification problem, is crucial for enhancing the quality of online education. This paper addresses this challenge by proposing four novel hybrid end-to-end deep learning models for the automatic detection of student engagement levels in e-learning videos. The models are evaluated on the DAiSEE dataset, a public repository capturing student affective states in e-learning scenarios. The first model integrates EfficientNetV2-L with a Gated Recurrent Unit (GRU) and attains an accuracy of 61.45%. The second model combines EfficientNetV2-L with a bidirectional GRU (Bi-GRU), yielding an accuracy of 61.56%. The third and fourth models combine EfficientNetV2-L with Long Short-Term Memory (LSTM) and bidirectional LSTM (Bi-LSTM), achieving accuracies of 62.11% and 61.67%, respectively. Our findings demonstrate the viability of these models in discerning student engagement levels, with the EfficientNetV2-L + LSTM model emerging as the most proficient, reaching an accuracy of 62.11%. This study underscores the potential of hybrid spatio-temporal networks for automating the detection of student engagement, thereby contributing to advancements in online education quality.
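A minimal sketch of the general CNN-backbone-plus-LSTM idea (per-frame features aggregated over time), not the authors' exact architecture; the clip length, frame size, LSTM width, and four-level output used here are illustrative assumptions.

```python
import tensorflow as tf

FRAMES, H, W = 16, 224, 224                           # assumed clip length and frame size

# Pretrained EfficientNetV2-L used as a per-frame feature extractor
backbone = tf.keras.applications.EfficientNetV2L(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(H, W, 3))

clip = tf.keras.Input(shape=(FRAMES, H, W, 3))
x = tf.keras.layers.TimeDistributed(backbone)(clip)   # (batch, FRAMES, feature_dim)
x = tf.keras.layers.LSTM(256)(x)                      # temporal aggregation
out = tf.keras.layers.Dense(4, activation="softmax")(x)  # four engagement levels

model = tf.keras.Model(clip, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```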
Abstract: As the global population continues to expand, the demand for natural resources increases. Unfortunately, human activities account for 23% of greenhouse gas emissions. On a positive note, remote sensing technologies have emerged as a valuable tool for managing our environment. These technologies allow us to monitor land use, plan urban areas, and drive advancements in areas such as agriculture, climate change mitigation, disaster recovery, and environmental monitoring. Recent advances in Artificial Intelligence (AI), computer vision, and earth observation data have enabled unprecedented accuracy in land use mapping. By using transfer learning and fine-tuning with red-green-blue (RGB) bands, we achieved 99.19% accuracy in land use analysis. Such findings can inform conservation and urban planning policies.
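A minimal sketch of the transfer-learning-then-fine-tuning workflow on RGB land-use patches; the backbone, class count, image size, and dataset directory are assumptions for illustration, not the study's exact setup.

```python
import tensorflow as tf

NUM_CLASSES, IMG = 10, (224, 224)                     # assumed class count and patch size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "landuse_rgb/train", image_size=IMG, batch_size=32)   # hypothetical directory

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg", input_shape=IMG + (3,))
base.trainable = False                                # stage 1: frozen feature extractor

inputs = tf.keras.Input(shape=IMG + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

base.trainable = True                                 # stage 2: fine-tune at a low learning rate
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```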
Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources such as Pakistan. This study conducts an extensive comparative analysis of machine learning classifiers for ASD detection from facial images to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings establish VGG16 as the most effective classifier under a 5-fold cross-validation approach. Specifically, VGG16 with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a validation accuracy of 99% and a testing accuracy of 87%. The model also achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, obtaining an accuracy of 85%, which reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
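A minimal sketch of VGG16-based binary classification with 5-fold cross-validation, a batch size of 2, and the Adam optimizer, as in the setup described above; the array files, label encoding, and the shortened epoch count are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

images = np.load("face_images.npy").astype("float32")   # hypothetical (N, 224, 224, 3) array
labels = np.load("labels.npy")                           # hypothetical binary ASD / non-ASD labels
images = tf.keras.applications.vgg16.preprocess_input(images)

def build_model():
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       pooling="avg", input_shape=(224, 224, 3))
    base.trainable = False                               # transfer learning: frozen backbone
    model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(skf.split(images, labels)):
    model = build_model()
    model.fit(images[tr], labels[tr], validation_data=(images[va], labels[va]),
              batch_size=2, epochs=10, verbose=0)        # the paper reports 100 epochs
    val_acc = model.evaluate(images[va], labels[va], verbose=0)[1]
    print(f"fold {fold}: validation accuracy = {val_acc:.3f}")
```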
Abstract: Accurate software cost estimation in Global Software Development (GSD) remains challenging because of its reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), depend heavily on accurate historical data, and expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, current GSD models need to be improved to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that combines COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine-learning-based models for software cost estimation. An evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
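A minimal sketch of one way such a hybrid can be wired together: a nominal COCOMO II estimate is computed from the standard post-architecture formula and combined with additional GSD cost drivers as inputs to an ANN that learns a corrected effort. The calibration constants follow the published COCOMO II model; the dataset file, driver columns, and network shape are assumptions, not the authors' exact design.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

A, B = 2.94, 0.91                                      # standard COCOMO II calibration constants

def cocomo_effort(kloc, scale_factors, effort_multipliers):
    """Nominal COCOMO II effort (person-months): A * Size^E * prod(EM), E = B + 0.01 * sum(SF)."""
    E = B + 0.01 * np.sum(scale_factors, axis=1)
    return A * kloc ** E * np.prod(effort_multipliers, axis=1)

df = pd.read_csv("nasa93_gsd.csv")                     # hypothetical dataset with extra GSD drivers
sf = df[[f"sf{i}" for i in range(1, 6)]].to_numpy()    # 5 scale factors
em = df[[f"em{i}" for i in range(1, 18)]].to_numpy()   # 17 effort multipliers
gsd = df[[c for c in df.columns if c.startswith("gsd_")]].to_numpy()  # assumed GSD driver columns

nominal = cocomo_effort(df["kloc"].to_numpy(), sf, em)
X = np.column_stack([np.log(nominal), gsd])            # ANN input: COCOMO estimate + GSD drivers
y = np.log(df["actual_effort"].to_numpy())             # log-effort as regression target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("hybrid R^2 on held-out projects:", ann.score(X_te, y_te))
```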
Funding: The authors thank the anonymous reviewers and the journal editors for their assistance in enhancing the paper's logical organisation and content quality.
Abstract: Forecasting river flow is crucial for the optimal planning, management, and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction, and hybrid techniques are viewed as a viable way to improve the accuracy of univariate streamflow estimation compared with standalone approaches. Recent research has likewise emphasised hybrid models for improving forecast accuracy. Accordingly, this paper presents an updated literature review of applications of hybrid models to streamflow estimation over the last five years, summarising data preprocessing, univariate machine learning modelling strategies, the advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. The study focuses on two types of hybrid models: parameter-optimisation-based hybrid models (OBH) and the hybridisation of parameter-optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches indeed improve ML techniques, and it is among the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. The review found that previous research applied swarm, evolutionary, physics-based, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of cases, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data preprocessing techniques and metaheuristic algorithms.
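A minimal sketch of a parameter-optimisation-based hybrid (OBH) in the sense used above: an evolutionary metaheuristic (here, differential evolution as a stand-in) tunes SVR hyperparameters for univariate streamflow forecasting from lagged flows. The data file, lag count, and hyperparameter bounds are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.optimize import differential_evolution
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

flow = pd.read_csv("daily_streamflow.csv")["flow"].to_numpy()   # hypothetical series
LAGS = 5
# build a lagged design matrix: predict flow[t] from flow[t-5 .. t-1]
X = np.column_stack([flow[i:len(flow) - LAGS + i] for i in range(LAGS)])
y = flow[LAGS:]

def objective(params):
    C, gamma, epsilon = params
    model = SVR(C=C, gamma=gamma, epsilon=epsilon)
    # negative mean cross-validated R^2, since differential_evolution minimises
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

bounds = [(0.1, 100.0), (1e-4, 1.0), (1e-3, 1.0)]      # C, gamma, epsilon search ranges
result = differential_evolution(objective, bounds, maxiter=20, seed=0)
print("best hyperparameters (C, gamma, epsilon):", result.x)
print("best cross-validated R^2:", -result.fun)
```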
Abstract: In this review paper, we present a thorough investigation into the role of pavement technologies in advancing urban sustainability. Our analysis traces the historical evolution of these technologies, evaluating their socio-economic and environmental impacts, with particular emphasis on their role in mitigating the urban heat island effect. At the heart of our research are the evaluation of pavement types and the variables influencing pavement performance, which feed a multi-criteria decision-making (MCDM) framework for choosing the optimal pavement application; this framework serves to assess a spectrum of pavement options, revealing insights into the most effective and sustainable practices. By highlighting both the existing challenges and potential innovative solutions in the field, this paper aims to offer a directional compass for future urban planning and infrastructural advancements. This review not only synthesizes the current state of knowledge but also charts a course for future exploration, emphasizing the critical need for innovative and environmentally sensitive pavement technologies in the creation of resilient and sustainable urban environments.
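A minimal sketch of a simple weighted-sum MCDM ranking of pavement alternatives, illustrating how such a framework can operate; the alternatives, criteria, scores, and weights below are entirely hypothetical and are not taken from the review.

```python
import numpy as np

alternatives = ["conventional asphalt", "permeable concrete", "cool pavement"]
criteria = ["cost", "surface temperature", "durability", "runoff reduction"]
benefit = np.array([False, False, True, True])     # cost and temperature: lower is better

# rows = alternatives, columns = criteria (hypothetical raw scores)
scores = np.array([
    [1.0, 55.0, 8.0, 2.0],
    [1.4, 48.0, 6.0, 9.0],
    [1.2, 42.0, 7.0, 4.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])           # hypothetical criterion weights

# min-max normalisation, inverting cost-type criteria so higher is always better
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

for total, name in sorted(zip(norm @ weights, alternatives), reverse=True):
    print(f"{name}: {total:.2f}")
```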
Abstract: Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high-pixel-intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture and vessel-based features provides an effective ML model for aiding diagnosis. Model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals differing pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles computed on the filtered images. Notably, the study achieves a considerable improvement in distinguishing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the methodology.
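A minimal sketch of the feature-extraction idea described above: GLCM texture properties plus vessel-like responses from the Frangi, Sato, and Meijering filters, with simple histogram statistics, fed to a Random Forest. Image loading, label encoding, and feature choices are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import frangi, sato, meijering
from sklearn.ensemble import RandomForestClassifier

def cxr_features(img):
    """img: 2D uint8 chest X-ray array; returns a fixed-length feature vector."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    img_f = img / 255.0
    vessels = [f(img_f).mean() for f in (frangi, sato, meijering)]  # vessel-like responses
    stats = [img.mean(), img.std(), np.percentile(img, 75)]         # histogram-based statistics
    return np.array(texture + vessels + stats)

# hypothetical arrays of images and labels (0 = normal, 1 = LO, 2 = COVID-19, 3 = VP)
images = np.load("cxr_images.npy")
labels = np.load("cxr_labels.npy")

X = np.stack([cxr_features(im) for im in images])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print("feature importances:", clf.feature_importances_)
```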
Funding: Supported by the Shandong Province Key R&D Program, No. 2021SFGC0504; the Shandong Provincial Natural Science Foundation, No. ZR2021MF079; and the Science and Technology Development Plan of Jinan (Clinical Medicine Science and Technology Innovation Plan), No. 202225054.
Abstract: Depression is a common mental health disorder. With current depression detection methods, specialized physicians often rely on conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and deemed crucial evaluation criteria for depression, have not been effectively utilized, and physicians usually require extensive training and experience to capture changes in these features. Advances in deep learning technology have provided technical support for capturing non-biological markers, and several researchers have proposed automatic depression estimation (ADE) systems based on audio and video to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.