Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computa...Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computation experience.This paper aims to present a retrospective yet modern approach to the world of speech recognition systems.The development journey of ASR(Automatic Speech Recognition)has seen quite a few milestones and breakthrough technologies that have been highlighted in this paper.A step-by-step rundown of the fundamental stages in developing speech recognition systems has been presented,along with a brief discussion of various modern-day developments and applications in this domain.This review paper aims to summarize and provide a beginning point for those starting in the vast field of speech signal processing.Since speech recognition has a vast potential in various industries like telecommunication,emotion recognition,healthcare,etc.,this review would be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.展开更多
Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and ...Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and smart transportation systems.Fog computing tackles a range of challenges,including processing,storage,bandwidth,latency,and reliability,by locally distributing secure information through end nodes.Consisting of endpoints,fog nodes,and back-end cloud infrastructure,it provides advanced capabilities beyond traditional cloud computing.In smart environments,particularly within smart city transportation systems,the abundance of devices and nodes poses significant challenges related to power consumption and system reliability.To address the challenges of latency,energy consumption,and fault tolerance in these environments,this paper proposes a latency-aware,faulttolerant framework for resource scheduling and data management,referred to as the FORD framework,for smart cities in fog environments.This framework is designed to meet the demands of time-sensitive applications,such as those in smart transportation systems.The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments,leveraging resources from both fog and cloud environments.Through simulation-based executions,tasks are allocated to the nearest available nodes with minimum latency.In the event of execution failure,a fault-tolerantmechanism is employed to ensure the successful completion of tasks.Upon successful execution,data is efficiently stored in the cloud data center,ensuring data integrity and reliability within the smart city ecosystem.展开更多
Face liveness detection is essential for securing biometric authentication systems against spoofing attacks,including printed photos,replay videos,and 3D masks.This study systematically evaluates pre-trained CNN model...Face liveness detection is essential for securing biometric authentication systems against spoofing attacks,including printed photos,replay videos,and 3D masks.This study systematically evaluates pre-trained CNN models—DenseNet201,VGG16,InceptionV3,ResNet50,VGG19,MobileNetV2,Xception,and InceptionResNetV2—leveraging transfer learning and fine-tuning to enhance liveness detection performance.The models were trained and tested on NUAA and Replay-Attack datasets,with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability.Performance was evaluated using accuracy,precision,recall,FAR,FRR,HTER,and specialized spoof detection metrics(APCER,NPCER,ACER).Fine-tuning significantly improved detection accuracy,with DenseNet201 achieving the highest performance(98.5%on NUAA,97.71%on Replay-Attack),while MobileNetV2 proved the most efficient model for real-time applications(latency:15 ms,memory usage:45 MB,energy consumption:30 mJ).A statistical significance analysis(paired t-tests,confidence intervals)validated these improvements.Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures,with DenseNet201 achieving 86.4%accuracy on Replay-Attack when trained on NUAA,demonstrating robust feature extraction and adaptability.In contrast,ResNet50 showed lower generalization capabilities,struggling with dataset variability and complex spoofing attacks.These findings suggest that MobileNetV2 is well-suited for low-power applications,while DenseNet201 is ideal for high-security environments requiring superior accuracy.This research provides a framework for improving real-time face liveness detection,enhancing biometric security,and guiding future advancements in AI-driven anti-spoofing techniques.展开更多
The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the...The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the selection of appropriate routing protocols, which is crucial for maintaining high Quality of Service (QoS). The Internet Engineering Task Force’s Routing Over Low Power and Lossy Networks (IETF ROLL) working group developed the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) to meet these needs. While the initial RPL standard focused on single-metric route selection, ongoing research explores enhancing RPL by incorporating multiple routing metrics and developing new Objective Functions (OFs). This paper introduces a novel Objective Function (OF), the Reliable and Secure Objective Function (RSOF), designed to enhance the reliability and trustworthiness of parent selection at both the node and link levels within IoT and RPL routing protocols. The RSOF employs an adaptive parent node selection mechanism that incorporates multiple metrics, including Residual Energy (RE), Expected Transmission Count (ETX), Extended RPL Node Trustworthiness (ERNT), and a novel metric that measures node failure rate (NFR). In this mechanism, nodes with a high NFR are excluded from the parent selection process to improve network reliability and stability. The proposed RSOF was evaluated using random and grid topologies in the Cooja Simulator, with tests conducted across small, medium, and large-scale networks to examine the impact of varying node densities. The simulation results indicate a significant improvement in network performance, particularly in terms of average latency, packet acknowledgment ratio (PAR), packet delivery ratio (PDR), and Control Message Overhead (CMO), compared to the standard Minimum Rank with Hysteresis Objective Function (MRHOF).展开更多
This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak...This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.展开更多
Parkinson’s disease(PD)is a progressive neurodegenerative disorder characterized by tremors,rigidity,and decreased movement.PD poses risks to individuals’lives and independence.Early detection of PD is essential bec...Parkinson’s disease(PD)is a progressive neurodegenerative disorder characterized by tremors,rigidity,and decreased movement.PD poses risks to individuals’lives and independence.Early detection of PD is essential because it allows timely intervention,which can slow disease progression and improve outcomes.Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose PD.In addition,the subjectivity and lack of doctors compared to the number of patients constitute an obstacle to early diagnosis.Artificial intelligence(AI)techniques,especially deep and automated learning models,provide promising solutions to address deficiencies in manual diagnosis.This study develops robust systems for PD diagnosis by analyzing handwritten helical and wave graphical images.Handwritten graphic images of the PD dataset are enhanced using two overlapping filters,the average filter and the Laplacian filter,to improve image quality and highlight essential features.The enhanced images are segmented to isolate regions of interest(ROIs)from the rest of the image using a gradient vector flow(GVF)algorithm,which ensures that features are extracted from only relevant regions.The segmented ROIs are fed into convolutional neural network(CNN)models,namely DenseNet169,MobileNet,and VGG16,to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis.Fine and deep feature maps extracted from individual CNN models are combined into fused feature vectors for DenseNet169-MobileNet,MobileNet-VGG16,DenseNet169-VGG16,and DenseNet169-MobileNet-VGG16 models.This fusion technique aims to combine complementary and robust features from several models,which improves the extracted features.Two feature selection algorithms are considered to remove redundancy and weak correlations within the combined feature set:Ant Colony Optimization(ACO)and Maximum Entropy Score-based Selection(MESbS).These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated features,thus optimizing the features to improve system performance.The fused and enhanced feature vectors are fed into two powerful classifiers,XGBoost and random forest(RF),for accurate classification and differentiation between individuals with PD and healthy controls.The proposed hybrid systems show superior performance,where the RF classifier used the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method,achieving outstanding results:area under the curve(AUC)of 99%,sensitivity of 99.6%,99.3%accuracy,99.35%accuracy,and 99.65%specificity.展开更多
Background:In the field of genetic diagnostics,DNA sequencing is an important tool because the depth and complexity of this field have major implications in light of the genetic architectures of diseases and the ident...Background:In the field of genetic diagnostics,DNA sequencing is an important tool because the depth and complexity of this field have major implications in light of the genetic architectures of diseases and the identification of risk factors associated with genetic disorders.Methods:Our study introduces a novel two-tiered analytical framework to raise the precision and reliability of genetic data interpretation.It is initiated by extracting and analyzing salient features from DNA sequences through a CNN-based feature analysis,taking advantage of the power inherent in Convolutional neural networks(CNNs)to attain complex patterns and minute mutations in genetic data.This study embraces an elite collection of machine learning classifiers interweaved through a stern voting mechanism,which synergistically joins the predictions made from multiple classifiers to generate comprehensive and well-balanced interpretations of the genetic data.Results:This state-of-the-art method was further tested by carrying out an empirical analysis on a variants'dataset of DNA sequences taken from patients affected by breast cancer,juxtaposed with a control group composed of healthy people.Thus,the integration of CNNs with a voting-based ensemble of classifiers returned outstanding outcomes,with performance metrics accuracy,precision,recall,and F1-scorereaching the outstanding rate of 0.88,outperforming previous models.Conclusions:This dual accomplishment underlines the transformative potential that integrating deep learning techniques with ensemble machine learning might provide in real added value for further genetic diagnostics and prognostics.These results from this study set a new benchmark in the accuracy of disease diagnosis through DNA sequencing and promise future studies on improved personalized medicine and healthcare approaches with precise genetic information.展开更多
The rapid and increasing growth in the volume and number of cyber threats from malware is not a real danger;the real threat lies in the obfuscation of these cyberattacks,as they constantly change their behavior,making...The rapid and increasing growth in the volume and number of cyber threats from malware is not a real danger;the real threat lies in the obfuscation of these cyberattacks,as they constantly change their behavior,making detection more difficult.Numerous researchers and developers have devoted considerable attention to this topic;however,the research field has not yet been fully saturated with high-quality studies that address these problems.For this reason,this paper presents a novel multi-objective Markov-enhanced adaptive whale optimization(MOMEAWO)cybersecurity model to improve the classification of binary and multi-class malware threats through the proposed MOMEAWO approach.The proposed MOMEAWO cybersecurity model aims to provide an innovative solution for analyzing,detecting,and classifying the behavior of obfuscated malware within their respective families.The proposed model includes three classification types:Binary classification and multi-class classification(e.g.,four families and 16 malware families).To evaluate the performance of this model,we used a recently published dataset called the Canadian Institute for Cybersecurity Malware Memory Analysis(CIC-MalMem-2022)that contains balanced data.The results show near-perfect accuracy in binary classification and high accuracy in multi-class classification compared with related work using the same dataset.展开更多
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to syst...Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.展开更多
As mobile edge computing continues to develop,the demand for resource-intensive applications is steadily increasing,placing a significant strain on edge nodes.These nodes are normally subject to various constraints,fo...As mobile edge computing continues to develop,the demand for resource-intensive applications is steadily increasing,placing a significant strain on edge nodes.These nodes are normally subject to various constraints,for instance,limited processing capability,a few energy sources,and erratic availability being some of the common ones.Correspondingly,these problems require an effective task allocation algorithmto optimize the resources through continued high system performance and dependability in dynamic environments.This paper proposes an improved Particle Swarm Optimization technique,known as IPSO,for multi-objective optimization in edge computing to overcome these issues.To this end,the IPSO algorithm tries to make a trade-off between two important objectives,which are energy consumption minimization and task execution time reduction.Because of global optimal position mutation and dynamic adjustment to inertia weight,the proposed optimization algorithm can effectively distribute tasks among edge nodes.As a result,it reduces the execution time of tasks and energy consumption.In comparative assessments carried out by IPSO with benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization(EADPSO)and ICBA,IPSO provides better results than these algorithms.For the maximum task size,when compared with the benchmark methods,IPSO reduces the execution time by 17.1%and energy consumption by 31.58%.These results allow the conclusion that IPSO is an efficient and scalable technique for task allocation at the edge environment.It provides peak efficiency while handling scarce resources and variable workloads.展开更多
Parametric survival models are essential for analyzing time-to-event data in fields such as engineering and biomedicine.While the log-logistic distribution is popular for its simplicity and closed-form expressions,it ...Parametric survival models are essential for analyzing time-to-event data in fields such as engineering and biomedicine.While the log-logistic distribution is popular for its simplicity and closed-form expressions,it often lacks the flexibility needed to capture complex hazard patterns.In this article,we propose a novel extension of the classical log-logistic distribution,termed the new exponential log-logistic(NExLL)distribution,designed to provide enhanced flexibility in modeling time-to-event data with complex failure behaviors.The NExLL model incorporates a new exponential generator to expand the shape adaptability of the baseline log-logistic distribution,allowing it to capture a wide range of hazard rate shapes,including increasing,decreasing,J-shaped,reversed J-shaped,modified bathtub,and unimodal forms.A key feature of the NExLL distribution is its formulation as a mixture of log-logistic densities,offering both symmetric and asymmetric patterns suitable for diverse real-world reliability scenarios.We establish several theoretical properties of the model,including closed-form expressions for its probability density function,cumulative distribution function,moments,hazard rate function,and quantiles.Parameter estimation is performed using seven classical estimation techniques,with extensive Monte Carlo simulations used to evaluate and compare their performance under various conditions.The practical utility and flexibility of the proposed model are illustrated using two real-world datasets from reliability and engineering applications,where the NExLL model demonstrates superior fit and predictive performance compared to existing log-logistic-basedmodels.This contribution advances the toolbox of parametric survivalmodels,offering a robust alternative formodeling complex aging and failure patterns in reliability,engineering,and other applied domains.展开更多
Duplicate bug reporting is a critical problem in the software repositories’mining area.Duplicate bug reports can lead to redundant efforts,wasted resources,and delayed software releases.Thus,their accurate identifica...Duplicate bug reporting is a critical problem in the software repositories’mining area.Duplicate bug reports can lead to redundant efforts,wasted resources,and delayed software releases.Thus,their accurate identification is essential for streamlining the bug triage process mining area.Several researchers have explored classical information retrieval,natural language processing,text and data mining,and machine learning approaches.The emergence of large language models(LLMs)(ChatGPT and Huggingface)has presented a new line of models for semantic textual similarity(STS).Although LLMs have shown remarkable advancements,there remains a need for longitudinal studies to determine whether performance improvements are due to the scale of the models or the unique embeddings they produce compared to classical encoding models.This study systematically investigates this issue by comparing classical word embedding techniques against LLM-based embeddings for duplicate bug detection.In this study,we have proposed an amalgamation of models to detect duplicate bug reports using textual and non-textual information about bug reports.The empirical evaluation has been performed on the open-source datasets and evaluated based on established metrics using the mean reciprocal rank(MRR),mean average precision(MAP),and recall rate.The experimental results have shown that combined LLMs can outperform(recall-rate@k 68%–74%)other individual=models for duplicate bug detection.These findings highlight the effectiveness of amalgamating multiple techniques in improving the duplicate bug report detection accuracy.展开更多
Predicting human motion based on historical motion sequences is a fundamental problem in computer vision,which is at the core of many applications.Existing approaches primarily focus on encoding spatial dependencies a...Predicting human motion based on historical motion sequences is a fundamental problem in computer vision,which is at the core of many applications.Existing approaches primarily focus on encoding spatial dependencies among human joints while ignoring the temporal cues and the complex relationships across non-consecutive frames.These limitations hinder the model’s ability to generate accurate predictions over longer time horizons and in scenarios with complex motion patterns.To address the above problems,we proposed a novel multi-level spatial and temporal learning model,which consists of a Cross Spatial Dependencies Encoding Module(CSM)and a Dynamic Temporal Connection Encoding Module(DTM).Specifically,the CSM is designed to capture complementary local and global spatial dependent information at both the joint level and the joint pair level.We further present DTM to encode diverse temporal evolution contexts and compress motion features to a deep level,enabling the model to capture both short-term and long-term dependencies efficiently.Extensive experiments conducted on the Human 3.6M and CMU Mocap datasets demonstrate that our model achieves state-of-the-art performance in both short-term and long-term predictions,outperforming existing methods by up to 20.3% in accuracy.Furthermore,ablation studies confirm the significant contributions of the CSM and DTM in enhancing prediction accuracy.展开更多
Robot-assisted surgery has evolved into a crucial treatment for prostate cancer(PCa).However,from its appearance to today,brain-computer interface,virtual reality,and metaverse have revolutionized the field of robot-a...Robot-assisted surgery has evolved into a crucial treatment for prostate cancer(PCa).However,from its appearance to today,brain-computer interface,virtual reality,and metaverse have revolutionized the field of robot-assisted surgery for PCa,presenting both opportunities and challenges.Especially in the context of contemporary big data and precision medicine,facing the heterogeneity of PCa and the complexity of clinical problems,it still needs to be continuously upgraded and improved.Keeping this in mind,this article summarized the 5 stages of the historical development of robot-assisted surgery for PCa,encompassing the stages of emergence,promotion,development,maturity,and intelligence.Initially,safety concerns were paramount,but subsequent research and engineering advancements have focused on enhancing device efficacy,surgical technology,and achieving precise multi modal treatment.The dominance of da Vinci robot-assisted surgical system has seen this evolution intimately tied to its successive versions.In the future,robot-assisted surgery for PCa will move towards intelligence,promising improved patient outcomes and personalized therapy,alongside formidable challenges.To guide future development,we propose 10 significant prospects spanning clinical,research,engineering,materials,social,and economic domains,envisioning a future era of artificial intelligence in the surgical treatment of PCa.展开更多
This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected featu...This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes (binary) is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), then the two attributes are considered independent of each other, otherwise dependent, and one of them can be removed and thus the number of attributes is reduced. The process must be repeated on all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from University of California, Irvine (UCI) machine learning databases. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.展开更多
The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,whi...The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.展开更多
Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissecti...Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissection detection method based on CTA images is proposed.ROI is extracted based on binarization and morphology opening operation.The deep learning networks(InceptionV3,ResNet50,and DenseNet)are applied after the preprocessing of the datasets.Recall,F1-score,Matthews correlation coefficient(MCC)and other performance indexes are investigated.It is shown that the deep learning methods have much better performance than the traditional method.And among those deep learning methods,DenseNet121 can exceed other networks such as ResNet50 and InceptionV3.展开更多
Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation ...Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation algorithms have problems with mis-segmentation and poor edge segmentation.To address these challenges,we propose a medical image segmentation network(AF-Net)based on attention mechanism and feature fusion,which can effectively capture global information while focusing the network on the object area.In this approach,we add dual attention blocks(DA-block)to the backbone network,which comprises parallel channels and spatial attention branches,to adaptively calibrate and weigh features.Secondly,the multi-scale feature fusion block(MFF-block)is proposed to obtain feature maps of different receptive domains and get multi-scale information with less computational consumption.Finally,to restore the locations and shapes of organs,we adopt the global feature fusion blocks(GFF-block)to fuse high-level and low-level information,which can obtain accurate pixel positioning.We evaluate our method on multiple datasets(the aorta and lungs dataset),and the experimental results achieve 94.0%in mIoU and 96.3%in DICE,showing that our approach performs better than U-Net and other state-of-art methods.展开更多
A trajectory generator based on vehicle kinematics model was presented and an integrated navigation simulation system was designed.Considering that the tight relation between vehicle motion and topography,a new trajec...A trajectory generator based on vehicle kinematics model was presented and an integrated navigation simulation system was designed.Considering that the tight relation between vehicle motion and topography,a new trajectory generator for vehicle was proposed for more actual simulation.Firstly,a vehicle kinematics model was built based on conversion of attitude vector in different coordinate systems.Then,the principle of common trajectory generators was analyzed.Besides,combining the vehicle kinematics model with the principle of dead reckoning,a new vehicle trajectory generator was presented,which can provide process parameters of carrier anytime and achieve simulation of typical actions of running vehicle.Moreover,IMU(inertial measurement unit) elements were simulated,including accelerometer and gyroscope.After setting up the simulation conditions,the integrated navigation simulation system was verified by final performance test.The result proves the validity and flexibility of this design.展开更多
Lung cancer is the most dangerous and death-causing disease indicated by the presence of pulmonary nodules in the lung.It is mostly caused by the instinctive growth of cells in the lung.Lung nodule detection has a sig...Lung cancer is the most dangerous and death-causing disease indicated by the presence of pulmonary nodules in the lung.It is mostly caused by the instinctive growth of cells in the lung.Lung nodule detection has a significant role in detecting and screening lung cancer in Computed tomography(CT)scan images.Early detection plays an important role in the survival rate and treatment of lung cancer patients.Moreover,pulmonary nodule classification techniques based on the convolutional neural network can be used for the accurate and efficient detection of lung cancer.This work proposed an automatic nodule detection method in CT images based on modified AlexNet architecture and Support vector machine(SVM)algorithm namely LungNet-SVM.The proposed model consists of seven convolutional layers,three pooling layers,and two fully connected layers used to extract features.Support vector machine classifier is applied for the binary classification of nodules into benign andmalignant.The experimental analysis is performed by using the publicly available benchmark dataset Lung nodule analysis 2016(LUNA16).The proposed model has achieved 97.64%of accuracy,96.37%of sensitivity,and 99.08%of specificity.A comparative analysis has been carried out between the proposed LungNet-SVM model and existing stateof-the-art approaches for the classification of lung cancer.The experimental results indicate that the proposed LungNet-SVM model achieved remarkable performance on a LUNA16 dataset in terms of accuracy.展开更多
文摘Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computation experience.This paper aims to present a retrospective yet modern approach to the world of speech recognition systems.The development journey of ASR(Automatic Speech Recognition)has seen quite a few milestones and breakthrough technologies that have been highlighted in this paper.A step-by-step rundown of the fundamental stages in developing speech recognition systems has been presented,along with a brief discussion of various modern-day developments and applications in this domain.This review paper aims to summarize and provide a beginning point for those starting in the vast field of speech signal processing.Since speech recognition has a vast potential in various industries like telecommunication,emotion recognition,healthcare,etc.,this review would be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.
基金supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number(R.G.P.2/93/45).
文摘Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and smart transportation systems.Fog computing tackles a range of challenges,including processing,storage,bandwidth,latency,and reliability,by locally distributing secure information through end nodes.Consisting of endpoints,fog nodes,and back-end cloud infrastructure,it provides advanced capabilities beyond traditional cloud computing.In smart environments,particularly within smart city transportation systems,the abundance of devices and nodes poses significant challenges related to power consumption and system reliability.To address the challenges of latency,energy consumption,and fault tolerance in these environments,this paper proposes a latency-aware,faulttolerant framework for resource scheduling and data management,referred to as the FORD framework,for smart cities in fog environments.This framework is designed to meet the demands of time-sensitive applications,such as those in smart transportation systems.The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments,leveraging resources from both fog and cloud environments.Through simulation-based executions,tasks are allocated to the nearest available nodes with minimum latency.In the event of execution failure,a fault-tolerantmechanism is employed to ensure the successful completion of tasks.Upon successful execution,data is efficiently stored in the cloud data center,ensuring data integrity and reliability within the smart city ecosystem.
基金funded by Centre for Advanced Modelling and Geospatial Information Systems(CAMGIS),Faculty of Engineering and IT,University of Technology Sydney.Moreover,Ongoing Research Funding Program(ORF-2025-14)King Saud University,Riyadh,Saudi Arabia,under Project ORF-2025-。
文摘Face liveness detection is essential for securing biometric authentication systems against spoofing attacks,including printed photos,replay videos,and 3D masks.This study systematically evaluates pre-trained CNN models—DenseNet201,VGG16,InceptionV3,ResNet50,VGG19,MobileNetV2,Xception,and InceptionResNetV2—leveraging transfer learning and fine-tuning to enhance liveness detection performance.The models were trained and tested on NUAA and Replay-Attack datasets,with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability.Performance was evaluated using accuracy,precision,recall,FAR,FRR,HTER,and specialized spoof detection metrics(APCER,NPCER,ACER).Fine-tuning significantly improved detection accuracy,with DenseNet201 achieving the highest performance(98.5%on NUAA,97.71%on Replay-Attack),while MobileNetV2 proved the most efficient model for real-time applications(latency:15 ms,memory usage:45 MB,energy consumption:30 mJ).A statistical significance analysis(paired t-tests,confidence intervals)validated these improvements.Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures,with DenseNet201 achieving 86.4%accuracy on Replay-Attack when trained on NUAA,demonstrating robust feature extraction and adaptability.In contrast,ResNet50 showed lower generalization capabilities,struggling with dataset variability and complex spoofing attacks.These findings suggest that MobileNetV2 is well-suited for low-power applications,while DenseNet201 is ideal for high-security environments requiring superior accuracy.This research provides a framework for improving real-time face liveness detection,enhancing biometric security,and guiding future advancements in AI-driven anti-spoofing techniques.
文摘The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the selection of appropriate routing protocols, which is crucial for maintaining high Quality of Service (QoS). The Internet Engineering Task Force’s Routing Over Low Power and Lossy Networks (IETF ROLL) working group developed the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) to meet these needs. While the initial RPL standard focused on single-metric route selection, ongoing research explores enhancing RPL by incorporating multiple routing metrics and developing new Objective Functions (OFs). This paper introduces a novel Objective Function (OF), the Reliable and Secure Objective Function (RSOF), designed to enhance the reliability and trustworthiness of parent selection at both the node and link levels within IoT and RPL routing protocols. The RSOF employs an adaptive parent node selection mechanism that incorporates multiple metrics, including Residual Energy (RE), Expected Transmission Count (ETX), Extended RPL Node Trustworthiness (ERNT), and a novel metric that measures node failure rate (NFR). In this mechanism, nodes with a high NFR are excluded from the parent selection process to improve network reliability and stability. The proposed RSOF was evaluated using random and grid topologies in the Cooja Simulator, with tests conducted across small, medium, and large-scale networks to examine the impact of varying node densities. The simulation results indicate a significant improvement in network performance, particularly in terms of average latency, packet acknowledgment ratio (PAR), packet delivery ratio (PDR), and Control Message Overhead (CMO), compared to the standard Minimum Rank with Hysteresis Objective Function (MRHOF).
文摘This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
文摘Parkinson’s disease(PD)is a progressive neurodegenerative disorder characterized by tremors,rigidity,and decreased movement.PD poses risks to individuals’lives and independence.Early detection of PD is essential because it allows timely intervention,which can slow disease progression and improve outcomes.Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose PD.In addition,the subjectivity and lack of doctors compared to the number of patients constitute an obstacle to early diagnosis.Artificial intelligence(AI)techniques,especially deep and automated learning models,provide promising solutions to address deficiencies in manual diagnosis.This study develops robust systems for PD diagnosis by analyzing handwritten helical and wave graphical images.Handwritten graphic images of the PD dataset are enhanced using two overlapping filters,the average filter and the Laplacian filter,to improve image quality and highlight essential features.The enhanced images are segmented to isolate regions of interest(ROIs)from the rest of the image using a gradient vector flow(GVF)algorithm,which ensures that features are extracted from only relevant regions.The segmented ROIs are fed into convolutional neural network(CNN)models,namely DenseNet169,MobileNet,and VGG16,to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis.Fine and deep feature maps extracted from individual CNN models are combined into fused feature vectors for DenseNet169-MobileNet,MobileNet-VGG16,DenseNet169-VGG16,and DenseNet169-MobileNet-VGG16 models.This fusion technique aims to combine complementary and robust features from several models,which improves the extracted features.Two feature selection algorithms are considered to remove redundancy and weak correlations within the combined feature set:Ant Colony Optimization(ACO)and Maximum Entropy Score-based Selection(MESbS).These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated features,thus optimizing the features to improve system performance.The fused and enhanced feature vectors are fed into two powerful classifiers,XGBoost and random forest(RF),for accurate classification and differentiation between individuals with PD and healthy controls.The proposed hybrid systems show superior performance,where the RF classifier used the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method,achieving outstanding results:area under the curve(AUC)of 99%,sensitivity of 99.6%,99.3%accuracy,99.35%accuracy,and 99.65%specificity.
文摘Background:In the field of genetic diagnostics,DNA sequencing is an important tool because the depth and complexity of this field have major implications in light of the genetic architectures of diseases and the identification of risk factors associated with genetic disorders.Methods:Our study introduces a novel two-tiered analytical framework to raise the precision and reliability of genetic data interpretation.It is initiated by extracting and analyzing salient features from DNA sequences through a CNN-based feature analysis,taking advantage of the power inherent in Convolutional neural networks(CNNs)to attain complex patterns and minute mutations in genetic data.This study embraces an elite collection of machine learning classifiers interweaved through a stern voting mechanism,which synergistically joins the predictions made from multiple classifiers to generate comprehensive and well-balanced interpretations of the genetic data.Results:This state-of-the-art method was further tested by carrying out an empirical analysis on a variants'dataset of DNA sequences taken from patients affected by breast cancer,juxtaposed with a control group composed of healthy people.Thus,the integration of CNNs with a voting-based ensemble of classifiers returned outstanding outcomes,with performance metrics accuracy,precision,recall,and F1-scorereaching the outstanding rate of 0.88,outperforming previous models.Conclusions:This dual accomplishment underlines the transformative potential that integrating deep learning techniques with ensemble machine learning might provide in real added value for further genetic diagnostics and prognostics.These results from this study set a new benchmark in the accuracy of disease diagnosis through DNA sequencing and promise future studies on improved personalized medicine and healthcare approaches with precise genetic information.
文摘The rapid and increasing growth in the volume and number of cyber threats from malware is not a real danger;the real threat lies in the obfuscation of these cyberattacks,as they constantly change their behavior,making detection more difficult.Numerous researchers and developers have devoted considerable attention to this topic;however,the research field has not yet been fully saturated with high-quality studies that address these problems.For this reason,this paper presents a novel multi-objective Markov-enhanced adaptive whale optimization(MOMEAWO)cybersecurity model to improve the classification of binary and multi-class malware threats through the proposed MOMEAWO approach.The proposed MOMEAWO cybersecurity model aims to provide an innovative solution for analyzing,detecting,and classifying the behavior of obfuscated malware within their respective families.The proposed model includes three classification types:Binary classification and multi-class classification(e.g.,four families and 16 malware families).To evaluate the performance of this model,we used a recently published dataset called the Canadian Institute for Cybersecurity Malware Memory Analysis(CIC-MalMem-2022)that contains balanced data.The results show near-perfect accuracy in binary classification and high accuracy in multi-class classification compared with related work using the same dataset.
基金supported by the International Scientific and Technological Cooperation Project of Huangpu and Development Districts in Guangzhou(2023GH17)the National Science and Technology Council in Taiwan under grant number NSTC-113-2224-E-027-001,Private Funding(PV009-2023)the KW IPPP(Research Maintenance Fee)Individual/Centre/Group(RMF1506-2021)at Universiti Malaya,Malaysia.
文摘Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.
基金supported by the University Putra Malaysia and the Ministry of Higher Education Malaysia under grantNumber:(FRGS/1/2023/ICT11/UPM/02/3).
文摘As mobile edge computing continues to develop,the demand for resource-intensive applications is steadily increasing,placing a significant strain on edge nodes.These nodes are normally subject to various constraints,for instance,limited processing capability,a few energy sources,and erratic availability being some of the common ones.Correspondingly,these problems require an effective task allocation algorithmto optimize the resources through continued high system performance and dependability in dynamic environments.This paper proposes an improved Particle Swarm Optimization technique,known as IPSO,for multi-objective optimization in edge computing to overcome these issues.To this end,the IPSO algorithm tries to make a trade-off between two important objectives,which are energy consumption minimization and task execution time reduction.Because of global optimal position mutation and dynamic adjustment to inertia weight,the proposed optimization algorithm can effectively distribute tasks among edge nodes.As a result,it reduces the execution time of tasks and energy consumption.In comparative assessments carried out by IPSO with benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization(EADPSO)and ICBA,IPSO provides better results than these algorithms.For the maximum task size,when compared with the benchmark methods,IPSO reduces the execution time by 17.1%and energy consumption by 31.58%.These results allow the conclusion that IPSO is an efficient and scalable technique for task allocation at the edge environment.It provides peak efficiency while handling scarce resources and variable workloads.
文摘Parametric survival models are essential for analyzing time-to-event data in fields such as engineering and biomedicine.While the log-logistic distribution is popular for its simplicity and closed-form expressions,it often lacks the flexibility needed to capture complex hazard patterns.In this article,we propose a novel extension of the classical log-logistic distribution,termed the new exponential log-logistic(NExLL)distribution,designed to provide enhanced flexibility in modeling time-to-event data with complex failure behaviors.The NExLL model incorporates a new exponential generator to expand the shape adaptability of the baseline log-logistic distribution,allowing it to capture a wide range of hazard rate shapes,including increasing,decreasing,J-shaped,reversed J-shaped,modified bathtub,and unimodal forms.A key feature of the NExLL distribution is its formulation as a mixture of log-logistic densities,offering both symmetric and asymmetric patterns suitable for diverse real-world reliability scenarios.We establish several theoretical properties of the model,including closed-form expressions for its probability density function,cumulative distribution function,moments,hazard rate function,and quantiles.Parameter estimation is performed using seven classical estimation techniques,with extensive Monte Carlo simulations used to evaluate and compare their performance under various conditions.The practical utility and flexibility of the proposed model are illustrated using two real-world datasets from reliability and engineering applications,where the NExLL model demonstrates superior fit and predictive performance compared to existing log-logistic-basedmodels.This contribution advances the toolbox of parametric survivalmodels,offering a robust alternative formodeling complex aging and failure patterns in reliability,engineering,and other applied domains.
文摘Duplicate bug reporting is a critical problem in the software repositories’mining area.Duplicate bug reports can lead to redundant efforts,wasted resources,and delayed software releases.Thus,their accurate identification is essential for streamlining the bug triage process mining area.Several researchers have explored classical information retrieval,natural language processing,text and data mining,and machine learning approaches.The emergence of large language models(LLMs)(ChatGPT and Huggingface)has presented a new line of models for semantic textual similarity(STS).Although LLMs have shown remarkable advancements,there remains a need for longitudinal studies to determine whether performance improvements are due to the scale of the models or the unique embeddings they produce compared to classical encoding models.This study systematically investigates this issue by comparing classical word embedding techniques against LLM-based embeddings for duplicate bug detection.In this study,we have proposed an amalgamation of models to detect duplicate bug reports using textual and non-textual information about bug reports.The empirical evaluation has been performed on the open-source datasets and evaluated based on established metrics using the mean reciprocal rank(MRR),mean average precision(MAP),and recall rate.The experimental results have shown that combined LLMs can outperform(recall-rate@k 68%–74%)other individual=models for duplicate bug detection.These findings highlight the effectiveness of amalgamating multiple techniques in improving the duplicate bug report detection accuracy.
基金supported by the Urgent Need for Overseas Talent Project of Jiangxi Province(Grant No.20223BCJ25040)the Thousand Talents Plan of Jiangxi Province(Grant No.jxsg2023101085)+3 种基金the National Natural Science Foundation of China(Grant No.62106093)the Natural Science Foundation of Jiangxi(Grant Nos.20224BAB212011,20232BAB212008,20242BAB25078,and 20232BAB202051)The Youth Talent Cultivation Innovation Fund Project of Nanchang University(Grant No.XX202506030015)funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R759),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Predicting human motion based on historical motion sequences is a fundamental problem in computer vision,which is at the core of many applications.Existing approaches primarily focus on encoding spatial dependencies among human joints while ignoring the temporal cues and the complex relationships across non-consecutive frames.These limitations hinder the model’s ability to generate accurate predictions over longer time horizons and in scenarios with complex motion patterns.To address the above problems,we proposed a novel multi-level spatial and temporal learning model,which consists of a Cross Spatial Dependencies Encoding Module(CSM)and a Dynamic Temporal Connection Encoding Module(DTM).Specifically,the CSM is designed to capture complementary local and global spatial dependent information at both the joint level and the joint pair level.We further present DTM to encode diverse temporal evolution contexts and compress motion features to a deep level,enabling the model to capture both short-term and long-term dependencies efficiently.Extensive experiments conducted on the Human 3.6M and CMU Mocap datasets demonstrate that our model achieves state-of-the-art performance in both short-term and long-term predictions,outperforming existing methods by up to 20.3% in accuracy.Furthermore,ablation studies confirm the significant contributions of the CSM and DTM in enhancing prediction accuracy.
基金supported by the Fundamental Research Funds for the Central Universities(2023SCU12057)the National Natural Science Foundation of China(82373106,82372831,and 32270690).
文摘Robot-assisted surgery has evolved into a crucial treatment for prostate cancer(PCa).However,from its appearance to today,brain-computer interface,virtual reality,and metaverse have revolutionized the field of robot-assisted surgery for PCa,presenting both opportunities and challenges.Especially in the context of contemporary big data and precision medicine,facing the heterogeneity of PCa and the complexity of clinical problems,it still needs to be continuously upgraded and improved.Keeping this in mind,this article summarized the 5 stages of the historical development of robot-assisted surgery for PCa,encompassing the stages of emergence,promotion,development,maturity,and intelligence.Initially,safety concerns were paramount,but subsequent research and engineering advancements have focused on enhancing device efficacy,surgical technology,and achieving precise multi modal treatment.The dominance of da Vinci robot-assisted surgical system has seen this evolution intimately tied to its successive versions.In the future,robot-assisted surgery for PCa will move towards intelligence,promising improved patient outcomes and personalized therapy,alongside formidable challenges.To guide future development,we propose 10 significant prospects spanning clinical,research,engineering,materials,social,and economic domains,envisioning a future era of artificial intelligence in the surgical treatment of PCa.
文摘This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes (binary) is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), then the two attributes are considered independent of each other, otherwise dependent, and one of them can be removed and thus the number of attributes is reduced. The process must be repeated on all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from University of California, Irvine (UCI) machine learning databases. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.
文摘The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.
基金This work is supported by the National Natural Science Foundation of China(No.61772561)the National Natural Science Foundation of Hunan(No.2019JJ50866)+1 种基金the Key Research&Development Plan of Hunan Province(No.2018NK2012)the Postgraduate Science and Technology Innovation Foundation of Central South University of Forestry and Technology(No.20183034).
文摘Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissection detection method based on CTA images is proposed.ROI is extracted based on binarization and morphology opening operation.The deep learning networks(InceptionV3,ResNet50,and DenseNet)are applied after the preprocessing of the datasets.Recall,F1-score,Matthews correlation coefficient(MCC)and other performance indexes are investigated.It is shown that the deep learning methods have much better performance than the traditional method.And among those deep learning methods,DenseNet121 can exceed other networks such as ResNet50 and InceptionV3.
基金This work was supported in part by the National Natural Science Foundation of China under Grant 61772561,author J.Q,http://www.nsfc.gov.cn/in part by the Science Research Projects of Hunan Provincial Education Department under Grant 18A174,author X.X,http://kxjsc.gov.hnedu.cn/+5 种基金in part by the Science Research Projects of Hunan Provincial Education Department under Grant 19B584,author Y.T,http://kxjsc.gov.hnedu.cn/in part by the Natural Science Foundation of Hunan Province(No.2020JJ4140),author Y.T,http://kjt.hunan.gov.cn/in part by the Natural Science Foundation of Hunan Province(No.2020JJ4141),author X.X,http://kjt.hunan.gov.cn/in part by the Key Research and Development Plan of Hunan Province under Grant 2019SK2022,author Y.T,http://kjt.hunan.gov.cn/in part by the Key Research and Development Plan of Hunan Province under Grant CX20200730,author G.H,http://kjt.hunan.gov.cn/in part by the Graduate Science and Technology Innovation Fund Project of Central South University of Forestry and Technology under Grant CX20202038,author G.H,http://jwc.csuft.edu.cn/.
文摘Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation algorithms have problems with mis-segmentation and poor edge segmentation.To address these challenges,we propose a medical image segmentation network(AF-Net)based on attention mechanism and feature fusion,which can effectively capture global information while focusing the network on the object area.In this approach,we add dual attention blocks(DA-block)to the backbone network,which comprises parallel channels and spatial attention branches,to adaptively calibrate and weigh features.Secondly,the multi-scale feature fusion block(MFF-block)is proposed to obtain feature maps of different receptive domains and get multi-scale information with less computational consumption.Finally,to restore the locations and shapes of organs,we adopt the global feature fusion blocks(GFF-block)to fuse high-level and low-level information,which can obtain accurate pixel positioning.We evaluate our method on multiple datasets(the aorta and lungs dataset),and the experimental results achieve 94.0%in mIoU and 96.3%in DICE,showing that our approach performs better than U-Net and other state-of-art methods.
基金Projects(90820302, 60805027, 61175064) supported by the National Natural Science Foundation of ChinaProject(2011ssxt231) supported by the Master Degree Thesis Innovation Project Foundation of Central South University, China+1 种基金Project(200805330005) supported by the Research Fund for the Doctoral Program of Higher Education, ChinaProject(2011FJ4043) supported by the Academician Foundation of Hunan Province, China
文摘A trajectory generator based on vehicle kinematics model was presented and an integrated navigation simulation system was designed.Considering that the tight relation between vehicle motion and topography,a new trajectory generator for vehicle was proposed for more actual simulation.Firstly,a vehicle kinematics model was built based on conversion of attitude vector in different coordinate systems.Then,the principle of common trajectory generators was analyzed.Besides,combining the vehicle kinematics model with the principle of dead reckoning,a new vehicle trajectory generator was presented,which can provide process parameters of carrier anytime and achieve simulation of typical actions of running vehicle.Moreover,IMU(inertial measurement unit) elements were simulated,including accelerometer and gyroscope.After setting up the simulation conditions,the integrated navigation simulation system was verified by final performance test.The result proves the validity and flexibility of this design.
文摘Lung cancer is the most dangerous and death-causing disease indicated by the presence of pulmonary nodules in the lung.It is mostly caused by the instinctive growth of cells in the lung.Lung nodule detection has a significant role in detecting and screening lung cancer in Computed tomography(CT)scan images.Early detection plays an important role in the survival rate and treatment of lung cancer patients.Moreover,pulmonary nodule classification techniques based on the convolutional neural network can be used for the accurate and efficient detection of lung cancer.This work proposed an automatic nodule detection method in CT images based on modified AlexNet architecture and Support vector machine(SVM)algorithm namely LungNet-SVM.The proposed model consists of seven convolutional layers,three pooling layers,and two fully connected layers used to extract features.Support vector machine classifier is applied for the binary classification of nodules into benign andmalignant.The experimental analysis is performed by using the publicly available benchmark dataset Lung nodule analysis 2016(LUNA16).The proposed model has achieved 97.64%of accuracy,96.37%of sensitivity,and 99.08%of specificity.A comparative analysis has been carried out between the proposed LungNet-SVM model and existing stateof-the-art approaches for the classification of lung cancer.The experimental results indicate that the proposed LungNet-SVM model achieved remarkable performance on a LUNA16 dataset in terms of accuracy.