This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases such as IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods made early contributions; however, recent advancements in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the stage for DL-based solutions. Core DL architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Vision Transformers (ViTs), and hybrid models, are discussed in detail, including their advantages and domain-specific adaptations. Advanced learning paradigms such as semi-supervised learning, self-supervised learning, and few-shot learning are explored for their potential to mitigate data annotation challenges in clinical datasets. This review further categorizes major tasks in medical image analysis, elaborating on how DL techniques have enabled precise tumor segmentation, lesion detection, modality fusion, super-resolution, and robust classification across diverse clinical settings. Emphasis is placed on applications in oncology, cardiology, neurology, and infectious diseases, including COVID-19. Challenges such as data scarcity, label imbalance, model generalizability, interpretability, and integration into clinical workflows are critically examined. Ethical considerations, explainable AI (XAI), federated learning, and regulatory compliance are discussed as essential components of real-world deployment. Benchmark datasets, evaluation metrics, and comparative performance analyses are presented to support future research. The article concludes with a forward-looking perspective on the role of foundation models, multimodal learning, edge AI, and bio-inspired computing in the future of medical imaging. Overall, this review serves as a valuable resource for researchers, clinicians, and developers aiming to harness deep learning for intelligent, efficient, and clinically viable medical image analysis.
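The segmentation tasks surveyed above are typically scored with overlap metrics such as the Dice coefficient; a minimal sketch of that standard metric, with binary masks flattened to 0/1 sequences (an illustration, not code from the review itself):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as 0/1 sequences."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

# Two toy flattened masks: 2 overlapping foreground pixels out of 3 each
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
score = dice_coefficient(pred, target)  # 2*2 / (3+3) ≈ 0.667
```

The small epsilon keeps the metric defined when both masks are empty, a common convention in segmentation evaluation code.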
Robot-assisted surgery has evolved into a crucial treatment for prostate cancer (PCa). From its first appearance to today, the brain-computer interface, virtual reality, and the metaverse have revolutionized the field of robot-assisted surgery for PCa, presenting both opportunities and challenges. Especially in the context of contemporary big data and precision medicine, and facing the heterogeneity of PCa and the complexity of clinical problems, the field still needs continuous upgrading and improvement. With this in mind, this article summarizes the five stages of the historical development of robot-assisted surgery for PCa: emergence, promotion, development, maturity, and intelligence. Initially, safety concerns were paramount, but subsequent research and engineering advancements have focused on enhancing device efficacy and surgical technology and on achieving precise multimodal treatment. This evolution has been intimately tied to the successive versions of the dominant da Vinci robot-assisted surgical system. In the future, robot-assisted surgery for PCa will move towards intelligence, promising improved patient outcomes and personalized therapy, alongside formidable challenges. To guide future development, we propose ten significant prospects spanning the clinical, research, engineering, materials, social, and economic domains, envisioning a future era of artificial intelligence in the surgical treatment of PCa.
Diabetic retinopathy (DR) is a retinal disease that causes irreversible blindness. DR occurs due to the patient's high blood sugar level, and it is difficult to detect at an early stage because no symptoms appear initially. To prevent blindness, early detection and regular treatment are needed. Automated detection based on machine intelligence may assist the ophthalmologist in examining patients' conditions more accurately and efficiently. The purpose of this study is to produce an automated screening system for the recognition and grading of diabetic retinopathy using machine learning through deep transfer and representational learning. The artificial intelligence technique used is transfer learning on the deep neural network Inception-v4. Two configuration variants of transfer learning are applied to Inception-v4: fine-tune mode and fixed-feature-extractor mode. Both configuration modes achieve decent accuracy, but fine-tuning outperforms the fixed-feature-extractor mode. The fine-tune configuration gains 96.6% accuracy in early detection of DR and 97.7% accuracy in grading the disease, outperforming state-of-the-art methods in the relevant literature.
New technologies that take advantage of the emergence of the massive Internet of Things (IoT) and a hyper-connected network environment have rapidly increased in recent years. These technologies are used in diverse environments, such as smart factories, digital healthcare, and smart grids, with increased security concerns. As the need to detect and respond automatically to rapidly increasing security incidents without the intervention of security personnel has emerged, we intend to operate Security Orchestration, Automation and Response (SOAR) in various environments through new concept definitions. To facilitate the understanding of the security concerns involved in this newly emerging area, we offer the definition of the Internet of Blended Environment (IoBE), where various convergence environments are interconnected and data are analyzed in automation. We define a Blended Threat (BT) as a security threat that exploits security vulnerabilities through various attack surfaces in the IoBE. We propose a novel SOAR-CUBE architecture that responds to security incidents with minimal human intervention by automating the BT response process. The SOAR part of our architecture links heterogeneous security technologies with a threat intelligence function that collects threat data and performs correlation analysis of the data. SOAR is operated under Collaborative Units of Blended Environment (CUBE), which facilitates dynamic exchanges of data according to the environment applied to the IoBE by distributing and deploying security technologies for each BT type and dynamically combining them according to the cyber kill chain stage, so as to minimize damage and respond efficiently to BTs.
Cookies are a fundamental means by which web application services authenticate various Hypertext Transfer Protocol (HTTP) requests and maintain the states of clients' information over the Internet. HTTP cookies are exploited to carry client patterns observed by a website; these patterns facilitate the client's future visits to the corresponding website. However, security and privacy are primary concerns, owing to the value of information sent over public channels and the storage of client information in the browser. Several protocols have been introduced to maintain HTTP cookies, but many fail to achieve the required security or incur large resource overheads. In this article, we introduce a lightweight Elliptic Curve Cryptography (ECC) based protocol for authenticating client and server transactions to maintain the privacy and security of HTTP cookies. Our proposed protocol uses a secret key embedded within a cookie. The proposed protocol is more efficient and lightweight than related protocols because of its reduced computation, storage, and communication costs. Moreover, the analysis presented in this paper confirms that the proposed protocol resists various known attacks.
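The paper's ECC construction is not reproduced in the abstract, but the general idea of binding a cookie to a server-held secret so that tampering is detectable can be sketched with a keyed hash (HMAC is shown purely as an illustration of the integrity goal; the key and payload names are invented, not the paper's protocol):

```python
import base64
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # illustrative; the paper derives its secret via ECC

def issue_cookie(session_data: str) -> str:
    """Attach a keyed authentication tag so the server can detect tampering."""
    tag = hmac.new(SECRET_KEY, session_data.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(session_data.encode()).decode() + "." + tag

def verify_cookie(cookie: str):
    """Return the payload if the tag checks out, else None."""
    payload_b64, tag = cookie.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64).decode()
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload if hmac.compare_digest(tag, expected) else None
```

`hmac.compare_digest` performs a constant-time comparison, avoiding the timing side channel that a plain `==` would introduce.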
In this paper, probabilistic models for three redundant configurations are developed to analyze and compare their reliability characteristics. Each system is connected to a repairable supporting external device for operation, and a repairable service station is provided for the immediate repair of a failed unit. Explicit expressions for the mean time to system failure and the steady-state availability are derived for the three configurations. Furthermore, we compare the three configurations based on their reliability characteristics and find that configuration II is more reliable and efficient than the others.
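The paper's explicit expressions are not quoted in the abstract; as an illustration of the kind of Markov-model quantities being compared, here are the textbook results for one repairable unit and a two-unit active-parallel system with constant failure rate λ and repair rate μ (an assumption for illustration, not the paper's actual configurations):

```python
def availability_single(lam, mu):
    """Steady-state availability of one repairable unit: A = mu / (lam + mu)."""
    return mu / (lam + mu)

def mttf_parallel_no_repair(lam):
    """MTTF of a two-unit active-parallel system without repair: 3 / (2*lam)."""
    return 3.0 / (2.0 * lam)

def mttf_parallel_with_repair(lam, mu):
    """MTTF of a two-unit active-parallel system with one repair facility.
    Standard absorbing-Markov-chain result: (3*lam + mu) / (2*lam**2)."""
    return (3.0 * lam + mu) / (2.0 * lam ** 2)
```

With λ = 0.01/h and μ = 0.5/h, repair raises the parallel system's MTTF from 150 h to 2650 h, the kind of gap such comparisons are designed to expose.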
Lysine lipoylation is a protective and conserved post-translational modification (PTM) found in both prokaryotes and eukaryotes. It is connected with many biological processes and closely linked with many metabolic diseases. Computational methods play a key role in developing an accurate classification model for identifying lipoylation sites at the protein level: traditional experimental approaches are costly and time-consuming, so a predictor model is required to extract lysine lipoylation sites. This study proposes a model that predicts lysine lipoylation sites using an Artificial Neural Network (ANN) classifier. The ANN algorithm deals with the noise problem and the imbalanced classification in the lipoylation-site dataset samples. In ten-fold cross-validation, the predictor model achieves strong performance, with an accuracy of 99.88% and a Matthews correlation coefficient (MCC) of 0.9976. The predictor model is thus a useful tool for lipoylation site prediction. As demonstrated during feature analysis, some of the residues around lysine lipoylation sites play a vital part in prediction. The results reported through the evaluation of this model can provide an informative explanation of lipoylation and its molecular mechanisms.
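The MCC value quoted above is computed from confusion-matrix counts and is a more honest summary than accuracy on imbalanced data such as PTM sites; a one-function sketch of the standard formula:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Returns 0.0 when any marginal is empty (the usual convention)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

A perfect classifier gives 1.0, and a classifier no better than chance gives 0.0, regardless of how skewed the class balance is.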
The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, so misinformation is a problem for professionals, organizers, and societies. Hence, it is essential to assess the credibility and validity of news articles shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two commonly agreed-upon feature groups for misinformation detection: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with article content and the user's context. After combining both feature groups, we used three Natural Language Processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). Then, we applied different machine learning classifiers to classify articles as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, "FakeNewsNet", whose repository we refined according to our required features; it contains more than 23,000 articles with millions of user engagements. The highest accuracy score is 93.4%, which the proposed model achieves using count-vector features and a Random Forest classifier. Our findings confirm that the proposed classifier can effectively classify misinformation in social networks.
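Of the three feature extraction techniques listed, TF-IDF is easy to sketch from scratch; the smoothed-IDF variant is shown here (the paper's exact weighting scheme is not specified in the abstract, so this is an illustrative assumption):

```python
import math
from collections import Counter

def tfidf(corpus):
    """Per-document TF-IDF dicts with smoothed IDF: idf = ln((1+N)/(1+df)) + 1."""
    docs = [doc.lower().split() for doc in corpus]
    N = len(docs)
    df = Counter()                      # document frequency of each term
    for d in docs:
        df.update(set(d))
    vectors = []
    for d in docs:
        tf = Counter(d)
        vec = {t: (tf[t] / len(d)) * (math.log((1 + N) / (1 + df[t])) + 1)
               for t in tf}
        vectors.append(vec)
    return vectors

vecs = tfidf(["fake news spreads fast", "real news verified"])
# "fake" appears in one document, "news" in both, so "fake" gets the higher weight
```

Terms shared across many articles (like "news") are down-weighted, which is exactly why TF-IDF helps separate distinctive vocabulary in real-versus-fake classification.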
This paper presents a large-gathering dataset of images extracted from publicly filmed videos by 24 cameras installed on the premises of Masjid Al-Nabvi, Madinah, Saudi Arabia. The dataset consists of raw and processed images reflecting a highly challenging and unconstrained environment. The methodology for building the dataset consists of four core phases: acquisition of videos, extraction of frames, localization of face regions, and cropping and resizing of detected face regions. The raw images consist of a total of 4613 frames obtained from video sequences. The processed images consist of the face regions of 250 persons extracted from the raw images, ensuring the authenticity of the presented data. The dataset further provides 8 images for each of the 250 subjects (persons), for a total of 2000 images. It portrays a highly unconstrained and challenging environment, with human faces of varying sizes and pixel quality (resolution). Since the face regions in the video sequences are severely degraded by various unavoidable factors, the dataset can be used as a benchmark to test and evaluate face detection and recognition algorithms for research purposes. We have also gathered and displayed records of the presence of subjects across the presented frames in a temporal context; these can serve as a temporal benchmark for tracking, person finding, activity monitoring, and crowd counting in large-crowd scenarios.
Since the beginning of web applications, security has been a critical study area, and much research has been done on defining and identifying security goals and issues. However, high-security web applications have been found to be less durable in recent years, reducing their business continuity. High-security features of a web application are worthless unless they provide effective services to the user and meet the standards of commercial viability. Hence, there is a need to bridge the gap between the durability and security of web applications: security mechanisms must be used to enhance durability as well as security. Although durability and security are not directly related, some of their factors influence each other indirectly, and shared characteristics play an important role in bridging the gap between them. In this respect, the present study identifies key characteristics of security and durability that affect each other directly and indirectly, including confidentiality, integrity, availability, human trust, and trustworthiness. Weighting the importance of these attributes is essential for assessing their influence on overall security during the development of a web application. To estimate these weights, the authors employed the Hesitant Fuzzy Analytic Hierarchy Process (H-Fuzzy AHP). The outcomes of our investigation will be a useful reference for web application developers in achieving more secure and durable web applications.
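The hesitant-fuzzy extension is beyond a short example, but the classical AHP weighting step that H-Fuzzy AHP builds on can be illustrated with the row geometric-mean approximation of the priority vector (the criteria and pairwise judgments below are invented for illustration):

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method.
    `matrix` is a reciprocal pairwise-comparison matrix (matrix[i][j] = 1/matrix[j][i])."""
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Invented judgments over three attributes, e.g. confidentiality, integrity, availability
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(pairwise)  # sums to 1, ordered by judged importance
```

The geometric-mean method is a standard, eigenvector-free approximation that is adequate for small comparison matrices like this one.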
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
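The latency-aware, fault-tolerant allocation policy described above can be sketched minimally as follows (the node fields and the `executes` callback are illustrative assumptions, not the FORD implementation):

```python
def schedule_task(task, nodes, max_retries=2):
    """Pick the lowest-latency available node; on execution failure,
    retry on the next-best node up to max_retries times."""
    candidates = sorted(
        (n for n in nodes if n["available"]),
        key=lambda n: n["latency_ms"],
    )
    for node in candidates[: max_retries + 1]:
        if node["executes"](task):   # stand-in callback for real task execution
            return node["name"]
    return None                      # all attempts failed: escalate / report

nodes = [
    {"name": "fog-1", "available": True, "latency_ms": 5,  "executes": lambda t: False},
    {"name": "fog-2", "available": True, "latency_ms": 9,  "executes": lambda t: True},
    {"name": "cloud-dc", "available": True, "latency_ms": 40, "executes": lambda t: True},
]
winner = schedule_task("task-1", nodes)  # fog-1 fails, so the task lands on fog-2
```

Sorting by latency first and falling back on failure captures, in a few lines, the "nearest node with minimum latency plus fault-tolerant retry" behavior the abstract describes.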
The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the...The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the selection of appropriate routing protocols, which is crucial for maintaining high Quality of Service (QoS). The Internet Engineering Task Force’s Routing Over Low Power and Lossy Networks (IETF ROLL) working group developed the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) to meet these needs. While the initial RPL standard focused on single-metric route selection, ongoing research explores enhancing RPL by incorporating multiple routing metrics and developing new Objective Functions (OFs). This paper introduces a novel Objective Function (OF), the Reliable and Secure Objective Function (RSOF), designed to enhance the reliability and trustworthiness of parent selection at both the node and link levels within IoT and RPL routing protocols. The RSOF employs an adaptive parent node selection mechanism that incorporates multiple metrics, including Residual Energy (RE), Expected Transmission Count (ETX), Extended RPL Node Trustworthiness (ERNT), and a novel metric that measures node failure rate (NFR). In this mechanism, nodes with a high NFR are excluded from the parent selection process to improve network reliability and stability. The proposed RSOF was evaluated using random and grid topologies in the Cooja Simulator, with tests conducted across small, medium, and large-scale networks to examine the impact of varying node densities. 
The simulation results indicate a significant improvement in network performance, particularly in terms of average latency, packet acknowledgment ratio (PAR), packet delivery ratio (PDR), and Control Message Overhead (CMO), compared to the standard Minimum Rank with Hysteresis Objective Function (MRHOF).
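A simplified sketch of RSOF-style parent selection follows (the weights, threshold, and scoring form are illustrative assumptions; the actual objective function is defined in the full paper):

```python
def select_parent(candidates, nfr_threshold=0.2, weights=(0.3, 0.3, 0.4)):
    """Rank candidate parents by a weighted score of Residual Energy (re),
    ETX, and trust (ernt), excluding nodes whose failure rate (nfr)
    exceeds the threshold, as the RSOF mechanism prescribes."""
    w_re, w_etx, w_trust = weights
    eligible = [c for c in candidates if c["nfr"] <= nfr_threshold]
    if not eligible:
        return None
    def score(c):
        # higher RE and trust are better; lower ETX is better, so invert it
        return w_re * c["re"] + w_etx * (1.0 / c["etx"]) + w_trust * c["ernt"]
    return max(eligible, key=score)["id"]

candidates = [
    {"id": "A", "re": 0.9, "etx": 1.0, "ernt": 0.9, "nfr": 0.5},   # strong, but unreliable
    {"id": "B", "re": 0.6, "etx": 1.5, "ernt": 0.7, "nfr": 0.05},
]
parent = select_parent(candidates)  # "A" is excluded by its failure rate, so "B" wins
```

The key RSOF idea survives even in this toy form: a node with excellent link metrics is still rejected if its failure rate makes it an unreliable parent.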
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses serious cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In this study, we propose an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes both unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns and taxonomy-based supervised classification that groups 33 kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, achieving overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to improve transparency and trust and to identify the important features driving classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed on edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate and interpretable and has real-time applicability, making it a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
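The taxonomy-driven labeling and the class-imbalance handling can be sketched together (the type-to-category mapping below lists only a few invented example types, not the paper's full 33; inverse-frequency weighting is one standard remedy for imbalance, not necessarily the paper's exact choice):

```python
from collections import Counter

# Illustrative subset of a 33-type-to-7-category taxonomy mapping
TAXONOMY = {
    "syn_flood": "Flood Attacks",
    "udp_flood": "Flood Attacks",
    "mirai_greeth": "Botnet/Mirai",
    "port_scan": "Reconnaissance",
    "arp_spoof": "Spoofing/MITM",
    "sql_injection": "Injection Attacks",
    "backdoor": "Backdoors/Exploits",
    "normal": "Benign",
}

def class_weights(labels):
    """Inverse-frequency weights, n / (k * count), so rare categories
    (e.g. backdoors) contribute more per sample during training."""
    counts = Counter(TAXONOMY[l] for l in labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * v) for c, v in counts.items()}
```

Collapsing fine-grained attack types into high-level categories and then up-weighting the rare ones is what lets a classifier keep usable recall on categories like Backdoors/Exploits despite extreme imbalance.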
Predicting human motion from historical motion sequences is a fundamental problem in computer vision and lies at the core of many applications. Existing approaches primarily focus on encoding spatial dependencies among human joints while ignoring temporal cues and the complex relationships across non-consecutive frames. These limitations hinder a model's ability to generate accurate predictions over longer time horizons and in scenarios with complex motion patterns. To address these problems, we propose a novel multi-level spatial and temporal learning model, which consists of a Cross Spatial Dependencies Encoding Module (CSM) and a Dynamic Temporal Connection Encoding Module (DTM). Specifically, the CSM is designed to capture complementary local and global spatial dependency information at both the joint level and the joint-pair level. We further present the DTM to encode diverse temporal evolution contexts and compress motion features to a deep level, enabling the model to capture both short-term and long-term dependencies efficiently. Extensive experiments conducted on the Human3.6M and CMU Mocap datasets demonstrate that our model achieves state-of-the-art performance in both short-term and long-term prediction, outperforming existing methods by up to 20.3% in accuracy. Furthermore, ablation studies confirm the significant contributions of the CSM and DTM to prediction accuracy.
The global increase in life expectancy poses challenges related to the safety and well-being of the elderly population, especially in relation to falls. While falls can lead to significant cognitive impairments, timely intervention can mitigate their adverse effects. In this context, the need for non-invasive, efficient monitoring systems becomes paramount. Although wearable sensors have gained traction for monitoring health activities, they may cause discomfort during prolonged use, especially for the elderly. To address this issue, we present an intelligent, non-invasive Software-Defined Radio Frequency (SDRF) sensing system tailored for monitoring elderly people's falls during routine activities. Harnessing the power of deep learning and machine learning, our system processes the Wireless Channel State Information (WCSI) generated during regular and fall activities. By employing sophisticated signal processing techniques, the system captures unique patterns that distinguish falls from normal activities. In addition, we use statistical features to streamline data processing, thereby optimizing the computational efficiency of the system. Our experiments, conducted in a typical home environment using a treadmill, demonstrate the robustness of the system, with high classification accuracies of 92.5%, 95.1%, and 99.8% for three Artificial Intelligence (AI) algorithms. Notably, the SDRF-based approach offers flexibility, cost-effectiveness, and adaptability through software modifications, circumventing the need for a hardware overhaul. This research bridges a gap in RF-based sensing for elderly fall monitoring, providing a solution that combines the benefits of non-invasiveness with the precision of deep learning and machine learning.
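The statistical-feature step applied to a window of CSI amplitudes can be sketched as follows (this particular feature set is an assumption for illustration; the paper's exact features are not listed in the abstract):

```python
import statistics

def csi_features(window):
    """Compact statistical features of one window of CSI amplitude samples.
    A fall typically produces a larger range and a skewed amplitude burst
    compared to steady walking."""
    mean = statistics.fmean(window)
    std = statistics.pstdev(window)
    rng = max(window) - min(window)
    # third standardized moment (skewness); 0 for a symmetric window
    skew = (sum((x - mean) ** 3 for x in window) / len(window)) / (std ** 3) if std else 0.0
    return {"mean": mean, "std": std, "range": rng, "skew": skew}
```

Reducing each raw CSI window to a handful of moments is what makes the downstream classifiers cheap enough for continuous in-home monitoring.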
This paper proposes a feature selection method based on Bayes' theorem. Its purpose is to reduce computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two binary attributes is determined from the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), the two attributes are considered independent of each other; otherwise they are dependent, and one of them can be removed, thus reducing the number of attributes. The process is repeated over all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from the University of California, Irvine (UCI) machine learning repository. The proposed method shows better results in terms of the number of selected features, classification accuracy, and running time than most existing algorithms.
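A hedged sketch of the described dependence test for one pair of binary attributes follows; the exact probabilistic formulation in the paper may differ, so this is only one plausible reading of "opposing values leading to opposing decisions":

```python
from collections import Counter

def attributes_dependent(samples, i, j):
    """Flag attributes i and j as dependent if some pair of opposing joint
    values, (a, b) versus (1-a, 1-b), leads to opposing majority
    classification decisions with non-zero probability."""
    votes = {}
    for x, y in samples:                     # x: binary feature vector, y: 0/1 label
        votes.setdefault((x[i], x[j]), Counter())[y] += 1

    def decision(key):
        return votes[key].most_common(1)[0][0] if key in votes else None

    for a in (0, 1):
        for b in (0, 1):
            d1, d2 = decision((a, b)), decision((1 - a, 1 - b))
            if d1 is not None and d2 is not None and d1 != d2:
                return True                  # opposing values, opposing decisions
    return False
```

When the test fires, one attribute of the pair is a candidate for removal, which is how the method shrinks the feature set before classification.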
Spatial and temporal information on urban infrastructure is essential for various land-cover/land-use planning and management applications. Moreover, a change in infrastructure has a direct impact on other land cover and on climatic conditions. This study assessed changes in the rate and spatial distribution of the Peshawar district's infrastructure and their effects on Land Surface Temperature (LST) between the years 1996 and 2019. For this purpose, satellite images from Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat 8 OLI (Operational Land Imager) at 30 m resolution were taken. For classification and image processing, the remote sensing (RS) application ENVI (Environment for Visualizing Images) and GIS (Geographic Information System) tools were used, and pre-processing techniques were employed for better visualization and more in-depth analysis of the Landsat images. For land use and land cover (LU/LC), four types of land cover were identified for the years under research: vegetation, water cover, urbanized area, and barren land. The composition of red, green, and near-infrared bands was used for supervised classification, and the classified images were extracted for analyzing the relative infrastructure change. A comparative analysis of image classification was performed for SVM (Support Vector Machine) and ANN (Artificial Neural Network) classifiers. The results show a rise in the average temperature from 30.04°C to 45.25°C, whose most plausible cause is the increase in the built-up area from 78.73 km² to 332.78 km² between 1996 and 2019. It was also observed that the city's outskirts are hotter than the city's center due to the barren land on the borders.
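The LST values reported above are derived from the thermal band; the standard at-sensor brightness-temperature step for Landsat 8 TIRS band 10 can be sketched as follows (the calibration constants are the typical values from scene metadata, and full LST additionally requires an emissivity correction that is omitted here):

```python
import math

# Landsat 8 TIRS band 10 calibration constants (typical scene-metadata values)
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain and offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants

def brightness_temperature_c(dn):
    """Digital number -> TOA spectral radiance -> at-sensor brightness
    temperature in degrees Celsius."""
    radiance = ML * dn + AL
    return K2 / math.log(K1 / radiance + 1.0) - 273.15
```

Mapping this conversion over the thermal band, then correcting for per-pixel emissivity, is what yields LST surfaces like the 30.04°C-to-45.25°C comparison in the study.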
Business process improvement is a systematic approach used by many organizations to continuously improve their quality of service. Integral to it is analyzing the current performance of each task of the process and assigning the most appropriate resources to each task. In continuation of our previous work, we categorize resources into human and non-human resources. For instance, in the healthcare domain, human resources include doctors, nurses, and other associated staff responsible for the execution of healthcare activities, whereas non-human resources include surgical and other equipment needed for execution. In this study, we contend that the two types of resources have a different impact on process performance, so their suitability should be measured differently. However, no work has been done to evaluate the suitability of non-human resources for the tasks of a process; consequently, it is difficult to identify and subsequently overcome the inefficiencies that non-human resources cause for a task. To address this problem, we present a three-step method to compute a suitability score of non-human resources for a task. A healthcare case study is used to illustrate the applicability of the proposed method, and we performed a controlled experiment to evaluate its usability; the encouraging response shows the usefulness of the proposed method.
Satellite communication systems are facing serious electromagnetic interference,and interference signal recognition is a crucial foundation for targeted anti-interference.In this paper,we propose a novel interference ...Satellite communication systems are facing serious electromagnetic interference,and interference signal recognition is a crucial foundation for targeted anti-interference.In this paper,we propose a novel interference recognition algorithm called HDCGD-CBAM,which adopts the time-frequency images(TFIs)of signals to effectively extract the temporal and spectral characteristics.In the proposed method,we improve the Convolutional Long Short-Term Memory Deep Neural Network(CLDNN)in two ways.First,the simpler Gate Recurrent Unit(GRU)is used instead of the Long Short-Term Memory(LSTM),reducing model parameters while maintaining the recognition accuracy.Second,we replace convolutional layers with hybrid dilated convolution(HDC)to expand the receptive field of feature maps,which captures the correlation of time-frequency data on a larger spatial scale.Additionally,Convolutional Block Attention Module(CBAM)is introduced before and after the HDC layers to strengthen the extraction of critical features and improve the recognition performance.The experiment results show that the HDCGD-CBAM model significantly outper-forms existing methods in terms of recognition accuracy and complexity.When Jamming-to-Signal Ratio(JSR)varies from-30dB to 10dB,it achieves an average accuracy of 78.7%and outperforms the CLDNN by 7.29%while reducing the Floating Point Operations(FLOPs)by 79.8%to 114.75M.Moreover,the proposed model has fewer parameters with 301k compared to several state-of-the-art methods.展开更多
Abstract: This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Abstract: Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods made early contributions; however, recent advancements in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the stage for DL-based solutions. Core DL architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Vision Transformers (ViTs), and hybrid models, are discussed in detail, including their advantages and domain-specific adaptations. Advanced learning paradigms such as semi-supervised learning, self-supervised learning, and few-shot learning are explored for their potential to mitigate data annotation challenges in clinical datasets. The review further categorizes major tasks in medical image analysis, elaborating on how DL techniques have enabled precise tumor segmentation, lesion detection, modality fusion, super-resolution, and robust classification across diverse clinical settings. Emphasis is placed on applications in oncology, cardiology, neurology, and infectious diseases, including COVID-19. Challenges such as data scarcity, label imbalance, model generalizability, interpretability, and integration into clinical workflows are critically examined. Ethical considerations, explainable AI (XAI), federated learning, and regulatory compliance are discussed as essential components of real-world deployment. Benchmark datasets, evaluation metrics, and comparative performance analyses are presented to support future research. The article concludes with a forward-looking perspective on the role of foundation models, multimodal learning, edge AI, and bio-inspired computing in the future of medical imaging. Overall, this review serves as a valuable resource for researchers, clinicians, and developers aiming to harness deep learning for intelligent, efficient, and clinically viable medical image analysis.
Funding: Supported by the Fundamental Research Funds for the Central Universities (2023SCU12057) and the National Natural Science Foundation of China (82373106, 82372831, and 32270690).
Abstract: Robot-assisted surgery has evolved into a crucial treatment for prostate cancer (PCa). However, from its appearance to today, the brain-computer interface, virtual reality, and the metaverse have revolutionized the field of robot-assisted surgery for PCa, presenting both opportunities and challenges. Especially in the context of contemporary big data and precision medicine, and facing the heterogeneity of PCa and the complexity of clinical problems, the field still needs to be continuously upgraded and improved. With this in mind, this article summarizes the 5 stages of the historical development of robot-assisted surgery for PCa: emergence, promotion, development, maturity, and intelligence. Initially, safety concerns were paramount, but subsequent research and engineering advancements have focused on enhancing device efficacy and surgical technology and on achieving precise multimodal treatment. Given the dominance of the da Vinci robot-assisted surgical system, this evolution has been intimately tied to its successive versions. In the future, robot-assisted surgery for PCa will move towards intelligence, promising improved patient outcomes and personalized therapy, alongside formidable challenges. To guide future development, we propose 10 significant prospects spanning the clinical, research, engineering, materials, social, and economic domains, envisioning a future era of artificial intelligence in the surgical treatment of PCa.
Funding: The National Research Foundation (NRF) of Korea under the auspices of the Ministry of Science and ICT, Republic of Korea (Grant No. NRF-2020R1G1A1012741), received by M. R. Bhutta. https://nrf.kird.re.kr/main.do.
Abstract: Diabetic retinopathy (DR) is a retinal disease that causes irreversible blindness. DR occurs due to the high blood sugar level of the patient, and it is difficult to detect at an early stage because no early symptoms appear at the initial level. To prevent blindness, early detection and regular treatment are needed. Automated detection based on machine intelligence may assist the ophthalmologist in examining the patients' condition more accurately and efficiently. The purpose of this study is to produce an automated screening system for recognition and grading of diabetic retinopathy using machine learning through deep transfer and representational learning. The artificial intelligence technique used is transfer learning on the deep neural network Inception-v4. Two configuration variants of transfer learning are applied to Inception-v4: fine-tune mode and fixed feature extractor mode. Both configuration modes achieve decent accuracy values, but the fine-tuning method outperforms the fixed feature extractor configuration mode. The fine-tune configuration mode gains 96.6% accuracy in early detection of DR and 97.7% accuracy in grading the disease, and outperforms the state-of-the-art methods in the relevant literature.
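The difference between the two transfer-learning configurations is simply which parameters are updated during training. A minimal numpy sketch (not the paper's Inception-v4 pipeline; the tiny network, data, and hyperparameters are all invented for illustration) shows "fixed feature extractor" mode as freezing the backbone and training only the head, versus "fine-tune" mode updating both:

```python
# Sketch: fixed-feature vs fine-tune transfer learning on a toy 2-layer net.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                 # stand-in for extracted image features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy binary labels

W_backbone = rng.normal(size=(8, 4)) * 0.1   # "pretrained" feature extractor
w_head = rng.normal(size=4) * 0.1            # classification head

def train(Wb, wh, fine_tune, steps=200, lr=0.5):
    Wb, wh = Wb.copy(), wh.copy()
    for _ in range(steps):
        h = np.tanh(X @ Wb)
        p = 1 / (1 + np.exp(-(h @ wh)))      # sigmoid output
        g = p - y                            # dLoss/dlogit for cross-entropy
        wh -= lr * (h.T @ g) / len(y)        # head is always trained
        if fine_tune:                        # backbone updated only in fine-tune mode
            grad_h = np.outer(g, wh) * (1 - h ** 2)
            Wb -= lr * (X.T @ grad_h) / len(y)
    p = 1 / (1 + np.exp(-(np.tanh(X @ Wb) @ wh)))
    return ((p > 0.5) == y).mean()

acc_fixed = train(W_backbone, w_head, fine_tune=False)
acc_ft = train(W_backbone, w_head, fine_tune=True)
```

On real pretrained networks the same switch is usually implemented by marking backbone parameters as non-trainable.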
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C2011391) and by the Ajou University research fund.
Abstract: New technologies that take advantage of the emergence of the massive Internet of Things (IoT) and a hyper-connected network environment have rapidly increased in recent years. These technologies are used in diverse environments, such as smart factories, digital healthcare, and smart grids, with increased security concerns. As the need to detect and respond automatically to rapidly increasing security incidents without the intervention of security personnel has emerged, we intend to operate Security Orchestration, Automation and Response (SOAR) in various environments through new concept definitions. To facilitate the understanding of the security concerns involved in this newly emerging area, we offer the definition of the Internet of Blended Environment (IoBE), in which various convergence environments are interconnected and the data are analyzed automatically. We define a Blended Threat (BT) as a security threat that exploits security vulnerabilities through various attack surfaces in the IoBE. We propose a novel SOAR-CUBE architecture to respond to security incidents with minimal human intervention by automating the BT response process. The SOAR part of our architecture is used to link heterogeneous security technologies and the threat intelligence function that collects threat data and performs a correlation analysis of the data. SOAR is operated under Collaborative Units of Blended Environment (CUBE), which facilitates dynamic exchanges of data according to the environment applied to the IoBE by distributing and deploying security technologies for each BT type and dynamically combining them according to the cyber kill chain stage to minimize damage and respond efficiently to BTs.
Funding: Support from Abu Dhabi University's Office of Research and Sponsored Programs, Grant Number 19300810.
Abstract: Cookies are a fundamental means by which web application services authenticate various Hypertext Transfer Protocol (HTTP) requests and maintain the states of clients' information over the Internet. HTTP cookies are exploited to carry client patterns observed by a website. These client patterns facilitate the particular client's future visits to the corresponding website. However, security and privacy are the primary concerns owing to the value of information sent over public channels and the storage of client information on the browser. Several protocols have been introduced to maintain HTTP cookies, but many of them fail to achieve the required security or require large resource overheads. In this article, we introduce a lightweight Elliptic Curve Cryptography (ECC) based protocol for authenticating client and server transactions to maintain the privacy and security of HTTP cookies. Our proposed protocol uses a secret key embedded within a cookie. The proposed protocol is more efficient and lightweight than related protocols because of its reduced computation, storage, and communication costs. Moreover, the analysis presented in this paper confirms that the proposed protocol resists various known attacks.
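The authors' ECC-based protocol is not specified in the abstract and is not reproduced here. As a deliberately simpler stand-in that illustrates only the underlying idea (a server-side secret bound to the cookie so that tampered requests fail authentication), the following standard-library sketch signs and verifies a cookie value with HMAC; the key and session identifier are hypothetical:

```python
# Sketch: authenticating a cookie with a keyed MAC (HMAC stand-in, not the
# paper's ECC scheme).
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)   # hypothetical server-side secret key

def issue_cookie(session_id: str) -> str:
    tag = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"       # value + authentication tag

def verify_cookie(cookie: str) -> bool:
    try:
        session_id, tag = cookie.rsplit(".", 1)
    except ValueError:                 # malformed cookie, no tag present
        return False
    expect = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expect)   # constant-time comparison

cookie = issue_cookie("user42-session")
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when checking the tag.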
Abstract: In this paper, probabilistic models for three redundant configurations have been developed to analyze and compare some reliability characteristics. Each system is connected to a repairable supporting external device for operation. A repairable service station is provided for immediate repair of failed units. Explicit expressions for the mean time to system failure and steady-state availability of the three configurations are developed. Furthermore, we compare the three configurations based on their reliability characteristics and find that configuration II is more reliable and efficient than the remaining configurations.
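The paper's three configurations are not detailed in the abstract, but the two quantities it derives can be illustrated on a generic example: a two-unit parallel system with one repair facility, modeled as a Markov chain with states 0 (both units up), 1 (one failed), and 2 (system down). The failure and repair rates below are assumed values:

```python
# Sketch: MTTF and steady-state availability for a two-unit parallel system
# with a single repair facility (generic example, not the paper's models).
lam, mu = 0.01, 0.5   # failure rate and repair rate (assumed)

# Steady-state availability from the balance equations
#   2*lam*p0 = mu*p1,   lam*p1 = mu*p2,   p0 + p1 + p2 = 1
p0 = 1 / (1 + 2 * lam / mu + 2 * lam**2 / mu**2)
p1 = (2 * lam / mu) * p0
availability = p0 + p1            # system is up in states 0 and 1

# Mean time to system failure: expected time to absorption in state 2,
# which has the closed form (3*lam + mu) / (2*lam^2)
mttf = (3 * lam + mu) / (2 * lam**2)
```

The closed form follows from the first-step equations m0 = 1/(2λ) + m1 and m1 = 1/(λ+μ) + (μ/(λ+μ))·m0 for the expected absorption times.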
Abstract: Lysine lipoylation is a protective and conserved Post-Translational Modification (PTM) studied in proteomics research on both prokaryotes and eukaryotes. It is connected with many biological processes and closely linked with many metabolic diseases. To develop an accurate classification model for identifying lipoylation sites at the protein level, the computational methods and several other factors play a key role. Most traditional experimental techniques and models have a very high cost and are time-consuming, so it is necessary to construct a predictor model to extract lysine lipoylation sites. This study proposes a model that predicts lysine lipoylation sites with the help of a classification method known as the Artificial Neural Network (ANN). The ANN algorithm deals with the noise problem and the imbalanced classification of lipoylation site dataset samples. As the results of ten-fold cross-validation show, strong performance is achieved by the predictor model, with an accuracy of 99.88% and a highest MCC value of 0.9976. The predictor model is thus a very useful and helpful tool for lipoylation site prediction. Some of the residues around lysine lipoylation sites play a vital part in prediction, as demonstrated during feature analysis. The results reported in the evaluation of this model can provide an informative explanation of lipoylation and its molecular mechanisms.
Abstract: The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media. Therefore, misinformation is a problem for professionals, organizers, and societies. Hence, it is essential to observe the credibility and validity of the news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as news titles and descriptions, which has limited their achievements. However, there are two ordinarily agreed-upon features of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets, i.e., read-only, user retweets, likes, and shares. We calculate user credibility and combine it with article content and the user's context. After combining both features, we used three Natural Language Processing (NLP) feature extraction techniques, i.e., Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). Then, we applied different machine learning classifiers to classify misinformation as real or fake: a Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, i.e., "fakenewsnet". We refined the fakenewsnet dataset repository according to our required features. The dataset contains 23,000+ articles with millions of user engagements. The highest accuracy score is 93.4%. The proposed model achieves its highest accuracy using count vector features and a random forest classifier. Our findings confirm that the proposed classifier would effectively classify misinformation in social networks.
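The best-performing combination reported above (count-vector features with a random forest) can be sketched in a few lines of scikit-learn. The articles and labels below are toy stand-ins; the real study uses the fakenewsnet repository plus user-engagement features, which are omitted here:

```python
# Sketch: count-vector features + random forest, the paper's best combination.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

articles = [
    "official report confirms economic growth figures",
    "scientists publish peer reviewed vaccine study",
    "shocking secret cure doctors do not want you to know",
    "celebrity clone spotted you will not believe this",
]
labels = [0, 0, 1, 1]                  # 0 = real, 1 = fake (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(articles)        # bag-of-words count features
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

pred = clf.predict(vec.transform(["shocking secret celebrity study"]))
```

Swapping `CountVectorizer` for `TfidfVectorizer` or `HashingVectorizer` reproduces the paper's other two feature-extraction settings with no other code changes.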
Funding: This research was supported by the Deanship of Scientific Research, Islamic University of Madinah, Madinah (KSA), under the Tammayuz program, Grant Number 1442/505.
Abstract: This paper presents a large-gathering dataset of images extracted from publicly filmed videos by 24 cameras installed on the premises of Masjid Al-Nabvi, Madinah, Saudi Arabia. The dataset consists of raw and processed images reflecting a highly challenging and unconstrained environment. The methodology for building the dataset consists of four core phases: acquisition of videos, extraction of frames, localization of face regions, and cropping and resizing of detected face regions. The raw images in the dataset consist of a total of 4613 frames obtained from video sequences. The processed images consist of the face regions of 250 persons extracted from the raw data images to ensure the authenticity of the presented data. The dataset further contains 8 images for each of the 250 subjects (persons), for a total of 2000 images. It portrays a highly unconstrained and challenging environment, with human faces of varying sizes and pixel quality (resolution). Since the face regions in the video sequences are severely degraded due to various unavoidable factors, the dataset can be used as a benchmark to test and evaluate face detection and recognition algorithms for research purposes. We have also gathered and displayed records of the presence of subjects who appear in the presented frames, in a temporal context. This can also be used as a temporal benchmark for tracking, finding persons, activity monitoring, and crowd counting in large-crowd scenarios.
Funding: Funded by the Taif University Researchers Supporting Projects at Taif University, Kingdom of Saudi Arabia, under Grant Number TURSP-2020/231.
Abstract: Since the beginning of web applications, security has been a critical study area. A lot of research has been done to figure out how to define and identify security goals and issues. However, high-security web apps have been found to be less durable in recent years, thus reducing their business continuity. The high security features of a web application are worthless unless they provide effective services to the user and meet the standards of commercial viability. Hence, there is a need to bridge the gap between the durability and security of the web application. Indeed, security mechanisms must be used to enhance both the durability and the security of the web application. Although durability and security are not related directly, some of their factors influence each other indirectly. Characteristics play an important role in reducing the gap between durability and security. In this respect, the present study identifies key characteristics of security and durability that affect each other directly and indirectly, including confidentiality, integrity, availability, human trust, and trustworthiness. The importance of all the attributes in terms of their weight is essential for their influence on overall security during the development procedure of a web application. To estimate the efficacy of the present study, the authors employed the Hesitant Fuzzy Analytic Hierarchy Process (H-Fuzzy AHP). The outcomes of our investigation will be a useful reference for web application developers in achieving more secure and durable web applications.
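The weighting step at the heart of AHP-style methods can be illustrated in its classical, crisp form (the paper's hesitant-fuzzy extension adds fuzzified judgments on top of this). The pairwise-comparison values below are invented for illustration, not taken from the study:

```python
# Sketch: classical AHP priority weights via the row geometric-mean method.
import numpy as np

criteria = ["confidentiality", "integrity", "availability",
            "human trust", "trustworthiness"]

# A[i][j] = relative importance of criterion i over criterion j
# (reciprocal matrix: A[j][i] = 1 / A[i][j]); values are illustrative.
A = np.array([
    [1,    2,    3,    4, 4],
    [1/2,  1,    2,    3, 3],
    [1/3,  1/2,  1,    2, 2],
    [1/4,  1/3,  1/2,  1, 1],
    [1/4,  1/3,  1/2,  1, 1],
])

gm = A.prod(axis=1) ** (1 / len(A))   # geometric mean of each row
weights = gm / gm.sum()               # normalized priority weights, sum to 1
```

The resulting weight vector ranks the characteristics by their influence; the hesitant-fuzzy variant replaces each crisp judgment with a set of candidate values before aggregation.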
Funding: Supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number R.G.P.2/93/45.
Abstract: The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. The framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
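The scheduling rule described above (allocate each task to the available node with minimum latency, and fall back to the next-best node on execution failure) can be sketched as a greedy loop. The node names, latencies, and failure set are hypothetical, not the FORD simulation parameters:

```python
# Sketch: latency-aware scheduling with fault-tolerant fallback (FORD idea).
nodes = {"fog-1": 5, "fog-2": 12, "cloud-1": 40}   # node -> latency in ms
failing = {"fog-1"}                                 # nodes whose execution fails

def schedule(task: str) -> str:
    # Try nodes in order of increasing latency; skip nodes that fail,
    # falling back to the next-best candidate (the fault-tolerant step).
    for node in sorted(nodes, key=nodes.get):
        if node not in failing:
            return node
    raise RuntimeError(f"no node could run {task}")

chosen = schedule("sensor-aggregation")
```

With `fog-1` failing, the task falls back to `fog-2`, the next-lowest-latency node, rather than going straight to the distant cloud node.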
Abstract: The Internet of Things (IoT) integrates diverse devices into the Internet infrastructure, including sensors, meters, and wearable devices. Designing efficient IoT networks with these heterogeneous devices requires the selection of appropriate routing protocols, which is crucial for maintaining high Quality of Service (QoS). The Internet Engineering Task Force's Routing Over Low Power and Lossy Networks (IETF ROLL) working group developed the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) to meet these needs. While the initial RPL standard focused on single-metric route selection, ongoing research explores enhancing RPL by incorporating multiple routing metrics and developing new Objective Functions (OFs). This paper introduces a novel Objective Function, the Reliable and Secure Objective Function (RSOF), designed to enhance the reliability and trustworthiness of parent selection at both the node and link levels within IoT and RPL routing protocols. The RSOF employs an adaptive parent node selection mechanism that incorporates multiple metrics, including Residual Energy (RE), Expected Transmission Count (ETX), Extended RPL Node Trustworthiness (ERNT), and a novel metric that measures the node failure rate (NFR). In this mechanism, nodes with a high NFR are excluded from the parent selection process to improve network reliability and stability. The proposed RSOF was evaluated using random and grid topologies in the Cooja Simulator, with tests conducted across small, medium, and large-scale networks to examine the impact of varying node densities. The simulation results indicate a significant improvement in network performance, particularly in terms of average latency, packet acknowledgment ratio (PAR), packet delivery ratio (PDR), and Control Message Overhead (CMO), compared to the standard Minimum Rank with Hysteresis Objective Function (MRHOF).
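The parent-selection mechanism described above can be sketched as a two-stage filter-and-rank rule: candidates whose node failure rate exceeds a threshold are excluded, and the remainder are ranked by a score combining RE, ETX, and trust. The weights, threshold, and candidate values below are illustrative assumptions, not values from the paper:

```python
# Sketch of the RSOF parent-selection rule: exclude high-NFR nodes, then
# rank survivors by a combined RE/ETX/trust score (weights are assumed).
candidates = [
    # (node, residual energy 0-1, ETX (lower is better), trust 0-1, NFR 0-1)
    ("A", 0.9, 1.2, 0.8, 0.05),
    ("B", 0.7, 1.0, 0.9, 0.40),   # high failure rate -> excluded
    ("C", 0.6, 1.5, 0.7, 0.10),
]
NFR_MAX = 0.3                      # assumed exclusion threshold

def score(re_, etx, trust):
    # Higher is better; ETX enters inverted since lower ETX is preferred.
    return 0.4 * re_ + 0.3 * (1 / etx) + 0.3 * trust

eligible = [c for c in candidates if c[4] <= NFR_MAX]
parent = max(eligible, key=lambda c: score(c[1], c[2], c[3]))[0]
```

Node B has the best link metrics but is excluded by its failure rate, so node A is chosen, which is exactly the reliability-over-raw-quality trade the RSOF makes.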
Funding: Funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP23489127).
Abstract: The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. There is a lack of explainability, challenges with imbalanced attack classes, and limited consideration of practical edge-cloud deployment strategies in prior works. In the proposed study, we suggest an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP) based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also supervised, taxonomy-based classification to group 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even in the case of extreme imbalance between the classes. SHAP-based XAI was performed on every model to gain transparency and trust in the model and to identify the important features that drive the classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate, interpretable, and applicable in real time, and is hence a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Funding: Supported by the Urgent Need for Overseas Talent Project of Jiangxi Province (Grant No. 20223BCJ25040); the Thousand Talents Plan of Jiangxi Province (Grant No. jxsg2023101085); the National Natural Science Foundation of China (Grant No. 62106093); the Natural Science Foundation of Jiangxi (Grant Nos. 20224BAB212011, 20232BAB212008, 20242BAB25078, and 20232BAB202051); the Youth Talent Cultivation Innovation Fund Project of Nanchang University (Grant No. XX202506030015); and the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R759), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Predicting human motion based on historical motion sequences is a fundamental problem in computer vision that is at the core of many applications. Existing approaches primarily focus on encoding spatial dependencies among human joints while ignoring temporal cues and the complex relationships across non-consecutive frames. These limitations hinder the model's ability to generate accurate predictions over longer time horizons and in scenarios with complex motion patterns. To address the above problems, we propose a novel multi-level spatial and temporal learning model, which consists of a Cross Spatial Dependencies Encoding Module (CSM) and a Dynamic Temporal Connection Encoding Module (DTM). Specifically, the CSM is designed to capture complementary local and global spatially dependent information at both the joint level and the joint-pair level. We further present the DTM to encode diverse temporal evolution contexts and compress motion features to a deep level, enabling the model to capture both short-term and long-term dependencies efficiently. Extensive experiments conducted on the Human3.6M and CMU Mocap datasets demonstrate that our model achieves state-of-the-art performance in both short-term and long-term predictions, outperforming existing methods by up to 20.3% in accuracy. Furthermore, ablation studies confirm the significant contributions of the CSM and DTM in enhancing prediction accuracy.
Funding: Supported in part by the Institute of Advanced Technology, University of Science and Technology of China (USTC), under Grant PF02023001Y; the Zayed Health Center at United Arab Emirates University (UAEU) under Grant G00003476; and COMSATS University Islamabad, Attock Campus.
Abstract: The global increase in life expectancy poses challenges related to the safety and well-being of the elderly population, especially in relation to falls. While falls can lead to significant cognitive impairments, timely intervention can mitigate their adverse effects. In this context, the need for non-invasive, efficient monitoring systems becomes paramount. Although wearable sensors have gained traction for monitoring health activities, they may cause discomfort during prolonged use, especially for the elderly. To address this issue, we present an intelligent, non-invasive Software-Defined Radio Frequency (SDRF) sensing system, tailored for monitoring elderly people's falls during routine activities. Harnessing the power of deep learning and machine learning, our system processes the Wireless Channel State Information (WCSI) generated during regular and fall activities. By employing sophisticated signal processing techniques, the system captures unique patterns that distinguish falls from normal activities. In addition, we use statistical features to streamline data processing, thereby optimizing the computational efficiency of the system. Our experiments, conducted in a typical home environment while using a treadmill, demonstrate the robustness of the system. The results show high classification accuracies of 92.5%, 95.1%, and 99.8% for three Artificial Intelligence (AI) algorithms. Notably, the SDRF-based approach offers flexibility, cost-effectiveness, and adaptability through software modifications, circumventing the need for a hardware overhaul. This research attempts to bridge the gap in RF-based sensing for elderly fall monitoring, providing a solution that combines the benefits of non-invasiveness with the precision of deep learning and machine learning.
Abstract: This paper proposes a method of feature selection using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two (binary) attributes is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), then the two attributes are considered independent of each other; otherwise they are dependent, and one of them can be removed, reducing the number of attributes. The process must be repeated on all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from the University of California, Irvine (UCI) machine learning databases. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.
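One reading of the dependence criterion above can be sketched directly: tally the class labels observed for each joint value of two binary attributes, then check whether any pair of opposing joint values (v versus its bitwise complement) ever leads to opposing decisions. The function and toy samples are illustrative, not the paper's implementation:

```python
# Sketch of the stated dependence test for two binary attributes.
def dependent(samples, i, j):
    """samples: iterable of (x, y), binary attribute vector x and label y."""
    label_sets = {}                       # (x[i], x[j]) -> labels observed
    for x, y in samples:
        label_sets.setdefault((x[i], x[j]), set()).add(y)
    for (a, b), labels in label_sets.items():
        opposing = label_sets.get((1 - a, 1 - b), set())
        # Opposing joint values leading to opposing decisions => dependent.
        if any(l1 != l2 for l1 in labels for l2 in opposing):
            return True
    return False                          # independent: one attribute removable

dep = dependent([([0, 0], 0), ([1, 1], 1)], 0, 1)   # decisions flip with values
ind = dependent([([0, 0], 0), ([1, 1], 0)], 0, 1)   # decisions never oppose
```

In the full method this pairwise test runs over all attribute combinations, dropping one attribute of each dependent pair.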
Abstract: Spatial and temporal information on urban infrastructure is essential for various land-cover/land-use planning and management applications. Moreover, a change in infrastructure has a direct impact on other land cover and on climatic conditions. This study assessed changes in the rate and spatial distribution of the Peshawar district's infrastructure and its effects on Land Surface Temperature (LST) between the years 1996 and 2019. For this purpose, firstly, satellite images of bands 7 and 8 from ETM+ (Enhanced Thematic Mapper Plus) and OLI (Operational Land Imager) at 30 m resolution were taken. Secondly, for classification and image processing, the remote sensing (RS) applications ENVI (Environment for Visualising Images) and GIS (Geographic Information System) tools were used. Thirdly, for better visualization and more in-depth analysis of the Landsat images, pre-processing techniques were employed. For Land Use and Land Cover (LU/LC), four types of land cover areas were identified for the years under research: vegetation, water cover, urbanized area, and infertile land. The composition of red, green, and near-infrared bands was used for supervised classification. Classified images were extracted for analyzing the relative infrastructure change. A comparative analysis of the classification of images was performed for SVM (Support Vector Machine) and ANN (Artificial Neural Network) classifiers. Based on analysis of these images, the results show a rise in the average temperature from 30.04°C to 45.25°C. The most plausible reason is the increase in the built-up area from 78.73 to 332.78 km² from 1996 to 2019. It has also been observed that the city's outskirts are hotter than the city's center due to the barren land on the borders.
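A standard step behind LST figures like those above is converting a thermal-band pixel value to at-sensor brightness temperature via T = K2 / ln(K1/L + 1). The sketch below uses the published Landsat 8 TIRS band-10 rescaling and thermal constants; the sample digital number (DN) is invented, and the paper's full LST workflow (emissivity correction, etc.) is not reproduced:

```python
# Sketch: Landsat 8 TIRS band-10 DN -> at-sensor brightness temperature.
import math

ML, AL = 3.342e-4, 0.1           # band-10 radiance rescaling factors
K1, K2 = 774.8853, 1321.0789     # band-10 thermal conversion constants

def brightness_temp_celsius(dn: float) -> float:
    radiance = ML * dn + AL                        # TOA spectral radiance
    kelvin = K2 / math.log(K1 / radiance + 1.0)    # brightness temperature
    return kelvin - 273.15

t = brightness_temp_celsius(25000)                 # hypothetical pixel DN
```

Brightness temperature rises monotonically with radiance, so hotter surfaces such as the built-up and barren areas discussed above map to larger band-10 values.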
Abstract: Business process improvement is a systematic approach used by several organizations to continuously improve their quality of service. Integral to that is analyzing the current performance of each task of the process and assigning the most appropriate resources to each task. In continuation of our previous work, we categorize resources into human and non-human resources. For instance, in the healthcare domain, human resources include doctors, nurses, and other associated staff responsible for the execution of healthcare activities, whereas non-human resources include surgical and other equipment needed for execution. In this study, we contend that the two types of resources (human and non-human) have a different impact on process performance, so their suitability should be measured differently. However, no work has been done to evaluate the suitability of non-human resources for the tasks of a process. Consequently, it becomes difficult to identify and subsequently overcome the inefficiencies caused by non-human resources assigned to a task. To address this problem, we present a three-step method to compute a suitability score of non-human resources for a task. As an evaluation, a healthcare case study is used to illustrate the applicability of the proposed method. Furthermore, we performed a controlled experiment to evaluate the usability of the proposed method. The encouraging response shows the usefulness of the proposed method.
Funding: This work was supported by the Beijing Natural Science Foundation (L202003).
Abstract: Satellite communication systems are facing serious electromagnetic interference, and interference signal recognition is a crucial foundation for targeted anti-interference measures. In this paper, we propose a novel interference recognition algorithm called HDCGD-CBAM, which adopts the time-frequency images (TFIs) of signals to effectively extract their temporal and spectral characteristics. In the proposed method, we improve the Convolutional Long Short-Term Memory Deep Neural Network (CLDNN) in two ways. First, the simpler Gated Recurrent Unit (GRU) is used instead of the Long Short-Term Memory (LSTM), reducing model parameters while maintaining recognition accuracy. Second, we replace convolutional layers with hybrid dilated convolution (HDC) to expand the receptive field of the feature maps, which captures the correlation of time-frequency data on a larger spatial scale. Additionally, a Convolutional Block Attention Module (CBAM) is introduced before and after the HDC layers to strengthen the extraction of critical features and improve recognition performance. The experimental results show that the HDCGD-CBAM model significantly outperforms existing methods in terms of recognition accuracy and complexity. When the Jamming-to-Signal Ratio (JSR) varies from -30 dB to 10 dB, it achieves an average accuracy of 78.7% and outperforms the CLDNN by 7.29% while reducing the Floating Point Operations (FLOPs) by 79.8% to 114.75M. Moreover, the proposed model has fewer parameters (301k) than several state-of-the-art methods.
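The receptive-field benefit of hybrid dilated convolution can be shown with a short arithmetic sketch: for stride-1 convolutions, each layer adds (k − 1) × dilation to the 1-D receptive field. The dilation pattern [1, 2, 5] is a common HDC choice used here for illustration; the paper's exact rates are not given in the abstract:

```python
# Sketch: 1-D receptive-field growth for plain vs hybrid dilated convolution.
def receptive_field(kernel: int, dilations: list) -> int:
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d    # each stride-1 layer adds (k-1)*d
    return rf

rf_plain = receptive_field(3, [1, 1, 1])   # three standard 3x3 layers
rf_hdc = receptive_field(3, [1, 2, 5])     # three HDC layers, rates [1, 2, 5]
```

Three plain 3×3 layers see 7 positions, while the same depth with dilations [1, 2, 5] sees 17, which is why HDC captures time-frequency correlations over a larger spatial scale at no extra parameter cost.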