Journal Articles
27 articles found
1. Comparative study of IoT- and AI-based computing disease detection approaches
Authors: Wasiur Rhmann, Jalaluddin Khan, Ghufran Ahmad Khan, Zubair Ashraf, Babita Pandey, Mohammad Ahmar Khan, Ashraf Ali, Amaan Ishrat, Abdulrahman Abdullah Alghamdi, Bilal Ahamad, Mohammad Khaja Shaik. Data Science and Management, 2025, Issue 1, pp. 94-106 (13 pages).
The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with a large volume of data compared to shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms and their hybrid and optimized variants for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Keywords: Deep learning; Internet of Things (IoT); Cloud computing; Fog computing; Edge computing
2. Hybrid AI-IoT Framework with Digital Twin Integration for Predictive Urban Infrastructure Management in Smart Cities
Authors: Abdullah Alourani, Mehtab Alam, Ashraf Ali, Ihtiram Raza Khan, Chandra Kanta Samal. Computers, Materials & Continua, 2026, Issue 1, pp. 462-493 (32 pages).
The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced one dimension, such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization, without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI-IoT-Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data; lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models; and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across 50+ edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.
Keywords: Smart cities; digital twin; AI-IoT framework; predictive infrastructure management; edge computing; reinforcement learning; optimization methods; federated learning; urban systems modeling; smart governance
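The abstract above does not give the exact federated protocol, so as a minimal sketch only: the core aggregation step of FedAvg-style federated learning, with client contributions weighted by local data size. The function name, vector shapes, and client counts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: list of 1-D parameter vectors, one per edge device.
    client_sizes:   number of local samples each client trained on.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # contribution proportional to data size
    return coeffs @ stacked              # weighted sum over clients

# Three hypothetical edge devices with different amounts of local data
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fedavg([w1, w2, w3], client_sizes=[10, 10, 20])
print(global_w)  # -> [3.5 4.5]
```

A production system would add the secure aggregation and differential-privacy noise the paper mentions before the server ever sees these vectors.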
3. Complex Network Formation and Analysis of Online Social Media Systems
Author: Hafiz Abid Mahmood Malik. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 3, pp. 1737-1750 (14 pages).
Discovering and identifying the influential nodes in any complex network is an important issue and a significant factor in gaining control over the network. Through control of a network, information can be spread or stopped in a short span of time; both targets can be achieved, since a network of information can be extended as well as destroyed. Thus, information spread and community formation have become among the most crucial issues in Social Network Analysis (SNA). In this work, the complex network of the Twitter social network has been formalized and the results analyzed. For this purpose, different network metrics have been utilized. Visualization of the network is provided in its original form and then filtered (at different percentages) to eliminate the less impactful nodes and edges for better analysis. The network is analyzed according to different centrality measures, such as edge betweenness, betweenness centrality, closeness centrality, and eigenvector centrality. Influential nodes are detected and their impact on the network is observed. The communities are analyzed in terms of network coverage, considering the Minimum Spanning Tree, shortest-path distribution, and network diameter. These prove to be very effective ways to find influential and central nodes in large social networks such as Facebook, Instagram, Twitter, and LinkedIn.
Keywords: Complex network; data extraction; nodes and edges; network visualization; social media network; main hubs; centrality measures
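To illustrate one of the centrality measures named in this abstract, here is a minimal sketch of closeness centrality over an unweighted adjacency list using breadth-first search; the toy "retweet" graph is an invented example, not the paper's Twitter data.

```python
from collections import deque

def closeness(adj, src):
    """Closeness centrality of `src`: (n-1) / sum of shortest-path distances.

    adj: dict mapping node -> list of neighbours (unweighted, undirected).
    """
    dist = {src: 0}
    queue = deque([src])
    while queue:                       # breadth-first search from src
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(adj) - 1) / total if total else 0.0

# Tiny star-shaped "retweet" graph: hub 0 connected to leaves 1..3
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(closeness(star, 0))  # hub reaches everyone in 1 hop -> 1.0
print(closeness(star, 1))  # leaf: distances 1,2,2 -> 3/5 = 0.6
```

The hub's maximal closeness is exactly the "main hubs" intuition the keywords point at: influential nodes sit a short distance from everyone else.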
4. A Measurement Study of the Ethereum Underlying P2P Network
Authors: Mohammad Z. Masoud, Yousef Jaradat, Ahmad Manasrah, Mohammad Alia, Khaled Suwais, Sally Almanasra. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 515-532 (18 pages).
This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3; moreover, its application-layer language, Solidity, is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. In addition, various web scrapers were written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data were compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than the other services. The data were used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of nodes remain the same from day to day, indicating that most nodes in the network change frequently, which raises questions about the stability of the network. Furthermore, historical data shows that the top ten countries hosting Ethereum clients have not changed since 2016. The predominant operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results also show that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded from other regions, which opens the door for new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
Keywords: Ethereum; measurement; Ethereum client; neural network; time-series forecasting; web scraping; Wayback Machine; blockchain
5. Performance of Gradient-Based Optimizer for Optimum Wind Cube Design
Authors: Alaa A. K. Ismaeel, Essam H. Houssein, Amir Y. Hassan, Mokhtar Said. Computers, Materials & Continua (SCIE, EI), 2022, Issue 4, pp. 339-353 (15 pages).
Renewable energy is a safe and limitless energy source that can be utilized for heating, cooling, and other purposes. Wind energy is one of the most important renewable energy sources. Power fluctuation of wind turbines occurs due to variation of wind velocity. A wind cube is used to decrease power fluctuation and increase the wind turbine's power. The optimum design for a wind cube is the main contribution of this work. The decisive design parameters used to optimize the wind cube are its inner and outer radius, the roughness factor, and the height of the wind turbine hub. The Gradient-Based Optimizer (GBO) is used as a new metaheuristic algorithm for this problem. The objective function of this research includes two parts: the first is to minimize the probability of generated energy loss, and the second is to minimize the cost of the wind turbine and wind cube. The GBO is applied to optimize the variables of two wind turbine types and the design of the wind cube. The meteorological data of the Red Sea governorate of Egypt is used as a case study for this analysis. Based on the results, the optimum design of a wind cube is achieved, and the energy produced from a wind turbine with a wind cube is compared with the energy generated without one. The energy generated from a wind turbine with the optimized cube is more than 20 times that of a wind turbine without a wind cube for all cases studied.
Keywords: Wind turbine; wind cube; gradient-based optimizer; metaheuristics; energy source
6. Channel-Attention DenseNet with Dilated Convolutions for MRI Brain Tumor Classification
Authors: Abdu Salam, Mohammad Abrar, Raja Waseem Anwer, Farhan Amin, Faizan Ullah, Isabel de la Torre, Gerardo Mendez Mezquita, Henry Fabian Gongora. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 2457-2479 (23 pages).
Brain tumors pose significant diagnostic challenges due to their diverse types and complex anatomical locations. The rise of precision image-based diagnostic tools, driven by advancements in artificial intelligence (AI) and deep learning, has the potential to improve diagnostic accuracy, especially with Magnetic Resonance Imaging (MRI). However, traditional state-of-the-art models lack the sensitivity essential for reliable tumor identification and segmentation. Thus, our research aims to enhance brain tumor diagnosis in MRI by proposing an advanced model that incorporates dilated convolutions to optimize brain tumor segmentation and classification. The proposed model is trained and evaluated using the BraTS 2020 dataset. Preprocessing consists of normalization, noise reduction, and data augmentation to improve model robustness. A channel-attention mechanism and dilated convolutions were introduced to increase the model's focus on critical regions and capture finer spatial details without compromising image resolution. Efficiency was measured using various metrics, including accuracy, sensitivity, specificity, and the area under the ROC curve (AUC-ROC). The proposed model achieved a high accuracy of 94%, a sensitivity of 93%, a specificity of 92%, and an AUC-ROC of 0.98, outperforming traditional diagnostic models in brain tumor detection. The model accurately identifies tumor regions, while dilated convolutions enhance segmentation accuracy, especially for complex tumor structures. The proposed model demonstrates significant potential for clinical application, providing reliable and precise brain tumor detection in MRI.
Keywords: Artificial intelligence; MRI analysis; deep learning; dilated convolution; DenseNet; brain tumor detection; brain tumor segmentation
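The abstract's key mechanism, dilated convolution, can be sketched in one dimension with plain NumPy: the kernel taps are spread `dilation` steps apart, so the receptive field grows without adding parameters or downsampling. This is a generic illustration of the operation, not the paper's DenseNet architecture.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with gaps of `dilation - 1` samples between
    kernel taps, enlarging the receptive field at no extra parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : dilation]  # sample the input every `dilation` steps
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(8, dtype=float)              # [0, 1, 2, ..., 7]
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, dilation=1))    # 3 adjacent samples per output
print(dilated_conv1d(x, k, dilation=2))    # same 3 taps, receptive field of 5
```

In a 2-D deep learning framework the same idea is exposed as a `dilation` argument on the convolution layer.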
7. Toward Robust Deepfake Defense: A Review of Deepfake Detection and Prevention Techniques in Images
Authors: Ahmed Abdel-Wahab, Mohammad Alkhatib. Computers, Materials & Continua, 2026, Issue 2, pp. 119-152 (34 pages).
Deepfakes are a form of fake media made with advanced AI methods such as Generative Adversarial Networks (GANs). Deepfake technology has useful applications in education and entertainment, but it also raises serious ethical, social, and security issues, such as identity theft, the dissemination of false information, and privacy violations. This study seeks to provide a comprehensive analysis of several methods for identifying and circumventing Deepfakes, with a particular focus on image-based Deepfakes. Detection methods fall into three main types: classical; machine learning (ML) and deep learning (DL) based; and hybrid. Preventative methods likewise fall into three main types: technical, legal, and moral. The study investigates the effectiveness of several detection approaches, such as convolutional neural networks (CNNs), frequency-domain analysis, and hybrid CNN-LSTM models, focusing on the respective advantages and disadvantages of each method. We also examine emerging technologies such as Explainable Artificial Intelligence (XAI) and blockchain-based frameworks, and consider algorithmic protocols, watermarking, and blockchain-based content verification as possible prevention mechanisms. Recent advancements, including adversarial training and anti-Deepfake data generation, are essential because of their potential to alleviate rising concerns. This review shows that major problems remain, such as the difficulty of improving the capabilities of existing systems, high running costs, and vulnerability to adversarial attacks. It stresses the importance of collaboration across academia, industry, and government to create robust, scalable, and ethical solutions. The main goals of future work should be to create lightweight, real-time detection systems, connect them to large language models (LLMs), and put in place worldwide regulatory frameworks. This review argues for a comprehensive, multi-pronged plan, using both technical and non-technical means, to keep digital information authentic and build confidence in an era of AI-driven media.
Keywords: Deepfake detection; deepfake prevention; generative adversarial networks (GANs); digital media integrity; artificial intelligence ethics
8. Saudi License Plate Recognition Algorithm Based on Support Vector Machine (cited by 2)
Authors: Khaled Suwais, Rana Al-Otaibi, Ali Alshahrani. Journal of Electronic Science and Technology (CAS), 2013, Issue 4, pp. 424-428 (5 pages).
License plate recognition (LPR) is an image processing technology used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed; the recognition accuracy of the algorithm is about 93.3%.
Keywords: Image processing; license plate recognition systems; support vector machine
9. Smart MobiNet: A Deep Learning Approach for Accurate Skin Cancer Diagnosis (cited by 1)
Authors: Muhammad Suleman, Faizan Ullah, Ghadah Aldehim, Dilawar Shah, Mohammad Abrar, Asma Irshad, Sarra Ayouni. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 3533-3549 (17 pages).
Skin cancer, particularly melanoma, presents a substantial risk to human health, making early detection critical. This study examines the necessity of implementing efficient early detection systems through the utilization of deep learning techniques. Nevertheless, existing methods exhibit certain constraints in terms of accessibility, diagnostic precision, data availability, and scalability. To address these obstacles, we propose a lightweight model known as Smart MobiNet, which is derived from MobileNet and incorporates additional distinctive attributes. The model utilizes a multi-scale feature extraction methodology by using various convolutional layers. The ISIC 2019 dataset, sourced from the International Skin Imaging Collaboration, is employed in this study, and traditional data augmentation approaches are implemented to address model overfitting. We conduct experiments to evaluate and compare the performance of three different models, namely CNN, MobileNet, and Smart MobiNet, in the task of skin cancer detection. The findings indicate that the proposed model outperforms the other architectures, achieving an accuracy of 0.89, with balanced precision, sensitivity, and F1 scores, all measuring 0.90. This model serves as a vital instrument for assisting clinicians in efficiently and precisely detecting skin cancer.
Keywords: Deep learning; Smart MobiNet; machine learning; skin lesion; melanoma; skin cancer classification
10. Enhanced Parallelized DNA-Coded Stream Cipher Based on Multiplayer Prisoners' Dilemma
Author: Khaled M. Suwais. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2685-2704 (20 pages).
Data encryption is essential for securing data exchanged between connected parties. Encryption is the process of transforming readable text into scrambled, unreadable text using secure keys. Stream ciphers are one type of encryption algorithm that relies on a single key for both encryption and decryption. Many existing encryption algorithms are developed based on either a mathematical foundation or on other biological, social, or physical behaviours. One technique is to utilise the behavioural aspects of game theory in a stream cipher. In this paper, we introduce an enhanced Deoxyribonucleic acid (DNA)-coded stream cipher based on an iterated n-player prisoner's dilemma paradigm. Our main goal is to add more layers of randomness to the keystream generation process; these layers are inspired by the behaviour of multiple players playing a prisoner's dilemma game. We implement parallelism to compensate for the additional processing time that may result from adding these extra layers of randomness. The results show that our enhanced design passes the statistical tests and achieves an encryption throughput of about 1,877 Mbit/s, which makes it a feasible, secure stream cipher.
Keywords: Encryption; game theory; DNA cryptography; stream cipher; parallel computing
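The paper's prisoner's-dilemma keystream generator is not specified in the abstract, so the sketch below shows only the two generic building blocks of a DNA-coded stream cipher: a 2-bits-per-base DNA encoding and an XOR keystream. The base mapping and the seeded `random.Random` keystream are illustrative stand-ins and deliberately NOT cryptographically secure.

```python
import random

BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
INV = {v: k for k, v in BASE.items()}

def to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases (2 bits per base)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(dna: str) -> bytes:
    bits = "".join(INV[c] for c in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def xor_stream(data: bytes, seed: int) -> bytes:
    """XOR with a keystream; the same call decrypts (stream-cipher symmetry).
    random.Random is a toy stand-in for the paper's game-driven generator."""
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

msg = b"attack at dawn"
cipher_dna = to_dna(xor_stream(msg, seed=42))
plain = xor_stream(from_dna(cipher_dna), seed=42)
print(plain)  # -> b'attack at dawn'
```

The point of the design in the paper is to replace the toy PRNG here with layers of randomness driven by the multiplayer game's behaviour, parallelized to keep throughput high.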
11. Enhanced Heap-Based Optimizer Algorithm for Solving Team Formation Problem
Authors: Nashwa Nageh, Ahmed Elshamy, Abdel Wahab Said Hassan, Mostafa Sami, Mustafa Abdul Salam. Computers, Materials & Continua (SCIE, EI), 2022, Issue 12, pp. 5245-5268 (24 pages).
Team Formation (TF) is considered one of the most significant problems in computer science and optimization. TF is defined as forming the best team of experts in a social network to complete a task at the least cost. Many real-world problems, such as task assignment, vehicle routing, nurse scheduling, resource allocation, and airline crew scheduling, are based on the TF problem. TF has been shown to be a Nondeterministic Polynomial time (NP) problem and a high-dimensional problem with several local optima that can be solved using efficient approximation algorithms. This paper proposes two improved swarm-based algorithms for solving the team formation problem. The first, the Hybrid Heap-Based Optimizer with Simulated Annealing Algorithm (HBOSA), uses a single crossover operator to improve the performance of the standard heap-based optimizer (HBO) algorithm and employs the simulated annealing (SA) approach to improve convergence and avoid trapping in local minima. The second is the Chaotic Heap-Based Optimizer Algorithm (CHBO), which aids the discovery of new solutions by directing particles to different regions of the search space; a logistic chaotic map is used during HBO's optimization process. The performance of the two proposed algorithms, HBOSA and CHBO, is evaluated using thirteen benchmark functions and tested on the TF problem with varying numbers of experts and skills. Furthermore, the proposed algorithms were compared to well-known optimization algorithms such as the Heap-Based Optimizer (HBO), Developed Simulated Annealing (DSA), Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and the Genetic Algorithm (GA). Finally, the proposed algorithms were applied to a real-world benchmark dataset known as the Internet Movie Database (IMDB). The simulation results revealed that the proposed algorithms outperformed the compared algorithms in terms of efficiency and performance, with fast convergence to the global minimum.
Keywords: Team formation problem; optimization problem; genetic algorithm; heap-based optimizer; simulated annealing; hybridization method; chaotic local search
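The logistic chaotic map that CHBO uses to scatter particles can be shown in a few lines; the parameter r = 4 and the seed values below are standard illustrative choices, not taken from the paper.

```python
def logistic_map(x0, r=4.0, n=5):
    """Iterate x_{t+1} = r * x_t * (1 - x_t). For r = 4 the orbit is chaotic
    on (0, 1): nearby seeds diverge quickly, which is what makes the map
    useful for spreading search particles across different regions."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.200000)
b = logistic_map(0.200001)          # almost identical seed
print([round(v, 4) for v in a])
print([round(v, 4) for v in b])     # trajectories separate within a few steps
```

In a chaotic optimizer, each value in the sequence is rescaled into the decision-variable bounds to place or perturb a candidate solution.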
12. A Fully Adaptive Active Queue Management Method for Congestion Prevention at the Router Buffer
Authors: Ali Alshahrani, Ahmad Adel Abu-Shareha, Qusai Y. Shambour, Basil Al-Kasasbeh. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1679-1698 (20 pages).
Active queue management (AQM) methods manage the queued packets at the router buffer, prevent buffer congestion, and stabilize network performance. The bursty nature of the traffic passing through network routers and the slack behavior of existing AQM methods lead to unnecessary packet dropping. This paper proposes a fully adaptive active queue management (AAQM) method to maintain stable network performance, avoid congestion and packet loss, and eliminate unnecessary packet dropping. The proposed AAQM method is based on load and queue-length indicators and uses an adaptive mechanism to adjust the dropping probability based on the buffer status. It adapts to single- and multi-class traffic models. Extensive simulation results over two types of traffic showed that the proposed method achieved the best results compared to the existing methods, including Random Early Detection (RED), BLUE, Effective RED (ERED), Fuzzy RED (FRED), Fuzzy Gentle RED (FGRED), and Fuzzy BLUE (FBLUE). The proposed and compared methods achieved similar results under low or moderate traffic load. However, under high traffic load, the proposed AAQM method achieved the best loss rate of zero, similar to BLUE, compared to 0.01 for RED, 0.27 for ERED, 0.04 for FRED, 0.12 for FGRED, and 0.44 for FBLUE. For throughput, the proposed AAQM method achieved the highest rate of 0.54, surpassing the BLUE method's 0.43. For delay, the proposed AAQM method achieved the second-best result of 28.51, while BLUE achieved the best delay of 13.18; however, BLUE's results are insufficient because of its low throughput. Consequently, the proposed AAQM method outperformed the compared methods with its superior throughput and acceptable delay.
Keywords: Active queue management; dropping rate; delay; loss; performance measures
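The paper's AAQM dropping rule is not given in the abstract, but the RED baseline it is compared against has a well-known dropping-probability curve, sketched here; the threshold and max-probability values are illustrative configuration choices.

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED: no drops below min_th, certain drop at or above max_th,
    and a linearly increasing probability in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# Thresholds at 20 and 80 packets, maximum early-drop probability 0.1
for q in (10, 50, 80):
    print(q, red_drop_probability(q, 20, 80, 0.1))
# 10 -> 0.0 (below min_th), 50 -> 0.05 (halfway), 80 -> 1.0 (at max_th)
```

An adaptive method like AAQM differs by adjusting this curve on the fly from load and queue-length indicators instead of using fixed thresholds.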
13. Rough Sets Hybridization with Mayfly Optimization for Dimensionality Reduction
Authors: Ahmad Taher Azar, Mustafa Samy Elgendy, Mustafa Abdul Salam, Khaled M. Fouad. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 1087-1108 (22 pages).
Big data is a vast amount of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction is the process of converting a huge set of data into data with small dimensions so that equivalent information may be expressed easily. These tactics are frequently utilized to improve classification or regression performance in machine learning. To achieve dimensionality reduction for huge data sets, this paper offers a hybrid particle swarm optimization-rough set (PSO-RS) method and a Mayfly algorithm-rough set (MA-RS) method. In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough sets (RS) is proposed. The performance of the novel hybrid MA-RS algorithm is evaluated by solving six different data sets from the literature. The simulation results and comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach, as well as the hybrid optimization techniques PSO-RS and MA-RS, were applied to the massive data problem. The hybrid MA-RS method beats other classic dimensionality reduction techniques, according to the experimental results and statistical testing studies.
Keywords: Dimensionality reduction; metaheuristics; optimization algorithm; Mayfly algorithm; particle swarm optimizer; feature selection
14. VeriFace: Defending against Adversarial Attacks in Face Verification Systems
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3151-3166 (16 pages).
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: Adversarial attacks; face verification; adversarial detection; perturbation removal
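The abstract does not name the attacks tested, so purely as an illustration of the threat model: a Fast Gradient Sign Method (FGSM) style perturbation against a toy linear "verifier". The weight vector, input, and epsilon below are invented for the sketch; a real face verifier is a deep network, not a dot product.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: move each pixel by +/- eps following the gradient sign,
    then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear "verifier": accept a face pair when w . x > 0
w = np.array([0.5, -0.25, 1.0])
x = np.array([0.4, 0.9, 0.1])
assert w @ x > 0                     # the clean input is accepted

# The gradient of the score w.r.t. x is w, so stepping along -sign(w)
# lowers the score; a small, bounded change flips the decision.
x_adv = fgsm_perturb(x, -w, eps=0.1)
print(w @ x_adv)                     # score turns negative -> rejected
```

Defenses like the paper's detection/removal pair aim to catch or undo exactly this kind of small, sign-structured perturbation.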
15. Adaptive Segmentation for Unconstrained Iris Recognition
Authors: Mustafa AlRifaee, Sally Almanasra, Adnan Hnaif, Ahmad Althunibat, Mohammad Abdallah, Thamer Alrawashdeh. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1591-1609 (19 pages).
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green, and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer-ring region for local minima to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32% on UBIRIS.v1 and an error rate of 1.59% on UBIRIS.v2.
Keywords: Image recognition; color segmentation; image processing; localization
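The paper's exact local-maxima scan is not specified in the abstract, so as a minimal sketch of the HSV step only: convert RGB pixels with the standard-library `colorsys` module and flag low-saturation, high-value pixels as sclera candidates. The threshold values are illustrative assumptions.

```python
import colorsys

def sclera_candidates(pixels, s_max=0.2, v_min=0.7):
    """Flag pixels that look like sclera: low saturation, high value.
    pixels: list of (r, g, b) tuples with components in [0, 1]."""
    flags = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # h unused; kept for clarity
        flags.append(s <= s_max and v >= v_min)
    return flags

samples = [
    (0.95, 0.95, 0.92),   # near-white: sclera-like
    (0.35, 0.20, 0.10),   # dark brown iris
    (0.05, 0.05, 0.05),   # pupil
]
print(sclera_candidates(samples))  # -> [True, False, False]
```

The brightness/saturation split is why HSV is the natural space for this step: the sclera is bright but nearly colorless, which a plain RGB threshold captures less cleanly.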
16. Code Smell Detection Using Whale Optimization Algorithm
Authors: Moatasem M. Draz, Marwa S. Farhan, Sarah N. Abdulkader, M. G. Gafar. Computers, Materials & Continua (SCIE, EI), 2021, Issue 8, pp. 1919-1935 (17 pages).
Software systems have been employed in many fields as a means to reduce human effort; consequently, stakeholders are interested in continual updates to their capabilities. Code smells are one of the obstacles in the software industry: characteristics of software source code that indicate a deeper problem in design. These smells appear not only in the design but also in the software implementation. Code smells introduce bugs, affect software maintainability, and lead to higher maintenance costs. Uncovering code smells can be formulated as an optimization problem of finding the best detection rules. Although researchers have recommended different techniques to improve the accuracy of code smell detection, these methods are still unstable and need to be improved. Previous research has sought to discover only a few smell types at a time (three or five) and did not set rules for detecting them. Our research improves code smell detection by applying a search-based technique: we use the Whale Optimization Algorithm as a classifier to find ideal detection rules, with the Fisher criterion as a fitness function to maximize the between-class distance over the within-class variance. The proposed framework adopts if-then detection rules during the software development life cycle, identifying smell types for both medium and large projects. Experiments are conducted on five open-source software projects to discover the nine smell types that most often appear in code. The proposed detection framework achieves an average of 94.24% precision and 93.4% recall, better than other search-based algorithms in the field. The framework improves code smell detection, which increases software quality while minimizing maintenance effort, time, and cost. Additionally, the resulting classification rules are analyzed to find the software metrics that differentiate the nine code smells.
Keywords: software engineering intelligence; search-based software engineering; code smell detection; software metrics; whale optimization algorithm; Fisher criterion
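The fitness function described in the abstract, the Fisher criterion, can be sketched in a few lines. The metric name, values, and rule below are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

def fisher_criterion(class_a, class_b):
    """Fisher criterion: squared distance between the class means
    divided by the sum of within-class variances. A higher score means
    a candidate rule separates smelly from clean code more cleanly."""
    mean_gap = (np.mean(class_a) - np.mean(class_b)) ** 2
    spread = np.var(class_a) + np.var(class_b)
    return mean_gap / (spread + 1e-12)  # guard against zero variance

# Hypothetical lines-of-code values for classes flagged smelly vs. clean
# by a candidate if-then rule; a search algorithm such as WOA would tune
# the rule's thresholds to maximise this score.
loc_smelly = np.array([850.0, 920.0, 1100.0, 780.0])
loc_clean = np.array([120.0, 95.0, 200.0, 150.0])
score = fisher_criterion(loc_smelly, loc_clean)
```

In a search-based setup, this score would serve as the fitness value the optimizer maximises while exploring the space of detection-rule thresholds.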
MCS HOG Features and SVM Based Handwritten Digit Recognition System
17
Authors: Hamayun A. Khan. Journal of Intelligent Learning Systems and Applications, 2017, Issue 2, pp. 21-33 (13 pages)
Digit recognition is an essential element of scanning and converting documents into electronic format. In this work, a new Multiple-Cell Size (MCS) approach is proposed for using Histogram of Oriented Gradients (HOG) features with a Support Vector Machine (SVM) classifier for efficient classification of handwritten digits. HOG-based techniques are sensitive to the cell size used in the feature extraction computations, so the MCS approach performs the HOG analysis at several cell sizes and combines the resulting features. The system has been tested on the benchmark MNIST database of handwritten digits, achieving a classification accuracy of 99.36% under an independent test set strategy. A cross-validation analysis using the 10-fold strategy yields a 10-fold accuracy of 99.26%. The classification performance of the proposed system is superior to existing techniques that use complex procedures, since it achieves comparable or better results with simple operations in both the feature space and the classifier space. The system's confusion matrix and receiver operating characteristic (ROC) plots show evidence of the superior performance of the proposed MCS HOG and SVM based digit classification system.
Keywords: handwritten digit recognition; MNIST benchmark database; HOG analysis; multiple-cell size HOG analysis; SVM classifier; 10-fold cross-validation; confusion matrix; receiver operating characteristics
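The multiple-cell-size idea can be sketched as concatenating per-cell orientation histograms computed at several cell sizes. This is a simplified HOG (no block normalisation), and the cell sizes 4, 7, and 14 are merely illustrative divisors of the 28x28 MNIST image size, not the paper's actual configuration:

```python
import numpy as np

def hog_cells(img, cell, bins=9):
    """Gradient-orientation histogram per cell (simplified HOG, no block
    normalisation) for a grayscale image whose sides divide by `cell`."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned gradients
    h, w = img.shape
    feats = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

def mcs_hog(img, cell_sizes=(4, 7, 14)):
    """Multiple-Cell-Size descriptor: concatenate the HOG features
    computed at each cell size into one feature vector."""
    return np.concatenate([hog_cells(img, c) for c in cell_sizes])

digit = np.random.default_rng(1).random((28, 28))  # stand-in for an MNIST digit
vec = mcs_hog(digit)
```

The resulting vector (49 + 16 + 4 cells, 9 bins each, i.e. 621 dimensions here) would then be fed to an SVM classifier.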
Epithelial Layer Estimation Using Curvatures and Textural Features for Dysplastic Tissue Detection
18
Authors: Afzan Adam, Abdul Hadi Abd Rahman, Nor Samsiah Sani, Zaid Abdi Alkareem Alyessari, Nur Jumaadzan Zaleha Mamat, Basela Hasan. Computers, Materials & Continua (SCIE, EI), 2021, Issue 4, pp. 761-777 (17 pages)
The boundary effect in digital pathology is a phenomenon in which the tissue shapes of biopsy samples become distorted during the sampling process, greatly affecting the morphological pattern of the epithelial layer. Theoretically, a shape deformation model can normalise the distortions, but it requires a 2D image, and curvature theory has not yet been tested on digital pathology images. This work therefore proposes curvature detection to reduce the boundary effect and estimate the epithelial layer. The boundary effect on the tissue surface is normalised using the frequency with which a curve deviates from a straight line. The depth of the epithelial layer is estimated from the tissue edges and the connected nucleoli only. The textural and spatial features along the estimated layer are then used for dysplastic tissue detection. The proposed method detects dysplastic tissue better than using the whole tissue region, with the result showing a leap in kappa points from fair to substantial agreement with the expert's ground-truth classification. The improved results demonstrate that curvatures are effective in reducing boundary effects on the epithelial layer, so quantifying and classifying morphological patterns for dysplasia can be automated, and the textural and spatial features on the detected epithelial layer can capture the changes in tissue.
Keywords: digital pathology; grading dysplasia; tissue boundary effect
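Measuring how far a sampled boundary deviates from a straight line comes down to estimating curvature along the contour. A minimal finite-difference sketch on illustrative synthetic curves (a unit circle and a straight line, not real tissue contours):

```python
import numpy as np

def curvature(points):
    """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^{3/2} along
    a sampled 2D curve, via finite differences."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / ((dx ** 2 + dy ** 2) ** 1.5 + 1e-12)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)  # unit circle: |k| near 1
line = np.stack([t, 2.0 * t], axis=1)              # straight line: k near 0
straightness = np.mean(np.abs(curvature(line)))    # deviation-from-line score
```

Thresholding such a per-point curvature (or counting how often it crosses a threshold) gives one plausible way to quantify "deviation from a straight line" along a tissue boundary.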
DM-L Based Feature Extraction and Classifier Ensemble for Object Recognition
19
Authors: Hamayun A. Khan. Journal of Signal and Information Processing, 2018, Issue 2, pp. 92-110 (19 pages)
Deep learning is a powerful technique that is widely applied to image recognition and natural language processing, among many other tasks. In this work, we propose an efficient technique for using pre-trained Convolutional Neural Network (CNN) architectures to extract powerful image features for object recognition. We build on the existing concept of transferring the learning of pre-trained CNNs to new databases through activations, extending it to consider multiple deep layers. We exploit the progressive learning that happens at the intermediate layers of a CNN to construct Deep Multi-Layer (DM-L) feature extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN models, VGG_16 and VGG_19, are used to extract feature sets from three deep fully connected layers, namely "fc6", "fc7", and "fc8". Using Principal Component Analysis (PCA), the dimensionality of the DM-L feature vectors is reduced to form compact feature vectors that are fed to an external classifier ensemble instead of the Softmax classification layers of the two original pre-trained models. The proposed DM-L technique is applied to the benchmark Caltech-101 object recognition database. Conventional wisdom might suggest that features from the deepest layer, "fc8", would outperform those from "fc6", but our results prove otherwise for the two models considered: the "fc6" based feature vectors achieve the best recognition performance. State-of-the-art recognition performances of 91.17% and 91.35% are achieved with the "fc6" based feature vectors for VGG_16 and VGG_19 respectively, using 30 sample images per class; the proposed system is capable of improved performance when all sample images per class are considered. Our research shows that CNN-based feature extraction should consider multiple layers and then select the layer that maximizes recognition performance.
Keywords: deep learning; object recognition; CNN; deep multi-layer feature extraction; principal component analysis; classifier ensemble; Caltech-101 benchmark database
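The PCA reduction step applied to the deep-layer activations can be sketched with an SVD-based projection. The activation matrix below is a random stand-in for "fc6" features (4096-D in VGG_16); a real pipeline would pull the activations from the pre-trained network and feed the reduced vectors to an external classifier ensemble:

```python
import numpy as np

def pca_reduce(features, k):
    """Project row-wise feature vectors onto their top-k principal
    components, via SVD of the mean-centred data matrix."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T

# Stand-in for "fc6" activations of 30 training images (30 per class is
# the sampling used in the paper's reported experiments).
rng = np.random.default_rng(0)
fc6 = rng.standard_normal((30, 4096))
reduced = pca_reduce(fc6, 25)
```

Note that with fewer samples than dimensions, the centred matrix has rank at most n - 1, so k must stay below the number of training images.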
Palm Vein Authentication Based on the Coset Decomposition Method
20
Authors: Mohamed Sayed. Journal of Information Security, 2015, Issue 3, pp. 197-205 (9 pages)
Palm vein authentication technology is extremely safe, accurate, and reliable, as it uses the vascular patterns inside the body to confirm personal identity. The pattern of veins in the palm is complex and unique to each individual, and contactless capture gives it a hygiene advantage over other biometric technologies. This paper presents an algebraic method for personal authentication and identification using contactless images of the internal palm veins. We use the MATLAB image processing toolbox to enhance the palm vein images and employ the coset decomposition concept to store and identify the encoded palm vein feature vectors. Experimental evidence shows the validity and effectiveness of the proposed approach.
Keywords: biometrics; coset decomposition method; hand veins; personal authentication
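The abstract does not specify which code underlies the coset decomposition, so as an illustration only, the sketch below uses the classic coset-leader (syndrome) decoding of the [7,4] Hamming code: a noisy feature vector is mapped to its coset via the syndrome, and subtracting the coset leader recovers the enrolled codeword despite a one-bit discrepancy:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i (1-based) is
# the binary representation of i, so a nonzero syndrome directly names
# the flipped bit position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def decode(v):
    """Find the coset of v via its syndrome and subtract the coset
    leader (a single-bit error) to recover the nearest codeword."""
    s = H @ v % 2
    pos = int(s[0] * 4 + s[1] * 2 + s[2])  # syndrome read as 1-based bit index
    out = v.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

template = np.array([1, 0, 1, 1, 0, 1, 0])  # enrolled (encoded) feature vector
reading = template.copy()
reading[2] ^= 1                             # one noisy bit in a fresh capture
matched = decode(reading)
```

Matching a fresh reading then reduces to decoding it and comparing the result against the stored codewords; real palm vein features would of course be much longer vectors under a larger code.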