The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with a large volume of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the different factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced one dimension, such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization, without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI-IoT-Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data, lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across more than 50 edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.
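The federated training described above can be illustrated with the standard federated-averaging aggregation step, a minimal sketch and not the paper's implementation (which additionally applies secure aggregation and differential privacy). Function names and the toy weight vectors are assumptions for clarity; the key property shown is that the server sees only model weights, never raw sensor data.

```python
# Illustrative sketch of federated averaging: each edge device trains
# locally and uploads (weights, sample_count); the server combines the
# updates weighted by local data volume, which also handles the uneven
# node participation mentioned above.

def federated_average(client_updates):
    """client_updates: list of (weights, n_samples) tuples.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_weights = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

# Three edge nodes with unequal data volumes (uneven participation):
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 100), ([2.0, 0.0], 200)]
print(federated_average(updates))  # [2.0, 1.5]
```

In the real framework this aggregation would run once per communication round, with the compressed CNN/LSTM weights as the payload.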
Discovering and identifying the influential nodes in any complex network is an important issue. It is a significant factor in gaining control over a network: through control of a network, information can be spread or stopped in a short span of time. Both targets can be achieved, since a network of information can be extended as well as destroyed. Thus, information spread and community formation have become two of the most crucial issues in the world of Social Network Analysis (SNA). In this work, the complex network of the Twitter social network has been formalized and the results are analyzed. For this purpose, different network metrics have been utilized. Visualization of the network is provided in its original form, and the network is then filtered (at different percentages) to eliminate the less impactful nodes and edges for better analysis. This network is analyzed according to different centrality measures, such as edge betweenness, betweenness centrality, closeness centrality, and eigenvector centrality. Influential nodes are detected and their impact on the network is observed. The communities are analyzed in terms of network coverage, considering the Minimum Spanning Tree, the shortest-path distribution, and the network diameter. It is found that these are very effective ways to find influential and central nodes in big social networks such as Facebook, Instagram, Twitter, and LinkedIn.
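One of the centrality measures named above, closeness centrality, can be sketched in a few lines with breadth-first search on an unweighted graph. The toy star graph is an assumption for illustration; the study applies such measures to a large Twitter network.

```python
# Closeness centrality: (n-1) divided by the sum of shortest-path
# distances from a node to every other node. A node that can reach the
# rest of the network in few hops scores close to 1.0.
from collections import deque

def closeness_centrality(graph, node):
    """graph: dict mapping node -> list of neighbours (undirected)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # breadth-first search
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(graph) - 1) / total if total else 0.0

# A small star graph: the hub is maximally close to every other node.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(closeness_centrality(star, "hub"))  # 1.0 (distance 1 to all leaves)
print(closeness_centrality(star, "a"))    # 0.6 (distances 1, 2, 2)
```

The same BFS skeleton underlies betweenness centrality as well, which instead counts how often a node lies on shortest paths between other pairs.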
This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3. Moreover, its application-layer language, Solidity, is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. Moreover, various web scrapers have been written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data has been compared with two other services that harvest the number of Ethereum nodes. Our method collected more than 30% more nodes than the other services. The data was used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of the nodes are the same from day to day, indicating that most nodes in the network change frequently. This raises a question about the stability of the network. Furthermore, historical data shows that the top ten countries hosting Ethereum clients have not changed since 2016. The most popular operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results have also shown that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded from other regions. This opens the door for developing new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
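The paper trains a neural network for the node-count forecast; as a far simpler stand-in that conveys the idea of extrapolating a daily time series, here is an ordinary least-squares trend fit. The node counts are hypothetical, and this is not the paper's model.

```python
# Least-squares line fit over t = 0..n-1, extrapolated `steps_ahead`
# days past the last observation. A crude baseline for the kind of
# "how many nodes will be online tomorrow?" question studied above.

def linear_forecast(series, steps_ahead=1):
    n = len(series)
    mean_t = (n - 1) / 2
    mean_y = sum(series) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(series))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var
    intercept = mean_y - slope * mean_t
    return intercept + slope * (n - 1 + steps_ahead)

daily_nodes = [5000, 5100, 5200, 5300]  # hypothetical daily node counts
print(linear_forecast(daily_nodes))  # 5400.0
```

A neural model earns its keep over this baseline precisely when churn makes the series non-linear, which the reported <20% daily node overlap suggests.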
Renewable energy is a safe and limitless energy source that can be utilized for heating, cooling, and other purposes. Wind energy is one of the most important renewable energy sources. Power fluctuation of wind turbines occurs due to variation of wind velocity. A wind cube is used to decrease power fluctuation and increase the wind turbine's power. The optimum design of a wind cube is the main contribution of this work. The decisive design parameters used to optimize the wind cube are its inner and outer radius, the roughness factor, and the height of the wind turbine hub. The Gradient-Based Optimizer (GBO), a recent metaheuristic algorithm, is used for this problem. The objective function of this research includes two parts: the first is to minimize the probability of generated energy loss, and the second is to minimize the cost of the wind turbine and wind cube. The GBO is applied to optimize the variables of two wind turbine types and the design of the wind cube. The meteorological data of the Red Sea governorate of Egypt is used as a case study for this analysis. Based on the results, the optimum design of a wind cube is achieved, and the improvement in energy produced by a wind turbine with a wind cube is compared with the energy generated without one. The energy generated from a wind turbine with the optimized cube is more than 20 times that of a wind turbine without a wind cube for all cases studied.
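The two-part objective described above can be made concrete with a toy sketch. Both component models (how loss probability and cost depend on the radii) are invented for illustration, and an exhaustive grid search stands in for the GBO; only the shape of the problem, a weighted sum of energy-loss probability and cost minimized over design variables, reflects the abstract.

```python
# Toy two-part objective over (inner_radius, outer_radius): a larger
# cube reduces the loss probability but raises cost. Weights w1/w2
# trade the two parts off, as in the abstract's objective function.

def objective(design, w1=0.5, w2=0.5):
    inner_r, outer_r = design
    loss_prob = 1.0 / (1.0 + (outer_r - inner_r))  # toy loss model
    cost = 0.1 * (inner_r + outer_r)               # toy cost model
    return w1 * loss_prob + w2 * cost

def grid_search(radii):
    """Exhaustive search standing in for the GBO metaheuristic."""
    return min((objective((ri, ro)), (ri, ro))
               for ri in radii for ro in radii if ro > ri)

best_val, best_design = grid_search([1.0, 2.0, 3.0, 4.0])
print(best_design)  # (1.0, 3.0) under these toy models
```

The real GBO explores the continuous design space with gradient-inspired update rules rather than enumerating a grid.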
Brain tumors pose significant diagnostic challenges due to their diverse types and complex anatomical locations. With the rise of precision image-based diagnostic tools, driven by advancements in artificial intelligence (AI) and deep learning, there is potential to improve diagnostic accuracy, especially with Magnetic Resonance Imaging (MRI). However, traditional state-of-the-art models lack the sensitivity essential for reliable tumor identification and segmentation. Thus, our research aims to enhance brain tumor diagnosis in MRI by proposing an advanced model. The proposed model incorporates dilated convolutions to optimize brain tumor segmentation and classification. It is first trained and later evaluated using the BraTS 2020 dataset. In the proposed model, preprocessing consists of normalization, noise reduction, and data augmentation to improve model robustness. An attention mechanism and dilated convolutions were introduced to increase the model's focus on critical regions and capture finer spatial details without compromising image resolution. We performed experiments to measure efficiency using various metrics, including accuracy, sensitivity, and the Area Under the Receiver Operating Characteristic curve (AUC-ROC). The proposed model achieved a high accuracy of 94%, a sensitivity of 93%, a specificity of 92%, and an AUC-ROC of 0.98, outperforming traditional diagnostic models in brain tumor detection. The proposed model accurately identifies tumor regions, while dilated convolutions enhance the segmentation accuracy, especially for complex tumor structures. The proposed model demonstrates significant potential for clinical application, providing reliable and precise brain tumor detection in MRI.
Deepfake is a sort of fake media made by advanced AI methods such as Generative Adversarial Networks (GANs). Deepfake technology has many useful applications in education and entertainment, but it also raises serious ethical, social, and security issues, such as identity theft, the dissemination of false information, and privacy violations. This study seeks to provide a comprehensive analysis of methods for identifying and circumventing Deepfakes, with a particular focus on image-based Deepfakes. Detection methods fall into three main types: classical, machine learning (ML)- and deep learning (DL)-based, and hybrid. Preventative methods likewise fall into three main types: technical, legal, and moral. The study investigates the effectiveness of several detection approaches, such as convolutional neural networks (CNNs), frequency-domain analysis, and hybrid CNN-LSTM models, focusing on the respective advantages and disadvantages of each method. We also look at emerging technologies such as Explainable Artificial Intelligence (XAI) and blockchain-based frameworks, and examine the use of algorithmic protocols, watermarking, and blockchain-based content verification as possible prevention mechanisms. Recent advancements, including adversarial training and anti-Deepfake data generation, are essential because of their potential to alleviate rising concerns. This review shows that major problems remain, such as the difficulty of improving the capabilities of existing systems, high running expenses, and the risk of adversarial attacks. It stresses the importance of collaboration across fields, including academia, industry, and government, to create robust, scalable, and ethical solutions. The main goals of future work should be to create lightweight, real-time detection systems, connect them to large language models (LLMs), and put in place worldwide regulatory frameworks. This review argues for a comprehensive and multifaceted plan, using both technical and non-technical means, to keep digital information authentic and to build confidence in a time when media is driven by artificial intelligence.
License plate recognition (LPR) is an image processing technology that is used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed. The recognition accuracy of the algorithm is about 93.3%.
Skin cancer, particularly melanoma, presents a substantial risk to human health, making early detection essential. This study examines the necessity of implementing efficient early detection systems through the utilization of deep learning techniques. Nevertheless, the existing methods exhibit certain constraints in terms of accessibility, diagnostic precision, data availability, and scalability. To address these obstacles, we propose a lightweight model known as Smart MobiNet, which is derived from MobileNet and incorporates additional distinctive attributes. The model utilizes a multi-scale feature extraction methodology by using various convolutional layers. The ISIC 2019 dataset, sourced from the International Skin Imaging Collaboration, is employed in this study. Traditional data augmentation approaches are implemented to address the issue of model overfitting. In this study, we conduct experiments to evaluate and compare the performance of three different models, namely CNN, MobileNet, and Smart MobiNet, in the task of skin cancer detection. The findings indicate that the proposed model outperforms the other architectures, achieving an accuracy of 0.89. Furthermore, the model exhibits balanced precision, sensitivity, and F1 scores, all measuring 0.90. This model serves as a vital instrument that assists clinicians in efficiently and precisely detecting skin cancer.
Data encryption is essential in securing exchanged data between connected parties. Encryption is the process of transforming readable text into scrambled, unreadable text using secure keys. Stream ciphers are one type of encryption algorithm that relies on a single key for both encryption and decryption. Many existing encryption algorithms are developed based on either a mathematical foundation or on other biological, social, or physical behaviours. One technique is to utilise the behavioural aspects of game theory in a stream cipher. In this paper, we introduce an enhanced Deoxyribonucleic acid (DNA)-coded stream cipher based on an iterated n-player prisoner's dilemma paradigm. Our main goal is to add more layers of randomness to the behaviour of the keystream generation process; these layers are inspired by the behaviour of multiple players playing a prisoner's dilemma game. We implement parallelism to compensate for the additional processing time that may result from adding these extra layers of randomness. The results show that our enhanced design passes the statistical tests and achieves an encryption throughput of about 1,877 Mbit/s, which makes it a feasible secure stream cipher.
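The idea of a game-behaviour layer feeding a keystream can be sketched as follows. Everything here is an assumption for illustration: the reaction strategy, the byte-wide state, and the mixing step are toys, and this is emphatically not a secure cipher nor the paper's DNA-coded design; it only shows how iterated player moves can inject extra state into keystream generation.

```python
# Toy keystream generator with a prisoner's-dilemma behaviour layer:
# each round, every player cooperates (0) or defects (1) by reacting to
# the previous round's majority, perturbed by one bit of the evolving
# cipher state; the round's moves are folded back into the state.

def keystream(n_players, rounds, seed=1):
    state = seed & 0xFF
    moves = [0] * n_players              # everyone starts by cooperating
    out = []
    for _ in range(rounds):
        majority_defected = sum(moves) * 2 > n_players
        moves = [((state >> p) & 1) ^ (1 if majority_defected else 0)
                 for p in range(n_players)]
        bits = sum(m << i for i, m in enumerate(moves))
        state = (state * 31 + bits + 7) & 0xFF   # toy mixing, NOT secure
        out.append(state)
    return out

print(keystream(n_players=5, rounds=4))  # deterministic for a given seed
```

In the paper's design, each such behavioural layer is one of several randomness sources, the output is DNA-coded, and the rounds are parallelised to recover throughput.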
Team Formation (TF) is considered one of the most significant problems in computer science and optimization. TF is defined as forming the best team of experts in a social network to complete a task with the least cost. Many real-world problems, such as task assignment, vehicle routing, nurse scheduling, resource allocation, and airline crew scheduling, are based on the TF problem. TF has been shown to be a Nondeterministic Polynomial time (NP) problem and a high-dimensional problem with several local optima that can be solved using efficient approximation algorithms. This paper proposes two improved swarm-based algorithms for solving the team formation problem. The first algorithm, entitled Hybrid Heap-Based Optimizer with Simulated Annealing Algorithm (HBOSA), uses a single crossover operator to improve the performance of the standard heap-based optimizer (HBO) algorithm. It also employs the simulated annealing (SA) approach to improve model convergence and avoid local minima trapping. The second algorithm is the Chaotic Heap-Based Optimizer Algorithm (CHBO). CHBO aids in the discovery of new solutions in the search space by directing particles to different regions of the search space; a logistic chaotic map is used during HBO's optimization process. The performance of the two proposed algorithms, HBOSA and CHBO, is evaluated using thirteen benchmark functions and tested on the TF problem with varying numbers of experts and skills. Furthermore, the proposed algorithms were compared to well-known optimization algorithms such as the Heap-Based Optimizer (HBO), Developed Simulated Annealing (DSA), Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and the Genetic Algorithm (GA). Finally, the proposed algorithms were applied to a real-world benchmark dataset known as the Internet Movie Database (IMDB). The simulation results revealed that the proposed algorithms outperformed the compared algorithms in terms of efficiency and performance, with fast convergence to the global minimum.
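To make the TF problem itself concrete, here is a minimal greedy baseline: cover a required skill set while keeping cost low. The experts, skills, and costs are hypothetical, and a greedy cover is only a baseline; the paper's HBOSA and CHBO metaheuristics search the same solution space far more thoroughly.

```python
# Greedy Team Formation baseline: repeatedly pick the expert with the
# best ratio of newly-covered skills to cost until all required skills
# are covered. Illustrates the problem, not the paper's algorithms.

def greedy_team(experts, required):
    """experts: dict name -> (skill set, cost). Returns (team, total cost)."""
    uncovered, team, total = set(required), [], 0.0
    while uncovered:
        name, (skills, cost) = max(
            ((n, e) for n, e in experts.items() if n not in team),
            key=lambda item: len(item[1][0] & uncovered) / item[1][1])
        if not skills & uncovered:
            raise ValueError("required skills cannot be covered")
        team.append(name)
        total += cost
        uncovered -= skills
    return team, total

experts = {
    "ann": ({"python", "ml"}, 2.0),   # hypothetical experts and costs
    "bob": ({"ml"}, 1.0),
    "cam": ({"ux"}, 1.0),
}
print(greedy_team(experts, {"python", "ml", "ux"}))  # (['ann', 'cam'], 3.0)
```

Because the underlying set-cover structure is NP-hard, greedy choices can be arbitrarily suboptimal, which is exactly the gap the swarm-based algorithms target.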
Active queue management (AQM) methods manage the queued packets at the router buffer, prevent buffer congestion, and stabilize the network performance. The bursty nature of the traffic passing through network routers and the slack behavior of the existing AQM methods lead to unnecessary packet dropping. This paper proposes a fully adaptive active queue management (AAQM) method to maintain stable network performance, avoid congestion and packet loss, and eliminate unnecessary packet dropping. The proposed AAQM method is based on load and queue-length indicators and uses an adaptive mechanism to adjust the dropping probability based on the buffer status. The proposed AAQM method adapts to single- and multi-class traffic models. Extensive simulation results over two types of traffic showed that the proposed method achieved the best results compared to the existing methods, including Random Early Detection (RED), BLUE, Effective RED (ERED), Fuzzy RED (FRED), Fuzzy Gentle RED (FGRED), and Fuzzy BLUE (FBLUE). The proposed and compared methods achieved similar results under low or moderate traffic load. However, under high traffic load, the proposed AAQM method achieved the best loss rate of zero, similar to BLUE, compared to 0.01 for RED, 0.27 for ERED, 0.04 for FRED, 0.12 for FGRED, and 0.44 for FBLUE. For throughput, the proposed AAQM method achieved the highest rate of 0.54, surpassing the BLUE method's throughput of 0.43. For delay, the proposed AAQM method achieved the second-best delay of 28.51, while the BLUE method achieved the best delay of 13.18; however, the BLUE results are insufficient because of the low throughput. Consequently, the proposed AAQM method outperformed the compared methods with its superior throughput and acceptable delay.
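The core mechanism, adjusting a drop probability from load and queue-length indicators, can be sketched as follows. The thresholds, step size, and update rule here are assumptions in the spirit of BLUE-style probability adaptation, not the AAQM paper's actual rule.

```python
# Toy adaptive drop-probability update: raise the probability while the
# buffer occupancy or offered load signals congestion, lower it while
# the buffer drains, and clamp to [0, 1].

def drop_probability(queue_len, capacity, load, p_prev, step=0.05):
    occupancy = queue_len / capacity
    if occupancy > 0.8 or load > 1.0:      # congestion building up
        p = p_prev + step
    elif occupancy < 0.4 and load < 0.8:   # buffer draining, back off
        p = p_prev - step
    else:                                  # steady state: hold
        p = p_prev
    return min(1.0, max(0.0, p))

p = 0.0
for q, load in [(10, 0.5), (50, 0.9), (90, 1.2), (95, 1.3)]:  # queue of 100
    p = drop_probability(q, 100, load, p)
print(p)  # 0.1 after two congested intervals
```

Combining both indicators is what lets such a rule stay quiet under bursty-but-light traffic (avoiding the unnecessary drops criticized above) while still reacting to sustained overload.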
Big data is a vast amount of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction is the process of converting a huge set of data into data with fewer dimensions so that equivalent information may be expressed concisely. These tactics are frequently utilized to improve classification or regression challenges when dealing with machine learning issues. To achieve dimensionality reduction for huge data sets, this paper offers a hybrid particle swarm optimization rough set (PSO-RS) and a Mayfly algorithm rough set (MA-RS). In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and the rough set (RS) is proposed. The performance of the novel hybrid algorithm MA-RS is evaluated by solving six different data sets from the literature. The simulation results and comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach, as well as the hybrid optimization techniques PSO-RS and MA-RS, were applied to deal with the massive data problem. The hybrid MA-RS method beats other classic dimensionality reduction techniques, according to the experimental results and statistical testing studies.
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image to ensure the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green, and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer-ring region for local minima to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
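The first stage, scanning the image in HSV space for bright (sclera-like) regions, reduces to converting pixels to HSV and locating maxima of the Value channel. The 2x2 "image" and the plain arg-max below are assumptions standing in for the paper's full local-maxima scan.

```python
# Locate the brightest pixel in HSV space: convert each RGB pixel with
# the stdlib colorsys module and track the maximum Value component,
# the cue used above to find the sclera region.
import colorsys

def brightest_pixel(rgb_image):
    """rgb_image: list of rows of (r, g, b) tuples in [0, 1].
    Returns the (row, col) with the highest HSV Value component."""
    best, best_v = None, -1.0
    for i, row in enumerate(rgb_image):
        for j, (r, g, b) in enumerate(row):
            _, _, v = colorsys.rgb_to_hsv(r, g, b)
            if v > best_v:
                best, best_v = (i, j), v
    return best

image = [[(0.2, 0.1, 0.1), (0.9, 0.9, 0.85)],   # bright sclera-like pixel
         [(0.3, 0.2, 0.1), (0.1, 0.1, 0.1)]]
print(brightest_pixel(image))  # (0, 1)
```

The later pupil stage is the mirror image of this scan: searching the cropped ring for Value minima instead of maxima.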
Software systems have been employed in many fields as a means to reduce human effort; consequently, stakeholders are interested in more updates of their capabilities. Code smells arise as one of the obstacles in the software industry. They are characteristics of software source code that indicate a deeper problem in design. These smells appear not only in the design but also in the software implementation. Code smells introduce bugs, affect software maintainability, and lead to higher maintenance costs. Uncovering code smells can be formulated as an optimization problem of finding the best detection rules. Although researchers have recommended different techniques to improve the accuracy of code smell detection, these methods are still unstable and need to be improved. Previous research has sought to discover only a few smell types at a time (three or five) and did not set rules for detecting each type. Our research improves code smell detection by applying a search-based technique; we use the Whale Optimization Algorithm as a classifier to find ideal detection rules. In this algorithm, the Fisher criterion is utilized as a fitness function to maximize the between-class distance over the within-class variance. The proposed framework adopts if-then detection rules during the software development life cycle. Those rules identify the smell types for both medium and large projects. Experiments are conducted on five open-source software projects to discover the nine smell types that most commonly appear in code. The proposed detection framework achieves an average of 94.24% precision and 93.4% recall. These values are better than those of other search-based algorithms in the same field. The proposed framework improves code smell detection, which increases software quality while minimizing maintenance effort, time, and cost. Additionally, the resulting classification rules are analyzed to find the software metrics that differentiate the nine code smells.
Digit Recognition is an essential element of the process of scanning and converting documents into electronic format. In this work, a new Multiple-Cell Size (MCS) approach is proposed for utilizing Histogram of Oriented Gradient (HOG) features and a Support Vector Machine (SVM) based classifier for efficient classification of handwritten digits. The HOG-based technique is sensitive to the cell size selected for the relevant feature extraction computations; hence, the new MCS approach has been used to perform HOG analysis and compute the HOG features. The system has been tested on the benchmark MNIST database of handwritten digits, and a classification accuracy of 99.36% has been achieved using an independent test set strategy. A cross-validation analysis of the classification system has also been performed using the 10-fold cross-validation strategy, and a 10-fold classification accuracy of 99.26% has been obtained. The classification performance of the proposed system is superior to that of existing techniques using complex procedures, since it has achieved on-par or better results using simple operations in both the feature space and the classifier space. The plots of the system's confusion matrix and Receiver Operating Characteristics (ROC) show evidence of the superior performance of the proposed new MCS HOG and SVM based digit classification system.
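The HOG computation that the MCS approach repeats at several cell sizes reduces, per cell, to binning per-pixel gradient orientations weighted by gradient magnitude. The sketch below covers a single cell only; real HOG adds block normalisation, and the MCS scheme concatenates histograms computed at multiple cell sizes.

```python
# One-cell HOG histogram: central-difference gradients, unsigned
# orientation (0-180 degrees) binned into n_bins, weighted by magnitude.
import math

def cell_histogram(cell, n_bins=9):
    """cell: 2D list of grey values. Returns the orientation histogram."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: all gradient energy points horizontally (first bin).
cell = [[0, 0, 10, 10]] * 4
print(cell_histogram(cell))  # [40.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The cell-size sensitivity noted above follows directly: a larger cell pools gradients over more pixels, trading spatial detail for robustness, which is why combining multiple sizes helps.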
The boundary effect in digital pathology is a phenomenon where the tissue shapes of biopsy samples get distorted during the sampling process. The morphological pattern of an epithelial layer is greatly affected. Theoretically, a shape deformation model can normalise the distortions, but it needs a 2D image. Curvature theory, on the other hand, has not yet been tested on digital pathology images. Therefore, this work proposes a curvature detection method to reduce the boundary effects and estimate the epithelial layer. The boundary effect on the tissue surfaces is normalised using the frequency with which a curve deviates from a straight line. The epithelial layer's depth is estimated from the tissue edges and the connected nucleoli only. Then, the textural and spatial features along the estimated layer are used for dysplastic tissue detection. The proposed method achieved better performance in detecting dysplastic tissue than using the whole tissue regions. The result shows a leap in kappa from fair to substantial agreement with the expert's ground-truth classification. The improved results demonstrate that curvatures have been effective in reducing the boundary effects on the epithelial layer of tissue. Thus, quantifying and classifying the morphological patterns of dysplasia can be automated. The textural and spatial features on the detected epithelial layer can capture the changes in tissue.
Abstract: Deep Learning is a powerful technique that is widely applied to Image Recognition and Natural Language Processing tasks, amongst many others. In this work, we propose an efficient technique to utilize pre-trained Convolutional Neural Network (CNN) architectures to extract powerful features from images for object recognition purposes. We build on the existing concept of extending the learning from pre-trained CNNs to new databases through activations by proposing to consider multiple deep layers. We exploit the progressive learning that happens at the various intermediate layers of the CNNs to construct Deep Multi-Layer (DM-L) based feature extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN architecture models, VGG_16 and VGG_19, have been used in this work to extract the feature sets from three deep fully connected layers, namely "fc6", "fc7" and "fc8", inside the models. Using the Principal Component Analysis (PCA) technique, the dimensionality of the DM-L feature vectors has been reduced to form powerful feature vectors that have been fed to an external classifier ensemble for classification instead of the Softmax-based classification layers of the two original pre-trained CNN models. The proposed DM-L technique has been applied to the benchmark Caltech-101 object recognition database. Conventional wisdom may suggest that feature extraction based on the deepest layer, i.e. "fc8", would outperform "fc6", but our results prove otherwise for the two considered models. Our experiments have revealed that for the two models under consideration, the "fc6" based feature vectors achieve the best recognition performance. State-of-the-art recognition performances of 91.17% and 91.35% have been achieved by utilizing the "fc6" based feature vectors for the VGG_16 and VGG_19 models respectively. These results were obtained using 30 sample images per class, and the proposed system is capable of improved performance when all sample images per class are considered. Our research shows that for feature extraction based on CNNs, multiple layers should be considered, from which the layer that maximizes recognition performance can be selected.
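As a rough illustration of the dimensionality-reduction step described above, the following numpy-only sketch projects a matrix of layer activations onto its top principal components via SVD. The activation matrix here is random stand-in data, not real VGG "fc6" features, and `pca_reduce` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project high-dimensional layer activations onto their top principal components."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy stand-in for "fc6" activations: 40 images x 4096-dimensional vectors
rng = np.random.default_rng(0)
acts = rng.normal(size=(40, 4096))
reduced = pca_reduce(acts, 30)
print(reduced.shape)  # (40, 30)
```

In the paper's pipeline the reduced vectors would then go to the external classifier ensemble rather than the networks' Softmax layers.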
Abstract: The palm vein authentication technology is extremely safe, accurate and reliable, as it uses the vascular patterns contained within the body to confirm personal identification. The pattern of veins in the palm is complex and unique to each individual. Its contactless operation gives it a hygienic advantage over other biometric technologies. This paper presents an algebraic method for personal authentication and identification using internal contactless palm vein images. We use the MATLAB Image Processing Toolbox to enhance the palm vein images and employ the coset decomposition concept to store and identify the encoded palm vein feature vectors. Experimental evidence shows the validity and effectiveness of the proposed approach.
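The coset-decomposition idea can be illustrated on the simplest possible structure, the additive group Z_n. The sketch below (hypothetical helper names; the paper's actual encoding is not shown) partitions Z_12 into cosets of the subgroup generated by 4, the kind of partition under which an encoded feature vector can be stored and looked up by its coset.

```python
def coset_decomposition(n, k):
    """Decompose Z_n into cosets of the subgroup generated by k (k must divide n)."""
    subgroup = sorted({(k * i) % n for i in range(n // k)})
    cosets = []
    seen = set()
    for r in range(n):
        if r not in seen:
            coset = sorted((r + h) % n for h in subgroup)
            cosets.append(coset)
            seen.update(coset)
    return cosets

cosets = coset_decomposition(12, 4)
print(cosets)  # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```

Each element of Z_12 falls into exactly one coset, so a coset representative can serve as a compact index for every code it contains.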
Abstract: The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the different factors that require researchers' attention in order to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Funding: The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Abstract: The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced a single dimension, such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization, without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI-IoT-Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data, lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across more than 50 edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.
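The federated learning with differential privacy described above can be sketched, in highly simplified form, as clipped client updates averaged with Gaussian noise. This is an illustrative stand-in with hypothetical names and arbitrary clip and noise settings, not the framework's actual secure-aggregation protocol.

```python
import numpy as np

def federated_average(client_weights, clip_norm=1.0, noise_std=0.1, rng=None):
    """FedAvg with per-client norm clipping and Gaussian noise (simplified DP-style step)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for w in client_weights:
        norm = np.linalg.norm(w)
        clipped.append(w * min(1.0, clip_norm / norm))  # bound each client's influence
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(scale=noise_std, size=avg.shape)  # noise masks individuals

# Three edge devices report tiny two-parameter model updates
clients = [np.array([0.5, -0.2]), np.array([0.4, -0.1]), np.array([0.6, -0.3])]
result = federated_average(clients)
print(result.shape)  # (2,)
```

Clipping bounds any single device's contribution before noise is added, which is the standard route to a differential-privacy guarantee in federated training.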
Abstract: Discovering and identifying the influential nodes in any complex network is an important issue, as it is a significant factor in gaining control over the network. Through control of a network, information can be spread or stopped within a short span of time: the flow of information can be extended as well as shut down. Hence, information spread and community formation have become two of the most crucial issues in Social Network Analysis (SNA). In this work, the complex network of the Twitter social network has been formalized and the results analyzed. For this purpose, different network metrics have been utilized. The network is visualized in its original form and then filtered (at different percentages) to eliminate the less impactful nodes and edges for better analysis. The network is analyzed according to different centrality measures, such as edge betweenness, betweenness centrality, closeness centrality and eigenvector centrality. Influential nodes are detected and their impact on the network is observed. The communities are analyzed in terms of network coverage, considering the Minimum Spanning Tree, shortest-path distribution and network diameter. These prove to be very effective ways to find influential and central nodes in large social networks such as Facebook, Instagram, Twitter and LinkedIn.
Funding: The Arab Open University funded this work through AOU Research Fund No. (AOURG-2023-006).
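Of the centrality measures listed above, closeness centrality is the easiest to sketch: it is the reciprocal of a node's average shortest-path distance to every other node. The pure-Python toy below (hypothetical function names, not from the paper) computes it by breadth-first search on a tiny star graph, where the hub unsurprisingly scores highest.

```python
from collections import deque

def closeness_centrality(adj, node):
    """Closeness = (n - 1) / sum of shortest-path distances from node (connected graph)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # breadth-first search for hop distances
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(adj) - 1) / sum(dist.values())

# Star graph: the hub is one hop from everyone, leaves are two hops apart
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(closeness_centrality(star, "hub"))  # 1.0
print(closeness_centrality(star, "a"))    # 0.6
```

On real Twitter-scale graphs the same definition applies, but libraries with optimized all-pairs routines would replace this per-node BFS.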
Abstract: This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3, and because its application-layer language, Solidity, is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. Moreover, various web scrapers were written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data has been compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than the other services. The data was used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of nodes remain the same from day to day, indicating that most nodes in the network change frequently, which raises questions about the stability of the network. Furthermore, historical data shows that the top ten countries hosting Ethereum clients have not changed since 2016. The most popular operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results also show that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded from other regions, which opens the door for developing new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
Abstract: Renewable energy is a safe and limitless energy source that can be utilized for heating, cooling, and other purposes. Wind energy is one of the most important renewable energy sources. Power fluctuation of wind turbines occurs due to variation in wind velocity. A wind cube is used to decrease power fluctuation and increase the wind turbine's power. The optimum design of a wind cube is the main contribution of this work. The decisive design parameters used to optimize the wind cube are its inner and outer radius, the roughness factor, and the height of the wind turbine hub. The Gradient-Based Optimizer (GBO) is used as a new metaheuristic algorithm for this problem. The objective function of this research has two parts: the first is to minimize the probability of generated energy loss, and the second is to minimize the cost of the wind turbine and wind cube. The GBO is applied to optimize the variables of two wind turbine types and the design of the wind cube. The meteorological data of the Red Sea governorate of Egypt is used as a case study for this analysis. Based on the results, the optimum design of a wind cube is achieved, and the improvement in energy produced by a wind turbine with a wind cube is compared with the energy generated without one. The energy generated by a wind turbine with the optimized cube is more than 20 times that of a wind turbine without a wind cube in all cases studied.
Funding: Supported by the European University of Atlantic.
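GBO is a population-based metaheuristic, but its search rule is inspired by classical gradient stepping. As a minimal, admittedly simplified analogue (not the GBO algorithm itself; function names are invented), the sketch below descends the gradient of a toy one-variable cost, standing in for the paper's energy-loss-plus-cost objective.

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent; GBO's gradient search rule builds on the same idea."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step against the local gradient
    return x

# Toy cost: (x - 3)^2 with gradient 2(x - 3); the minimum sits at x = 3
x_opt = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_opt, 4))  # 3.0
```

The real GBO adds a population of candidates and an escape operator so the search is not trapped by the local optima a single-trajectory descent would fall into.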
Abstract: Brain tumors pose significant diagnostic challenges due to their diverse types and complex anatomical locations. With the rise of precision image-based diagnostic tools, driven by advancements in artificial intelligence (AI) and deep learning, there is potential to improve diagnostic accuracy, especially with Magnetic Resonance Imaging (MRI). However, traditional state-of-the-art models lack the sensitivity essential for reliable tumor identification and segmentation. Thus, our research aims to enhance brain tumor diagnosis in MRI by proposing an advanced model that incorporates dilated convolutions to optimize brain tumor segmentation and classification. The proposed model is trained and then evaluated on the BraTS 2020 dataset. Preprocessing consists of normalization, noise reduction, and data augmentation to improve model robustness. An attention mechanism and dilated convolutions were introduced to increase the model's focus on critical regions and capture finer spatial details without compromising image resolution. Efficiency was measured using various metrics, including accuracy, sensitivity, specificity, and the area under the ROC curve (AUC-ROC). The proposed model achieved a high accuracy of 94%, a sensitivity of 93%, a specificity of 92%, and an AUC-ROC of 0.98, outperforming traditional diagnostic models in brain tumor detection. The model accurately identifies tumor regions, while the dilated convolutions enhance segmentation accuracy, especially for complex tumor structures. The proposed model demonstrates significant potential for clinical application, providing reliable and precise brain tumor detection in MRI.
Funding: Funded by the Arab Open University, Riyadh, Saudi Arabia.
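A dilated convolution enlarges the receptive field by spacing kernel taps apart, which is how the model above captures wider spatial context without losing resolution. Below is a minimal 1-D numpy illustration (the paper works on 2-D MRI slices; this toy exists only to show the tap spacing).

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=2):
    """1-D dilated convolution: kernel taps are spaced `dilation` samples apart,
    widening the receptive field without adding parameters."""
    span = (len(kernel) - 1) * dilation + 1  # samples covered by one kernel placement
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel))))
    return np.array(out)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))  # [ 9. 12.]
```

With dilation 2, a 3-tap kernel covers 5 samples instead of 3; stacking such layers grows the receptive field exponentially while the parameter count stays fixed.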
Abstract: A Deepfake is a form of fake media made with advanced AI methods such as Generative Adversarial Networks (GANs). Deepfake technology has many useful applications in education and entertainment, but it also raises serious ethical, social, and security issues, such as identity theft, the dissemination of false information, and privacy violations. This study seeks to provide a comprehensive analysis of methods for detecting and countering Deepfakes, with a particular focus on image-based Deepfakes. Detection methods fall into three main types: classical; machine learning (ML) and deep learning (DL) based; and hybrid. Preventative methods likewise fall into three main types: technical, legal, and ethical. The study investigates the effectiveness of several detection approaches, such as convolutional neural networks (CNNs), frequency-domain analysis, and hybrid CNN-LSTM models, focusing on the respective advantages and disadvantages of each. We also examine emerging technologies such as Explainable Artificial Intelligence (XAI) and blockchain-based frameworks, and consider algorithmic protocols, watermarking, and blockchain-based content verification as possible prevention mechanisms. Recent advancements, including adversarial training and anti-Deepfake data generation, are essential because of their potential to alleviate rising concerns. This review shows that major problems remain, such as the difficulty of improving the capabilities of existing systems, high running costs, and vulnerability to adversarial attacks. It stresses the importance of collaboration across fields, including academia, industry, and government, to create robust, scalable, and ethical solutions. The main goals of future work should be to create lightweight, real-time detection systems, connect them to large language models (LLMs), and put in place worldwide regulatory frameworks. This study argues for a comprehensive, multi-pronged plan, using both technical and non-technical means, to keep digital information authentic and build confidence in an era of AI-driven media.
Abstract: License plate recognition (LPR) is an image processing technology that is used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed; the recognition accuracy of the algorithm is about 93.3%.
Abstract: Skin cancer, particularly melanoma, presents a substantial risk to human health, making early detection critical. This study examines the need for efficient early detection systems based on deep learning techniques, as existing methods exhibit constraints in terms of accessibility, diagnostic precision, data availability, and scalability. To address these obstacles, we propose a lightweight model known as Smart MobiNet, which is derived from MobileNet and incorporates additional distinctive attributes. The model uses a multi-scale feature extraction methodology employing various convolutional layers. The ISIC 2019 dataset, sourced from the International Skin Imaging Collaboration, is employed in this study, and traditional data augmentation approaches are implemented to address model overfitting. We conduct experiments to evaluate and compare the performance of three different models, namely a CNN, MobileNet, and Smart MobiNet, on the task of skin cancer detection. The findings indicate that the proposed model outperforms the other architectures, achieving an accuracy of 0.89, with balanced precision, sensitivity, and F1 scores, all measuring 0.90. This model serves as a vital instrument to assist clinicians in efficiently and precisely detecting skin cancer.
Abstract: Data encryption is essential in securing the data exchanged between connected parties. Encryption is the process of transforming readable text into scrambled, unreadable text using secure keys. Stream ciphers are one type of encryption algorithm that relies on a single key for both encryption and decryption. Many existing encryption algorithms are developed based on either a mathematical foundation or on biological, social or physical behaviours. One technique is to utilise the behavioural aspects of game theory in a stream cipher. In this paper, we introduce an enhanced Deoxyribonucleic acid (DNA)-coded stream cipher based on an iterated n-player prisoner's-dilemma paradigm. Our main goal is to add more layers of randomness to the keystream generation process; these layers are inspired by the behaviour of multiple players playing a prisoner's dilemma game. We implement parallelism to compensate for the additional processing time that may result from adding these extra layers of randomness. The results show that our enhanced design passes the statistical tests and achieves an encryption throughput of about 1,877 Mbit/s, which makes it a feasible secure stream cipher.
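As a toy illustration only (this is not the paper's DNA-coded design, and it is cryptographically insecure), the sketch below layers several "players" onto a keystream: each player's cooperate/defect move contributes one bit per round, the bits are XOR-combined into the stream, and the stream drives a simple XOR cipher. All names and constants here are invented for the example.

```python
def keystream(seed, length, players=3):
    """Toy keystream: each player's move (one bit) comes from a simple iterated state;
    the players' bits are XOR-combined, adding a layer of mixing per player."""
    state = seed
    bits = []
    for _ in range(length * 8):
        bit = 0
        for p in range(players):
            state = (1103515245 * state + 12345 + p) % (2**31)  # per-player state update
            bit ^= (state >> 16) & 1                            # player's move as a bit
        bits.append(bit)
    return bits

def xor_cipher(data, seed):
    """XOR the data with the keystream; applying it twice recovers the plaintext."""
    ks = keystream(seed, len(data))
    out = bytearray()
    for i, byte in enumerate(data):
        k = 0
        for j in range(8):
            k = (k << 1) | ks[i * 8 + j]  # pack 8 keystream bits into one byte
        out.append(byte ^ k)
    return bytes(out)

enc = xor_cipher(b"secret", seed=42)
print(xor_cipher(enc, seed=42))  # b'secret'
```

Because XOR with the same keystream is an involution, decryption is just re-encryption with the same seed; the paper's contribution lies in making the keystream's "player" layers statistically strong, which this toy does not attempt.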
Abstract: Team Formation (TF) is considered one of the most significant problems in computer science and optimization. TF is defined as forming the best team of experts in a social network to complete a task at the least cost. Many real-world problems, such as task assignment, vehicle routing, nurse scheduling, resource allocation, and airline crew scheduling, are based on the TF problem. TF has been shown to be a Nondeterministic Polynomial time (NP) problem and a high-dimensional problem with several local optima, so it is typically solved using efficient approximation algorithms. This paper proposes two improved swarm-based algorithms for solving the team formation problem. The first, the Hybrid Heap-Based Optimizer with Simulated Annealing Algorithm (HBOSA), uses a single crossover operator to improve the performance of the standard heap-based optimizer (HBO) algorithm, and employs simulated annealing (SA) to improve convergence and avoid trapping in local minima. The second algorithm is the Chaotic Heap-Based Optimizer Algorithm (CHBO). CHBO aids the discovery of new solutions by directing particles to different regions of the search space; a logistic chaotic map is used during HBO's optimization process. The performance of the two proposed algorithms, HBOSA and CHBO, is evaluated using thirteen benchmark functions and tested on the TF problem with varying numbers of experts and skills. Furthermore, the proposed algorithms were compared to well-known optimization algorithms such as the Heap-Based Optimizer (HBO), Developed Simulated Annealing (DSA), Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and the Genetic Algorithm (GA). Finally, the proposed algorithms were applied to a real-world benchmark dataset known as the Internet Movie Database (IMDB). The simulation results revealed that the proposed algorithms outperformed the compared algorithms in terms of efficiency and performance, with fast convergence to the global minimum.
Funding: Funded by Arab Open University Grant Number (AOURG2023–005).
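The logistic chaotic map used by CHBO is one line of arithmetic: at r = 4 the iterates wander unpredictably over (0, 1), which is what lets the optimizer scatter particles across different regions of the search space. A minimal sketch:

```python
def logistic_map(x0, r=4.0, n=5):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t); for r = 4 the orbit
    is chaotic on (0, 1), providing cheap pseudo-random exploration values."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_map(0.3))  # 0.3, 0.84, 0.5376, ... bouncing around (0, 1)
```

In a chaotic optimizer, each value would typically be rescaled into the decision-variable bounds and used in place of a uniform random draw when perturbing a particle.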
Abstract: Active queue management (AQM) methods manage the queued packets at the router buffer, prevent buffer congestion, and stabilize network performance. The bursty nature of the traffic passing through network routers and the slack behavior of existing AQM methods lead to unnecessary packet dropping. This paper proposes a fully adaptive active queue management (AAQM) method to maintain stable network performance, avoid congestion and packet loss, and eliminate unnecessary packet dropping. The proposed AAQM method is based on load and queue-length indicators and uses an adaptive mechanism to adjust the dropping probability based on the buffer status. The AAQM method adapts to both single-class and multiclass traffic models. Extensive simulation results over the two traffic types showed that the proposed method achieved the best results compared to the existing methods, including Random Early Detection (RED), BLUE, Effective RED (ERED), Fuzzy RED (FRED), Fuzzy Gentle RED (FGRED), and Fuzzy BLUE (FBLUE). The proposed and compared methods achieved similar results under low or moderate traffic load. However, under high traffic load, the proposed AAQM method achieved the best loss rate of zero, similar to BLUE, compared to 0.01 for RED, 0.27 for ERED, 0.04 for FRED, 0.12 for FGRED, and 0.44 for FBLUE. For throughput, the proposed AAQM method achieved the highest rate of 0.54, surpassing the BLUE method's throughput of 0.43. For delay, the proposed AAQM method achieved the second-best delay of 28.51, while the BLUE method achieved the best delay of 13.18; however, the BLUE results are insufficient because of its low throughput. Consequently, the proposed AAQM method outperformed the compared methods with its superior throughput and acceptable delay.
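The load- and queue-length-driven dropping idea can be sketched with a stand-in rule (this is not the paper's AAQM formula; the thresholds and scaling below are invented for illustration): the drop probability stays at zero while the buffer is lightly used and grows with both occupancy and offered load.

```python
def drop_probability(queue_len, load, capacity=100, p_min=0.0, p_max=1.0):
    """Simplified adaptive drop probability driven by queue occupancy and load."""
    occupancy = queue_len / capacity
    if occupancy < 0.5 and load < 0.5:
        return p_min                      # light traffic: never drop
    p = (occupancy - 0.5) * 2 * load      # scale drop rate by the load indicator
    return max(p_min, min(p_max, p))      # clamp into [p_min, p_max]

print(drop_probability(20, 0.3))  # 0.0  (light load, no unnecessary drops)
print(drop_probability(90, 0.9))  # 0.72 (heavy load, aggressive dropping)
```

Keeping the probability at zero under light traffic is precisely what eliminates the unnecessary drops that RED-style schemes suffer from, while the load factor lets the rule ramp up quickly under bursts.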
Abstract: Big data refers to the vast amounts of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction is the process of converting a high-dimensional dataset into one with far fewer dimensions that expresses essentially the same information. Such techniques are frequently utilized to improve classification or regression performance in machine learning problems. To achieve dimensionality reduction for huge data sets, this paper offers a hybrid particle swarm optimization-rough set (PSO-RS) method and a Mayfly algorithm-rough set (MA-RS) method. In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough sets (RS) is proposed. The performance of the novel hybrid MA-RS algorithm is evaluated by solving six different data sets from the literature. The simulation results and comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach, as well as the hybrid optimization techniques PSO-RS and MA-RS, were applied to deal with the massive data problem. The hybrid MA-RS method beats other classic dimensionality reduction techniques, according to the experimental results and statistical testing studies.
Funding: Funded by Institutional Fund Projects under Grant No. (IFPIP:329-611-1443), with technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Funding: The authors extend their appreciation to the Arab Open University, Saudi Arabia, for funding this work through AOU research fund No. AOURG-2023-009.
Abstract: In standard iris recognition systems, a cooperative imaging framework is employed that includes a near-infrared light source to reveal iris texture, look-and-stare constraints, and a close-distance requirement to the capture device. When these conditions are relaxed, the system's performance deteriorates significantly due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer-ring region for local minima to identify the darkest area of the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
Abstract: Software systems have been employed in many fields as a means to reduce human effort; consequently, stakeholders are interested in frequent updates to their capabilities. Code smells are one of the obstacles in the software industry: they are characteristics of software source code that indicate a deeper problem in design. These smells appear not only in the design but also in the software implementation. Code smells introduce bugs, affect software maintainability, and lead to higher maintenance costs. Uncovering code smells can be formulated as an optimization problem of finding the best detection rules. Although researchers have recommended different techniques to improve the accuracy of code smell detection, these methods are still unstable and need to be improved. Previous research has sought only to discover a few smell types at a time (three or five) and did not set rules for detecting their types. Our research improves code smell detection by applying a search-based technique: we use the Whale Optimization Algorithm as a classifier to find ideal detection rules, with the Fisher criterion as the fitness function to maximize the between-class distance over the within-class variance. The proposed framework adopts if-then detection rules during the software development life cycle, identifying the smell types in both medium and large projects. Experiments are conducted on five open-source software projects to discover the nine smell types that appear most often in code. The proposed detection framework achieves an average of 94.24% precision and 93.4% recall, better than other search-based algorithms in the same field. The proposed framework improves code smell detection, which increases software quality while minimizing maintenance effort, time, and cost. Additionally, the resulting classification rules are analyzed to find the software metrics that differentiate the nine code smells.
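The Fisher-criterion fitness mentioned above has a compact closed form: the squared distance between the class means divided by the summed within-class variances. A one-feature numpy sketch (the metric values below are made up for illustration):

```python
import numpy as np

def fisher_criterion(class_a, class_b):
    """Fisher score: squared distance between class means over the sum of
    within-class variances -- the fitness a detection-rule search maximizes."""
    mean_a, mean_b = class_a.mean(), class_b.mean()
    var_a, var_b = class_a.var(), class_b.var()
    return (mean_a - mean_b) ** 2 / (var_a + var_b)

smelly = np.array([8.0, 9.0, 10.0])  # hypothetical metric values for smelly classes
clean = np.array([1.0, 2.0, 3.0])    # hypothetical metric values for clean classes
print(fisher_criterion(smelly, clean))  # 36.75
```

A detection rule whose metric threshold separates the two groups this cleanly scores high, which is why maximizing the criterion steers the search toward discriminative if-then rules.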
Abstract: Digit recognition is an essential element of the process of scanning and converting documents into electronic format. In this work, a new Multiple-Cell Size (MCS) approach is proposed for utilizing Histogram of Oriented Gradient (HOG) features with a Support Vector Machine (SVM) based classifier for efficient classification of handwritten digits. The HOG-based technique is sensitive to the cell size selected for the feature extraction computations, so the MCS approach is used to perform HOG analysis over multiple cell sizes. The system has been tested on the benchmark MNIST database of handwritten digits, and a classification accuracy of 99.36% has been achieved using an independent test-set strategy. A cross-validation analysis of the classification system has also been performed using the 10-fold cross-validation strategy, yielding a 10-fold classification accuracy of 99.26%. The classification performance of the proposed system is superior to existing techniques that use complex procedures, since it achieves comparable or better results using simple operations in both the feature space and the classifier space. The plots of the system's confusion matrix and the Receiver Operating Characteristics (ROC) show evidence of the superior performance of the proposed MCS HOG and SVM based digit classification system.
Funding: Supported by the Center for Research and Innovation Management (CRIM), Universiti Kebangsaan Malaysia (Grant No. FRGS-1-2019-ICT02-UKM02-6), and the Ministry of Higher Education Malaysia.
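The multiple-cell-size idea reduces to computing per-cell orientation histograms at several cell sizes and concatenating them. The numpy sketch below (a hypothetical helper, simplified relative to full HOG: no block normalization) shows the feature-length bookkeeping on a toy 8x8 image.

```python
import numpy as np

def cell_histograms(image, cell_size, bins=9):
    """Histogram of gradient orientations per cell; concatenating the results for
    several cell sizes yields the multiple-cell-size (MCS) feature vector."""
    gy, gx = np.gradient(image.astype(float))          # per-pixel gradients
    mag = np.hypot(gx, gy)                             # gradient magnitudes
    ang = np.degrees(np.arctan2(gy, gx)) % 180         # unsigned orientations
    feats = []
    h, w = image.shape
    for i in range(0, h - cell_size + 1, cell_size):
        for j in range(0, w - cell_size + 1, cell_size):
            a = ang[i:i + cell_size, j:j + cell_size].ravel()
            m = mag[i:i + cell_size, j:j + cell_size].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

img = np.arange(64).reshape(8, 8)
mcs = np.concatenate([cell_histograms(img, c) for c in (4, 8)])
print(mcs.shape)  # four 4x4 cells plus one 8x8 cell, 9 bins each -> (45,)
```

The concatenated vector captures both fine (small-cell) and coarse (large-cell) structure, which is the source of the MCS approach's robustness to cell-size sensitivity; the combined vector then feeds the SVM classifier.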