This research aims to address the challenges of fault detection and isolation (FDI) in digital grids, focusing on improving the reliability and stability of power systems. Traditional fault detection techniques, such as rule-based fuzzy systems and conventional FDI methods, often struggle with the dynamic nature of modern grids, resulting in delays and inaccuracies in fault classification. To overcome these limitations, this study introduces a Hybrid Neuro-Fuzzy Fault Detection Model that combines the adaptive learning capabilities of neural networks with the reasoning strength of fuzzy logic. The model's performance was evaluated through extensive simulations on the IEEE 33-bus test system, considering various fault scenarios, including line-to-ground faults (LGF), three-phase short circuits (3PSC), and harmonic distortions (HD). The quantitative results show that the model achieves 97.2% accuracy, a false negative rate (FNR) of 1.9%, and a false positive rate (FPR) of 2.3%, demonstrating its high precision in fault diagnosis. The qualitative analysis further highlights the model's adaptability and its potential for seamless integration into smart grids, microgrids, and renewable energy systems. By dynamically refining fuzzy inference rules, the model enhances fault detection efficiency without compromising computational feasibility. These findings contribute to the development of more resilient and adaptive fault management systems, paving the way for advanced smart grid technologies.
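As an illustration of the fuzzy-logic half of such a detector, the sketch below evaluates one Mamdani-style rule over triangular membership functions. The breakpoints and the rule itself are invented for illustration; they are not taken from the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fault_degree(current_pu, voltage_pu):
    """Rule: IF current is 'high' AND voltage is 'low' THEN fault is likely.
    Inputs are per-unit values; membership breakpoints are hypothetical."""
    high_current = tri(current_pu, 1.5, 3.0, 4.5)
    low_voltage = tri(voltage_pu, 0.0, 0.3, 0.8)
    return min(high_current, low_voltage)  # Mamdani min for fuzzy AND
```

A neuro-fuzzy system such as the one described would learn the breakpoint parameters (here fixed by hand) from data rather than hard-coding them.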
Recent architectures of multi-core systems may have a relatively large number of cores, typically ranging from tens to hundreds; such designs are therefore called many-core systems. These systems require an efficient interconnection network that addresses two major problems: first, the overhead of power and area cost and its effect on scalability; second, the high access latency caused by multiple cores simultaneously accessing the same shared module. This paper presents an interconnection scheme called N-conjugate Shuffle Clusters (NCSC), based on a multi-core multi-cluster architecture, to reduce these overheads. NCSC eliminates the need for router devices and their complexity, reducing power and area costs. It also redesigns and distributes the shared caches across the interconnection network to increase the capacity for simultaneous access and hence reduce access latency. For intra-cluster communication, Multi-port Content Addressable Memory (MPCAM) is used. Experimental results using four clusters with four cores each indicate that the average access latency for a write process is 1.14785±0.04532 ns, which is nearly equal to the latency of a write operation in MPCAM. Moreover, the average read latency within a cluster is 1.26226±0.090591 ns, and around 1.92738±0.139588 ns for read access between cores in different clusters.
In time division synchronous code division multiple access (TD-SCDMA) wireless communication systems, QPSK or 8PSK has been employed to support high data rate services and high efficiency in available bandwidth. The performance of such systems is affected by the phase noise of the microwave local oscillator. The phase noise model of the synthesizer and the RF transceiver model for the phase noise effect are proposed for applications in TD-SCDMA systems. The relationship between the power spectral density (PSD) and the root mean square (RMS) phase error is given. Then, the error vector magnitude (EVM) performance is analytically evaluated using the single sideband (SSB) phase noise. Theoretical results show agreement with those obtained from measurement data and can therefore be used to derive TD-SCDMA system performance.
The analytical capacity for massive data has become increasingly necessary, given the high volume of data generated daily by different sources. The data sources are varied and can generate a huge amount of data, which can be processed in batch or stream settings. The stream setting corresponds to the treatment of a continuous sequence of data that arrives as a real-time flow and needs to be processed in real time. The models, tools, methods, and algorithms for generating intelligence from data streams culminate in the approaches of Data Stream Mining and Data Stream Learning. The activities of these approaches can be organized and structured according to Engineering principles, allowing the principles of Analytical Engineering, or more specifically, Analytical Engineering for Data Stream (AEDS). Thus, this article presents the AEDS conceptual framework, composed of four pillars (Data, Model, Tool, People) and three processes (Acquisition, Retention, Review). The pillars are defined from the main components of the data stream setting, and the three processes from the need to operationalize the activities of an Analytical Organization (AO) that uses the four pillars. The AEDS framework supports the projects carried out in an AO, its Analytical Projects (AP), favoring the delivery of results, or Analytical Deliverables (AD), produced by the Analytical Teams (AT) in order to provide intelligence from stream data.
Breast cancer is among the leading causes of cancer mortality globally, and its diagnosis through histopathological image analysis is often prone to inter-observer variability and misclassification. Existing machine learning (ML) methods struggle with intra-class heterogeneity and inter-class similarity, necessitating more robust classification models. This study presents an ML classifier ensemble hybrid model with deep learning (DL) feature extraction and Bat Swarm Optimization (BSO) hyperparameter optimization to improve breast cancer histopathology (BCH) image classification. A dataset of 804 Hematoxylin and Eosin (H&E) stained images classified into Benign, In situ, Invasive, and Normal categories (ICIAR2018_BACH_Challenge) was utilized. ResNet50 was used for feature extraction, while Support Vector Machines (SVM), Random Forests (RF), XGBoost (XGB), Decision Trees (DT), and AdaBoost (ADB) were used for classification. BSO performed hyperparameter optimization in a soft voting ensemble approach. Accuracy, precision, recall, specificity, F1-score, Receiver Operating Characteristic (ROC), and Precision-Recall (PR) curves served as performance metrics. The ensemble outperformed the individual classifiers, with greater accuracy (~90.0%), precision (~86.4%), recall (~86.3%), and specificity (~96.6%). The robustness of the model was verified by both ROC and PR curves, which showed AUC values of 1.00, 0.99, and 0.98 for the Benign, Invasive, and In situ classes, respectively. This ensemble model delivers a strong and clinically valid methodology for breast cancer classification that enhances precision and minimizes diagnostic errors. Future work should focus on explainable AI, multi-modal fusion, few-shot learning, and edge computing for real-world deployment.
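The soft-voting mechanism the abstract describes can be sketched in a few lines: each classifier emits a class-probability vector, the vectors are averaged, and the class with the highest mean probability wins. The classifiers and numbers below are toy stand-ins, not the paper's trained models.

```python
def soft_vote(prob_lists):
    """Average per-classifier class-probability vectors; return (winner, averages)."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return max(range(k), key=lambda i: avg[i]), avg

# Three hypothetical classifiers scoring 4 classes (Benign, In situ, Invasive, Normal)
probs = [
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
]
winner, avg = soft_vote(probs)  # class 0 wins despite one dissenting classifier
```

A weighted variant (the kind of knob BSO could tune) would simply multiply each classifier's vector by a learned weight before averaging.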
Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks, sunglasses, and other obstructions. Addressing this issue is crucial for applications such as surveillance, biometric authentication, and human-computer interaction. This paper provides a comprehensive review of face detection techniques developed to handle occluded faces. Studies are categorized into four main approaches: feature-based, machine learning-based, deep learning-based, and hybrid methods. We analyzed state-of-the-art studies within each category, examining their methodologies, strengths, and limitations on widely used benchmark datasets, and highlighting their adaptability to partial and severe occlusions. The review also identifies key challenges, including dataset diversity, model generalization, and computational efficiency. Our findings reveal that deep learning methods dominate recent studies, benefiting from their ability to extract hierarchical features and handle complex occlusion patterns. More recently, researchers have increasingly explored Transformer-based architectures, such as the Vision Transformer (ViT) and Swin Transformer, to further improve detection robustness under challenging occlusion scenarios. In addition, hybrid approaches, which aim to combine traditional and modern techniques, are emerging as a promising direction for improving robustness. This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world, occlusion-prone environments. Further improvements and broader datasets are required to develop more scalable, robust, and efficient models that can handle complex occlusions in real-world scenarios.
Face detection is a critical component in modern security, surveillance, and human-computer interaction systems, with widespread applications in smartphones, biometric access control, and public monitoring. However, detecting faces with high levels of occlusion, such as those covered by masks, veils, or scarves, remains a significant challenge, as traditional models often fail to generalize under such conditions. This paper presents a hybrid approach that combines traditional handcrafted feature extraction, namely the Histogram of Oriented Gradients (HOG) and Canny edge detection, with modern deep learning models. The goal is to improve face detection accuracy under occlusion. The proposed method leverages the structural strengths of HOG and edge-based object proposals while exploiting the feature extraction capabilities of Convolutional Neural Networks (CNNs). The effectiveness of the proposed model is assessed using a custom dataset of 10,000 heavily occluded face images and a subset of the Common Objects in Context (COCO) dataset for non-face samples; COCO was selected for its variety and realistic background contexts. Experimental evaluations demonstrate significant performance improvements over baseline CNN models. DenseNet121 combined with HOG outperforms the other configurations in classification metrics, with an F1-score of 87.96% and precision of 88.02%. Performance is further enhanced through reduced false positives and improved localization accuracy via object proposals based on Canny and contour detection. While the proposed method increases inference time from 33.52 to 97.80 ms, it improves precision from 80.85% to 88.02% relative to the baseline DenseNet121 model. Limitations include higher computational cost and the need for careful tuning of parameters across the edge detection, handcrafted feature, and CNN components. These findings highlight the potential of combining handcrafted and learned features for occluded face detection.
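The core of the HOG descriptor named above is an orientation histogram of gradient magnitudes accumulated over a cell. The toy sketch below shows just that accumulation step for a flat list of gradient components; real HOG adds cell grids, overlapping blocks, and block normalization, none of which are shown here.

```python
import math

def hog_cell_histogram(gx, gy, n_bins=9):
    """Accumulate gradient magnitudes into n_bins unsigned-orientation bins
    (0-180 degrees), the standard HOG binning scheme."""
    hist = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.degrees(math.atan2(dy, dx)) % 180.0  # fold sign away
        hist[int(ang / bin_width) % n_bins] += mag
    return hist
```

In a pipeline like the one described, such histograms would be concatenated and fed alongside (or fused with) CNN features.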
The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
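At the heart of FISTA-style solvers for lasso and sparse-group-lasso penalties sit two proximal operators: elementwise soft-thresholding (within-group sparsity) and group-wise shrinkage of the Euclidean norm (across-group sparsity). The sketch below shows these generic operators only; it is not the paper's full MSGL solver, which wraps them in FISTA momentum steps with adaptive restart.

```python
def soft_threshold(x, lam):
    """Prox of lam*|x|: shrink toward zero, zeroing entries smaller than lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def group_soft_threshold(v, lam):
    """Prox of the group-lasso penalty lam*||v||_2: zero the whole group
    if its norm is below lam, otherwise shrink it radially."""
    norm = sum(x * x for x in v) ** 0.5
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]
```

Each FISTA iteration applies operators like these to a gradient step, which is what produces exact zeros in both individual coefficients and entire meta-feature groups.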
Underwater Wireless Sensor Networks (UWSNs) are gaining popularity because of their potential uses in oceanography, seismic activity monitoring, environmental preservation, and underwater mapping. Yet these networks face challenges such as self-interference, long propagation delays, limited bandwidth, and changing network topologies, which are addressed by designing advanced routing protocols. In this work, we present Under Water Fuzzy-Routing Protocol for Low power and Lossy networks (UWF-RPL), an enhanced fuzzy-based protocol that improves decision-making during path selection and traffic distribution over different network nodes. Our method extends RPL with fuzzy logic to optimize depth, energy, the Received Signal Strength Indicator (RSSI) to Expected Transmission Count (ETX) ratio, and latency. The proposed protocol outperforms other techniques in that it offers more energy efficiency, better packet delivery, low delay, and no queue overflow. It also exhibits better scalability and reliability in dynamic underwater networks, which is critical for maintaining efficient network operation and optimizing the lifetime of UWSNs. Compared to other recent methods, it improves network convergence time (10%–23%), energy efficiency (15%), packet delivery (17%), and delay (24%).
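To make the idea of combining those four metrics concrete, the toy sketch below scores candidate parents with a fixed weighted blend of normalized depth, residual energy, RSSI/ETX ratio, and latency. The weights, normalization, and scoring form are invented for illustration; UWF-RPL's actual fuzzy rule base is not reproduced here.

```python
def route_score(depth, energy, rssi_etx, latency, w=(0.25, 0.35, 0.25, 0.15)):
    """All inputs pre-normalized to [0, 1]; higher score = better parent.
    Shallow depth and low latency are favored, so those terms are inverted."""
    terms = (1.0 - depth, energy, rssi_etx, 1.0 - latency)
    return sum(wi * t for wi, t in zip(w, terms))

def best_parent(candidates):
    """candidates: list of (node_id, depth, energy, rssi_etx, latency)."""
    return max(candidates, key=lambda c: route_score(*c[1:]))
```

A fuzzy controller would replace the fixed weights with membership functions and rules, but the selection step, pick the candidate with the best aggregate score, is the same.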
In this work, a novel compact wideband reconfigurable circularly polarised (CP) dielectric resonator antenna (DRA) is presented. The L-shaped dielectric resonator antenna is excited by an inverted-question-mark-shaped feed. This feed-line arrangement helps generate two orthogonal modes inside the DR, which makes the design circularly polarised. A thin microstrip line placed on the defected ground plane not only helps generate a wideband response but also assists in positioning the two diode switches. These switches, located to the left and right of the microstrip line, enable two switching operations. The compact design offers reconfigurability between 2.9 and 3.8 GHz, which can be used for several important wireless applications. For switching operation I, the achieved impedance bandwidth is 24% while the axial ratio bandwidth (ARBW) is 42%; in this switching state, the design has 100% CP performance. Similarly, switching operation II achieves 60% impedance bandwidth and 58.88% ARBW with 76.36% CP performance. The proposed design has a maximum measured gain of 3.4 dBi and 93% radiation efficiency, and is novel in terms of compactness and performance parameters. A prototype was fabricated for performance analysis, showing that the simulated and measured results are in close agreement.
The last decade has seen an explosion in the use of social media, which raises several challenges related to the security of personal files, including images. These challenges include modification, illegal copying, identity fraud, copyright protection, and ownership of images. Traditional digital watermarking techniques embed digital information inside other digital content, without affecting visual quality, for security purposes. In this paper, we propose a hybrid digital watermarking and image processing approach to improve the image security level. Specifically, variants of the widely used Least-Significant Bit (LSB) watermarking technique are merged with a blob detection algorithm to embed information into the boundary pixels of the largest blob of a digital image. The proposed algorithms are tested in several experiments, followed by uploading the watermarked images to a social media site to evaluate the probability of extracting the embedded watermarks. The results show that the proposed approaches outperform the traditional LSB algorithm in terms of time, evaluation criteria, and the percentage of pixels changed.
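The baseline LSB technique the paper builds on can be sketched on a flat list of pixel bytes: each payload bit overwrites the least-significant bit of one byte, changing its value by at most 1. The blob-boundary variant proposed in the paper would change only which pixel indices are written, not the embedding step itself.

```python
def embed_lsb(pixels, bits):
    """Write each payload bit into the least-significant bit of one pixel byte."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear LSB, then set it to the payload bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [120, 121, 200, 55, 90, 17]  # toy grayscale bytes
bits = [1, 0, 1, 1]                   # toy watermark payload
marked = embed_lsb(pixels, bits)
```

Because each byte moves by at most one intensity level, the watermark is visually imperceptible, which is exactly why recompression by a social media site (as tested in the paper) can destroy it.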
The use of electronic communication has increased significantly over the last few decades. Email is one of the most well-known means of electronic communication. Traditional email applications are widely used by a large population; however, illiterate and semi-illiterate people face challenges in using them. A large proportion of Pakistan's population is illiterate and has little or no experience with computers. In this paper, we investigate the challenges illiterate and semi-illiterate people face in using email applications. We also propose a solution by developing an application tailored to their needs. Research shows that illiterate people are good at learning designs that convey information with pictures instead of text only, and focus better on one object or action at a time. Our proposed solution is based on user interfaces that consist of icons and vocal/audio instructions instead of text. Further, we use background voice/audio, which is more helpful than flooding a picture with a lot of information. We tested our application with a large number of users of various skill levels (from no computer knowledge to expert). The results of our usability tests indicate that the application can be used by illiterate people without any training or third-party help.
Medical image super-resolution is a fundamental challenge due to absorption and scattering in tissues, and these challenges are increasing interest in the quality of medical images. Recent research has shown that rapid progress in convolutional neural networks (CNNs) has achieved superior performance in medical image super-resolution. However, traditional CNN approaches use interpolation techniques as a preprocessing stage to enlarge low-resolution magnetic resonance (MR) images, adding extra noise to the models and increasing memory consumption. Furthermore, conventional deep CNN approaches connect layers in series to build deeper models, so later layers cannot receive complete information and may act as dead layers. In this paper, we propose an Inception-ResNet-based Network for MRI Image Super-Resolution, known as IRMRIS. In our approach, bicubic interpolation is replaced with a deconvolution layer to learn the upsampling filters. Furthermore, a residual skip connection with an Inception block is used to reconstruct a high-resolution output image from a low-quality input image. Quantitative and qualitative evaluations of the proposed method, supported by extensive experiments, show that it reconstructs sharper and cleaner texture details than state-of-the-art methods.
A signal is an entity that carries information. In the field of communication, a signal is a time-varying quantity, or function of time, and signals are interrelated by a set of different equations. Sometimes, however, processing adds noise to the information signal and the signal becomes noisy. It is very important to recover the information from a corrupted signal, which is why we use filters. In this paper, a Butterworth filter is designed for signal analysis and compared with other filters. It has a maximally flat response in the pass band, that is, no ripples in the pass band. To meet the specification, a 6th-order Butterworth filter was chosen because it is flat in the pass band and has no ripples in the stop band.
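The maximally flat property quoted above follows directly from the Butterworth magnitude response, |H(jw)|^2 = 1 / (1 + (w/wc)^(2n)): the passband stays ripple-free for any order n, and the gain at the cutoff frequency is always 1/sqrt(2), i.e. -3 dB. A quick numerical check for the 6th-order case (normalized cutoff, as a sketch rather than the paper's exact design):

```python
import math

def butterworth_mag(w, wc=1.0, n=6):
    """Magnitude response of an nth-order analog Butterworth low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (w / wc) ** (2 * n))

# Gain at the cutoff frequency is -3 dB regardless of order:
gain_db_at_cutoff = 20 * math.log10(butterworth_mag(1.0))
```

Raising the order sharpens the transition (at w = 2wc a 6th-order filter is already ~36 dB down) without ever introducing ripple, which is the trade-off that motivates the choice of n = 6 here.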
We review recent work on narrowband orthogonally polarized optical RF single-sideband generators as well as dual-channel equalization, both based on high-Q integrated ring resonators. The devices operate in the optical telecommunications C-band and enable RF operation over a range of either fixed or thermally tunable frequencies. They operate via TE/TM mode birefringence in the resonator. We achieve a very large dynamic tuning range of over 55 dB for both the optical carrier-to-sideband ratio and the dual-channel RF equalization, for both the fixed and tunable devices.
We review recent work on broadband RF channelizers based on integrated optical frequency Kerr micro-combs combined with passive micro-ring resonator filters, with micro-combs having channel spacings of 200 and 49 GHz. This approach to realizing RF channelizers offers reduced complexity, size, and potential cost for a wide range of microwave signal detection applications.
Predicting the direction of the stock market has always been a huge challenge, and accurate forecasting reduces risk in the financial market, ensuring that brokers can make normal returns. Despite the complexities of the stock market, the challenge has been increasingly addressed by experts in a variety of disciplines, including economics, statistics, and computer science. The introduction of machine learning has brought a deeper understanding of the financial market, and many experiments in predicting future stock price trends have met with varying degrees of success. In this paper, we propose a method to predict stocks from different industries and markets, performing trend prediction using traditional machine learning algorithms such as linear regression and polynomial regression, and time-series prediction using special types of recurrent neural networks, including long short-term memory (LSTM).
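The simplest model named in the abstract, ordinary least-squares linear regression, fits a trend line by the closed-form normal equations. The sketch below fits a line to a made-up price series and extrapolates one step; the data are illustrative only.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope*x + intercept (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [0, 1, 2, 3, 4]
prices = [10.0, 10.5, 11.0, 11.5, 12.0]  # toy, perfectly linear series
slope, intercept = fit_line(days, prices)
next_price = slope * 5 + intercept  # one-step-ahead trend extrapolation
```

Polynomial regression generalizes this by fitting powers of x, while an LSTM instead learns the mapping from a window of past prices to the next one, trading the closed form for the ability to capture nonlinear temporal patterns.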
Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously; the shared cache architecture and cache replacement are thus central to improving memory performance. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) Central Processing Unit (CPU) 2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the multi-core processor with the DPCAM and NFRA designs, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput and records performance improvements of up to 8.7% on various SPEC 2006 benchmarks. The miss rate also improves by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems of shared caches in multi-core processors.
Multi-core systems often use multiple levels of cache to bridge the gap between processor and memory speed. This paper presents a new design for a dedicated pipelined cache memory for multi-core processors called dual-port content addressable memory (DPCAM). In addition, it proposes a new hardware-based replacement algorithm, the near-far access replacement algorithm (NFRA), to reduce the cost overhead of the cache controller and improve cache access latency. Experimental results indicate that the latencies of write and read operations are significantly lower than those of a set-associative cache memory. Moreover, the latency of a read operation is nearly constant regardless of the size of the DPCAM. However, an estimation of power dissipation showed that DPCAM consumes about 7% more power than a set-associative cache memory of the same size. These results encourage embedding DPCAM within multi-core processors as a small shared cache memory.
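The defining property of a content-addressable memory, lookup by content (tag) rather than by index, can be modeled in a few lines. The toy below uses a sequential scan where real CAM hardware compares every entry in parallel, and its FIFO-style write pointer is only a loose stand-in for a replacement policy like NFRA; it illustrates the CAM principle, not the DPCAM circuit itself.

```python
class ToyCAM:
    """Software model of a small content-addressable memory."""

    def __init__(self, size):
        self.entries = [None] * size  # slots hold (tag, data) pairs
        self.next = 0                 # FIFO-style write pointer

    def write(self, tag, data):
        """Store at the write pointer, overwriting the oldest entry when full."""
        self.entries[self.next] = (tag, data)
        self.next = (self.next + 1) % len(self.entries)

    def search(self, tag):
        """Match on content; hardware would compare all entries at once."""
        for e in self.entries:
            if e is not None and e[0] == tag:
                return e[1]
        return None
```

The near-constant read latency reported for DPCAM corresponds to the parallel compare: in hardware the scan below is a single-cycle match across all entries, independent of memory size.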
文摘This research aims to address the challenges of fault detection and isolation(FDI)in digital grids,focusing on improving the reliability and stability of power systems.Traditional fault detection techniques,such as rule-based fuzzy systems and conventional FDI methods,often struggle with the dynamic nature of modern grids,resulting in delays and inaccuracies in fault classification.To overcome these limitations,this study introduces a Hybrid NeuroFuzzy Fault Detection Model that combines the adaptive learning capabilities of neural networks with the reasoning strength of fuzzy logic.The model’s performance was evaluated through extensive simulations on the IEEE 33-bus test system,considering various fault scenarios,including line-to-ground faults(LGF),three-phase short circuits(3PSC),and harmonic distortions(HD).The quantitative results show that the model achieves 97.2%accuracy,a false negative rate(FNR)of 1.9%,and a false positive rate(FPR)of 2.3%,demonstrating its high precision in fault diagnosis.The qualitative analysis further highlights the model’s adaptability and its potential for seamless integration into smart grids,micro grids,and renewable energy systems.By dynamically refining fuzzy inference rules,the model enhances fault detection efficiency without compromising computational feasibility.These findings contribute to the development of more resilient and adaptive fault management systems,paving the way for advanced smart grid technologies.
文摘Recent architectures of multi-core systems may have a relatively large number of cores that typically ranges from tens to hundreds;therefore called many-core systems.Such systems require an efficient interconnection network that tries to address two major problems.First,the overhead of power and area cost and its effect on scalability.Second,high access latency is caused by multiple cores’simultaneous accesses of the same shared module.This paper presents an interconnection scheme called N-conjugate Shuffle Clusters(NCSC)based on multi-core multicluster architecture to reduce the overhead of the just mentioned problems.NCSC eliminated the need for router devices and their complexity and hence reduced the power and area costs.It also resigned and distributed the shared caches across the interconnection network to increase the ability for simultaneous access and hence reduce the access latency.For intra-cluster communication,Multi-port Content Addressable Memory(MPCAM)is used.The experimental results using four clusters and four cores each indicated that the average access latency for a write process is 1.14785±0.04532 ns which is nearly equal to the latency of a write operation in MPCAM.Moreover,it was demonstrated that the average read latency within a cluster is 1.26226±0.090591 ns and around 1.92738±0.139588 ns for read access between cores from different clusters.
文摘In time division synchronous code division multiple access (TD-SCDMA) wireless communication systems, QPSK or 8PSK has been employed to support high data rate services and high efficiency in available bandwidth. The performance of such systems is affected by the phase noise of the microwave local oscillator. The phase noise model of synthesizer and the RF transceiver model for the phase noise effect are proposed for applications of TD-SCDMA systems. The relationship between the power spectral density (PSD) and root mean square (RMS) phase error is given. Then, the error vector magnitude (EVM) performance is analytically evaluated by using the single side band (SSB) phase noise. Theoretical results show agreement with those obtained by measurement data and therefore can be used to derive the TD-SCDMA system performance.
文摘The analytical capacity of massive data has become increasingly necessary, given the high volume of data that has been generated daily by different sources. The data sources are varied and can generate a huge amount of data, which can be processed in batch or stream settings. The stream setting corresponds to the treatment of a continuous sequence of data that arrives in real-time flow and needs to be processed in real-time. The models, tools, methods and algorithms for generating intelligence from data stream culminate in the approaches of Data Stream Mining and Data Stream Learning. The activities of such approaches can be organized and structured according to Engineering principles, thus allowing the principles of Analytical Engineering, or more specifically, Analytical Engineering for Data Stream (AEDS). Thus, this article presents the AEDS conceptual framework composed of four pillars (Data, Model, Tool, People) and three processes (Acquisition, Retention, Review). The definition of these pillars and processes is carried out based on the main components of data stream setting, corresponding to four pillars, and also on the necessity to operationalize the activities of an Analytical Organization (AO) in the use of AEDS four pillars, which determines the three proposed processes. The AEDS framework favors the projects carried out in an AO, that is, its Analytical Projects (AP), to favor the delivery of results, or Analytical Deliverables (AD), carried out by the Analytical Teams (AT) in order to provide intelligence from stream data.
Abstract: Breast cancer is among the leading causes of cancer mortality globally, and its diagnosis through histopathological image analysis is often prone to inter-observer variability and misclassification. Existing machine learning (ML) methods struggle with intra-class heterogeneity and inter-class similarity, necessitating more robust classification models. This study presents an ML classifier ensemble hybrid model that combines deep feature extraction with deep learning (DL) and Bat Swarm Optimization (BSO) hyperparameter tuning to improve breast cancer histopathology (BCH) image classification. A dataset of 804 Hematoxylin and Eosin (H&E) stained images labeled as Benign, In situ, Invasive, and Normal (ICIAR2018_BACH_Challenge) was utilized. ResNet50 was used for feature extraction, while Support Vector Machines (SVM), Random Forests (RF), XGBoost (XGB), Decision Trees (DT), and AdaBoost (ADB) were used for classification. BSO performed hyperparameter optimization in a soft-voting ensemble approach. Accuracy, precision, recall, specificity, F1-score, Receiver Operating Characteristic (ROC), and Precision-Recall (PR) curves served as performance metrics. The ensemble outperformed the individual classifiers, with greater accuracy (~90.0%), precision (~86.4%), recall (~86.3%), and specificity (~96.6%). The robustness of the model was verified by both ROC and PR curves, which showed AUC values of 1.00, 0.99, and 0.98 for the Benign, Invasive, and In situ classes, respectively. This ensemble model delivers a strong and clinically valid methodology for breast cancer classification that enhances precision and minimizes diagnostic errors. Future work should focus on explainable AI, multi-modal fusion, few-shot learning, and edge computing for real-world deployment.
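The soft-voting step described above can be sketched as follows; this is a minimal stand-in that averages (optionally weighted) class-probability vectors from several classifiers and returns the winning class index. The function name, the class ordering, and the example probabilities are assumptions for illustration, not the paper's implementation.

```python
def soft_vote(prob_lists, weights=None):
    """Weighted average of per-classifier class-probability vectors;
    returns (predicted class index, averaged probabilities)."""
    weights = weights or [1.0] * len(prob_lists)
    total = sum(weights)
    n_classes = len(prob_lists[0])
    avg = [sum(w * probs[c] for w, probs in zip(weights, prob_lists)) / total
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical outputs from three classifiers over four classes
# (index 0=Benign, 1=In situ, 2=Invasive, 3=Normal):
svm = [0.10, 0.20, 0.60, 0.10]
rf  = [0.05, 0.15, 0.70, 0.10]
xgb = [0.20, 0.30, 0.40, 0.10]
label, avg = soft_vote([svm, rf, xgb])
```

Averaging probabilities (rather than hard majority voting) lets a confident classifier outweigh two hesitant ones, which is the usual motivation for the soft variant.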
Funding: Funded by A’Sharqiyah University, Sultanate of Oman, under Research Project Grant Number BFP/RGP/ICT/22/490.
Abstract: Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks, sunglasses, and other obstructions. Addressing this issue is crucial for applications such as surveillance, biometric authentication, and human-computer interaction. This paper provides a comprehensive review of face detection techniques developed to handle occluded faces. Studies are categorized into four main approaches: feature-based, machine learning-based, deep learning-based, and hybrid methods. We analyze state-of-the-art studies within each category, examining their methodologies, strengths, and limitations on widely used benchmark datasets and highlighting their adaptability to partial and severe occlusions. The review also identifies key challenges, including dataset diversity, model generalization, and computational efficiency. Our findings reveal that deep learning methods dominate recent studies, benefiting from their ability to extract hierarchical features and handle complex occlusion patterns. More recently, researchers have increasingly explored Transformer-based architectures, such as the Vision Transformer (ViT) and Swin Transformer, to further improve detection robustness under challenging occlusion scenarios. In addition, hybrid approaches, which combine traditional and modern techniques, are emerging as a promising direction for improving robustness. This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world, occlusion-prone environments. Further improvements and broader datasets are required to develop more scalable, robust, and efficient models that can handle complex occlusions in real-world scenarios.
Funding: Funded by A’Sharqiyah University, Sultanate of Oman, under Research Project Grant Number BFP/RGP/ICT/22/490.
Abstract: Face detection is a critical component in modern security, surveillance, and human-computer interaction systems, with widespread applications in smartphones, biometric access control, and public monitoring. However, detecting faces with high levels of occlusion, such as those covered by masks, veils, or scarves, remains a significant challenge, as traditional models often fail to generalize under such conditions. This paper presents a hybrid approach that combines traditional handcrafted feature extraction, namely the Histogram of Oriented Gradients (HOG) and Canny edge detection, with modern deep learning models. The goal is to improve face detection accuracy under occlusion. The proposed method leverages the structural strengths of HOG and edge-based object proposals while exploiting the feature extraction capabilities of Convolutional Neural Networks (CNNs). The effectiveness of the proposed model is assessed using a custom dataset containing 10,000 heavily occluded face images and a subset of the Common Objects in Context (COCO) dataset for non-face samples; COCO was selected for the variety and realism of its background contexts. Experimental evaluations demonstrate significant performance improvements over baseline CNN models. DenseNet121 combined with HOG outperforms the other counterparts in classification metrics, with an F1-score of 87.96% and precision of 88.02%. Performance is further enhanced through reduced false positives and improved localization accuracy with the integration of object proposals based on Canny and contour detection. While the proposed method increases inference time from 33.52 to 97.80 ms, it improves precision from 80.85% to 88.02% when comparing the baseline DenseNet121 model to its hybrid counterpart. Limitations include higher computational cost and the need for careful tuning of parameters across the edge detection, handcrafted feature, and CNN components. These findings highlight the potential of combining handcrafted and learned features for occluded face detection tasks.
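As a minimal illustration of the handcrafted HOG features mentioned above, the sketch below computes a single-cell histogram of gradient orientations weighted by gradient magnitude from a small 2-D intensity array; the function name, the central-difference gradients, and the 9-bin unsigned-orientation layout are common HOG conventions assumed here, not the paper's exact pipeline.

```python
import math

def gradient_orientation_histogram(img, bins=9):
    """Minimal HOG-style descriptor for one cell: a histogram of
    gradient orientations (unsigned, 0-180 degrees) weighted by
    gradient magnitude. `img` is a 2-D list of intensities."""
    hist = [0.0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist
```

On an image containing only a vertical step edge, all gradient energy lands in the 0-degree bin, which is the structural regularity HOG exploits.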
Abstract: The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for a specific task, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process but often neglect the group structure and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
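The FISTA-with-adaptive-restart solver mentioned above can be sketched for the simpler plain-lasso objective min 0.5*||Ax - b||^2 + lam*||x||_1 (the paper's multivariate sparse group lasso adds group-level penalties on top of this); the function names, the function-value restart test, and the fixed step size are illustrative assumptions, not the paper's solver.

```python
import math

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def fista_lasso(A, b, lam, step, iters=500):
    """FISTA with function-value adaptive restart for
    min 0.5*||Ax - b||^2 + lam*||x||_1 (A given as a list of rows)."""
    n = len(A[0])

    def residual(z):
        return [sum(row[j] * z[j] for j in range(n)) - bi
                for row, bi in zip(A, b)]

    def grad(z):                      # A^T (Az - b)
        r = residual(z)
        return [sum(A[i][j] * r[i] for i in range(len(A)))
                for j in range(n)]

    def obj(z):
        r = residual(z)
        return 0.5 * sum(ri * ri for ri in r) + lam * sum(abs(zj) for zj in z)

    x = [0.0] * n
    y = list(x)
    t = 1.0
    prev = obj(x)
    for _ in range(iters):
        g = grad(y)
        x_new = soft_threshold([y[j] - step * g[j] for j in range(n)],
                               step * lam)
        f = obj(x_new)
        if f > prev:                  # adaptive restart: discard momentum
            t, y = 1.0, list(x)
            continue
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [x_new[j] + (t - 1.0) / t_new * (x_new[j] - x[j])
             for j in range(n)]
        x, t, prev = x_new, t_new, f
    return x
```

The restart test (resetting the momentum parameter whenever the objective increases) is what keeps the accelerated iteration monotone in practice on non-smooth problems like this one.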
Abstract: Underwater Wireless Sensor Networks (UWSNs) are gaining popularity because of their potential uses in oceanography, seismic activity monitoring, environmental preservation, and underwater mapping. Yet these networks face challenges such as self-interference, long propagation delays, limited bandwidth, and changing network topologies, which are addressed by designing advanced routing protocols. In this work, we present the Underwater Fuzzy Routing Protocol for Low-power and Lossy networks (UWF-RPL), an enhanced fuzzy-based protocol that improves decision-making during path selection and traffic distribution over different network nodes. Our method extends RPL with fuzzy logic to optimize depth, energy, the Received Signal Strength Indicator (RSSI) to Expected Transmission Count (ETX) ratio, and latency. The proposed protocol outperforms other techniques in that it offers more energy efficiency, better packet delivery, low delay, and no queue overflow. It also exhibits better scalability and reliability in dynamic underwater networks, which is critical for maintaining efficient network operation and an optimized UWSN lifetime. Compared to other recent methods, it improves network convergence time (10%–23%), energy efficiency (15%), packet delivery (17%), and delay (24%).
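A hedged sketch of the fuzzy path-selection idea above: each candidate parent's metrics are normalized to [0, 1] (higher = better), mapped through a triangular membership function, and combined with a fuzzy AND (minimum). The membership shapes, the min-combination, and the metric normalization are illustrative assumptions, not the protocol's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b,
    then falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def parent_score(depth, energy, rssi_etx, latency):
    """Fuzzy AND (min) of 'good' memberships over normalized metrics,
    where every input is already scaled to [0, 1] with 1 = best."""
    good = lambda v: tri(v, 0.0, 1.0, 2.0)   # identity ramp on [0, 1]
    return min(good(depth), good(energy), good(rssi_etx), good(latency))

# Hypothetical candidate parents: (depth, energy, RSSI/ETX, latency) scores
candidates = [
    (0.9, 0.8, 0.7, 0.9),   # strong all-round
    (1.0, 0.2, 0.9, 0.8),   # nearly drained battery
    (0.6, 0.9, 0.5, 0.7),   # weak link quality
]
best = max(range(len(candidates)), key=lambda i: parent_score(*candidates[i]))
```

Using min as the conjunction means a single poor metric (such as a nearly drained battery) vetoes an otherwise attractive parent, which mirrors how fuzzy rule bases avoid routing through weak links.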
Funding: Supported by a National Natural Science Foundation of China grant funded by the Chinese Government (No. 61861043).
Abstract: In this work, a novel compact wideband reconfigurable circularly polarized (CP) dielectric resonator antenna (DRA) is presented. The L-shaped dielectric resonator is excited by an inverted-question-mark-shaped feed. This feed-line arrangement generates two orthogonal modes inside the DR, making the design circularly polarized. A thin microstrip line placed on the defected ground plane not only helps to generate a wideband response but also assists in positioning the two diode switches. These switches, located to the left and right of the microstrip line, enable two switching operations. The compact design offers reconfigurability between 2.9–3.8 GHz, which can serve several important wireless applications. For switching operation I, the achieved impedance bandwidth is 24% while the axial ratio bandwidth (ARBW) is 42%; in this state, the design has 100% CP performance. Similarly, switching operation II achieves 60% impedance bandwidth and 58.88% ARBW with 76.36% CP performance. The proposed design has a maximum measured gain of 3.4 dBi and 93% radiation efficiency, and is novel in terms of compactness and performance parameters. A prototype was fabricated for performance analysis, showing close agreement between simulated and measured results.
Abstract: The last decade has seen an explosion in the use of social media, which raises several challenges related to the security of personal files, including images. These challenges include modification, illegal copying, identity fraud, copyright protection, and ownership of images. Traditional digital watermarking techniques embed digital information inside other digital content without affecting its visual quality. In this paper, we propose a hybrid digital watermarking and image processing approach to improve the image security level. Specifically, variants of the widely used Least-Significant Bit (LSB) watermarking technique are merged with a blob detection algorithm to embed information into the boundary pixels of the largest blob of a digital image. The proposed algorithms are tested in several experiments, after which the watermarked images are uploaded to a social media site to evaluate the probability of extracting the embedded watermarks. The results show that the proposed approaches outperform the traditional LSB algorithm in terms of time, evaluation criteria, and the percentage of pixels that have changed.
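The basic LSB embedding that the variants above build on can be sketched in a few lines; this toy version writes watermark bits into the least-significant bit of a flat list of 8-bit pixel values. The boundary-pixel selection via blob detection is omitted, and the function names are illustrative, not the paper's implementation.

```python
def embed_lsb(pixels, bits):
    """Return a copy of `pixels` (0-255 ints) whose first len(bits)
    entries carry one watermark bit each in the least-significant bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | (bit & 1)
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

Because embedding changes each carrier pixel by at most one gray level, the watermark is visually imperceptible, which is the property the traditional LSB technique relies on.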
Funding: This work is supported by the Security Testing Lab established at the University of Engineering & Technology Peshawar under the funded project National Center for Cyber Security of the Higher Education Commission (HEC), Pakistan.
Abstract: The use of electronic communication has increased significantly over the last few decades, and email is one of its most well-known forms. Traditional email applications are widely used by a large population; however, illiterate and semi-literate people face challenges in using them. A large proportion of Pakistan's population is illiterate and has little or no experience with computers. In this paper, we investigate the challenges illiterate and semi-literate people face when using email applications, and we propose a solution: an application tailored to their needs. Research shows that illiterate people learn best from designs that convey information with pictures instead of text only, and that they focus on one object or action at a time. Our proposed solution is based on user interfaces that consist of icons and vocal/audio instructions instead of text. Further, we use background voice/audio, which is more helpful than flooding a picture with a lot of information. We tested our application with a large number of users of various skill levels (from no computer knowledge to expert). The results of our usability tests indicate that the application can be used by illiterate people without any training or third-party help.
Funding: Supported by Balochistan University of Engineering and Technology, Khuzdar, Balochistan, Pakistan.
Abstract: Medical image super-resolution is a fundamental challenge due to absorption and scattering in tissues, which has increased interest in improving the quality of medical images. Recent research has shown that rapid progress in convolutional neural networks (CNNs) has achieved superior performance in medical image super-resolution. However, traditional CNN approaches use interpolation techniques as a preprocessing stage to enlarge low-resolution magnetic resonance (MR) images, adding extra noise to the models and consuming more memory. Furthermore, conventional deep CNN approaches connect layers in series to build deeper models, in which late layers cannot receive complete information and act as dead layers. In this paper, we propose an Inception-ResNet-based Network for MRI Image Super-Resolution, called IRMRIS. In our approach, bicubic interpolation is replaced with a deconvolution layer that learns the upsampling filters. Furthermore, a residual skip connection with an Inception block is used to reconstruct a high-resolution output image from a low-quality input image. Quantitative and qualitative evaluations through extensive experiments show that the proposed method reconstructs sharper and cleaner texture details than state-of-the-art methods.
Abstract: A signal is the entity that carries information. In the field of communication, a signal is a time-varying quantity, or function of time, and signals are interrelated by sets of equations. Sometimes, however, processing is corrupted when noise is added to the information signal, making it noisy; recovering the information from such a corrupted signal is very important, which is why we use filters. In this paper, a Butterworth filter is designed for signal analysis and compared with other filters. It has a maximally flat response in the pass band, i.e., no ripples in the pass band. To meet the specification, a 6th-order Butterworth filter was chosen because it is flat in the pass band and has no ripples in the stop band.
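The maximally flat property mentioned above follows directly from the Butterworth magnitude response |H(jf)| = 1 / sqrt(1 + (f/fc)^(2n)); the small sketch below (function name and sample frequencies are illustrative) shows how a 6th-order response stays essentially flat across the pass band and rolls off monotonically beyond the cutoff.

```python
import math

def butterworth_mag(f, fc, order):
    """Magnitude response of an ideal n-th order Butterworth low-pass
    filter with cutoff frequency fc: 1 / sqrt(1 + (f/fc)^(2n))."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * order))

fc = 1000.0                              # hypothetical 1 kHz cutoff
flat = butterworth_mag(500.0, fc, 6)     # mid pass band: ~0.99988 (flat)
edge = butterworth_mag(fc, fc, 6)        # cutoff: exactly 1/sqrt(2), -3 dB
stop = butterworth_mag(2000.0, fc, 6)    # one octave above: ~-36 dB
```

Raising the order sharpens the transition (the rolloff is about 6n dB per octave) without ever introducing ripple, which is why the 6th order was sufficient here.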
Abstract: We review recent work on narrowband orthogonally polarized optical RF single-sideband generators as well as dual-channel equalization, both based on high-Q integrated ring resonators. The devices operate in the optical telecommunications C-band and enable RF operation over a range of either fixed or thermally tunable frequencies. They operate via TE/TM mode birefringence in the resonator. We achieve a very large dynamic tuning range of over 55 dB for both the optical carrier-to-sideband ratio and the dual-channel RF equalization, for both the fixed and tunable devices.
Abstract: We review recent work on broadband RF channelizers based on integrated optical frequency Kerr micro-combs combined with passive micro-ring resonator filters, with micro-combs having channel spacings of 200 and 49 GHz. This approach to realizing RF channelizers offers reduced complexity, size, and potential cost for a wide range of microwave signal detection applications.
Abstract: Predicting the direction of the stock market has always been a huge challenge, and forecasting it reduces risk in the financial market, helping brokers to make normal returns. Despite the complexities of the stock market, the challenge has been increasingly addressed by experts in a variety of disciplines, including economics, statistics, and computer science. The introduction of machine learning has deepened understanding of financial market behavior, and many experiments in predicting future stock price trends have met with varying degrees of success. In this paper, we propose a method to predict stocks from different industries and markets, performing trend prediction with traditional machine learning algorithms, such as linear regression and polynomial regression, and with time-series learning techniques based on special types of recurrent neural networks, including long short-term memory (LSTM).
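As a minimal example of the simplest model family named above, the sketch below fits an ordinary-least-squares linear trend price ≈ a + b·t to a closing-price series and extrapolates one step ahead; the function names and the toy price series are illustrative, not the paper's data or method.

```python
def fit_linear_trend(prices):
    """OLS fit of price = a + b*t over t = 0..n-1; returns (a, b)."""
    n = len(prices)
    ts = range(n)
    t_mean = sum(ts) / n
    p_mean = sum(prices) / n
    b = (sum((t - t_mean) * (p - p_mean) for t, p in zip(ts, prices))
         / sum((t - t_mean) ** 2 for t in ts))
    return p_mean - b * t_mean, b

def predict_next(prices):
    """One-step-ahead forecast from the fitted trend line."""
    a, b = fit_linear_trend(prices)
    return a + b * len(prices)
```

Recurrent models such as LSTM replace this fixed linear extrapolation with a learned, nonlinear function of the recent history, which is what lets them capture regime changes a straight line cannot.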
Abstract: Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously; the shared cache architecture and cache replacement are the central obstacles to improving memory performance. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) Central Processing Unit (CPU) 2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the multi-core processor's DPCAM and NFRA algorithms, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput, recording performance improvements of up to 8.7% on various SPEC 2006 benchmarks. The miss rate also improves by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems with shared caches in multi-core processors.
Abstract: Multi-core systems often use multiple levels of cache to bridge the gap between processor and memory speed. This paper presents a new design for a dedicated pipelined cache memory for multi-core processors, called dual-port content addressable memory (DPCAM). In addition, it proposes a new hardware-based replacement algorithm, the near-far access replacement algorithm (NFRA), to reduce the cost overhead of the cache controller and improve cache access latency. The experimental results indicate that the latencies of write and read operations are significantly lower than those of a set-associative cache memory. Moreover, the latency of a read operation is nearly constant regardless of the size of the DPCAM. However, an estimate of the power dissipation shows that DPCAM consumes about 7% more power than a set-associative cache memory of the same size. These results encourage embedding DPCAM within multi-core processors as a small shared cache memory.
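Since the abstracts do not spell out the DPCAM internals, the sketch below is only a behavioral toy model of a content-addressable memory: reads search by content (tag) rather than by address, and a full memory replaces its oldest entry on write, a deliberately simplified stand-in for the NFRA policy. The class name and the FIFO-style eviction are assumptions, not the authors' hardware design.

```python
class ToyCAM:
    """Behavioral model of a small content-addressable memory:
    lookups match on tag content; a full memory evicts its oldest
    entry on write (a simplified stand-in for a hardware policy)."""

    def __init__(self, size):
        self.size = size
        self.entries = []          # (tag, data) pairs, oldest first

    def write(self, tag, data):
        # Overwrite an existing tag in place; otherwise append,
        # evicting the oldest entry when the memory is full.
        for i, (t, _) in enumerate(self.entries):
            if t == tag:
                self.entries[i] = (tag, data)
                return
        if len(self.entries) >= self.size:
            self.entries.pop(0)
        self.entries.append((tag, data))

    def read(self, tag):
        # In hardware, all entries are compared in parallel in one
        # cycle; here that match is modeled with a linear scan.
        for t, d in self.entries:
            if t == tag:
                return d
        return None
```

The parallel compare is what gives a CAM its near-constant read latency regardless of size, consistent with the read-latency observation in the abstract, at the cost of the higher power noted there.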