Improving quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. Learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer from deficiencies in accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes is a major obstacle to developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the security services required to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way to utilize those results in continuous quality improvement. These systematic and well-specified QA processes were then used to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in a quality reporting system; this is achieved by implementing strong cryptographic algorithms. The QAS enables efficient data collection and processing for analysis and interpretation, and it prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The study serves scientific programs striving to achieve academic accreditation and paves the way for fully automating and integrating QA processes and adopting modern AI and security technologies to develop an effective QAS.
We present a novel method for scale-invariant 3D face recognition by integrating computer-generated holography with the Mellin transform. This approach leverages the scale-invariance property of the Mellin transform to address challenges related to variations in 3D facial size during recognition. By applying the Mellin transform to computer-generated holograms and performing correlation between them, which, to the best of our knowledge, is done here for the first time, we have developed a robust recognition framework capable of handling significant scale variations without compromising recognition accuracy. Digital holograms of 3D faces are generated from a face database, and the Mellin transform is employed to enable robust recognition across scale factors ranging from 0.4 to 2.0. Within this range, the method achieves 100% recognition accuracy, as confirmed by both simulation-based and hybrid optical/digital experimental validations. Numerical calculations demonstrate that our method significantly enhances the accuracy and reliability of 3D face recognition, as evidenced by sharp correlation peaks and higher peak-to-noise ratio (PNR) values than those obtained with conventional holograms without the Mellin transform. Additionally, the hybrid optical/digital joint transform correlation hardware further validates the method's effectiveness, demonstrating its capability to accurately identify and distinguish 3D faces at various scales. This work provides a promising solution for advanced biometric systems, especially those that require 3D scale-invariant recognition.
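To illustrate the scale-invariance property being exploited, the following is a minimal one-dimensional sketch (not the paper's implementation; the radial profile, scale factor, and sampling parameters are illustrative assumptions): resampling a signal on a logarithmic axis turns scaling into a shift, so the subsequent Fourier magnitude, a discrete Mellin-type transform, is nearly unchanged when the input is rescaled.

```python
# Minimal 1D sketch of Mellin-type scale invariance (illustrative assumptions only).
import numpy as np

def mellin_magnitude(signal_fn, r_min=1e-2, r_max=10.0, n=2048):
    """Sample signal_fn on a log-spaced axis and return its Fourier magnitude."""
    log_r = np.linspace(np.log(r_min), np.log(r_max), n)
    samples = signal_fn(np.exp(log_r))
    return np.abs(np.fft.fft(samples))

# A toy "hologram profile" and a rescaled copy (scale factor 2.0).
profile = lambda r: np.exp(-(r - 3.0) ** 2)
scaled = lambda r: profile(2.0 * r)

m1 = mellin_magnitude(profile)
m2 = mellin_magnitude(scaled)

# Correlating the two magnitude spectra still yields a sharp, near-unity peak
# despite the rescaling, which is the property the recognition framework relies on.
corr = np.correlate(m1 - m1.mean(), m2 - m2.mean(), mode="full")
peak = corr.max() / (np.linalg.norm(m1 - m1.mean()) * np.linalg.norm(m2 - m2.mean()))
print("normalized correlation peak:", peak)
```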
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping conditions in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor to reduce overall scattering values; this is achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted, analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method reduces the time and space complexities to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
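As a rough sketch of the histogram-gradient and flexible-threshold idea (the filter choice, block size, and offset rule below are assumptions for illustration, not the paper's exact equations), a starting threshold can be derived from the steepest change in the intensity histogram and then combined with a locally adaptive threshold:

```python
# Hypothetical illustration: histogram-gradient threshold plus adaptive binarization.
import cv2
import numpy as np

def histogram_gradient_threshold(gray):
    """Pick a starting threshold where the intensity histogram changes most sharply."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    grad = np.gradient(hist.astype(float))
    return int(np.argmax(np.abs(grad)))

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
t0 = histogram_gradient_threshold(gray)

# Flexible, locally adjusted threshold: blockSize and the offset C can be tuned per situation.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, blockSize=31, C=t0 // 16)
```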
Currently, some photorealistic computer graphics are very similar to photographic images. Photorealistic computer-generated graphics can be passed off as photographic images, causing serious security problems. The aim of this work is to use a deep neural network to distinguish photographic images (PI) from computer-generated graphics (CG). In existing approaches, image feature classification is computationally intensive and fails to achieve real-time analysis. This paper presents an effective approach to automatically identify PI and CG based on deep convolutional neural networks (DCNNs). Compared with some existing methods, the proposed method achieves real-time forensic tasks by deepening the network structure. Experimental results show that this approach can effectively identify PI and CG with an average detection accuracy of 98%.
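A minimal sketch of a DCNN-style binary classifier for this task is shown below (an illustrative toy network; the paper's actual architecture is deeper and tuned for real-time forensics):

```python
# Toy PI-vs-CG classifier sketch in PyTorch (an assumption, not the paper's network).
import torch
import torch.nn as nn

class TinyForensicsCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 2)  # two classes: photographic vs computer-generated

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyForensicsCNN()
logits = model(torch.randn(4, 3, 224, 224))  # a batch of 4 RGB patches
print(logits.shape)                           # torch.Size([4, 2])
```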
Networks play a significant role in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) has become an essential information protection strategy that tracks the state of the software and hardware operating on the network. Despite continued advances, current intrusion detection systems still struggle to improve detection precision, suffer from growing false alarm levels, and have difficulty identifying suspicious activities. To address these issues, several researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches. Machine learning models can accurately identify the underlying differences between regular and irregular information with high efficiency. Artificial intelligence, particularly machine learning methods, can be used to develop an intelligent intrusion detection framework. To achieve this objective, we propose an intrusion detection system based on a deep extreme learning machine (DELM), which first assesses security features according to their prominence and then constructs an adaptive intrusion detection system focused on the important features. We then examined the viability of our proposed DELM-based intrusion detection system by conducting dataset assessments and evaluating performance factors to validate system reliability. The experimental results illustrate that the proposed framework outperforms traditional algorithms. In fact, the framework is not only of scientific interest but also of practical importance.
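The core of an extreme learning machine, which the DELM deepens, can be sketched as follows (a single-hidden-layer illustration with placeholder data; the paper's feature assessment and layer stacking are not reproduced): hidden weights are random and fixed, and only the output weights are solved in closed form, which is what makes training fast.

```python
# Single-hidden-layer ELM sketch (an assumption; the DELM stacks several such layers).
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=128):
    W = rng.normal(size=(X.shape[1], n_hidden))          # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                         # random hidden biases
    H = np.tanh(X @ W + b)                                # hidden activations
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy traffic features: 0 = normal, 1 = intrusion (placeholder data, not a real dataset).
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]
W, b, beta = elm_train(X, Y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```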
In the era of Industry 4.0, cyber-physical systems (CPSs) have become a major area of study. Such systems frequently occur in manufacturing processes and in people's everyday lives; they communicate intensively among physical elements, which can lead to inconsistency. Due to the magnitude and importance of the systems they support, their cyber models must function effectively. In this paper, an image-processing-based anomalous mobility detection approach is suggested that may be added to systems at any time. The expense of glitches, failures, or destroyed products is decreased when anomalous activities are detected and unplanned scenarios are avoided. The presently offered techniques are not well suited to these operations, which necessitate information systems for issue treatment and classification at a degree of complexity distinct from the underlying technology. To overcome such challenges in industrial cyber-physical systems, the Image Processing aided Computer Vision Technology for Fault Detection System (IM-CVFD) is proposed in this research. An uncertainty management technique is additionally introduced to achieve optimal performance in terms of latency and effectiveness. A thorough simulation was performed in an appropriate processing facility. The study results suggest that the IM-CVFD achieves high performance, a low error frequency, low energy consumption, and low delay. In comparison to traditional approaches, the IM-CVFD produces a more efficient outcome.
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models, and various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision over the seven basic classes of facial expression, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is used to control a music player using the recognized emotions of the users.
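A minimal sketch of the decision-fusion step is given below (the fusion rule is assumed to be a plain average of per-model softmax probabilities for illustration; the paper may combine model outputs differently):

```python
# Decision-level fusion sketch: each trained CNN votes, the averaged distribution decides.
import numpy as np

CLASSES = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_decisions(per_model_logits):
    """per_model_logits: list of (n_samples, 7) arrays, one per trained CNN."""
    probs = np.mean([softmax(l) for l in per_model_logits], axis=0)
    return [CLASSES[i] for i in probs.argmax(axis=1)]

# Toy logits from three hypothetical CNNs for two face images.
rng = np.random.default_rng(1)
logits = [rng.normal(size=(2, 7)) for _ in range(3)]
print(fuse_decisions(logits))
```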
There is a great need to provide educational environments for blind and handicapped people. There are many Islamic websites and applications on the Internet dedicated to educational services for the Holy Quran and its sciences (Quran recitations, interpretations, etc.). Unfortunately, blind and handicapped people cannot use these services: they cannot use the keyboard and the mouse, and the ability to read and write is essential to benefit from the services. In this paper, we present an educational environment that allows these people to take full advantage of the scientific materials. This is done through interaction with the system using voice commands, by speaking directly without the need to write or to use the mouse. The Google Speech API is used for general speech recognition, with preprocessing and post-processing phases to improve the accuracy. For blind people, the responses to these commands are played back through the audio device instead of displaying text on the screen. The text is also displayed on the screen to help other people make use of the system.
Assistive devices for disabled people based on Brain-Computer Interaction (BCI) technology are becoming a vital area of biomedical engineering. People with physical disabilities need assistive devices to perform their daily tasks, and in these devices higher latency factors need to be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture can communicate between the different modules of the system by adopting an automated, intelligent data processing and classification approach. A NeuroSky MindWave device has been used to transfer the data to our implemented server for command propagation. A Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server to minimize system load. Testing of the implemented architecture and deep learning model shows excellent results: the proposed system achieved a high integrity level, with minimal data loss and an accurate command processing mechanism. The training and testing accuracies of the custom TN-CNN model are 99% and 93%, respectively. The proposed real-time architecture provides an intelligent data processing unit with fewer errors and will benefit assistive devices working on local and cloud servers.
Typically, a computer becomes infectious as soon as it is infected. It is a reality that no antivirus software can identify and eliminate all kinds of viruses, which suggests that infections will persist on the Internet. To better understand the dynamics of virus propagation, a computer virus spread model with fuzzy parameters is presented in this work. It is assumed that not all infected computers contribute equally to the virus transmission process and that each computer has a different degree of infectivity, which depends on the quantity of virus. Accordingly, the parameters β and γ, being functions of the computer virus load, are treated as fuzzy numbers. Using fuzzy theory helps us understand the spread of computer viruses more realistically, since these parameters have fixed values in classical models. The essential features of the model, such as the reproduction number and equilibrium analysis, are discussed in the fuzzy sense. Moreover, under fuzziness, two numerical methods, the forward Euler technique and a nonstandard finite difference (NSFD) scheme, are developed and analyzed. The numerical simulations show that the proposed NSFD method preserves the main features of the dynamic system and can be considered a reliable tool for predicting such solutions.
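To make the comparison between the two schemes concrete, the following sketch applies forward Euler and an NSFD update to a simple susceptible-infected virus model (an assumed illustrative form with crisp parameter values; the paper's model treats β and γ as fuzzy functions of the virus load). The NSFD update is constructed so that both states stay non-negative for any step size, a property forward Euler does not guarantee.

```python
# Illustrative comparison on dS/dt = -beta*S*I + gamma*I, dI/dt = beta*S*I - gamma*I.
def forward_euler(S, I, beta, gamma, h):
    S_new = S + h * (-beta * S * I + gamma * I)
    I_new = I + h * (beta * S * I - gamma * I)
    return S_new, I_new

def nsfd(S, I, beta, gamma, h):
    # Loss terms moved to the denominator keep S and I positive for any h.
    S_new = (S + h * gamma * I) / (1.0 + h * beta * I)
    I_new = (I + h * beta * S_new * I) / (1.0 + h * gamma)
    return S_new, I_new

S0, I0 = 0.9, 0.1
for scheme in (forward_euler, nsfd):
    s, i = S0, I0
    for _ in range(50):
        s, i = scheme(s, i, beta=0.8, gamma=0.3, h=2.0)  # deliberately large step size
    print(scheme.__name__, round(s, 4), round(i, 4))
```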
Security and safety remain paramount concerns for both governments and individuals worldwide. In today's context, the frequency of crimes and terrorist attacks is increasing alarmingly and becoming intolerable to society. Consequently, there is a pressing need for swift identification of potential threats to preemptively alert law enforcement and security forces, thereby preventing potential attacks or violent incidents. Recent advancements in big data analytics and deep learning have significantly enhanced the capabilities of computer vision in object detection, particularly in identifying firearms. This paper introduces a novel automatic firearm detection surveillance system utilizing a one-stage detection approach named MARIE (Mechanism for Realtime Identification of Firearms). MARIE incorporates the Single Shot Multibox Detector (SSD) model, which has been specifically optimized to balance the speed-accuracy trade-off critical in firearm detection applications. The SSD model was further refined by integrating MobileNetV2 and InceptionV2 architectures for superior feature extraction capabilities. The experimental results demonstrate that this modified SSD configuration provides highly satisfactory performance, surpassing existing methods trained on the same dataset in terms of the critical speed-accuracy trade-off. Through these innovations, MARIE sets a new standard in surveillance technology, offering a robust solution to enhance public safety effectively.
Particle Swarm Optimization (PSO) has been utilized as a useful tool for solving intricate optimization problems in various applications across different fields. This paper provides an update on PSO, reviewing its recent developments and applications, and also presents arguments for its efficacy in resolving optimization problems in comparison with other algorithms. Covering six strategic areas (Data Mining, Machine Learning, Engineering Design, Energy Systems, Healthcare, and Robotics), the study demonstrates the versatility and effectiveness of PSO. Experimental results are used to show the strong and weak points of PSO, and performance results are included in tables for ease of comparison. The results stress PSO's efficiency in providing optimal solutions but also show that there are aspects to be improved, through hybridization with other algorithms or tuning of the method's parameters. The review of the advantages and limitations of PSO is intended to provide academics and practitioners with a well-rounded view of how to employ the tool most effectively and to encourage optimized PSO designs for solving theoretical and practical problems in the future.
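For reference, the canonical PSO update reviewed here can be sketched in a few lines (textbook inertia-weight form; the coefficient values below are common illustrative choices, not taken from any specific study in the review):

```python
# Canonical inertia-weight PSO sketch with personal-best and global-best attraction.
import numpy as np

def pso_minimize(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = x + v                                                   # position update
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function; the optimum is at the origin.
print(pso_minimize(lambda p: float(np.sum(p ** 2))))
```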
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources only offer a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
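As a generic illustration of the fusion architectures surveyed (an assumption-level sketch, not any specific model from the literature), a feature-level fusion network can encode each modality separately and concatenate the representations before a shared diagnostic head:

```python
# Two-branch multimodal fusion sketch in PyTorch (e.g., an imaging embedding plus an omics profile).
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    def __init__(self, img_dim=512, omics_dim=1000, hidden=128, n_classes=4):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.omics_branch = nn.Sequential(nn.Linear(omics_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)  # fused representation -> diagnosis

    def forward(self, img_feat, omics_feat):
        fused = torch.cat([self.img_branch(img_feat), self.omics_branch(omics_feat)], dim=1)
        return self.head(fused)

model = MultimodalFusionNet()
out = model(torch.randn(8, 512), torch.randn(8, 1000))
print(out.shape)  # torch.Size([8, 4])
```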
This paper introduces the Integrated Security Embedded Resilience Architecture (ISERA) as an advanced resilience mechanism for Industrial Control Systems (ICS) and Operational Technology (OT) environments. The ISERA framework integrates security by design principles, micro-segmentation, and Island Mode Operation (IMO) to enhance cyber resilience and ensure continuous, secure operations. The methodology deploys a Forward-Thinking Architecture Strategy (FTAS) algorithm, which utilises an industrial Intrusion Detection System (IDS) implemented with Python’s Network Intrusion Detection System (NIDS) library. The FTAS algorithm successfully identified and responded to cyber-attacks, ensuring minimal system disruption. ISERA has been validated through comprehensive testing scenarios simulating Denial of Service (DoS) attacks and malware intrusions, at both the IT and OT layers where it successfully mitigates the impact of malicious activity. Results demonstrate ISERA’s efficacy in real-time threat detection, containment, and incident response, thus ensuring the integrity and reliability of critical infrastructure systems. ISERA’s decentralised approach contributes to global net zero goals by optimising resource use and minimising environmental impact. By adopting a decentralised control architecture and leveraging virtualisation, ISERA significantly enhances the cyber resilience and sustainability of critical infrastructure systems. This approach not only strengthens defences against evolving cyber threats but also optimises resource allocation, reducing the system’s carbon footprint. As a result, ISERA ensures the uninterrupted operation of essential services while contributing to broader net zero goals.
The Internet of Things (IoT) has gained substantial attention in both academic research and real-world applications. The proliferation of interconnected devices across various domains promises to deliver intelligent and advanced services. However, this rapid expansion also heightens the vulnerability of the IoT ecosystem to security threats. Consequently, innovative solutions capable of effectively mitigating risks while accommodating the unique constraints of IoT environments are urgently needed. Recently, the convergence of Blockchain technology and IoT has introduced a decentralized and robust framework for securing data and interactions, commonly referred to as the Internet of Blockchained Things (IoBT). Extensive research efforts have been devoted to adapting Blockchain technology to meet the specific requirements of IoT deployments. Within this context, consensus algorithms play a critical role in assessing the feasibility of integrating Blockchain into IoT ecosystems. The adoption of efficient and lightweight consensus mechanisms for block validation has become increasingly essential. This paper presents a comprehensive examination of lightweight, constraint-aware consensus algorithms tailored for IoBT. The study categorizes these consensus mechanisms based on their core operations, the security of the block validation process, the incorporation of AI techniques, and the specific applications they are designed to support.
Nuclei segmentation is a challenging task in histopathology images due to the small size of the objects, low contrast, touching boundaries, and the complex structure of nuclei. Nuclei segmentation and counting play an important role in cancer identification and grading. In this study, WaveSeg-UNet, a lightweight model, is introduced to segment cancerous nuclei with touching boundaries. Residual blocks are used for feature extraction, with only one feature-extractor block in each level of the encoder and decoder. Normally, images degrade in quality and lose important information during down-sampling. To overcome this loss, the discrete wavelet transform (DWT) is used alongside max-pooling in the down-sampling process, and the inverse DWT is used to regenerate the original images during up-sampling. In the bottleneck of the proposed model, atrous spatial channel pyramid pooling (ASCPP) is used to extract effective high-level features; the ASCPP is a modified pyramid pooling with atrous layers that enlarge the receptive field. Spatial and channel-based attention are used to focus on the location and class of the identified objects. Finally, the watershed transform is used as a post-processing technique to identify and refine touching boundaries of nuclei, and nuclei are identified and counted to assist pathologists. Same-domain transfer learning is used to retrain the model for domain adaptability. Results of the proposed model are compared with state-of-the-art models, and it outperforms the existing studies.
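The wavelet-based down-/up-sampling idea can be sketched as follows (assuming the PyWavelets package and a Haar wavelet for illustration; the paper's encoder combines this with max-pooling, residual blocks, and attention): a 2-D DWT halves the spatial resolution while keeping the detail sub-bands, and the inverse DWT restores the original image, unlike plain max-pooling, which discards the information needed for reconstruction.

```python
# DWT down-sampling / inverse-DWT up-sampling sketch with PyWavelets.
import numpy as np
import pywt

image = np.random.rand(256, 256)                  # placeholder feature map / image

# Down-sampling: approximation cA plus horizontal/vertical/diagonal detail bands.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
print(cA.shape)                                   # (128, 128), half resolution

# Up-sampling: the inverse DWT reassembles the original resolution without loss.
restored = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(restored, image))               # True (up to floating-point error)
```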
The exponential growth of audio data shared over the Internet and communication channels has raised significant concerns about the security and privacy of transmitted information. Due to their high processing requirements, traditional encryption algorithms demand considerable computational effort for real-time audio encryption. To address these challenges, this paper presents a permutation-based scheme for secure audio encryption using a combination of Tent and 1D logistic maps. The audio data is first shuffled using the Tent map for random permutation. A highly random secret key, with a length equal to the size of the audio data, is then generated using a 1D logistic map. Finally, the Exclusive OR (XOR) operation is applied between the generated key and the shuffled audio to yield the cipher audio. The experimental results show that the proposed method surpasses other techniques when encrypting two types of audio files, mono and stereo, with sizes up to 122 MB, sample rates of 22,050, 44,100, 48,000, and 96,000 Hz for WAV files, and a 44,100 Hz sample rate for an 11 MB MP3 file. The results show a high Mean Square Error (MSE), low Signal-to-Noise Ratio (SNR), high spectral distortion, a 100% Number of Sample Change Rate (NSCR), high Percent Residual Deviation (PRD), low Correlation Coefficient (CC), a large key space of 2^616, and high sensitivity to slight changes in the secret key, and they demonstrate that the scheme can counter several attacks, namely brute-force, statistical, differential, and noise attacks.
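A minimal sketch of the described pipeline is given below (the chaotic-map parameters and key values are illustrative assumptions; the paper's key generation and sample handling are not reproduced): the Tent map drives the permutation, the logistic map generates the keystream, and XOR produces the cipher audio. Decryption regenerates the same permutation and keystream from the secret key and reverses both steps.

```python
# Tent-map permutation + logistic-map keystream + XOR (illustrative parameters only).
import numpy as np

def chaotic_sequence(x0, n, step):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = step(x)
        xs[i] = x
    return xs

tent     = lambda x: 1.99 * min(x, 1.0 - x)   # Tent map iteration (assumed mu = 1.99)
logistic = lambda x: 3.99 * x * (1.0 - x)     # 1D logistic map iteration (assumed r = 3.99)

def keystream_and_perm(n, key):
    perm = np.argsort(chaotic_sequence(key[0], n, tent))                  # shuffle order from the Tent map
    ks = (chaotic_sequence(key[1], n, logistic) * 256).astype(np.uint8)   # byte keystream from the logistic map
    return perm, ks

def encrypt(audio_bytes, key=(0.37, 0.71)):
    data = np.frombuffer(audio_bytes, dtype=np.uint8)
    perm, ks = keystream_and_perm(len(data), key)
    return data[perm] ^ ks                                                # shuffle, then XOR

def decrypt(cipher, key=(0.37, 0.71)):
    perm, ks = keystream_and_perm(len(cipher), key)
    audio = np.empty_like(cipher)
    audio[perm] = cipher ^ ks                                             # undo XOR, then un-shuffle
    return audio.tobytes()

samples = bytes(range(16)) * 4                                            # toy stand-in for audio samples
assert decrypt(encrypt(samples)) == samples
```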
The integration of visual elements, such as emojis, into educational content represents a promising approach to enhancing student engagement and comprehension. However, existing efforts in emoji integration often lack systematic frameworks capable of addressing the contextual and pedagogical nuances required for effective implementation. This paper introduces a novel framework that combines Data-Driven Error-Correcting Output Codes (DECOC), Long Short-Term Memory (LSTM) networks, and Multi-Layer Deep Neural Networks (ML-DNN) to identify optimal emoji placements within computer science course materials. The originality of the proposed system lies in its ability to leverage sentiment analysis techniques and contextual embeddings to align emoji recommendations with both the emotional tone and learning objectives of course content. A meticulously annotated dataset, comprising diverse topics in computer science, was developed to train and validate the model, ensuring its applicability across a wide range of educational contexts. Comprehensive validation demonstrated the system's superior performance, achieving an accuracy of 92.4%, precision of 90.7%, recall of 89.3%, and an F1-score of 90.0%. Comparative analysis with baseline models and related works confirms the model's ability to outperform existing approaches in balancing accuracy, relevance, and contextual appropriateness. Beyond its technical advancements, this framework offers practical benefits for educators by providing an Artificial Intelligence-assisted (AI-assisted) tool that facilitates personalized content adaptation based on student sentiment and engagement patterns. By automating the identification of appropriate emoji placements, teachers can enhance digital course materials with minimal effort, improving the clarity of complex concepts and fostering an emotionally supportive learning environment. This paper contributes to the emerging field of AI-enhanced education by addressing critical gaps in personalized content delivery and pedagogical support. Its findings highlight the transformative potential of integrating AI-driven emoji placement systems into educational materials, offering an innovative tool for fostering student engagement and enhancing learning outcomes. The proposed framework establishes a foundation for future advancements in the visual augmentation of educational resources, emphasizing scalability and adaptability for broader applications in e-learning.