This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor, visualized as a graphical function, to reduce overall scattering values. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted, and comparison results are analyzed across four different scenarios. The comparative analysis shows that the proposed method reduces the time and space complexities to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. The learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle towards developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the security services required to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way of utilizing those results in continuous quality improvement. These systematic and well-specified QA processes were then utilized to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in the quality reporting system. This is achieved through implementing powerful cryptographic algorithms. The QAS enables efficient data collection and the processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs.
The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and paves the way for fully automating and integrating the QA processes and for adopting modern AI and security technologies to develop effective QAS.
Security and safety remain paramount concerns for both governments and individuals worldwide. In today’s context, the frequency of crimes and terrorist attacks is increasing alarmingly and becoming intolerable to society. Consequently, there is a pressing need for swift identification of potential threats to preemptively alert law enforcement and security forces, thereby preventing attacks or violent incidents. Recent advancements in big data analytics and deep learning have significantly enhanced the capabilities of computer vision in object detection, particularly in identifying firearms. This paper introduces a novel automatic firearm detection surveillance system, utilizing a one-stage detection approach named MARIE (Mechanism for Realtime Identification of Firearms). MARIE incorporates the Single Shot Multibox Detector (SSD) model, which has been specifically optimized to balance the speed-accuracy trade-off critical in firearm detection applications. The SSD model was further refined by integrating MobileNetV2 and InceptionV2 architectures for superior feature extraction capabilities. The experimental results demonstrate that this modified SSD configuration provides highly satisfactory performance, surpassing existing methods trained on the same dataset in terms of the critical speed-accuracy trade-off. Through these innovations, MARIE sets a new standard in surveillance technology, offering a robust solution to enhance public safety effectively.
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field’s present state, enduring constraints, and exciting directions for further research through this review.
This paper introduces the Integrated Security Embedded Resilience Architecture (ISERA) as an advanced resilience mechanism for Industrial Control Systems (ICS) and Operational Technology (OT) environments. The ISERA framework integrates security-by-design principles, micro-segmentation, and Island Mode Operation (IMO) to enhance cyber resilience and ensure continuous, secure operations. The methodology deploys a Forward-Thinking Architecture Strategy (FTAS) algorithm, which utilises an industrial Intrusion Detection System (IDS) implemented with a Python-based Network Intrusion Detection System (NIDS) library. The FTAS algorithm successfully identified and responded to cyber-attacks, ensuring minimal system disruption. ISERA has been validated through comprehensive testing scenarios simulating Denial of Service (DoS) attacks and malware intrusions at both the IT and OT layers, where it successfully mitigates the impact of malicious activity. Results demonstrate ISERA’s efficacy in real-time threat detection, containment, and incident response, thus ensuring the integrity and reliability of critical infrastructure systems. By adopting a decentralised control architecture and leveraging virtualisation, ISERA significantly enhances the cyber resilience and sustainability of critical infrastructure systems. This approach not only strengthens defences against evolving cyber threats but also optimises resource allocation, reducing the system’s carbon footprint. As a result, ISERA ensures the uninterrupted operation of essential services while contributing to broader net zero goals.
The Internet of Things (IoT) has gained substantial attention in both academic research and real-world applications. The proliferation of interconnected devices across various domains promises to deliver intelligent and advanced services. However, this rapid expansion also heightens the vulnerability of the IoT ecosystem to security threats. Consequently, innovative solutions capable of effectively mitigating risks while accommodating the unique constraints of IoT environments are urgently needed. Recently, the convergence of Blockchain technology and IoT has introduced a decentralized and robust framework for securing data and interactions, commonly referred to as the Internet of Blockchained Things (IoBT). Extensive research efforts have been devoted to adapting Blockchain technology to meet the specific requirements of IoT deployments. Within this context, consensus algorithms play a critical role in assessing the feasibility of integrating Blockchain into IoT ecosystems. The adoption of efficient and lightweight consensus mechanisms for block validation has become increasingly essential. This paper presents a comprehensive examination of lightweight, constraint-aware consensus algorithms tailored for IoBT. The study categorizes these consensus mechanisms based on their core operations, the security of the block validation process, the incorporation of AI techniques, and the specific applications they are designed to support.
Particle Swarm Optimization (PSO) has been utilized as a useful tool for solving intricate optimization problems in various applications across different fields. This paper provides an update on PSO, reviewing its recent developments and applications and presenting arguments for its efficacy in resolving optimization problems in comparison with other algorithms. Covering six strategic areas, namely Data Mining, Machine Learning, Engineering Design, Energy Systems, Healthcare, and Robotics, the study demonstrates the versatility and effectiveness of PSO. Experimental results are used to show the strengths and weaknesses of PSO, and performance results are included in tables for ease of comparison. The results stress PSO’s efficiency in providing optimal solutions but also show that there are aspects to be improved, whether through hybridization with other algorithms or through tuning of the method’s parameters. The review of the advantages and limitations of PSO is intended to provide academics and practitioners with a well-rounded view of how to employ the tool most effectively and to encourage optimized designs of PSO for solving theoretical and practical problems in the future.
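For readers unfamiliar with the mechanics behind the reviewed method, the canonical PSO velocity and position updates can be sketched in plain Python. This is a minimal illustrative implementation with assumed hyperparameters (inertia w, cognitive/social coefficients c1, c2, bounds), not any specific variant from the review:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal PSO minimizing f over [lo, hi]^dim (illustrative hyperparameters)."""
    random.seed(1)  # deterministic demo run
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # classic update: inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function, whose optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The swarm rapidly contracts around the origin on this toy problem; the review's point is precisely that such behavior depends on tuning w, c1, and c2.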
Nuclei segmentation in histopathology images is challenging due to the small size of objects, low contrast, touching boundaries, and the complex structure of nuclei. Nuclei segmentation and counting play an important role in cancer identification and grading. In this study, WaveSeg-UNet, a lightweight model, is introduced to segment cancerous nuclei with touching boundaries. Residual blocks are used for feature extraction, with only one feature-extractor block at each level of the encoder and decoder. Images normally degrade in quality and lose important information during down-sampling. To mitigate this loss, the discrete wavelet transform (DWT) is used alongside max-pooling in the down-sampling process, and the inverse DWT is used to regenerate the original resolution during up-sampling. In the bottleneck of the proposed model, atrous spatial channel pyramid pooling (ASCPP) is used to extract effective high-level features; ASCPP is a modified pyramid pooling with atrous layers that enlarge the receptive field. Spatial and channel attention are used to focus on the location and class of the identified objects. Finally, the watershed transform is applied as a post-processing step to identify and refine the touching boundaries of nuclei, which are then counted to facilitate pathologists. Same-domain transfer learning is used to retrain the model for domain adaptability. The proposed model was compared with state-of-the-art models and outperformed the existing studies.
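The appeal of pairing a DWT with max-pooling is that the wavelet path is lossless: the detail sub-bands keep exactly what pooling throws away, so the inverse transform can restore resolution during up-sampling. A single-level 2-D Haar transform illustrates this in plain Python (the abstract does not name the wavelet used, so Haar, the simplest choice, is an assumption here):

```python
def haar_dwt2(img):
    """Single-level 2-D Haar DWT of an even-sized grayscale image (list of rows).
    Returns four half-resolution sub-bands: approximation LL and details LH, HL, HH."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2.0  # low-pass approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 2.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 2.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse single-level Haar transform: reconstructs the original image exactly."""
    h, w = 2 * len(LL), 2 * len(LL[0])
    img = [[0.0] * w for _ in range(h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            ll, lh = LL[i // 2][j // 2], LH[i // 2][j // 2]
            hl, hh = HL[i // 2][j // 2], HH[i // 2][j // 2]
            img[i][j] = (ll + lh + hl + hh) / 2.0
            img[i][j + 1] = (ll - lh + hl - hh) / 2.0
            img[i + 1][j] = (ll + lh - hl - hh) / 2.0
            img[i + 1][j + 1] = (ll - lh - hl + hh) / 2.0
    return img
```

Discarding LH, HL, and HH recovers plain average-pooling; keeping them is what makes the down/up-sampling pair information-preserving.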
The exponential growth of audio data shared over the internet and communication channels has raised significant concerns about the security and privacy of transmitted information. Due to high processing requirements, traditional encryption algorithms demand considerable computational effort for real-time audio encryption. To address these challenges, this paper presents a permutation-based scheme for secure audio encryption using a combination of the Tent and 1D logistic maps. The audio data is first shuffled using the Tent map for random permutation. A highly random secret key, with a length equal to the size of the audio data, is then generated using the 1D logistic map. Finally, the Exclusive OR (XOR) operation is applied between the generated key and the shuffled audio to yield the cipher audio. The experimental results show that the proposed method surpasses other techniques in encrypting both mono and stereo audio files, with WAV files up to 122 MB at sample rates of 22,050, 44,100, 48,000, and 96,000 Hz, and an 11 MB MP3 file at 44,100 Hz. The results show a high Mean Square Error (MSE), low Signal-to-Noise Ratio (SNR), spectral distortion, a 100% Number of Sample Change Rate (NSCR), high Percent Residual Deviation (PRD), a low Correlation Coefficient (CC), a large key space of 2^(616), and high sensitivity to slight changes in the secret key, and demonstrate that the method can counter several attacks, namely brute-force, statistical, differential, and noise attacks.
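The shuffle-then-XOR pipeline described above can be sketched at byte level in a few lines of Python. This is a toy illustration only: the seeds, map parameters, burn-in length, and keystream quantization are assumptions, not the paper's key schedule:

```python
def tent_map(x, mu=1.99):
    """One iterate of the Tent map on (0, 1)."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def logistic_map(x, r=3.99):
    """One iterate of the 1D logistic map on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_sequence(step, seed, n, burn_in=100):
    """Iterate a chaotic map from a secret seed, discarding a transient."""
    x = seed
    for _ in range(burn_in):
        x = step(x)
    seq = []
    for _ in range(n):
        x = step(x)
        seq.append(x)
    return seq

def permutation_order(tent_seed, n):
    """Shuffle order derived from the Tent map: rank indices by chaotic values."""
    seq = chaotic_sequence(tent_map, tent_seed, n)
    return sorted(range(n), key=lambda i: seq[i])

def encrypt(samples, tent_seed=0.37, log_seed=0.61):
    """Encrypt a list of audio bytes (0-255): Tent-map shuffle, then XOR with
    a logistic-map keystream of the same length."""
    n = len(samples)
    perm = permutation_order(tent_seed, n)
    shuffled = [samples[p] for p in perm]
    key = [int(v * 256) % 256 for v in chaotic_sequence(logistic_map, log_seed, n)]
    return [s ^ k for s, k in zip(shuffled, key)]

def decrypt(cipher, tent_seed=0.37, log_seed=0.61):
    """Invert the XOR, then undo the Tent-map permutation."""
    n = len(cipher)
    key = [int(v * 256) % 256 for v in chaotic_sequence(logistic_map, log_seed, n)]
    shuffled = [c ^ k for c, k in zip(cipher, key)]
    perm = permutation_order(tent_seed, n)
    original = [0] * n
    for pos, src in enumerate(perm):
        original[src] = shuffled[pos]
    return original
```

Because both parties regenerate the same permutation and keystream from the shared seeds, decryption is an exact inverse; the key-sensitivity and statistical properties reported in the abstract concern the full-scale scheme, not this sketch.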
The integration of visual elements, such as emojis, into educational content represents a promising approach to enhancing student engagement and comprehension. However, existing efforts in emoji integration often lack systematic frameworks capable of addressing the contextual and pedagogical nuances required for effective implementation. This paper introduces a novel framework that combines Data-Driven Error-Correcting Output Codes (DECOC), Long Short-Term Memory (LSTM) networks, and Multi-Layer Deep Neural Networks (ML-DNN) to identify optimal emoji placements within computer science course materials. The originality of the proposed system lies in its ability to leverage sentiment analysis techniques and contextual embeddings to align emoji recommendations with both the emotional tone and learning objectives of course content. A meticulously annotated dataset, comprising diverse topics in computer science, was developed to train and validate the model, ensuring its applicability across a wide range of educational contexts. Comprehensive validation demonstrated the system’s superior performance, achieving an accuracy of 92.4%, precision of 90.7%, recall of 89.3%, and an F1-score of 90.0%. Comparative analysis with baseline models and related works confirms the model’s ability to outperform existing approaches in balancing accuracy, relevance, and contextual appropriateness. Beyond its technical advancements, this framework offers practical benefits for educators by providing an Artificial Intelligence-assisted (AI-assisted) tool that facilitates personalized content adaptation based on student sentiment and engagement patterns. By automating the identification of appropriate emoji placements, teachers can enhance digital course materials with minimal effort, improving the clarity of complex concepts and fostering an emotionally supportive learning environment. This paper contributes to the emerging field of AI-enhanced education by addressing critical gaps in personalized content delivery and pedagogical support. Its findings highlight the transformative potential of integrating AI-driven emoji placement systems into educational materials, offering an innovative tool for fostering student engagement and enhancing learning outcomes. The proposed framework establishes a foundation for future advancements in the visual augmentation of educational resources, emphasizing scalability and adaptability for broader applications in e-learning.
Social media has emerged as one of the most transformative developments on the internet, revolutionizing the way people communicate and interact. However, alongside its benefits, social media has also given rise to significant challenges, one of the most pressing being cyberbullying. This issue has become a major concern in modern society, particularly due to its profound negative impacts on the mental health and well-being of its victims. In the Arab world, where social media usage is exceptionally high, cyberbullying has become increasingly prevalent, necessitating urgent attention. Early detection of harmful online behavior is critical to fostering safer digital environments and mitigating the adverse effects of cyberbullying. This underscores the importance of developing advanced tools and systems to identify and address such behavior effectively. This paper investigates the development of a robust cyberbullying detection and classification system tailored for Arabic comments on YouTube. The study explores the effectiveness of various deep learning models, including Bi-LSTM (Bidirectional Long Short-Term Memory), LSTM (Long Short-Term Memory), CNN (Convolutional Neural Networks), and a hybrid CNN-LSTM, in classifying Arabic comments into binary classes (bullying or not) and multiclass categories. A comprehensive dataset of 20,000 Arabic YouTube comments was collected, preprocessed, and labeled to support these tasks. The results revealed that the CNN and hybrid CNN-LSTM models achieved the highest accuracy in binary classification, reaching an impressive 91.9%. For multiclass classification, the LSTM and Bi-LSTM models outperformed the others, achieving an accuracy of 89.5%. These findings highlight the effectiveness of deep learning approaches in mitigating cyberbullying within Arabic online communities.
The increasing reliance on digital infrastructure in modern healthcare systems has introduced significant cybersecurity challenges, particularly in safeguarding sensitive patient data and maintaining the integrity of medical services. As healthcare becomes more data-driven, cyberattacks targeting these systems continue to rise, necessitating the development of robust, domain-adapted Intrusion Detection Systems (IDS). However, current IDS solutions often lack access to domain-specific datasets that reflect realistic threat scenarios in healthcare. To address this gap, this study introduces HCKDDCUP, a synthetic dataset modeled on the widely used KDDCUP benchmark and augmented with healthcare-relevant attributes such as patient data, treatments, and diagnoses to better simulate the unique conditions of clinical environments. This research applies standard machine learning algorithms, namely Random Forest (RF), Decision Tree (DT), and K-Nearest Neighbors (KNN), to both the KDDCUP and HCKDDCUP datasets. The methodology includes data preprocessing, feature selection, dimensionality reduction, and comparative performance evaluation. Experimental results show that the RF model performed best, achieving 98% accuracy on KDDCUP and 99% on HCKDDCUP, highlighting its effectiveness in detecting cyber intrusions within a healthcare-specific context. This work contributes a valuable resource for future research and underscores the need for IDS development tailored to sector-specific requirements.
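Of the three classifiers applied in the study, KNN is the easiest to show end to end. The sketch below classifies toy "traffic records" with a from-scratch K-Nearest Neighbors vote; the feature names and values are invented for illustration, and the actual study works on full KDDCUP-style features with standard library implementations:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k closest training points (Euclidean distance)."""
    dists = sorted((math.dist(row, x), label) for row, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy records: (normalized duration, normalized bytes sent, failed logins).
train_X = [(0.1, 0.2, 0), (0.2, 0.1, 0), (0.15, 0.25, 1),
           (0.9, 0.8, 5), (0.85, 0.9, 4), (0.95, 0.7, 6)]
train_y = ["normal", "normal", "normal", "attack", "attack", "attack"]

# A quiet, low-activity record lands among the "normal" neighbours.
pred = knn_predict(train_X, train_y, (0.12, 0.18, 0))
```

Note that with raw features like failed-login counts dominating the distance, the preprocessing and feature-scaling steps the abstract mentions are not optional details; they are what makes distance-based methods such as KNN meaningful.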
In this paper, we propose a hybrid decode-and-forward and soft information relaying (HDFSIR) strategy to mitigate error propagation in coded cooperative communications. In the HDFSIR approach, the relay operates in decode-and-forward (DF) mode when it successfully decodes the received message; otherwise, it switches to soft information relaying (SIR) mode. The benefits of the DF and SIR forwarding strategies are combined to achieve better performance than deploying either strategy alone. Closed-form expressions for the outage probability and symbol error rate (SER) are derived for coded cooperative communication with HDFSIR and energy-harvesting relays. Additionally, we introduce a novel normalized log-likelihood-ratio-based soft estimation symbol (NL-SES) mapping technique, which enhances soft-symbol accuracy for higher-order modulation, and propose a model characterizing the relationship between the estimated complex soft symbol and the actual high-order modulated symbol. Furthermore, the hybrid DF-SIR strategy is extended to a distributed Alamouti space-time-coded cooperative network. To evaluate the performance of the proposed HDFSIR strategy, we implement extensive Monte Carlo simulations under varying channel conditions. The results demonstrate significant improvements, with the hybrid technique outperforming the individual DF and SIR strategies in both conventional and distributed Alamouti space-time-coded cooperative networks. Moreover, at a SER of 10^(-3), the proposed NL-SES mapping demonstrated a 3.5 dB performance gain over the conventional averaging-based mapping, highlighting its superior accuracy in estimating soft symbols for quadrature phase-shift keying modulation.
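For intuition about what a "soft symbol" is, the textbook construction for Gray-mapped QPSK can be written out directly: with bit LLR L = ln(P(b=+1)/P(b=-1)) and b in {+1, -1}, the expected bit value is E[b] = tanh(L/2), so the expected symbol follows componentwise. This is the generic construction with an assumed Gray mapping, not the paper's NL-SES technique itself:

```python
import math

def soft_qpsk_symbol(llr_i, llr_q):
    """Expected (soft) QPSK symbol from per-bit LLRs.
    With L = ln(P(b=+1)/P(b=-1)) and b in {+1, -1}, E[b] = tanh(L/2);
    for Gray-mapped QPSK the symbol is s = (b_I + j*b_Q)/sqrt(2)."""
    return complex(math.tanh(llr_i / 2.0), math.tanh(llr_q / 2.0)) / math.sqrt(2.0)

# Confident LLRs collapse toward the hard constellation point;
# uncertain LLRs shrink the soft symbol toward the origin.
confident = soft_qpsk_symbol(12.0, -12.0)
uncertain = soft_qpsk_symbol(0.2, -0.1)
```

This shrinking-toward-zero behavior is exactly what a relay forwards in SIR mode instead of a possibly wrong hard decision, and it is the quantity that mappings such as NL-SES aim to estimate more accurately.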
The NIST Cybersecurity Framework (NIST CSF) serves as a voluntary guideline aimed at helping organizations, small and medium-sized enterprises (SMEs), and critical infrastructure operators effectively manage cyber risks. Although comprehensive, the complexity of the NIST CSF can be overwhelming, especially for those lacking extensive cybersecurity resources. Current implementation tools often cater to larger companies, neglecting the specific needs of SMEs, which can be vulnerable to cyber threats. To address this gap, our research proposes a user-friendly, open-source web platform designed to simplify the implementation of the NIST CSF. This platform enables organizations to assess their risk exposure and continuously monitor their cybersecurity maturity through tailored recommendations based on their unique profiles. Our methodology includes a literature review of existing tools and standards, followed by a description of the platform’s design and architecture. Initial tests with SMEs in Burkina Faso reveal a concerning cybersecurity maturity level, indicating the urgent need for improved strategies based on our findings. By offering an intuitive interface and cross-platform accessibility, this solution aims to empower organizations to enhance their cybersecurity resilience in an evolving threat landscape. The article concludes with discussions on the practical implications and future enhancements of the tool.
The knapsack problem is a classical combinatorial optimization problem widely encountered in areas such as logistics, resource allocation, and portfolio optimization. Traditional methods, including dynamic programming (DP) and greedy algorithms, have been effective on small problem instances but often struggle with scalability and efficiency as the problem size increases. DP, for instance, has pseudo-polynomial time complexity and can become computationally prohibitive for large problem instances. Greedy algorithms, on the other hand, offer faster solutions but may not always yield optimal results, especially when the problem involves complex constraints or large numbers of items. This paper introduces a novel reinforcement learning (RL) approach to the knapsack problem that enhances the state representation within the learning environment. We propose a representation in which item weights and volumes are expressed as ratios relative to the knapsack’s capacity, and item values are normalized to represent their percentage of the total value across all items. This state modification leads to a 5% improvement in accuracy compared to state-of-the-art RL-based algorithms, while significantly reducing execution time. Our RL-based method outperforms DP by over 9000 times in terms of speed, making it highly scalable for larger problem instances. Furthermore, we improve the performance of the RL model by incorporating Noisy layers into the neural network architecture; the Noisy layers enhance the exploration capabilities of the agent, yielding an additional accuracy boost of 0.2%-0.5%. The results demonstrate that our approach not only surpasses existing RL techniques, such as the Transformer model, in terms of accuracy, but also provides a substantial improvement over DP in computational efficiency. This combination of enhanced accuracy and speed presents a promising solution for tackling large-scale optimization problems in real-world applications, where both precision and time are critical factors.
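The normalized state encoding described above can be pictured in a few lines. This is a schematic sketch: the function and parameter names, and the exact per-item feature layout, are assumptions, since the abstract describes only the normalization idea:

```python
def encode_state(weights, volumes, values, remaining_w, remaining_v):
    """Per-item features for the RL agent: weight and volume as ratios of the
    remaining knapsack capacities, value as the item's share of the total value.
    A ratio above 1.0 means the item no longer fits."""
    total_value = float(sum(values)) or 1.0  # guard against an all-zero instance
    return [
        (w / remaining_w, vol / remaining_v, val / total_value)
        for w, vol, val in zip(weights, volumes, values)
    ]

# Three items against a knapsack with weight capacity 10 and volume capacity 8.
state = encode_state(weights=[3, 5, 8], volumes=[2, 4, 6],
                     values=[30, 50, 20], remaining_w=10.0, remaining_v=8.0)
```

Because every feature is a dimensionless ratio, the same trained policy can be applied to instances of any absolute scale, which is plausibly what drives the reported accuracy and speed gains.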
Medical image analysis based on deep learning has become an important technical requirement in the field of smart healthcare. In view of the difficulties of collaboratively modeling local details and global features in multimodal ophthalmic image analysis, and of the information redundancy in cross-modal data fusion, this paper proposes a multimodal fusion framework based on cross-modal collaboration and a weighted attention mechanism. For feature extraction, the framework collaboratively extracts local fine-grained features and global structural dependencies through a parallel dual-branch architecture, overcoming the limitations of traditional single-modality models in capturing either local or global information. For the fusion strategy, the framework designs a cross-modal dynamic fusion strategy that combines overlapping multi-head self-attention modules with a bidirectional feature alignment mechanism, addressing the bottlenecks of low feature-interaction efficiency and excessive attention-fusion computation in traditional parallel fusion. It further introduces cross-domain local integration, which enhances the representation of lesion areas through pixel-level feature recalibration and improves diagnostic robustness for complex cases. Experiments show that the framework exhibits excellent feature expression and generalization performance in cross-domain scenarios spanning ophthalmic medical images and natural images, providing a high-precision, low-redundancy fusion paradigm for multimodal medical image analysis and promoting the upgrade of intelligent diagnosis and treatment from single-modal static analysis to dynamic decision-making.
This research presents a novel nature-inspired metaheuristic optimization algorithm called the Narwhale Optimization Algorithm (NWOA). The algorithm draws inspiration from the foraging and prey-hunting strategies of narwhals, the “unicorns of the sea”, particularly the use of their distinctive spiral tusks, which play significant roles in hunting, searching for prey, navigation, echolocation, and complex social interaction. In particular, the NWOA imitates the foraging strategies and techniques of narwhals when hunting for prey, focusing mainly on the cooperative and exploratory behavior shown during group hunting and on the use of their tusks in sensing and locating prey under the Arctic ice. These behaviors provide a strong basis for investigating the algorithm’s ability to balance exploration and exploitation, its convergence speed, and its solution accuracy. The performance of the NWOA is evaluated on 30 benchmark test functions. A comparison study using the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Perfumer Optimization Algorithm (POA), Candle Flame Optimization (CFO) Algorithm, Particle Swarm Optimization (PSO) Algorithm, and Genetic Algorithm (GA) validates the results. As evidenced by the experimental results, NWOA yields competitive outcomes among these well-known optimizers and outperforms them in several instances. These results suggest that NWOA is an effective and robust optimization tool suitable for solving many different complex real-world optimization problems.
Arabic Dialect Identification(DID)is a task in Natural Language Processing(NLP)that involves determining the dialect of a given piece of text in Arabic.The state-of-the-art solutions for DID are built on various deep neural networks that commonly learn the representation of sentences in response to a given dialect.Despite the effectiveness of these solutions,the performance heavily relies on the amount of labeled examples,which is labor-intensive to attain and may not be readily available in real-world scenarios.To alleviate the burden of labeling data,this paper introduces a novel solution that leverages unlabeled corpora to boost performance on the DID task.Specifically,we design an architecture that enables learning the shared information between labeled and unlabeled texts through a gradient reversal layer.The key idea is to penalize the model for learning source-dataset-specific features and thus enable it to capture common knowledge regardless of the label.Finally,we evaluate the proposed solution on benchmark datasets for DID.Our extensive experiments show that it performs significantly better,especially with sparse labeled data.By comparing our approach with existing Pre-trained Language Models(PLMs),we achieve a new state-of-the-art performance in the DID field.The code will be available on GitHub upon the paper's acceptance.
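The gradient reversal trick described above is usually implemented as a custom autograd operation inside a deep-learning framework; a framework-free sketch of its two passes (the class name and the λ scaling factor are illustrative, not taken from the paper):

```python
class GradientReversalLayer:
    """Identity in the forward pass; multiplies incoming gradients by -lam
    in the backward pass, so a shared encoder is pushed AWAY from features
    that help a source-dataset classifier."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged.
        return list(x)

    def backward(self, grad_output):
        # Gradients arrive sign-flipped (and scaled) at the encoder.
        return [-self.lam * g for g in grad_output]

grl = GradientReversalLayer(lam=0.5)
```

During training, features flow through unchanged while gradients from the dataset-discriminating head reach the shared encoder sign-flipped, discouraging dataset-specific representations.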
With increasing density and heterogeneity in unlicensed wireless networks,traditional MAC protocols,such as Carrier Sense Multiple Access with Collision Avoidance(CSMA/CA)in Wi-Fi networks,are experiencing performance degradation.This is manifested in increased collisions and extended backoff times,leading to diminished spectrum efficiency and protocol coordination.Addressing these issues,this paper proposes a deep-learning-based MAC paradigm,dubbed DL-MAC,which leverages spectrum data readily available from energy detection modules in wireless devices to achieve the MAC functionalities of channel access,rate adaptation,and channel switching.First,we utilize DL-MAC to realize a joint design of channel access and rate adaptation.Subsequently,we integrate the capability of channel switching into DL-MAC,enhancing its functionality from single-channel to multi-channel operations.Specifically,the DL-MAC protocol incorporates a Deep Neural Network(DNN)for channel selection and a Recurrent Neural Network(RNN)for the joint design of channel access and rate adaptation.We conducted real-world data collection within the 2.4 GHz frequency band to validate the effectiveness of DL-MAC.Experimental results demonstrate that DL-MAC exhibits significantly superior performance compared to traditional algorithms in both single and multi-channel environments,and also outperforms single-function designs.Additionally,the performance of DL-MAC remains robust,unaffected by channel-switching overheads within the evaluation range.
Funding: Financially supported by the Ongoing Research Funding Program(ORF-2025-846),King Saud University,Riyadh,Saudi Arabia.
Abstract: This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities.A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images.Furthermore,the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations.Thus,it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors.The proposed method offers additional robust feature representations by imposing a limiting factor to reduce overall scattering values.This is achieved by visualizing a graphical function.Moreover,to derive valuable insights from a series of photos,both the separation and inversion processes are conducted.This involves analyzing comparison results across four different scenarios.The results of the comparative analysis show that the proposed method effectively reduces the difficulties associated with time and space to 1 s and 3%,respectively.In contrast,the existing strategy exhibits higher complexities of 3 s and 9.1%,respectively.
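The abstract does not specify the exact rule behind the flexible histogram-based threshold; as a hedged stand-in, one standard way to derive an adaptive global threshold from a grey-level histogram is Otsu's between-class-variance criterion:

```python
def adaptive_threshold(pixels, levels=256):
    """Pick a global threshold from the grey-level histogram by maximizing
    between-class variance (Otsu's rule) - shown here as one standard
    instance of a histogram-driven adaptive threshold."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                     # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b                  # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram the returned level separates the two modes, which is the behavior an adjustable threshold needs before colorization or feature extraction.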
Funding: The author extends his appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding and supporting this work through the Graduate Student Research Support Program.
Abstract: Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. The learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle towards developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the required security services to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way for utilizing those results in continuous quality improvement. These systematic and well-specified QA processes were then utilized to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols avoid source repudiation, which is important in the quality reporting system. This is achieved through implementing powerful cryptographic algorithms. The QAS enables efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program.
The current study serves scientific programs striving to achieve academic accreditation, and paves the way for fully automating and integrating the QA processes and adopting modern AI and security technologies to develop an effective QAS.
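The abstract does not disclose the two protocols' internals; as an illustrative sketch only (key and report text are hypothetical), integrity and origin authentication for a quality report can be provided with stdlib HMAC-SHA256. Note that strict non-repudiation, as the paper targets, would normally require digital signatures rather than a shared key:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-qas-key"  # placeholder; real systems use managed keys

def sign_report(report: bytes, key: bytes = SECRET_KEY) -> str:
    """Attach an HMAC-SHA256 tag to a quality report: any party holding the
    key can verify the report was not altered and came from a key holder."""
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_report(report, key), tag)
```

A tampered report (e.g., an edited attainment figure) fails verification, which is the property the quality reporting pipeline relies on.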
Abstract: Security and safety remain paramount concerns for both governments and individuals worldwide.In today’s context,the frequency of crimes and terrorist attacks is alarmingly increasing,becoming increasingly intolerable to society.Consequently,there is a pressing need for swift identification of potential threats to preemptively alert law enforcement and security forces,thereby preventing potential attacks or violent incidents.Recent advancements in big data analytics and deep learning have significantly enhanced the capabilities of computer vision in object detection,particularly in identifying firearms.This paper introduces a novel automatic firearm detection surveillance system,utilizing a one-stage detection approach named MARIE(Mechanism for Realtime Identification of Firearms).MARIE incorporates the Single Shot Multibox Detector(SSD)model,which has been specifically optimized to balance the speed-accuracy trade-off critical in firearm detection applications.The SSD model was further refined by integrating MobileNetV2 and InceptionV2 architectures for superior feature extraction capabilities.The experimental results demonstrate that this modified SSD configuration provides highly satisfactory performance,surpassing existing methods trained on the same dataset in terms of the critical speed-accuracy trade-off.Through these innovations,MARIE sets a new standard in surveillance technology,offering a robust solution to enhance public safety effectively.
Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics,advancing precision medicine by enabling integration and learning from diverse data sources.The exponential growth of high-dimensional healthcare data,encompassing genomic,transcriptomic,and other omics profiles,as well as radiological imaging and histopathological slides,makes this approach increasingly important because,when examined separately,these data sources offer only a fragmented picture of intricate disease processes.Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling,more robust disease characterization,and improved treatment decision-making.This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis.We classify and examine important application domains,such as(1)radiology,where automated report generation and lesion detection are facilitated by image-text integration;(2)histopathology,where fusion models improve tumor classification and grading;and(3)multi-omics,where molecular subtypes and latent biomarkers are revealed through cross-modal learning.We provide an overview of representative research,methodological advancements,and clinical consequences for each domain.Additionally,we critically analyze the fundamental issues preventing wider adoption,including computational complexity(particularly in training scalable,multi-branch networks),data heterogeneity(resulting from modality-specific noise,resolution variations,and inconsistent annotations),and the challenge of maintaining significant cross-modal correlations during fusion.These problems impede interpretability,which is crucial for clinical trust and use,in addition to performance and generalizability.Lastly,we outline important areas for future research,including the development of standardized protocols for harmonizing data,the creation of lightweight and interpretable fusion architectures,the integration of real-time clinical decision support systems,and the promotion of cooperation for federated multimodal learning.Our goal is to provide researchers and clinicians with a concise overview of the field’s present state,enduring constraints,and exciting directions for further research through this review.
Funding: Funded by the Office of Gas and Electricity Markets(Ofgem)and supported by De Montfort University(DMU)and Nottingham Trent University(NTU),UK.
Abstract: This paper introduces the Integrated Security Embedded Resilience Architecture (ISERA) as an advanced resilience mechanism for Industrial Control Systems (ICS) and Operational Technology (OT) environments. The ISERA framework integrates security-by-design principles, micro-segmentation, and Island Mode Operation (IMO) to enhance cyber resilience and ensure continuous, secure operations. The methodology deploys a Forward-Thinking Architecture Strategy (FTAS) algorithm, which utilises an industrial Intrusion Detection System (IDS) implemented with Python’s Network Intrusion Detection System (NIDS) library. The FTAS algorithm successfully identified and responded to cyber-attacks, ensuring minimal system disruption. ISERA has been validated through comprehensive testing scenarios simulating Denial of Service (DoS) attacks and malware intrusions at both the IT and OT layers, where it successfully mitigates the impact of malicious activity. Results demonstrate ISERA’s efficacy in real-time threat detection, containment, and incident response, thus ensuring the integrity and reliability of critical infrastructure systems. ISERA’s decentralised approach contributes to global net zero goals by optimising resource use and minimising environmental impact. By adopting a decentralised control architecture and leveraging virtualisation, ISERA significantly enhances the cyber resilience and sustainability of critical infrastructure systems. This approach not only strengthens defences against evolving cyber threats but also optimises resource allocation, reducing the system’s carbon footprint. As a result, ISERA ensures the uninterrupted operation of essential services while contributing to broader net zero goals.
Abstract: The Internet of Things(IoT)has gained substantial attention in both academic research and real-world applications.The proliferation of interconnected devices across various domains promises to deliver intelligent and advanced services.However,this rapid expansion also heightens the vulnerability of the IoT ecosystem to security threats.Consequently,innovative solutions capable of effectively mitigating risks while accommodating the unique constraints of IoT environments are urgently needed.Recently,the convergence of Blockchain technology and IoT has introduced a decentralized and robust framework for securing data and interactions,commonly referred to as the Internet of Blockchained Things(IoBT).Extensive research efforts have been devoted to adapting Blockchain technology to meet the specific requirements of IoT deployments.Within this context,consensus algorithms play a critical role in assessing the feasibility of integrating Blockchain into IoT ecosystems.The adoption of efficient and lightweight consensus mechanisms for block validation has become increasingly essential.This paper presents a comprehensive examination of lightweight,constraint-aware consensus algorithms tailored for IoBT.The study categorizes these consensus mechanisms based on their core operations,the security of the block validation process,the incorporation of AI techniques,and the specific applications they are designed to support.
Abstract: Particle Swarm Optimization(PSO)has been utilized as a useful tool for solving intricate optimization problems for various applications in different fields.This paper provides an update on PSO,reviewing its recent developments and applications,and presents arguments for its efficacy in resolving optimization problems in comparison with other algorithms.Covering six strategic areas,which include Data Mining,Machine Learning,Engineering Design,Energy Systems,Healthcare,and Robotics,the study demonstrates the versatility and effectiveness of the PSO.Experimental results are used to show the strengths and weaknesses of PSO,and performance results are included in tables for ease of comparison.The results stress PSO’s efficiency in providing optimal solutions but also show that there are aspects that need to be improved through combination with other algorithms or tuning of the method’s parameters.The review of the advantages and limitations of PSO is intended to provide academics and practitioners with a well-rounded view of the methods of employing such a tool most effectively and to encourage optimized designs of PSO in solving theoretical and practical problems in the future.
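As context for the review, a minimal global-best PSO in its canonical form (inertia weight plus cognitive and social terms; the parameter values are conventional defaults, not tuned for any of the surveyed applications):

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal global-best PSO minimizing f over a box-constrained space."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social coefficients
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]              # personal-best positions
    p_val = [f(x) for x in X]
    g = min(range(n_particles), key=p_val.__getitem__)
    G, g_val = P[g][:], p_val[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            val = f(X[i])
            if val < p_val[i]:         # update personal best
                P[i], p_val[i] = X[i][:], val
                if val < g_val:        # update global best
                    G, g_val = X[i][:], val
    return G, g_val

sphere = lambda x: sum(v * v for v in x)   # classic unimodal benchmark
best, best_val = pso(sphere, dim=5)
```

On the sphere function the swarm collapses toward the origin within a few dozen iterations, illustrating the exploitation side; the review's point about parameter tuning is visible here too, since w, c1, c2 directly control the exploration/exploitation balance.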
Abstract: Nuclei segmentation is a challenging task in histopathology images.It is challenging due to the small size of objects,low contrast,touching boundaries,and the complex structure of nuclei.Their segmentation and counting play an important role in cancer identification and its grading.In this study,WaveSeg-UNet,a lightweight model,is introduced to segment cancerous nuclei having touching boundaries.Residual blocks are used for feature extraction.Only one feature-extractor block is used in each level of the encoder and decoder.Normally,images degrade quality and lose important information during down-sampling.To overcome this loss,discrete wavelet transform(DWT)alongside max-pooling is used in the down-sampling process.Inverse DWT is used to regenerate original images during up-sampling.In the bottleneck of the proposed model,atrous spatial channel pyramid pooling(ASCPP)is used to extract effective high-level features.The ASCPP is a modified pyramid pooling having atrous layers to increase the area of the receptive field.Spatial and channel-based attention are used to focus on the location and class of the identified objects.Finally,the watershed transform is used as a post-processing technique to identify and refine touching boundaries of nuclei.Nuclei are identified and counted to facilitate pathologists.Same-domain transfer learning is used to retrain the model for domain adaptability.Results of the proposed model are compared with state-of-the-art models,and it outperformed the existing studies.
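The motivation for pairing DWT with max-pooling is that wavelet coefficients retain the detail that plain down-sampling discards, so the signal can be regenerated on the way back up. A 1D Haar sketch of this invertibility (WaveSeg-UNet itself applies the 2D transform to feature maps; this only illustrates the principle):

```python
import math

def haar_dwt_1d(signal):
    """One level of the Haar DWT: returns (approximation, detail) at half length."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse Haar DWT: perfectly reconstructs the original signal."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

Max-pooling alone would keep one value per pair and lose the rest; keeping the detail band makes the halving lossless, which is what the decoder's inverse DWT exploits.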
Abstract: The exponential growth of audio data shared over the internet and communication channels has raised significant concerns about the security and privacy of transmitted information.Due to high processing requirements,traditional encryption algorithms demand considerable computational effort for real-time audio encryption.To address these challenges,this paper presents a permutation-based scheme for secure audio encryption using a combination of Tent and 1D logistic maps.The audio data is first shuffled using the Tent map for the random permutation.A highly random secret key with a length equal to the size of the audio data is then generated using a 1D logistic map.Finally,the Exclusive OR(XOR)operation is applied between the generated key and the shuffled audio to yield the cipher audio.The experimental results prove that the proposed method surpassed other techniques when encrypting two types of audio files,mono and stereo,with sizes up to 122 MB,sample rates of 22,050,44,100,48,000,and 96,000 Hz for WAV,and a 44,100 Hz sample rate for an 11 MB MP3.The results show high Mean Square Error(MSE),low Signal-to-Noise Ratio(SNR),spectral distortion,a 100%Number of Sample Change Rate(NSCR),high Percent Residual Deviation(PRD),low Correlation Coefficient(CC),a large key space of 2^(616),and high sensitivity to a slight change in the secret key,and demonstrate that it can counter several attacks,namely brute force,statistical,differential,and noise attacks.
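The described pipeline (Tent-map shuffle, logistic-map keystream, XOR) can be sketched for a small list of byte samples; the initial conditions and map parameters below are illustrative placeholders, not the paper's secret-key values:

```python
def tent_map_permutation(n, x0=0.37, mu=1.99):
    """Iterate the tent map and sort indices by the chaotic sequence,
    yielding a key-dependent shuffle order."""
    x, keys = x0, []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        keys.append(x)
    return sorted(range(n), key=lambda i: keys[i])

def logistic_keystream(n, x0=0.61, r=3.99):
    """Byte keystream from the 1D logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def encrypt(samples, x0_tent=0.37, x0_log=0.61):
    perm = tent_map_permutation(len(samples), x0=x0_tent)
    shuffled = [samples[i] for i in perm]                  # permutation stage
    key = logistic_keystream(len(samples), x0=x0_log)
    return [s ^ k for s, k in zip(shuffled, key)]          # XOR keying stage

def decrypt(cipher, x0_tent=0.37, x0_log=0.61):
    key = logistic_keystream(len(cipher), x0=x0_log)
    shuffled = [c ^ k for c, k in zip(cipher, key)]
    perm = tent_map_permutation(len(cipher), x0=x0_tent)
    out = [0] * len(cipher)
    for j, i in enumerate(perm):                           # undo the shuffle
        out[i] = shuffled[j]
    return out
```

Both stages are driven only by the initial conditions, so decryption regenerates the same permutation and keystream from the shared secret and inverts each stage in reverse order.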
Funding: Funded by the Deanship of Postgraduate Studies and Scientific Research at Majmaah University,grant number [R-2025-1637].
Abstract: The integration of visual elements,such as emojis,into educational content represents a promising approach to enhancing student engagement and comprehension.However,existing efforts in emoji integration often lack systematic frameworks capable of addressing the contextual and pedagogical nuances required for effective implementation.This paper introduces a novel framework that combines Data-Driven Error-Correcting Output Codes(DECOC),Long Short-Term Memory(LSTM)networks,and Multi-Layer Deep Neural Networks(ML-DNN)to identify optimal emoji placements within computer science course materials.The originality of the proposed system lies in its ability to leverage sentiment analysis techniques and contextual embeddings to align emoji recommendations with both the emotional tone and learning objectives of course content.A meticulously annotated dataset,comprising diverse topics in computer science,was developed to train and validate the model,ensuring its applicability across a wide range of educational contexts.Comprehensive validation demonstrated the system’s superior performance,achieving an accuracy of 92.4%,precision of 90.7%,recall of 89.3%,and an F1-score of 90.0%.Comparative analysis with baseline models and related works confirms the model’s ability to outperform existing approaches in balancing accuracy,relevance,and contextual appropriateness.Beyond its technical advancements,this framework offers practical benefits for educators by providing an Artificial Intelligence-assisted(AI-assisted)tool that facilitates personalized content adaptation based on student sentiment and engagement patterns.By automating the identification of appropriate emoji placements,teachers can enhance digital course materials with minimal effort,improving the clarity of complex concepts and fostering an emotionally supportive learning environment.This paper contributes to the emerging field of AI-enhanced education by addressing critical gaps in personalized content delivery and pedagogical support.Its findings highlight the transformative potential of integrating AI-driven emoji placement systems into educational materials,offering an innovative tool for fostering student engagement and enhancing learning outcomes.The proposed framework establishes a foundation for future advancements in the visual augmentation of educational resources,emphasizing scalability and adaptability for broader applications in e-learning.
Funding: Financed by the European Union-NextGenerationEU,through the National Recovery and Resilience Plan of the Republic of Bulgaria,Project No.BG-RRP-2.013-0001-C01.
Abstract: Social media has emerged as one of the most transformative developments on the internet,revolutionizing the way people communicate and interact.However,alongside its benefits,social media has also given rise to significant challenges,one of the most pressing being cyberbullying.This issue has become a major concern in modern society,particularly due to its profound negative impacts on the mental health and well-being of its victims.In the Arab world,where social media usage is exceptionally high,cyberbullying has become increasingly prevalent,necessitating urgent attention.Early detection of harmful online behavior is critical to fostering safer digital environments and mitigating the adverse effects of cyberbullying.This underscores the importance of developing advanced tools and systems to identify and address such behavior effectively.This paper investigates the development of a robust cyberbullying detection and classification system tailored for Arabic comments on YouTube.The study explores the effectiveness of various deep learning models,including Bi-LSTM(Bidirectional Long Short-Term Memory),LSTM(Long Short-Term Memory),CNN(Convolutional Neural Networks),and a hybrid CNN-LSTM,in classifying Arabic comments into binary classes(bullying or not)and multiclass categories.A comprehensive dataset of 20,000 Arabic YouTube comments was collected,preprocessed,and labeled to support these tasks.The results revealed that the CNN and hybrid CNN-LSTM models achieved the highest accuracy in binary classification,reaching an impressive 91.9%.For multiclass classification,the LSTM and Bi-LSTM models outperformed others,achieving an accuracy of 89.5%.These findings highlight the effectiveness of deep learning approaches in the mitigation of cyberbullying within Arabic online communities.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)(grant number IMSIU-DDRSP2501).
Abstract: The increasing reliance on digital infrastructure in modern healthcare systems has introduced significant cybersecurity challenges,particularly in safeguarding sensitive patient data and maintaining the integrity of medical services.As healthcare becomes more data-driven,cyberattacks targeting these systems continue to rise,necessitating the development of robust,domain-adapted Intrusion Detection Systems(IDS).However,current IDS solutions often lack access to domain-specific datasets that reflect realistic threat scenarios in healthcare.To address this gap,this study introduces HCKDDCUP,a synthetic dataset modeled on the widely used KDDCUP benchmark,augmented with healthcare-relevant attributes such as patient data,treatments,and diagnoses to better simulate the unique conditions of clinical environments.This research applies standard machine learning algorithms,namely Random Forest(RF),Decision Tree(DT),and K-Nearest Neighbors(KNN),to both the KDDCUP and HCKDDCUP datasets.The methodology includes data preprocessing,feature selection,dimensionality reduction,and comparative performance evaluation.Experimental results show that the RF model performed best,achieving 98%accuracy on KDDCUP and 99%on HCKDDCUP,highlighting its effectiveness in detecting cyber intrusions within a healthcare-specific context.This work contributes a valuable resource for future research and underscores the need for IDS development tailored to sector-specific requirements.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No.(DGSSR-2024-02-02160).
Abstract: In this paper,we propose a hybrid decode-and-forward and soft information relaying(HDFSIR)strategy to mitigate error propagation in coded cooperative communications.In the HDFSIR approach,the relay operates in decode-and-forward(DF)mode when it successfully decodes the received message;otherwise,it switches to soft information relaying(SIR)mode.The benefits of the DF and SIR forwarding strategies are combined to achieve better performance than deploying the DF or SIR strategy alone.Closed-form expressions for the outage probability and symbol error rate(SER)are derived for coded cooperative communication with HDFSIR and energy-harvesting relays.Additionally,we introduce a novel normalized log-likelihood-ratio based soft estimation symbol(NL-SES)mapping technique,which enhances soft symbol accuracy for higher-order modulation,and propose a model characterizing the relationship between the estimated complex soft symbol and the actual high-order modulated symbol.Furthermore,the hybrid DF-SIR strategy is extended to a distributed Alamouti space-time coded cooperative network.To evaluate the performance of the proposed HDFSIR strategy,we implement extensive Monte Carlo simulations under varying channel conditions.Results demonstrate significant improvements,with the hybrid technique outperforming individual DF and SIR strategies in both conventional and distributed Alamouti space-time coded cooperative networks.Moreover,at a SER of 10^(-3),the proposed NL-SES mapping demonstrated a 3.5 dB performance gain over the conventional averaging one,highlighting its superior accuracy in estimating soft symbols for quadrature phase-shift keying modulation.
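The exact NL-SES normalization is defined in the paper; as background only, the standard way to form a soft symbol from per-bit log-likelihood ratios (LLRs) for Gray-mapped QPSK uses the per-bit expectation E[b] = tanh(L/2):

```python
import math

def soft_qpsk_symbol(llr_i, llr_q):
    """Soft QPSK symbol from per-bit LLRs: each bit's soft value is
    tanh(LLR/2) in [-1, 1], Gray-mapped to (I + jQ)/sqrt(2)."""
    s = 1.0 / math.sqrt(2.0)
    return complex(s * math.tanh(llr_i / 2.0),
                   s * math.tanh(llr_q / 2.0))
```

Confident LLRs push the soft symbol to a constellation point, while uncertain ones (LLR near 0) shrink it toward the origin, which is exactly the reliability information an SIR relay forwards instead of a hard decision.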
Abstract: The NIST Cybersecurity Framework (NIST CSF) serves as a voluntary guideline aimed at helping organizations, small and medium-sized enterprises (SMEs), and critical infrastructure operators effectively manage cyber risks. Although comprehensive, the complexity of the NIST CSF can be overwhelming, especially for those lacking extensive cybersecurity resources. Current implementation tools often cater to larger companies, neglecting the specific needs of SMEs, which can be vulnerable to cyber threats. To address this gap, our research proposes a user-friendly, open-source web platform designed to simplify the implementation of the NIST CSF. This platform enables organizations to assess their risk exposure and continuously monitor their cybersecurity maturity through tailored recommendations based on their unique profiles. Our methodology includes a literature review of existing tools and standards, followed by a description of the platform’s design and architecture. Initial tests with SMEs in Burkina Faso reveal a concerning cybersecurity maturity level, indicating the urgent need for improved strategies based on our findings. By offering an intuitive interface and cross-platform accessibility, this solution aims to empower organizations to enhance their cybersecurity resilience in an evolving threat landscape. The article concludes with discussions on the practical implications and future enhancements of the tool.
Funding: Supported in part by the Research Start-Up Funds of South-Central Minzu University under Grants YZZ23002,YZY23001,and YZZ18006;in part by the Hubei Provincial Natural Science Foundation of China under Grants 2024AFB842 and 2023AFB202;in part by the Knowledge Innovation Program of Wuhan Basic Research under Grant 2023010201010151;in part by the Spring Sunshine Program of the Ministry of Education of the People’s Republic of China under Grant HZKY20220331;in part by the Funds for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grant Numbers XT224003 and PTZ24001;and in part by the Career Development Fund(CDF)of the Agency for Science,Technology and Research(A*STAR)under Grant Number C233312007.
Abstract: The knapsack problem is a classical combinatorial optimization problem widely encountered in areas such as logistics,resource allocation,and portfolio optimization.Traditional methods,including dynamic programming(DP)and greedy algorithms,have been effective in solving small problem instances but often struggle with scalability and efficiency as the problem size increases.DP,for instance,has exponential time complexity and can become computationally prohibitive for large problem instances.On the other hand,greedy algorithms offer faster solutions but may not always yield the optimal results,especially when the problem involves complex constraints or large numbers of items.This paper introduces a novel reinforcement learning(RL)approach to solve the knapsack problem by enhancing the state representation within the learning environment.We propose a representation where item weights and volumes are expressed as ratios relative to the knapsack’s capacity,and item values are normalized to represent their percentage of the total value across all items.This novel state modification leads to a 5%improvement in accuracy compared to state-of-the-art RL-based algorithms,while significantly reducing execution time.Our RL-based method outperforms DP by over 9000 times in terms of speed,making it highly scalable for larger problem instances.Furthermore,we improve the performance of the RL model by incorporating Noisy layers into the neural network architecture.The addition of Noisy layers enhances the exploration capabilities of the agent,resulting in an additional accuracy boost of 0.2%–0.5%.The results demonstrate that our approach not only outperforms existing RL techniques,such as the Transformer model,in terms of accuracy,but also provides a substantial improvement over DP in computational efficiency.This combination of enhanced accuracy and speed presents a promising solution for tackling large-scale optimization problems in real-world applications,where both precision and time are critical factors.
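The proposed state normalization can be sketched directly; the function name and example numbers are illustrative, and the full environment (volumes, remaining-capacity updates, the RL agent itself) is omitted:

```python
def normalized_state(weights, values, capacity):
    """State features as described: item weights as fractions of the
    knapsack capacity, item values as shares of the total value."""
    total_value = float(sum(values))
    weight_ratios = [w / float(capacity) for w in weights]
    value_shares = [v / total_value for v in values]
    return weight_ratios, value_shares
```

Because every feature lands in [0, 1] regardless of instance scale, the same trained policy can generalize across knapsacks of very different absolute capacities and value ranges, which is what makes the representation instance-size-agnostic.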
Funding: Funded by the Ongoing Research Funding Program(ORF-2025-102),King Saud University,Riyadh,Saudi Arabia;by the Science and Technology Research Program of Chongqing Municipal Education Commission(Grant No.KJQN202400813);and by the Graduate Research Innovation Project(Grant Nos.yjscxx2025-269-193 and CYS25618).
Abstract: This research presents a novel nature-inspired metaheuristic optimization algorithm called the Narwhale Optimization Algorithm (NWOA). The algorithm draws inspiration from the foraging and prey-hunting strategies of narwhals, the "unicorns of the sea", particularly their use of the distinctive spiral tusk, which plays significant roles in hunting, prey searching, navigation, echolocation, and complex social interaction. Specifically, NWOA imitates the foraging strategies narwhals use when hunting prey, focusing on the cooperative and exploratory behavior shown during group hunting and on the use of the tusk to sense and locate prey under the Arctic ice. These behaviors provide a strong basis for assessing the algorithm's ability to balance exploration and exploitation, its convergence speed, and its solution accuracy. The performance of NWOA is evaluated on 30 benchmark test functions. A comparison study against the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Perfumer Optimization Algorithm (POA), Candle Flame Optimization (CFO) Algorithm, Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA) validates the results. As the experimental results show, NWOA yields competitive outcomes among these well-known optimizers and, in several instances, outperforms them. These results suggest that NWOA is an effective and robust optimization tool suitable for many complex real-world optimization problems.
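The abstract does not give NWOA's update equations, but the general shape of such a population-based metaheuristic can be sketched. The update rules below (shrinking encirclement plus a spiral term) are borrowed from the well-known WOA family as an assumed illustration, not the published NWOA; the function name and all parameters are hypothetical.

```python
import numpy as np

def narwhal_style_optimize(f, dim, n_pop=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Toy spiral-hunting metaheuristic in the spirit described above.

    NOT the published NWOA update rule: a generic population loop with
    a WOA-like spiral term, for illustration of the overall structure.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_pop, dim))      # random initial pod
    fit = np.apply_along_axis(f, 1, pop)
    best = pop[fit.argmin()].copy()                   # current best hunter
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                     # exploration shrinks over time
        for i in range(n_pop):
            r = rng.random(dim)
            if rng.random() < 0.5:                    # approach/encircle the best
                cand = best - a * r * np.abs(best - pop[i])
            else:                                     # spiral "tusk" maneuver
                l = rng.uniform(-1, 1)
                cand = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:                           # greedy replacement
                pop[i], fit[i] = cand, fc
        best = pop[fit.argmin()].copy()
    return best, float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))              # classic benchmark function
x_best, f_best = narwhal_style_optimize(sphere, dim=5)
```

On the sphere function such a loop converges close to the zero optimum, which is the kind of behavior the 30-function benchmark in the paper quantifies systematically.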
Funding: supported by the Deanship of Scientific Research at King Khalid University through Small Groups funding (Project Grant No. RGP1/243/45), awarded to Dr. Mohammed Abker, and by the Natural Science Foundation of China under Grant 61901388.
Abstract: Arabic Dialect Identification (DID) is a Natural Language Processing (NLP) task that involves determining the dialect of a given piece of Arabic text. State-of-the-art solutions for DID are built on deep neural networks that learn sentence representations conditioned on the dialect. Despite their effectiveness, their performance relies heavily on the amount of labeled examples, which are labor-intensive to attain and may not be readily available in real-world scenarios. To alleviate the burden of labeling data, this paper introduces a novel solution that leverages unlabeled corpora to boost performance on the DID task. Specifically, we design an architecture that learns the information shared between labeled and unlabeled texts through a gradient reversal layer. The key idea is to penalize the model for learning source-dataset-specific features, enabling it to capture common knowledge regardless of the label. Finally, we evaluate the proposed solution on benchmark datasets for DID. Our extensive experiments show that it performs significantly better, especially with sparse labeled data. Compared with existing Pre-trained Language Models (PLMs), our approach achieves a new state-of-the-art performance in the DID field. The code will be available on GitHub upon the paper's acceptance.
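The gradient reversal layer at the heart of this design is simple to state: it is the identity in the forward pass, and it flips (and optionally scales) gradients in the backward pass, so the feature extractor is pushed away from dataset-specific features. The class below is a minimal manual-autograd sketch of that idea; the name `GradientReversal` and the `lam` parameter are illustrative, not the paper's API.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips and scales gradients backward.

    Minimal sketch of a gradient reversal layer: downstream (e.g. a
    dataset discriminator) trains normally, while the upstream feature
    extractor receives the negated gradient and learns shared features.
    """
    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # invert the discriminator's gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, 2.0, 3.0])
out = grl.forward(x)                        # identical to x
grad_in = grl.backward(np.array([0.1, -0.2, 0.3]))
print(grad_in)  # [-0.05  0.1  -0.15]
```

In a framework with autograd (e.g. a custom `torch.autograd.Function`), the same two methods define the layer; everything else in the architecture remains a standard classifier.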
Funding: supported in part by the National Key R&D Program of China under Grant 2021YFB1714100, and in part by the Shenzhen Science and Technology Program, China, under Grant JCYJ20220531101015033.
Abstract: With increasing density and heterogeneity in unlicensed wireless networks, traditional MAC protocols, such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) in Wi-Fi networks, are experiencing performance degradation. This manifests as increased collisions and extended backoff times, leading to diminished spectrum efficiency and protocol coordination. To address these issues, this paper proposes a deep-learning-based MAC paradigm, dubbed DL-MAC, which leverages spectrum data readily available from the energy-detection modules of wireless devices to realize the MAC functionalities of channel access, rate adaptation, and channel switching. First, we use DL-MAC for a joint design of channel access and rate adaptation. We then integrate channel-switching capability into DL-MAC, extending it from single-channel to multi-channel operation. Specifically, the DL-MAC protocol incorporates a Deep Neural Network (DNN) for channel selection and a Recurrent Neural Network (RNN) for the joint design of channel access and rate adaptation. We collected real-world data in the 2.4 GHz band to validate the effectiveness of DL-MAC. Experimental results demonstrate that DL-MAC performs significantly better than traditional algorithms in both single- and multi-channel environments, and also outperforms single-function designs. Moreover, DL-MAC's performance remains robust, unaffected by channel-switch overheads within the evaluation range.
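The input/output contract of DL-MAC's channel selector can be illustrated without the DNN itself: the selector consumes a history of per-channel energy-detection readings and emits a channel index. The moving-average heuristic below is a hypothetical stand-in for the learned mapping; the function name, the `window` parameter, and the simulated readings are all assumptions.

```python
import numpy as np

def select_channel(energy_history, window=8):
    """Pick the channel with the lowest recent sensed energy.

    Stand-in for DL-MAC's DNN channel selector: the paper learns this
    mapping from spectrum data; here a moving-average heuristic shows
    only the input/output contract (readings in -> channel index out).
    """
    recent = energy_history[-window:]       # (window, n_channels) dBm-like readings
    return int(np.mean(recent, axis=0).argmin())

rng = np.random.default_rng(42)
# simulated energy-detection readings for 3 channels over 20 sensing slots:
# channel 0 is quietest on average, channel 1 the busiest
readings = rng.normal(loc=[-70.0, -55.0, -62.0], scale=2.0, size=(20, 3))
chosen = select_channel(readings)
print(chosen)  # 0 (lowest mean sensed energy)
```

The learned version replaces the mean/argmin with a DNN forward pass, which lets the selector pick up temporal patterns (e.g. periodic interferers) that a plain average misses.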