The rapid development of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in recent years has transformed many sectors. A fundamental shift has occurred in approaches to solving complex problems and making decisions in many different fields. These advanced technologies have enabled significant breakthroughs in sectors including entertainment, finance, transportation, and healthcare. AI systems, which can analyze vast volumes of data, have significantly driven efficiency and innovation. With remarkable accuracy, patterns can be identified and predictions generated, improving decision-making processes and facilitating the development of more intelligent solutions. The increasing adoption of these technologies by organizations has expanded the potential for AI to change processes and improve results.
Improving quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. Learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle to developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the security services required to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way to use those results in continuous quality improvement. These systematic and well-specified QA processes were then used to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in a quality reporting system; this is achieved by implementing strong cryptographic algorithms. The QAS enables the efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision-making and improve the quality of academic programs.
The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and paves the way toward fully automating and integrating QA processes and adopting modern AI and security technologies to develop effective QAS.
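The abstract above mentions protocols that provide integrity and authentication for quality data and reports. As a minimal illustrative sketch (not the system's actual protocol), a keyed digest can protect a report against tampering; the key, report text, and function names here are hypothetical. Note that true non-repudiation, as described in the abstract, would require asymmetric digital signatures rather than a shared-key HMAC.

```python
import hashlib
import hmac

def sign_report(report: bytes, key: bytes) -> str:
    # Attach a keyed SHA-256 digest so any tampering with the report is detectable.
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign_report(report, key), tag)

key = b"shared-secret"                       # hypothetical pre-shared key
report = b"CS program: LO-3 attainment 82%"  # hypothetical quality report
tag = sign_report(report, key)
```

A receiver holding the same key recomputes the digest and accepts the report only if it matches.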
Bone age assessment (BAA) aims to determine whether a child's growth and development are normal with respect to their chronological age. To predict bone age more accurately from radiographs, and to give the model better adaptability to left-hand X-ray images from different races, we propose a neural network that works in parallel with quantitative features from left-hand bone measurements for BAA. In this study, a lightweight feature extractor (LFE) is designed to obtain feature maps from radiographs, and a module called the attention eraser module (AEM) is proposed to capture fine-grained features. Meanwhile, the dimensional information of the metacarpal parts in the radiographs is measured to enhance the model's generalization capability across images from different races. Our model is trained and validated on the RSNA, RHPE, and Digital Hand Atlas datasets, which include images from various racial groups. The model achieves a mean absolute error (MAE) of 4.42 months on the RSNA dataset and 15.98 months on the RHPE dataset. Compared to ResNet50, InceptionV3, and several state-of-the-art methods, our proposed method shows statistically significant improvements (p < 0.05), with a reduction in MAE of 0.2 ± 0.02 years across different racial datasets. Furthermore, t-tests on the features also confirm the statistical significance of our approach (p < 0.05).
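The MAE figures quoted above are the standard evaluation metric for BAA. A minimal sketch of how MAE in months is computed follows; the predicted and ground-truth ages are hypothetical values, not data from the study.

```python
def mean_absolute_error(predicted, actual):
    # Average absolute gap between predicted and reference bone ages (in months).
    assert len(predicted) == len(actual) and predicted
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

pred = [60.0, 72.5, 88.0]   # hypothetical predicted bone ages (months)
true = [62.0, 70.0, 90.0]   # hypothetical ground-truth ages (months)
mae = mean_absolute_error(pred, true)
```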
Terrain Aided Navigation (TAN) technology has become increasingly important due to its effectiveness in environments where the Global Positioning System (GPS) is unavailable. In recent years, TAN systems have been extensively researched for both aerial and underwater navigation applications. However, many TAN systems that rely on recursive Unmanned Aerial Vehicle (UAV) position estimation methods, such as Extended Kalman Filters (EKF), often face challenges with divergence and instability, particularly in highly non-linear systems. To address these issues, this paper proposes and investigates a hybrid two-stage TAN positioning system for UAVs that utilizes a particle filter. To enhance the system's robustness against uncertainties caused by noise and to estimate additional system states, a Fuzzy Particle Filter (FPF) is employed in the first stage. This approach introduces a novel terrain composite feature that enables a fuzzy expert system to analyze terrain non-linearities and dynamically adjust the number of particles in real time. This design allows the UAV to be efficiently localized in GPS-denied environments while also reducing the computational complexity of the particle filter in real-time applications. In the second stage, an Error State Kalman Filter (ESKF) is implemented to estimate the UAV's altitude. The ESKF is chosen over the conventional EKF method because it is more suitable for non-linear systems. Simulation results demonstrate that the proposed fuzzy-based terrain composite method achieves high positional accuracy while reducing computational time and memory usage.
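To make the terrain-aided particle-filter idea concrete, here is a minimal 1-D sketch: particles are propagated by a motion command, weighted by how well the terrain elevation at each particle matches the measured elevation, and resampled. The terrain profile, noise levels, and particle count are all hypothetical; the paper's fuzzy expert system additionally adapts the particle count, which this sketch keeps fixed.

```python
import math
import random

def terrain(x):
    # Hypothetical monotone 1-D terrain elevation profile (map).
    return 0.5 * x + 10.0 * math.sin(0.05 * x)

def pf_step(particles, control, measurement, sigma=2.0):
    # Predict: propagate each particle by the control input plus process noise.
    moved = [p + control + random.gauss(0.0, 0.5) for p in particles]
    # Update: weight by agreement between measured and map elevation.
    weights = [math.exp(-((terrain(p) - measurement) ** 2) / (2 * sigma ** 2))
               for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Point estimate: weighted mean position.
    est = sum(p * w for p, w in zip(moved, weights))
    # Resample proportionally to weight (particle count kept fixed here).
    return random.choices(moved, weights=weights, k=len(moved)), est

random.seed(0)
particles = [random.uniform(0.0, 100.0) for _ in range(500)]
true_x = 20.0
for _ in range(30):
    true_x += 1.0                      # vehicle moves one unit per step
    z = terrain(true_x)                # radar-altimeter-style terrain reading
    particles, est = pf_step(particles, 1.0, z)
```

After a few updates the particle cloud collapses around the true position, since the monotone terrain makes each elevation reading informative.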
The concept of “STEM+” integrates art, humanistic literacy, and social values into the traditional STEM education concept, advocates cross-disciplinary integration, and aims to cultivate well-rounded talents equipped to tackle future challenges. In 2022, the Ministry of Education issued the “Compulsory Education Information Technology Curriculum (2022 Edition),” emphasizing the core literacy of information science and technology and interdisciplinary integration, and encouraging teaching modes suited to the characteristics of each discipline. The 6E teaching mode is a student-centered teaching strategy characterized by active exploration and cross-disciplinary integration. This article presents an innovative “STEM+” 6E teaching mode design and applies it to junior high school information technology teaching, where it can better achieve core literacy teaching goals.
The rapid integration of artificial intelligence (AI) into software development, driven by large language models (LLMs), is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems. This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology (IT) professionals as they navigate a dynamic technological landscape marked by intelligent automation, shifting professional identities, and emerging ethical concerns. Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling, prompt engineering, interdisciplinary collaboration, and heightened ethical awareness. However, participants also voiced growing concerns about the reliability and security of AI-generated code, noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias. Many described instances of flawed logic, insecure patterns, or syntactically correct but contextually inappropriate suggestions, underscoring the need for rigorous human oversight. Additionally, the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills, particularly in environments where AI tools dominate routine development tasks. These findings highlight an urgent need for educational reforms, industry standards, and organizational policies that prioritize both technical robustness and the preservation of human expertise. As AI becomes increasingly embedded in software engineering workflows, this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability, resilience, and innovation across the software development lifecycle.
The rapid evolution of malware presents a critical cybersecurity challenge, rendering traditional signature-based detection methods ineffective against novel variants. This growing threat affects individuals, organizations, and governments, highlighting the urgent need for robust malware detection mechanisms. Conventional machine learning-based approaches rely on static and dynamic malware analysis and often struggle to detect previously unseen threats due to their dependency on predefined signatures. Although machine learning algorithms (MLAs) offer promising detection capabilities, their reliance on extensive feature engineering limits real-time applicability. Deep learning techniques mitigate this issue by automating feature extraction but may introduce computational overhead, affecting deployment efficiency. This research evaluates classical MLAs and deep learning models to enhance malware detection performance across diverse datasets. The proposed approach integrates a novel text- and image-based detection framework, employing an optimized Support Vector Machine (SVM) for textual data analysis and EfficientNet-B0 for image-based malware classification. Experimental analysis, conducted across multiple train-test splits over varying timescales, demonstrates 99.97% accuracy on textual datasets using SVM and 96.7% accuracy on image-based datasets with EfficientNet-B0, significantly improving zero-day malware detection. Furthermore, a comparative analysis with existing competitive techniques, such as Random Forest, XGBoost, and Convolutional Neural Network (CNN) based classifiers, highlights the superior performance of the proposed model in terms of accuracy, efficiency, and robustness.
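The SVM used for textual analysis above is, at its core, a maximum-margin linear classifier. The following dependency-free sketch trains a linear SVM by stochastic subgradient descent on the L2-regularized hinge loss; the tiny 2-D "feature vectors," learning rate, and labels are hypothetical stand-ins for extracted malware features, not the paper's optimized setup.

```python
def train_linear_svm(data, labels, eta=0.1, lam=0.01, epochs=500):
    # Stochastic subgradient descent on the L2-regularized hinge loss.
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):            # y in {-1, +1}
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            w = [wi - eta * lam * wi for wi in w]  # weight decay (regularizer)
            if margin < 1:                         # point violates the margin
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-D feature vectors standing in for extracted malware features.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
```

The hinge loss pushes every training point to a margin of at least 1, which is what gives the SVM its robustness to borderline samples.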
The analysis of Android malware shows that this threat is constantly increasing and poses a real danger to mobile devices, since traditional approaches such as signature-based detection are no longer effective due to the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are converted to image formats for deep learning, which is applied in a CNN framework that includes the pre-trained VGG16 model. Our approach yielded high performance, with an accuracy of 99.1%, precision of 98.2%, recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also demonstrates the applicability of deep learning to mobile security, along with directions for future work on real-time detection systems and further deep learning techniques to counter the growing number of emerging threats.
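The key preprocessing step above is turning network traffic into image data. One common, simple way to do this (a sketch, not necessarily the paper's exact encoding) is to map each payload byte to a grayscale pixel and reshape the stream into a fixed-width grid, zero-padding the last row; the payload and width here are hypothetical.

```python
import math

def traffic_to_image(payload: bytes, width: int = 8):
    # Map each byte (0-255) to one grayscale pixel, row by row,
    # zero-padding the final row so the image is rectangular.
    height = math.ceil(len(payload) / width)
    padded = payload + b"\x00" * (height * width - len(payload))
    return [list(padded[r * width:(r + 1) * width]) for r in range(height)]

# Hypothetical captured packet payload of 20 bytes.
img = traffic_to_image(bytes(range(20)), width=8)
```

The resulting 2-D array can then be fed to a CNN such as VGG16 after resizing and normalization.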
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has thus become an essential component of modern HCC networks, although integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques have been developed in the past, and with the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used to remove noise and extract features from the data, respectively. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. The proposed framework was implemented in Python. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%).
The suggested approach is tested on a Python platform, and the SHO-LSTM's outcomes are compared with those of previously conducted research. Based on these comparative assessments, our suggested approach outperforms current approaches in vocal emotion recognition.
Dementia is a neurological disorder that affects the brain and its functioning, and women experience its effects more than men do. Preventive care often requires non-invasive and rapid tests, yet conventional diagnostic techniques are time-consuming and invasive. One of the most effective ways to diagnose dementia is by analyzing a patient's speech, which is cheap and does not require surgery. This research aims to determine the effectiveness of deep learning (DL) and machine learning (ML) structures in diagnosing dementia based on women's speech patterns. The study analyzes data drawn from the Pitt Corpus, which contains 298 dementia files and 238 control files from the DementiaBank database. Deep learning models and SVM classifiers were used to analyze the available audio samples in the dataset. Our methodology compared two approaches: a fused DL-ML model and a single DL model for classification. The deep learning model achieved an accuracy of 99.99% with an F1 score of 0.9998, precision of 0.9997, and recall of 0.9998. The proposed DL-ML fusion model was equally impressive, with an accuracy of 99.99%, F1 score of 0.9995, precision of 0.9998, and recall of 0.9997. The study also shows how to apply deep learning and machine learning models for dementia detection from speech with high accuracy and low computational complexity. This research therefore concludes by showing the potential of speech-based dementia detection as a helpful early diagnosis mode. For further enhanced model performance and better generalization, future studies may explore real-time applications and the inclusion of other components of speech.
Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
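The TF-IDF component above weights terms by how frequent they are in a post and how rare they are across the corpus, so discriminative words stand out. A minimal sketch of the classic formulation follows; the toy token lists are hypothetical, and real pipelines add smoothing and normalization variants.

```python
import math
from collections import Counter

def tf_idf(docs):
    # tf(t, d) * log(N / df(t)): frequent-in-document but rare-in-corpus
    # terms receive the largest weights.
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return scores

# Hypothetical tokenized posts.
docs = [["you", "are", "slow"], ["you", "are", "great"], ["great", "game"]]
weights = tf_idf(docs)
```

Here "slow" appears in only one of three documents, so it outweighs the ubiquitous "you" in the first post.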
Accurate and efficient brain tumor segmentation is essential for early diagnosis, treatment planning, and clinical decision-making. However, the complex structure of brain anatomy and the heterogeneous nature of tumors present significant challenges for precise anomaly detection. While U-Net-based architectures have demonstrated strong performance in medical image segmentation, there remains room for improvement in feature extraction and localization accuracy. In this study, we propose a novel hybrid model designed to enhance 3D brain tumor segmentation. The architecture incorporates a 3D ResNet encoder, known for mitigating the vanishing gradient problem, and a 3D U-Net decoder. Additionally, to enhance the model's generalization ability, a Squeeze-and-Excitation attention mechanism is integrated. We introduce Gabor filter banks into the encoder to further strengthen the model's ability to extract robust and transformation-invariant features from the complex and irregular shapes typical of medical imaging. This approach, which is not well explored in current U-Net-based segmentation frameworks, provides a unique advantage by enhancing texture-aware feature representation. Specifically, Gabor filters help extract distinctive low-level texture features, reducing the effects of texture interference and facilitating faster convergence during the early stages of training. Our model achieved Dice scores of 0.881, 0.846, and 0.819 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the BraTS 2020 dataset. Cross-validation on the BraTS 2021 dataset further confirmed the model's robustness, yielding Dice scores of 0.887 for WT, 0.856 for TC, and 0.824 for ET. The proposed model outperforms several state-of-the-art existing models, particularly in accurately identifying small and complex tumor regions. Extensive evaluations suggest that integrating advanced preprocessing with an attention-augmented hybrid architecture offers significant potential for reliable and clinically valuable brain tumor segmentation.
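The Dice score used to report results above measures the overlap between predicted and annotated tumor regions. A minimal sketch over sets of voxel indices (hypothetical indices, not BraTS data):

```python
def dice_score(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|) over the sets of voxels labeled tumor.
    inter = len(pred & truth)
    denom = len(pred) + len(truth)
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical flattened voxel indices predicted / annotated as tumor.
pred = {1, 2, 3, 4, 5}
truth = {3, 4, 5, 6}
score = dice_score(pred, truth)
```

A score of 1.0 means perfect overlap; the empty-vs-empty case is conventionally scored 1.0 here.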
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are occlusion-prone and more sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence performance of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as a Dynamic Focusing Mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples. Ultimately, RSIoU significantly improves the convergence performance of the loss function for small objects, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, achieving a 3.7% increase in mean Average Precision (AP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
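The IoU quantity that RSIoU builds on is the overlap ratio between a predicted and a ground-truth box. The sketch below computes plain IoU for axis-aligned boxes (the box coordinates are hypothetical); it also illustrates why small objects are fragile: a one-pixel shift of a 4x4 box already drops IoU below 0.4.

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); IoU = intersection area / union area.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

gt = (10, 10, 14, 14)      # hypothetical 4x4 small object
pred = (11, 11, 15, 15)    # prediction shifted by a single pixel
overlap = iou(gt, pred)
```

Variants such as SIoU and the proposed RSIoU add penalty terms on top of this ratio to stabilize gradients when the overlap is small or zero.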
Sonic Hedgehog Medulloblastoma (SHH-MB) is one of the four primary molecular subgroups of medulloblastoma and is estimated to be responsible for nearly one-third of all MB cases. Using transcriptomic and DNA methylation profiling techniques, new developments in this field have identified four molecular subtypes of SHH-MB. SHH-MB subtypes show distinct DNA methylation patterns that allow their discrimination from overlapping subtypes and predict clinical outcomes. Class overlapping occurs when two or more classes share common features, making it difficult to distinguish them as separate. Using a DNA methylation dataset, a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes. Penalized multinomial regression (PMR), Tomek links (TL), and singular value decomposition (SVD) are smoothly integrated into a single framework. SVD and group lasso improve computational efficiency, address the problem of high-dimensional datasets, and clarify class distinctions by removing redundant or irrelevant features that might lead to class overlap. To eliminate decision-boundary overlap and class imbalance in the classification task, TL enhances dataset balance and sharpens decision boundaries by removing overlapping samples. Using five-fold cross-validation, our proposed method (TL-SVDPMR) achieved a remarkable overall accuracy of almost 95% in the classification of SHH-MB molecular subtypes. The results demonstrate the strong performance of the proposed classification model across the various SHH-MB subtypes, given a high average of area under the curve (AUC) values. Additionally, a statistical significance test indicates that TL-SVDPMR is more accurate than both SVM and random forest algorithms in classifying the overlapping SHH-MB subtypes, highlighting its importance for precision medicine applications. Our findings emphasize the success of combining SVD, TL, and PMR techniques to improve classification performance for biomedical applications with many features and overlapping subtypes.
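The Tomek-link step above removes pairs of samples that sit right on the class boundary. A pair is a Tomek link when the two samples are mutual nearest neighbours yet carry different labels. A small sketch with hypothetical 2-D embeddings (real inputs would be high-dimensional methylation features):

```python
def tomek_links(points, labels):
    # A pair (i, j) is a Tomek link when the two samples are mutual
    # nearest neighbours but belong to different classes.
    def nearest(i):
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(points[i], points[j])))
    links = []
    for i in range(len(points)):
        j = nearest(i)
        if nearest(j) == i and labels[i] != labels[j] and i < j:
            links.append((i, j))
    return links

# Hypothetical 2-D embeddings of samples from two overlapping subtypes.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labs = ["A", "B", "A", "A"]
links = tomek_links(pts, labs)
```

Removing (or relabeling) the linked samples, as TL-SVDPMR does before fitting the regression, leaves a cleaner decision boundary.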
In this paper, we propose a hybrid decode-and-forward and soft information relaying (HDFSIR) strategy to mitigate error propagation in coded cooperative communications. In the HDFSIR approach, the relay operates in decode-and-forward (DF) mode when it successfully decodes the received message; otherwise, it switches to soft information relaying (SIR) mode. The benefits of the DF and SIR forwarding strategies are combined to achieve better performance than deploying either strategy alone. Closed-form expressions for the outage probability and symbol error rate (SER) are derived for coded cooperative communication with HDFSIR and energy-harvesting relays. Additionally, we introduce a novel normalized log-likelihood-ratio-based soft estimation symbol (NL-SES) mapping technique, which enhances soft symbol accuracy for higher-order modulation, and propose a model characterizing the relationship between the estimated complex soft symbol and the actual high-order modulated symbol. Furthermore, the hybrid DF-SIR strategy is extended to a distributed Alamouti space-time-coded cooperative network. To evaluate the performance of the proposed HDFSIR strategy, we implement extensive Monte Carlo simulations under varying channel conditions. Results demonstrate significant improvements, with the hybrid technique outperforming the individual DF and SIR strategies in both conventional and distributed Alamouti space-time-coded cooperative networks. Moreover, at an SER of 10^(-3), the proposed NL-SES mapping demonstrated a 3.5 dB performance gain over conventional averaging, highlighting its superior accuracy in estimating soft symbols for quadrature phase-shift keying modulation.
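The hybrid switching logic above can be sketched in a few lines for the simplest case of BPSK over AWGN: if the relay's decoder verifies the message (e.g., via CRC), it forwards hard re-encoded bits (DF); otherwise it forwards soft log-likelihood ratios so that reliability information is preserved (SIR). The received samples, noise variance, and CRC flag below are hypothetical, and the paper's NL-SES mapping for higher-order modulation is not reproduced here.

```python
def bpsk_llr(y, noise_var):
    # LLR of a BPSK symbol (+1 / -1 mapped from bits 1 / 0) observed in AWGN:
    # L = 2y / sigma^2.
    return 2.0 * y / noise_var

def relay_forward(received, noise_var, crc_ok):
    # DF mode: hard bits when decoding succeeded;
    # SIR mode: soft LLRs otherwise, so reliability information survives.
    if crc_ok:
        return "DF", [1 if y >= 0 else 0 for y in received]
    return "SIR", [bpsk_llr(y, noise_var) for y in received]

rx = [0.9, -1.2, 0.1]               # hypothetical received samples
mode, out = relay_forward(rx, 0.5, crc_ok=False)
```

The near-zero sample (0.1) yields a small-magnitude LLR, which the destination's decoder can down-weight; a hard DF decision would have discarded that uncertainty.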
One in every eight men in the US is diagnosed with prostate cancer, making it the most common cancer in men. Gleason grading is one of the most essential diagnostic and prognostic factors for planning the treatment of prostate cancer patients. Traditionally, urological pathologists perform the grading by scoring the morphological pattern, known as the Gleason pattern, in histopathology images. However, this manual grading is highly subjective, suffers intra- and inter-pathologist variability, and lacks reproducibility. An automated grading system could be more efficient, with no subjectivity and higher accuracy and reproducibility. Automated methods presented previously failed to achieve sufficient accuracy, lacked reproducibility, and depended on high-resolution images such as 40×. This paper proposes an automated Gleason grading method, ProGENET, to accurately predict the grade using low-resolution images such as 10×. The method first divides the patient's histopathology whole slide image (WSI) into patches. It then detects artifacts and tissue-less regions and predicts the patch-wise grade using an ensemble network of CNN and transformer models. The proposed method adopts the International Society of Urological Pathology (ISUP) grading system and achieved 90.8% accuracy in classifying patches into healthy tissue and Gleason grades 1 through 5 using 10× WSI, outperforming the state-of-the-art accuracy by 27%. Finally, the patient's grade is determined by combining the patch-wise results. The method was also demonstrated for 4-class grading and binary classification of prostate cancer, achieving 93.0% and 99.6% accuracy, respectively, with reproducibility over 90%. Since the proposed method determines grades with higher accuracy and reproducibility using low-resolution images, it is more reliable and effective than existing methods and can potentially improve subsequent therapy decisions.
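The final step above combines patch-wise grades into a patient-level grade. One simple way to do this (a sketch only; the paper does not specify majority voting as its exact aggregation rule) is to discard artifact patches and take the most frequent grade, breaking ties toward the higher, clinically more cautious grade. The patch grades below are hypothetical.

```python
from collections import Counter

def patient_grade(patch_grades):
    # Drop patches flagged as artifact / tissue-less (None), then take the
    # majority vote; ties resolve toward the higher (worse) grade.
    votes = [g for g in patch_grades if g is not None]
    counts = Counter(votes)
    return max(counts.items(), key=lambda kv: (kv[1], kv[0]))[0]

# Hypothetical patch-wise ISUP grades (None = artifact patch).
patches = [2, 2, 3, None, 2, 3, 3, None, 3]
grade = patient_grade(patches)
```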
With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, significant challenges remain in accurately segmenting melanomas in dermoscopic images due to objects that can interfere with human observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net network framework for skin melanoma segmentation. In our proposed architecture, we introduce several innovative components that aim to enhance the performance and capabilities of the traditional U-Net. First, we establish a novel framework that links two simplified U-Nets, enabling more comprehensive information exchange and feature integration throughout the network. Second, after cascading the second U-Net, we introduce a skip connection between the decoder and encoder networks and incorporate a modified receptive field block (MRFB), which is designed to capture multi-scale spatial information. Third, to further enhance feature representation capabilities, we add a multi-path convolution block attention module (MCBAM) to the first two layers of the first U-Net encoder and integrate a new squeeze-and-excitation (SE) mechanism with residual connections in the second U-Net. To illustrate the performance of our proposed model, we conducted comprehensive experiments on widely recognized skin datasets. On the ISIC-2017 dataset, the IoU value of our proposed model increased from 0.6406 to 0.6819 and the Dice coefficient increased from 0.7625 to 0.8023. On the ISIC-2018 dataset, the IoU value also improved from 0.7138 to 0.7709, while the Dice coefficient increased from 0.8285 to 0.8665. Furthermore, generalization experiments conducted on the jaw cyst dataset from Quzhou People's Hospital further verified the outstanding segmentation performance of the proposed model. These findings collectively affirm the potential of our approach as a valuable tool for supporting clinical decision-making in skin cancer detection, as well as advancing research in medical image analysis.
Wireless Sensor Networks (WSNs) are one of the defining technologies of the 21st century and have seen tremendous growth over the past decade. Much work has been put into their development in various aspects such as architecture, routing protocols, location exploration, and time exploration. This research aims to optimize routing protocols and address the challenges arising from conflicting objectives in WSN environments, such as balancing energy consumption, ensuring routing reliability, distributing network load, and selecting the shortest path. Many optimization techniques have shown success in achieving one or two objectives but struggle to achieve the right balance between multiple conflicting objectives. To address this gap, this paper proposes an innovative approach that integrates Particle Swarm Optimization (PSO) with a fuzzy multi-objective framework. The proposed method uses fuzzy logic to effectively control multiple competing objectives, representing a major advance over existing methods that only deal with one or two objectives. Search efficiency is improved by PSO, which overcomes the large computational requirements that are a major drawback of existing methods. The PSO algorithm is adapted for WSNs to optimize routing paths based on a fuzzy multi-objective fitness. The fuzzy logic framework uses predefined membership functions and rule-based reasoning to adjust routing decisions. These adjustments influence PSO's velocity updates, ensuring continuous adaptation under varying network conditions. The proposed multi-objective PSO-fuzzy model is evaluated using NS-3 simulation. The results show that the proposed model is capable of improving the network lifetime by 15.2%–22.4%, increasing the stabilization time by 18.7%–25.5%, and increasing the residual energy by 8.9%–16.2% compared to state-of-the-art techniques. The proposed model also achieves a 15%–24% reduction in load variance, demonstrating balanced routing and extended network lifetime. Furthermore, analysis of p-values obtained from multiple performance measures (p-values < 0.05) showed that the proposed approach outperforms with a high level of confidence. The proposed multi-objective PSO-fuzzy model provides a robust and scalable solution to improve the performance of WSNs. It maintains stable performance in networks with 100 to 300 nodes, under varying node densities, and across different base station placements. Computational complexity analysis has shown that the method fits well into large-scale WSNs and that the addition of fuzzy logic controls power usage, making the system practical for real-world use.
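The core PSO mechanics the abstract refers to can be sketched in a few lines. The fuzzy aggregation of the four routing objectives is stood in for here by a fixed weighted sum with illustrative weights; the paper's actual membership functions and rules are not reproduced:

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=random):
    """One canonical PSO update: the new velocity blends inertia, a pull
    toward each particle's personal best, and a pull toward the global best."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        r1, r2 = rng.random(), rng.random()
        velocities[i] = [w * vj + c1 * r1 * (pb - xj) + c2 * r2 * (gb - xj)
                         for vj, xj, pb, gb in zip(v, x, pbest[i], gbest)]
        positions[i] = [xj + vj for xj, vj in zip(x, velocities[i])]
    return positions, velocities

def fuzzy_fitness(energy, reliability, load_balance, path_quality,
                  weights=(0.3, 0.3, 0.2, 0.2)):
    """Illustrative stand-in for the fuzzy aggregation: a weighted sum of the
    four normalized, higher-is-better routing objectives."""
    objectives = (energy, reliability, load_balance, path_quality)
    return sum(w * o for w, o in zip(weights, objectives))
```

In the paper's scheme, the fuzzy rule base would adjust these influences dynamically per network condition rather than using fixed weights.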
In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms. It can be used to protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content, and to verify the authenticity of digital media, such as images or videos, by checking the watermark information. In this paper, a mathematical chaos-based image watermarking technique is proposed using the discrete wavelet transform (DWT), a chaotic map, and the Laplacian operator. The DWT decomposes the image into its frequency components, chaos provides an extra security defense by encrypting the watermark signal, and the Laplacian operator with optimization is applied to the mid-frequency bands to find the sharp areas in the image. These mid-frequency bands are used to embed the watermark by modifying their coefficients. The mid-sub-band preserves the invisibility of the watermark, while chaos combined with the second-order-derivative Laplacian makes the scheme resistant to attacks. Comprehensive experiments demonstrate that this approach is robust against common signal processing attacks, i.e., compression, noise addition, and filtering. Moreover, this approach maintains image quality, as measured by the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The highest achieved PSNR and SSIM values are 55.4 dB and 1, respectively. Likewise, normalized correlation (NC) values are almost 10%–20% higher than in comparable research. These results support copyright protection of multimedia content.
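A common way to use a chaotic map for watermark encryption, as the abstract describes, is to iterate the logistic map into a keystream and XOR it with the watermark bits. This is a generic sketch with illustrative parameters; the paper's specific map and key are assumptions:

```python
def logistic_keystream(n, x0=0.3141, r=3.99):
    """Generate n pseudo-random bits from the logistic map x -> r*x*(1-x).
    The pair (x0, r) acts as the secret key; r near 4 keeps the map chaotic."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def xor_encrypt(watermark_bits, x0=0.3141, r=3.99):
    """XOR the watermark with the chaotic keystream; applying the same
    operation twice with the same key decrypts."""
    ks = logistic_keystream(len(watermark_bits), x0, r)
    return [b ^ k for b, k in zip(watermark_bits, ks)]
```

Because the logistic map is highly sensitive to its initial condition, even a tiny error in the key produces an entirely different keystream, which is what provides the "extra security defense" mentioned above.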
Funding: funded by the Research, Development, and Innovation Authority (RDIA), Kingdom of Saudi Arabia, grant number 13382-PSU-2023-PSNU-R-3-1-EI; supported by the Automated Systems and Computing Lab (ASCL), Prince Sultan University, Riyadh, Saudi Arabia.
Abstract: The rapid development of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in recent years has transformed many sectors. A fundamental shift has occurred in approaches to solving complex problems and making decisions in many different fields. These advanced technologies have enabled significant breakthroughs in sectors including entertainment, finance, transportation, and healthcare. AI systems, which can analyze vast volumes of data, have significantly driven efficiency and innovation. With remarkable accuracy, patterns can be identified and predictions generated, improving decision-making processes and facilitating the development of more intelligent solutions. The increasing adoption of these technologies by organizations has expanded the potential for AI to change processes and improve results.
Funding: The author extends his appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding and supporting this work through the Graduate Student Research Support Program.
Abstract: Improving quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. Learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer from deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle to developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the security services required to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way to utilize those results in continuous quality improvement. These systematic and well-specified QA processes were then used to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in a quality reporting system. This is achieved by implementing powerful cryptographic algorithms. The QAS enables the efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and paves the way toward fully automating and integrating QA processes and adopting modern AI and security technologies to develop an effective QAS.
Funding: supported by a grant from the National Natural Science Foundation of China (No. 72071019) and a grant from the Natural Science Foundation of Chongqing (No. cstc2021jcyj-msxmX0185).
Abstract: Bone age assessment (BAA) aims to determine whether a child's growth and development are normal with respect to their chronological age. To predict bone age more accurately from radiographs, and to give the model better adaptability to left-hand X-ray images from different races, we propose a neural network that runs in parallel with quantitative features from left-hand bone measurements for BAA. In this study, a lightweight feature extractor (LFE) is designed to obtain feature maps from radiographs, and a module called the attention eraser module (AEM) is proposed to capture fine-grained features. Meanwhile, the dimensional information of the metacarpal parts in the radiographs is measured to enhance the model's generalization capability across images from different races. Our model is trained and validated on the RSNA, RHPE, and Digital Hand Atlas datasets, which include images from various racial groups. The model achieves a mean absolute error (MAE) of 4.42 months on the RSNA dataset and 15.98 months on the RHPE dataset. Compared to ResNet50, InceptionV3, and several state-of-the-art methods, our proposed method shows statistically significant improvements (p<0.05), with a reduction in MAE of 0.2±0.02 years across different racial datasets. Furthermore, t-tests on the features also confirm the statistical significance of our approach (p<0.05).
Abstract: Terrain Aided Navigation (TAN) technology has become increasingly important due to its effectiveness in environments where the Global Positioning System (GPS) is unavailable. In recent years, TAN systems have been extensively researched for both aerial and underwater navigation applications. However, many TAN systems that rely on recursive Unmanned Aerial Vehicle (UAV) position estimation methods, such as Extended Kalman Filters (EKF), often face challenges with divergence and instability, particularly in highly non-linear systems. To address these issues, this paper proposes and investigates a hybrid two-stage TAN positioning system for UAVs that utilizes a particle filter. To enhance the system's robustness against uncertainties caused by noise and to estimate additional system states, a Fuzzy Particle Filter (FPF) is employed in the first stage. This approach introduces a novel terrain composite feature that enables a fuzzy expert system to analyze terrain non-linearities and dynamically adjust the number of particles in real time. This design allows the UAV to be efficiently localized in GPS-denied environments while also reducing the computational complexity of the particle filter in real-time applications. In the second stage, an Error State Kalman Filter (ESKF) is implemented to estimate the UAV's altitude. The ESKF is chosen over the conventional EKF method because it is more suitable for non-linear systems. Simulation results demonstrate that the proposed fuzzy-based terrain composite method achieves high positional accuracy while reducing computational time and memory usage.
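Particle filters like the FPF above periodically resample particles in proportion to their weights to avoid degeneracy. A standard systematic-resampling sketch is shown below; the paper's fuzzy adaptation of the particle count is not reproduced here:

```python
import random

def systematic_resample(weights, rng=random):
    """Return n particle indices drawn by systematic resampling: one uniform
    offset u, then n evenly spaced positions over the cumulative weights."""
    n = len(weights)
    total = sum(weights)
    u = rng.random()                           # single random offset in [0, 1)
    positions = [(i + u) / n for i in range(n)]
    indices, cum, i = [], weights[0] / total, 0
    for p in positions:
        while p >= cum:                        # advance to the particle whose
            i += 1                             # cumulative weight covers p
            cum += weights[i] / total
        indices.append(i)
    return indices
```

Systematic resampling uses only one random draw per step, which keeps variance low and cost O(n), a useful property for the real-time operation the abstract emphasizes.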
Funding: 2024 Chongqing Normal University Graduate Research Innovation Project, "Construction and Application of an Information Technology Knowledge Map Based on the Three-Layer Architecture" (CYS240395).
Abstract: The concept of "STEM+" integrates art, humanistic literacy, and social values into the traditional STEM education concept, advocates cross-disciplinary integration, and aims to cultivate well-rounded talents equipped to tackle future challenges. In 2022, the Ministry of Education issued the "Compulsory Education Information Technology Curriculum (2022 Edition)," emphasizing the core literacy of information science and technology and interdisciplinary integration, and encouraging teaching modes suited to each discipline's characteristics. The 6E teaching mode is a student-centered teaching strategy characterized by active exploration and cross-disciplinary integration. This article presents an innovative design of a "STEM+" 6E teaching mode and applies it to junior high school information technology teaching, where it can better achieve core literacy teaching goals.
Abstract: The rapid integration of artificial intelligence (AI) into software development, driven by large language models (LLMs), is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems. This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology (IT) professionals as they navigate a dynamic technological landscape marked by intelligent automation, shifting professional identities, and emerging ethical concerns. Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling, prompt engineering, interdisciplinary collaboration, and heightened ethical awareness. However, participants also voiced growing concerns about the reliability and security of AI-generated code, noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias. Many described instances of flawed logic, insecure patterns, or syntactically correct but contextually inappropriate suggestions, underscoring the need for rigorous human oversight. Additionally, the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills, particularly in environments where AI tools dominate routine development tasks. These findings highlight an urgent need for educational reforms, industry standards, and organizational policies that prioritize both technical robustness and the preservation of human expertise. As AI becomes increasingly embedded in software engineering workflows, this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability, resilience, and innovation across the software development lifecycle.
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), grant number IMSIU-DDRSP2504.
Abstract: The rapid evolution of malware presents a critical cybersecurity challenge, rendering traditional signature-based detection methods ineffective against novel variants. This growing threat affects individuals, organizations, and governments, highlighting the urgent need for robust malware detection mechanisms. Conventional machine-learning-based approaches rely on static and dynamic malware analysis and often struggle to detect previously unseen threats due to their dependency on predefined signatures. Although machine learning algorithms (MLAs) offer promising detection capabilities, their reliance on extensive feature engineering limits real-time applicability. Deep learning techniques mitigate this issue by automating feature extraction but may introduce computational overhead, affecting deployment efficiency. This research evaluates classical MLAs and deep learning models to enhance malware detection performance across diverse datasets. The proposed approach integrates a novel text- and image-based detection framework, employing an optimized Support Vector Machine (SVM) for textual data analysis and EfficientNet-B0 for image-based malware classification. Experimental analysis, conducted across multiple train-test splits over varying timescales, demonstrates 99.97% accuracy on textual datasets using SVM and 96.7% accuracy on image-based datasets with EfficientNet-B0, significantly improving zero-day malware detection. Furthermore, a comparative analysis with existing competitive techniques, such as Random Forest, XGBoost, and Convolutional Neural Network (CNN) classifiers, highlights the superior performance of the proposed model in terms of accuracy, efficiency, and robustness.
Funding: funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Funding Program, Grant No. FRP-1443-15.
Abstract: The analysis of Android malware shows that this threat is constantly increasing and poses a real risk to mobile devices, since traditional approaches, such as signature-based detection, are no longer effective against its continuously advancing sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are converted to image formats for deep learning, which is applied in a CNN framework including the VGG16 pre-trained model. Our approach yielded high performance, with an accuracy of 99.1%, precision of 98.2%, recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, and points toward future work on real-time detection systems and deeper learning techniques to counter the increasing number of emerging threats.
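One common way to turn raw traffic or binary data into CNN input, as the abstract describes, is to reinterpret each byte as a grayscale pixel and pack the bytes into fixed-width rows. The sketch below is a generic illustration with an assumed width; the paper's exact feature-to-image mapping is not specified here:

```python
def bytes_to_image(data, width=16):
    """Map a byte sequence to a 2-D grayscale image: each byte (0-255)
    becomes one pixel; rows are `width` pixels wide, zero-padded at the end."""
    pixels = list(data)
    pad = (-len(pixels)) % width          # zero-fill so the last row is full
    pixels += [0] * pad
    return [pixels[r:r + width] for r in range(0, len(pixels), width)]
```

The resulting matrix can then be resized to the fixed input resolution a pre-trained backbone such as VGG16 expects.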
Funding: The author, Dr. Arshiya S. Ansari, extends appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number R-2025-1538.
Abstract: Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal state. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demands be understood. Voice emotion recognition has become an essential component of modern HCC networks, yet integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past, and with the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Mel-frequency cepstral coefficient (MFCC) and Wiener filter (WF) techniques are used to extract features from the data and remove noise, respectively. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. Python was used to implement our proposed framework. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The suggested approach is tested on a Python platform, and the SHO-LSTM's outcomes are contrasted with those of previously conducted research. Based on comparative assessments, our suggested approach outperforms current approaches in vocal emotion recognition.
Funding: funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Groups Program, Grant No. RGP-1444-0057.
Abstract: Dementia is a neurological disorder that affects the brain and its functioning, and women experience its effects more than men do. Preventive care often requires non-invasive and rapid tests, yet conventional diagnostic techniques are time-consuming and invasive. One of the most effective ways to diagnose dementia is by analyzing a patient's speech, which is inexpensive and does not require surgery. This research aims to determine the effectiveness of deep learning (DL) and machine learning (ML) structures in diagnosing dementia based on women's speech patterns. The study analyzes data drawn from the Pitt Corpus, which contains 298 dementia files and 238 control files from the DementiaBank database. Deep learning models and SVM classifiers were used to analyze the available audio samples in the dataset. Our methodology used two approaches for dementia classification: a fused DL-ML model and a single DL model. The deep learning model achieved an exceptionally high accuracy of 99.99%, with an F1 score of 0.9998, precision of 0.9997, and recall of 0.9998. The proposed DL-ML fusion model was equally impressive, with an accuracy of 99.99%, F1 score of 0.9995, precision of 0.9998, and recall of 0.9997. The study also shows how to apply deep learning and machine learning models for dementia detection from speech with high accuracy and low computational complexity. This research work therefore concludes by showing the feasibility of speech-based dementia detection as a potentially helpful mode of early diagnosis. For further enhanced model performance and better generalization, future studies may explore real-time applications and the inclusion of other components of speech.
Funding: funded by the Scientific Research Deanship at the University of Hail, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
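The TF-IDF component above can be computed from scratch in a few lines. This sketch uses the smoothed-IDF variant found in common toolkits; the paper's exact weighting scheme is an assumption:

```python
import math

def tf_idf(docs):
    """Compute smoothed TF-IDF weights per document. tf = count / doc length;
    idf = ln((1 + N) / (1 + df)) + 1, the smoothing used by scikit-learn."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    out = []
    for doc in docs:
        counts = {}
        for term in doc:
            counts[term] = counts.get(term, 0) + 1
        weights = {}
        for term, c in counts.items():
            tf = c / len(doc)
            idf = math.log((1 + n) / (1 + df[term])) + 1
            weights[term] = tf * idf
        out.append(weights)
    return out
```

Terms that appear in every document get idf = 1 (low specificity), while rarer terms are up-weighted, which is exactly the "lexical specificity" role TF-IDF plays in the hybrid framework.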
Funding: supported by the National Science and Technology Council (NSTC) of the Republic of China, Taiwan, under Contract No. NSTC 112-2637-M-131-001.
Abstract: Accurate and efficient brain tumor segmentation is essential for early diagnosis, treatment planning, and clinical decision-making. However, the complex structure of brain anatomy and the heterogeneous nature of tumors present significant challenges for precise anomaly detection. While U-Net-based architectures have demonstrated strong performance in medical image segmentation, there remains room for improvement in feature extraction and localization accuracy. In this study, we propose a novel hybrid model designed to enhance 3D brain tumor segmentation. The architecture incorporates a 3D ResNet encoder, known for mitigating the vanishing gradient problem, and a 3D U-Net decoder. Additionally, to enhance the model's generalization ability, a squeeze-and-excitation attention mechanism is integrated. We introduce Gabor filter banks into the encoder to further strengthen the model's ability to extract robust and transformation-invariant features from the complex and irregular shapes typical of medical imaging. This approach, which is not well explored in current U-Net-based segmentation frameworks, provides a unique advantage by enhancing texture-aware feature representation. Specifically, Gabor filters help extract distinctive low-level texture features, reducing the effects of texture interference and facilitating faster convergence during the early stages of training. Our model achieved Dice scores of 0.881, 0.846, and 0.819 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the BraTS 2020 dataset. Cross-validation on the BraTS 2021 dataset further confirmed the model's robustness, yielding Dice scores of 0.887 for WT, 0.856 for TC, and 0.824 for ET. The proposed model outperforms several state-of-the-art existing models, particularly in accurately identifying small and complex tumor regions. Extensive evaluations suggest that integrating advanced preprocessing with an attention-augmented hybrid architecture offers significant potential for reliable and clinically valuable brain tumor segmentation.
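The Gabor filter banks mentioned above are built from a sinusoidal carrier modulated by a Gaussian envelope. A 2-D real-valued kernel can be generated as follows; the parameter values here are illustrative, not the paper's:

```python
import math

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """Real Gabor kernel: a cosine carrier (wavelength lam, phase psi,
    orientation theta) under an anisotropic Gaussian envelope (sigma, gamma)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xp = x * math.cos(theta) + y * math.sin(theta)    # rotate axes
            yp = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xp / lam + psi))
        kernel.append(row)
    return kernel
```

A bank is formed by varying theta and lam, giving each filter a preferred texture orientation and scale, which is what makes the extracted features texture-aware.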
Funding: supported by the Chongqing Municipal Commission of Housing and Urban-Rural Development (Grant No. CKZ2024-87) and the Chongqing Municipal Science and Technology Bureau (Grant No. 2024TIAD-CYKJCXX0121).
Abstract: Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are occlusion-prone and more sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence performance of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as a Dynamic Focusing Mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples. Ultimately, RSIoU significantly improves the convergence performance of the loss function for small objects, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, achieving a 3.7% increase in mean Average Precision (AP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
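RSIoU refines the standard bounding-box IoU loss; the baseline that all of these losses start from is shown below (the SIoU/RSIoU angle and penalty terms are not reproduced):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def iou_loss(a, b):
    """Plain IoU loss (1 - IoU), the starting point SIoU and RSIoU refine."""
    return 1.0 - box_iou(a, b)
```

For small objects, a few pixels of misalignment can drive the IoU (and its gradient) to zero, which is precisely the instability the penalty terms in SIoU-family losses are designed to counteract.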
Funding: funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-01137.
Abstract: Sonic Hedgehog Medulloblastoma (SHH-MB) is one of the four primary molecular subgroups of medulloblastoma and is estimated to be responsible for nearly one-third of all MB cases. Using transcriptomic and DNA methylation profiling techniques, recent developments in this field have identified four molecular subtypes of SHH-MB. SHH-MB subtypes show distinct DNA methylation patterns that allow their discrimination from overlapping subtypes and predict clinical outcomes. Class overlap occurs when two or more classes share common features, making it difficult to distinguish them as separate. Using a DNA methylation dataset, a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes. Penalized multinomial regression (PMR), Tomek links (TL), and singular value decomposition (SVD) are smoothly integrated into a single framework. SVD and the group lasso improve computational efficiency, address the problem of high-dimensional datasets, and clarify class distinctions by removing redundant or irrelevant features that might lead to class overlap. To eliminate decision-boundary overlap and class imbalance in the classification task, TL enhances dataset balance and sharpens decision boundaries by eliminating overlapping samples. Using fivefold cross-validation, our proposed method (TL-SVDPMR) achieved a remarkable overall accuracy of almost 95% in classifying SHH-MB molecular subtypes. The results demonstrate the strong performance of the proposed classification model across the various SHH-MB subtypes, given high average area under the curve (AUC) values. Additionally, statistical significance testing indicates that TL-SVDPMR is more accurate than both SVM and random forest algorithms in classifying the overlapping SHH-MB subtypes, highlighting its importance for precision medicine applications. Our findings emphasize the success of combining SVD, TL, and PMR techniques to improve classification performance for biomedical applications with many features and overlapping subtypes.
Funding: funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-02160.
Abstract: In this paper, we propose a hybrid decode-and-forward and soft information relaying (HDFSIR) strategy to mitigate error propagation in coded cooperative communications. In the HDFSIR approach, the relay operates in decode-and-forward (DF) mode when it successfully decodes the received message; otherwise, it switches to soft information relaying (SIR) mode. The benefits of the DF and SIR forwarding strategies are combined to achieve better performance than deploying the DF or SIR strategy alone. Closed-form expressions for the outage probability and symbol error rate (SER) are derived for coded cooperative communication with HDFSIR and energy-harvesting relays. Additionally, we introduce a novel normalized log-likelihood-ratio-based soft estimation symbol (NL-SES) mapping technique, which enhances soft-symbol accuracy for higher-order modulation, and propose a model characterizing the relationship between the estimated complex soft symbol and the actual high-order modulated symbol. Furthermore, the hybrid DF-SIR strategy is extended to a distributed Alamouti space-time-coded cooperative network. To evaluate the performance of the proposed HDFSIR strategy, we implement extensive Monte Carlo simulations under varying channel conditions. Results demonstrate significant improvements, with the hybrid technique outperforming the individual DF and SIR strategies in both conventional and distributed Alamouti space-time-coded cooperative networks. Moreover, at an SER of 10^(-3), the proposed NL-SES mapping demonstrated a 3.5 dB performance gain over conventional averaging, highlighting its superior accuracy in estimating soft symbols for quadrature phase-shift keying modulation.
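The soft-symbol estimation underlying SIR is easiest to see in the basic BPSK-over-AWGN case: the log-likelihood ratio is LLR = 2y/σ², and the MMSE soft estimate of the transmitted symbol is E[x|y] = tanh(LLR/2). The paper's NL-SES mapping extends this idea to higher-order modulation; this sketch covers only the binary baseline:

```python
import math

def bpsk_llr(y, noise_var):
    """Log-likelihood ratio of a BPSK symbol (+1/-1) received over AWGN."""
    return 2.0 * y / noise_var

def soft_symbol(y, noise_var):
    """MMSE soft estimate E[x | y] = tanh(LLR / 2); it saturates toward
    +/-1 as the observation becomes more reliable."""
    return math.tanh(bpsk_llr(y, noise_var) / 2.0)
```

A relay forwarding soft_symbol(y, noise_var) instead of a hard decision preserves its uncertainty, which is what lets the destination avoid the error propagation that hard DF suffers under weak channels.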
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
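The hybrid DF/SIR switching rule in the abstract above can be sketched for the simplest case. This is an assumption-laden sketch, not the paper's NL-SES scheme: it assumes the relay checks a CRC to decide the mode, and uses the standard BPSK MMSE soft estimate E[x | LLR] = tanh(LLR/2); the paper's contribution extends soft estimation to higher-order modulation.

```python
import numpy as np

def relay_forward(llrs, crc_ok):
    """Hybrid DF/SIR relay decision (illustrative sketch).
    llrs: per-bit log-likelihood ratios at the relay.
    crc_ok: outcome of the relay's CRC check on the decoded message."""
    if crc_ok:
        # DF mode: forward hard re-modulated BPSK symbols (+1 / -1)
        return np.where(llrs >= 0, 1.0, -1.0)
    # SIR mode: forward the MMSE soft estimate of each BPSK symbol,
    # E[x | LLR] = tanh(LLR / 2); magnitude encodes reliability
    return np.tanh(llrs / 2.0)

llrs = np.array([4.0, -0.5, 0.2, -6.0])
hard = relay_forward(llrs, crc_ok=True)   # hard symbols: 1, -1, 1, -1
soft = relay_forward(llrs, crc_ok=False)  # soft magnitudes reflect reliability
```

In SIR mode the unreliable bits (small |LLR|) are forwarded with small amplitude, which is what limits error propagation compared with always hard-forwarding.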
Abstract: One in every eight men in the US is diagnosed with prostate cancer, making it the most common cancer in men. Gleason grading is one of the most essential diagnostic and prognostic factors for planning the treatment of prostate cancer patients. Traditionally, urological pathologists perform the grading by scoring the morphological pattern, known as the Gleason pattern, in histopathology images. However, this manual grading is highly subjective, suffers from intra- and inter-pathologist variability, and lacks reproducibility. An automated grading system could be more efficient, free of subjectivity, and offer higher accuracy and reproducibility. Previously presented automated methods failed to achieve sufficient accuracy, lacked reproducibility, and depended on high-resolution images such as 40×. This paper proposes an automated Gleason grading method, ProGENET, to accurately predict the grade using low-resolution images such as 10×. The method first divides the patient's histopathology whole slide image (WSI) into patches. It then detects artifacts and tissue-less regions and predicts the patch-wise grade using an ensemble network of CNN and transformer models. The proposed method adopts the International Society of Urological Pathology (ISUP) grading system and achieved 90.8% accuracy in classifying patches into healthy and Gleason grades 1 through 5 using 10× WSI, outperforming the state-of-the-art accuracy by 27%. Finally, the patient's grade is determined by combining the patch-wise results. The method was also demonstrated for 4-class grading and binary classification of prostate cancer, achieving 93.0% and 99.6% accuracy, respectively. Reproducibility was over 90%. Since the proposed method determines grades with higher accuracy and reproducibility using low-resolution images, it is more reliable and effective than existing methods and can potentially improve subsequent therapy decisions.
Funding: funded by the Zhejiang Basic Public Welfare Research Project, grant number LZY24E060001; supported by Guangzhou Development Zone Science and Technology (2021GH10, 2020GH10, 2023GH02), the University of Macao (MYRG2022-00271-FST), and the Science and Technology Development Fund (FDCT) of Macao (0032/2022/A).
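The final step in the abstract above, combining patch-wise predictions into a patient-level grade, can be sketched as follows. The aggregation rule here is an assumption for illustration (majority vote over cancerous patches, ties broken toward the higher grade), not the paper's exact scheme, which is not specified in the abstract.

```python
from collections import Counter

def patient_grade(patch_grades):
    """Aggregate patch-wise predictions (0 = healthy, 1-5 = ISUP grade)
    into a single patient-level grade.
    Rule (a hypothetical choice, not ProGENET's published scheme):
    majority vote over cancerous patches; a patient is healthy only
    if no patch shows cancer."""
    cancerous = [g for g in patch_grades if g > 0]
    if not cancerous:
        return 0
    counts = Counter(cancerous)
    # most frequent cancer grade; ties broken toward the higher grade
    grade, _ = max(counts.items(), key=lambda kv: (kv[1], kv[0]))
    return grade

patient_grade([0, 0, 3, 3, 2])  # -> 3
patient_grade([0, 0, 0])        # -> 0
```

Breaking ties toward the higher grade is the conservative clinical choice, since under-grading risks under-treatment.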
Abstract: With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, significant challenges remain in accurately segmenting melanomas in dermoscopic images because of objects that can interfere with human observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net framework for skin melanoma segmentation. In the proposed architecture, we introduce several innovative components that enhance the performance and capabilities of the traditional U-Net. First, we establish a novel framework that links two simplified U-Nets, enabling more comprehensive information exchange and feature integration throughout the network. Second, after cascading the second U-Net, we introduce a skip connection between the decoder and encoder networks and incorporate a modified receptive field block (MRFB) designed to capture multi-scale spatial information. Third, to further enhance feature representation, we add a multi-path convolution block attention module (MCBAM) to the first two layers of the first U-Net encoder and integrate a new squeeze-and-excitation (SE) mechanism with residual connections in the second U-Net. To illustrate the performance of the proposed model, we conducted comprehensive experiments on widely recognized skin datasets. On the ISIC-2017 dataset, the IoU of the proposed model increased from 0.6406 to 0.6819 and the Dice coefficient from 0.7625 to 0.8023. On the ISIC-2018 dataset, the IoU improved from 0.7138 to 0.7709, while the Dice coefficient increased from 0.8285 to 0.8665. Furthermore, generalization experiments on the jaw cyst dataset from Quzhou People's Hospital further verified the outstanding segmentation performance of the proposed model. These findings collectively affirm the potential of our approach as a valuable tool for supporting clinical decision-making in skin cancer detection, as well as for advancing research in medical image analysis.
Funding: funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2023-2-02038.
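The squeeze-and-excitation mechanism with a residual connection mentioned in the abstract above can be sketched in plain NumPy. This shows the standard SE computation (squeeze by global average pooling, excitation by a bottleneck MLP, channel-wise rescaling, plus an identity shortcut); the paper's exact variant and weight shapes are not given in the abstract, so the parameters below are illustrative.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation with a residual connection (sketch).
    x: feature map of shape (C, H, W); w1/w2: weights of the
    bottleneck MLP mapping C -> C//r -> C."""
    # squeeze: global average pooling per channel
    z = x.mean(axis=(1, 2))                        # (C,)
    # excitation: bottleneck MLP, ReLU then sigmoid gate
    s = np.maximum(z @ w1 + b1, 0.0)               # (C//r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))       # (C,) gates in (0, 1)
    # scale channels and add the identity (residual) path
    return x * s[:, None, None] + x

rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C, C // r)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C // r, C)); b2 = np.zeros(C)
y = se_block(x, w1, b1, w2, b2)  # same shape as x
```

Because the gate lies in (0, 1) and the input is added back, each output channel is a rescaled copy of its input with magnitude between 1x and 2x, so the block can only re-weight channels, never flip their sign.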
Abstract: Wireless Sensor Networks (WSNs) are one of the key technologies of the 21st century and have seen tremendous growth over the past decade. Much work has gone into their development in aspects such as architecture, routing protocols, location exploration, and time exploration. This research aims to optimize routing protocols and address the challenges arising from conflicting objectives in WSN environments, such as balancing energy consumption, ensuring routing reliability, distributing network load, and selecting the shortest path. Many optimization techniques have succeeded in achieving one or two objectives but struggle to strike the right balance among multiple conflicting objectives. To address this gap, this paper proposes an innovative approach that integrates Particle Swarm Optimization (PSO) with a fuzzy multi-objective framework. The proposed method uses fuzzy logic to effectively balance multiple competing objectives, a major advance over existing methods that handle only one or two objectives. Search efficiency is improved by PSO, which overcomes the large computational requirements that are a major drawback of existing methods. The PSO algorithm is adapted for WSNs to optimize routing paths based on a fuzzy multi-objective fitness function. The fuzzy logic framework uses predefined membership functions and rule-based reasoning to adjust routing decisions. These adjustments influence PSO's velocity updates, ensuring continuous adaptation under varying network conditions. The proposed multi-objective PSO-fuzzy model is evaluated using NS-3 simulation. The results show that the proposed model improves network lifetime by 15.2%–22.4%, increases stabilization time by 18.7%–25.5%, and increases residual energy by 8.9%–16.2% compared with state-of-the-art techniques. The proposed model also achieves a 15%–24% reduction in load variance, demonstrating balanced routing and extended network lifetime. Furthermore, analysis using p-values obtained from multiple performance measures (p-values < 0.05) showed that the proposed approach outperforms the alternatives with a high level of confidence. The proposed multi-objective PSO-fuzzy model provides a robust and scalable solution for improving WSN performance. It maintains stable performance in networks of 100 to 300 nodes, under varying node densities, and across different base station placements. Computational complexity analysis shows that the method scales well to large WSNs and that the addition of fuzzy logic keeps power usage under control, making the system practical for real-world use.
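The PSO velocity-update loop that the abstract above builds on can be sketched as follows. This shows canonical PSO only; the fitness function here is a stand-in (a simple quadratic), whereas the paper uses a fuzzy multi-objective score over energy, reliability, load, and path length, and lets the fuzzy rules modulate the updates.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(p):
    """Stand-in for the fuzzy multi-objective routing score
    (an assumption: negative squared distance to a known optimum)."""
    return -np.sum((p - 3.0) ** 2)

# canonical update: v <- w*v + c1*r1*(pbest - p) + c2*r2*(gbest - p)
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()
```

In the paper's setting, the fuzzy framework would effectively reshape `fitness` (and hence the velocity updates) as network conditions change, which is how the routing adapts online.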
Abstract: In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating the risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
Funding: supported by the Researchers Supporting Project number (RSPD2025R636), King Saud University, Riyadh, Saudi Arabia.
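The integrity and traceability properties the abstract above attributes to blockchain rest on hash chaining, which can be sketched minimally. This is an illustration of the underlying mechanism only, not the paper's framework: there is no consensus, signing, or smart-contract logic here, and the record strings are invented.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append-only block for one EMR transaction (minimal sketch)."""
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """Integrity check: each block's hash must match its contents,
    and its 'prev' field must equal the previous block's hash."""
    prev = "0" * 64  # genesis pointer
    for blk in chain:
        body = {"record": blk["record"], "prev": blk["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev or blk["hash"] != digest:
            return False
        prev = blk["hash"]
    return True

chain, prev = [], "0" * 64
for rec in ("patient A: admission", "patient A: lab result"):
    blk = make_block(rec, prev)
    chain.append(blk)
    prev = blk["hash"]

verify(chain)                    # -> True
chain[0]["record"] = "tampered"  # any edit breaks the chain
verify(chain)                    # -> False
```

Because each block commits to its predecessor's hash, altering any stored EMR transaction invalidates every later block, which is what makes tampering detectable.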
Abstract: Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms. It can protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content, and it can verify the authenticity of digital media, such as images or videos, by checking the watermark information. In this paper, a mathematical chaos-based image watermarking technique is proposed using the discrete wavelet transform (DWT), a chaotic map, and the Laplacian operator. The DWT decomposes the image into its frequency components; chaos provides an extra layer of security by encrypting the watermark signal; and the Laplacian operator, with optimization, is applied to the mid-frequency bands to find the sharp areas of the image. The watermark is embedded by modifying the coefficients in these mid-frequency bands. The mid-sub-bands maintain the invisibility of the watermark, and chaos combined with the second-order-derivative Laplacian strengthens resistance to attacks. Comprehensive experiments demonstrate that the approach is robust against common signal processing attacks, i.e., compression, noise addition, and filtering. Moreover, the approach maintains image quality, as measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). The highest achieved PSNR and SSIM values are 55.4 dB and 1, respectively. Likewise, normalized correlation (NC) values are almost 10%–20% higher than in comparable research. These results support copyright protection of multimedia content.
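The DWT mid-band embedding step in the abstract above can be sketched with a hand-rolled one-level Haar transform. This is an illustrative sketch only: it omits the chaotic encryption and the Laplacian-guided coefficient selection, uses a non-blind additive rule with a hypothetical strength `alpha`, and embeds into the HL band as a stand-in for the paper's optimized mid-frequency locations.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def embed(HL, bits, alpha=5.0):
    """Additively embed watermark bits into mid-frequency (HL)
    coefficients; the sign of the perturbation encodes each bit."""
    out = HL.copy().ravel()
    out[: len(bits)] += alpha * (2 * np.asarray(bits) - 1)
    return out.reshape(HL.shape)

def extract(HL_marked, HL_orig, n):
    """Non-blind extraction: compare marked and original coefficients."""
    diff = (HL_marked - HL_orig).ravel()[:n]
    return (diff > 0).astype(int)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (8, 8))
bits = [1, 0, 1, 1]
LL, LH, HL, HH = haar2d(img)
HL_marked = embed(HL, bits)
extract(HL_marked, HL, len(bits))  # recovers [1, 0, 1, 1]
```

Embedding in the mid-frequency band is the usual invisibility/robustness compromise: LL changes are visible, while HH coefficients are the first to be destroyed by compression and filtering.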