As urban landscapes evolve and vehicular volumes soar, traditional traffic monitoring systems struggle to scale, often failing under the complexities of dense, dynamic, and occluded environments. This paper introduces a novel, unified deep learning framework for vehicle detection, tracking, counting, and classification in aerial imagery, designed explicitly for the demands of modern smart city infrastructure. Our approach begins with adaptive histogram equalization to optimize aerial image clarity, followed by a scene parsing stage using Mask2Former, enabling robust segmentation even in visually congested settings. Vehicle detection leverages the YOLOv11 architecture, delivering superior accuracy in aerial contexts by addressing occlusion, scale variance, and fine-grained object differentiation. We incorporate the efficient ByteTrack algorithm for tracking, enabling seamless identity preservation across frames. Vehicle counting is achieved through an unsupervised DBSCAN-based method, ensuring adaptability to varying traffic densities. We further introduce a hybrid feature extraction module combining Convolutional Neural Networks (CNNs) with Zernike Moments, capturing both deep semantic and geometric signatures of vehicles. The final classification is powered by NASNet, a neural architecture search-optimized model, ensuring high accuracy across diverse vehicle types and orientations. Extensive evaluations on the VAID benchmark dataset demonstrate the system's outstanding performance, achieving 96% detection, 94% tracking, and 96.4% classification accuracy. On the UAVDT dataset, the system attains 95% detection, 93% tracking, and 95% classification accuracy, confirming its robustness across diverse aerial traffic scenarios. These results establish new benchmarks in aerial traffic analysis and validate the framework's scalability, making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.
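The DBSCAN-based counting step can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes detection centroids (e.g., from the detection stage) are available as (x, y) points, treats each density cluster as one counted vehicle, and uses hypothetical `eps`/`min_samples` values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_vehicles(centroids, eps=30.0, min_samples=1):
    """Cluster detection centroids so duplicate or fragmented detections of
    the same vehicle collapse into one cluster; the number of clusters is the
    vehicle count. Points DBSCAN labels -1 are treated as noise and ignored."""
    pts = np.asarray(centroids, dtype=float)
    if pts.size == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return len(set(labels) - {-1})
```

Because DBSCAN is unsupervised and density-based, the same code adapts to sparse and congested scenes without retraining, which is the adaptability the abstract refers to.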
Unmanned Aerial Vehicles (UAVs) have become indispensable for intelligent traffic monitoring, particularly in low-light conditions where traditional surveillance systems struggle. This study presents a novel deep learning-based framework for nighttime aerial vehicle detection and classification that addresses the critical challenges of poor illumination, noise, and occlusion. Our pipeline integrates MSRCR enhancement with OPTICS segmentation to overcome low-light challenges, while YOLOv10 enables accurate vehicle localization. The framework employs GLOH and Dense-SIFT for discriminative feature extraction, optimized using the Whale Optimization Algorithm to enhance classification performance. A Swin Transformer-based classifier provides the final categorization, leveraging hierarchical attention mechanisms for robust performance. Extensive experimentation validates our approach, achieving detection mAP@0.5 scores of 91.5% (UAVDT) and 89.7% (VisDrone), alongside classification accuracies of 95.50% and 92.67%, respectively. These results outperform state-of-the-art methods by up to 5.10% in accuracy and 4.2% in mAP, demonstrating the framework's effectiveness for real-time aerial surveillance and intelligent traffic management in challenging nighttime environments.
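The low-light enhancement stage builds on retinex theory. Full MSRCR adds a color-restoration term, but its multi-scale core, subtracting a log-domain illumination estimate at several Gaussian scales, can be sketched on a grayscale image (the scale values are common defaults, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(gray, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale retinex core: log(image) minus log of a Gaussian-blurred
    illumination estimate, averaged over several blur scales. MSRCR layers a
    color-restoration step on top of this for RGB input."""
    img = gray.astype(np.float64) + 1.0  # +1 avoids log(0)
    return np.mean(
        [np.log(img) - np.log(gaussian_filter(img, s)) for s in sigmas],
        axis=0,
    )
```

On a uniformly lit region the illumination estimate equals the image, so the output is near zero; detail and edges survive as deviations from the local illumination, which is what makes dim vehicles separable downstream.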
Unmanned Aerial Vehicles (UAVs) are increasingly employed in traffic surveillance, urban planning, and infrastructure monitoring due to their cost-effectiveness, flexibility, and high-resolution imaging. However, vehicle detection and classification in aerial imagery remain challenging due to scale variations from fluctuating UAV altitudes, frequent occlusions in dense traffic, and environmental noise such as shadows and lighting inconsistencies. Traditional methods, including sliding-window searches and shallow learning techniques, struggle with computational inefficiency and robustness under dynamic conditions. To address these limitations, this study proposes a six-stage hierarchical framework integrating radiometric calibration, deep learning, and classical feature engineering. The workflow begins with radiometric calibration to normalize pixel intensities and mitigate sensor noise, followed by Conditional Random Field (CRF) segmentation to isolate vehicles. YOLOv9, equipped with a bi-directional feature pyramid network (BiFPN), ensures precise multi-scale object detection. Hybrid feature extraction employs Maximally Stable Extremal Regions (MSER) for stable contour detection, Binary Robust Independent Elementary Features (BRIEF) for texture encoding, and Affine-SIFT (ASIFT) for viewpoint invariance. Quadratic Discriminant Analysis (QDA) enhances feature discrimination, while a Probabilistic Neural Network (PNN) performs Bayesian probability-based classification. Tested on the Roundabout Aerial Imagery (15,474 images, 985K instances) and AU-AIR (32,823 instances, 7 classes) datasets, the model achieves state-of-the-art accuracy of 95.54% and 94.14%, respectively. Its superior performance in detecting small-scale vehicles and resolving occlusions highlights its potential for intelligent traffic systems. Future work will extend testing to nighttime and adverse weather conditions while optimizing real-time UAV inference.
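The final PNN stage is, in essence, a Parzen-window density estimator per class combined with a Bayes decision rule. A minimal sketch follows, assuming equal class priors; the Gaussian width `sigma` is a hypothetical smoothing parameter, and the inputs stand in for the QDA-transformed MSER/BRIEF/ASIFT features:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Probabilistic Neural Network: estimate each class-conditional density
    with a Gaussian Parzen window over the training points of that class,
    then pick the class with the highest density (equal priors assumed)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Unlike a trained neural network, a PNN has no weight fitting: the training set itself is the model, which is why it pairs naturally with a compact, well-discriminated feature space such as the QDA output the paper describes.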
Remote sensing plays a pivotal role in environmental monitoring, disaster relief, and urban planning, where accurate scene classification of aerial images is essential. However, conventional convolutional neural networks (CNNs) struggle with long-range dependencies and with preserving high-resolution features, limiting their effectiveness in complex aerial image analysis. To address these challenges, we propose a Hybrid HRNet-Swin Transformer model that synergizes the strengths of HRNet-W48 for high-resolution segmentation and the Swin Transformer for global feature extraction. This hybrid architecture ensures robust multi-scale feature fusion, capturing both fine-grained details and broader contextual relationships in aerial imagery. Our methodology begins with preprocessing steps, including normalization, histogram equalization, and noise reduction, to enhance input data quality. The HRNet-W48 backbone maintains high-resolution feature maps throughout the network, enabling precise segmentation, while the Swin Transformer leverages hierarchical self-attention to model long-range dependencies efficiently. By integrating these components, our model achieves superior performance in segmentation and classification tasks compared to traditional CNNs and standalone transformer models. We evaluate our approach on two benchmark datasets: UC Merced and WHU-RS19. Experimental results demonstrate that the proposed hybrid model outperforms existing methods, achieving state-of-the-art accuracy while maintaining computational efficiency. Specifically, it excels in preserving fine spatial details and contextual understanding, critical for applications like land-use classification and disaster assessment.
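The histogram-equalization preprocessing step is standard and easy to make concrete. A sketch for an 8-bit grayscale image, using the cumulative distribution function (CDF) as a lookup table:

```python
import numpy as np

def hist_equalize(gray):
    """Spread an 8-bit grayscale histogram over the full [0, 255] range by
    remapping each level through the normalized CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]           # first occupied bin
    if cdf[-1] == cdf_min:              # flat image: nothing to equalize
        return gray.copy()
    lut = np.clip((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min), 0, 255)
    return lut.round().astype(np.uint8)[gray]
```

The remapping pushes the darkest occupied level to 0 and the brightest to 255, which improves the contrast available to both the HRNet and Swin branches without altering image content.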
Inertial Sensor-based Daily Activity Recognition (IS-DAR) requires adaptable, data-efficient methods for effective multi-sensor use. This study presents an advanced detection system using body-worn sensors to accurately recognize activities. A structured pipeline enhances IS-DAR by applying signal preprocessing, feature extraction and optimization, followed by classification. Before segmentation, a Chebyshev filter removes noise, and Blackman windowing improves signal representation. Discriminative features, a Gaussian Mixture Model (GMM) with Mel-Frequency Cepstral Coefficients (MFCC), spectral entropy, quaternion-based features, and Gammatone Cepstral Coefficients (GCC), are fused to expand the feature space. Unlike existing approaches, the proposed IS-DAR system uniquely integrates diverse handcrafted features using a novel fusion strategy combined with Bayesian-based optimization, enabling more accurate and generalized activity recognition. The key contribution lies in the joint optimization and fusion of features via Bayesian-based subset selection, resulting in a compact and highly discriminative feature representation. These features are then fed into a Convolutional Neural Network (CNN) to effectively detect spatial-temporal patterns in activity signals. Testing on two public datasets, IM-WSHA and ENABL3S, achieved accuracy levels of 93.0% and 92.0%, respectively. The integration of advanced feature extraction methods with fusion and optimization techniques significantly enhanced detection performance, surpassing traditional methods. The obtained results establish the effectiveness of the proposed IS-DAR system for deployment in real-world activity recognition applications.
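The preprocessing chain (Chebyshev noise filtering, then Blackman windowing before segmentation) can be sketched with SciPy. The sampling rate, cutoff, order, and ripple below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def denoise_and_window(sig, fs=100.0, cutoff=20.0, order=4, ripple_db=0.5):
    """Zero-phase low-pass Chebyshev Type-I filtering (noise removal),
    followed by a Blackman window that tapers segment edges so spectral
    features computed next are not contaminated by edge discontinuities."""
    b, a = cheby1(order, ripple_db, cutoff / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, np.asarray(sig, dtype=float))
    return filtered * np.blackman(len(filtered))
```

`filtfilt` applies the filter forward and backward, so the denoised signal keeps zero phase lag, which matters when aligning inertial channels from multiple body-worn sensors.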
AIM: To investigate short-term changes in choroidal thickness in response to peripheral myopic defocus induced by two designs of multifocal corneal gas permeable contact lenses (MFGPCL) in young adults. METHODS: Seventeen participants, with a mean age of 24.5±4 years, underwent choroidal thickness and vascularity index measurements using enhanced depth imaging optical coherence tomography (EDI-OCT) at baseline, one day, and one week following MFGPCL wear. Two center-distance MFGPCL designs with similar center zone diameters of 3.0 mm but different peripheral add powers (low add: +1.5 D and high add: +3.0 D) were tested. Each participant was randomly assigned to wear one of the two MFGPCL designs. Measurements of total, luminal, and stromal choroidal thickness were obtained in five eccentric regions (6 mm towards the periphery) in all quadrants. RESULTS: Significant thickening in total choroidal thickness was observed after one week of wearing both high-add (+10±6 µm) and low-add (+7±5 µm) MFGPCLs, with no statistically significant difference between the two groups (P=0.42). Choroidal thickening was consistent across eccentric regions and quadrants, with no significant differences based on eccentricity or quadrant (all P>0.05). Both lens designs induced choroidal thickening, with no significant difference between them in total choroidal thickness (P=0.18 for quadrants, P=0.51 for eccentric regions). CONCLUSION: Peripheral myopic defocus induced by MFGPCLs leads to significant choroidal thickening, including the total, luminal, and stromal components. This study highlights the need for future research to explore the dose-response relationship between peripheral myopic defocus and choroidal thickening, utilizing the choroidal response as a potential biomarker.
Smart Grid is a power grid that improves flexibility, reliability, and efficiency through smart meters. Due to extensive data exchange over the Internet, the smart grid faces many security challenges that have led to data loss, data compromise, and high power consumption. Moreover, the lack of hardware protection and physical attacks reduce the overall performance of the smart grid network. We propose the BLIDSE model (Blockchain-based secure quantum key distribution and Intrusion Detection System in Edge-enabled Smart Grid Network) to address these issues. The proposed model includes five phases. The first phase is blockchain-based secure user authentication: all smart meters are first registered in the blockchain, the blockchain then generates a secret key, and during authentication the blockchain verifies that the user ID and secret key match those authorized to access the network; the secret key is shared during transmission through secure quantum key distribution (SQKD). The second phase is lightweight data encryption, for which we use a lightweight symmetric encryption algorithm named Camellia. The third phase is multi-constraint-based edge selection: data are transmitted to the control center through the edge server, which is also authenticated by the blockchain to enhance security during data transmission, and we propose a perfect matching algorithm for selecting the optimal edge. The fourth phase is a dual intrusion detection system that acts as a firewall to drop irrelevant packets; data packets are classified into normal, physical errors, and attacks by a Double Deep Q-Network (DDQN). The last phase is optimal user privacy management, in which smart meter updates and revocations are performed; for this we propose Forensic-Based Investigation optimization (FBI), which improves the security of the smart grid network. The simulation is performed using the network simulator NS3.26, which evaluates performance in terms of computational complexity, accuracy, false detection, and false alarm rate. The proposed BLIDSE model effectively mitigates cyber-attacks, thereby contributing to improved security in the network.
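The abstract does not specify the perfect matching algorithm used for edge selection, but the idea can be illustrated as a minimum-cost assignment (Hungarian method) over a hypothetical cost matrix that combines the multi-constraint terms (latency, load, distance) into one score per meter-edge pair:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def select_edges(cost):
    """cost[i, j]: combined constraint cost (e.g., weighted latency + load)
    of routing smart meter i through edge server j. Returns a minimum-cost
    perfect matching of meters to edge servers and its total cost."""
    c = np.asarray(cost, dtype=float)
    rows, cols = linear_sum_assignment(c)
    return list(zip(rows.tolist(), cols.tolist())), float(c[rows, cols].sum())
```

A matching formulation guarantees that no edge server is double-booked while the aggregate constraint cost stays minimal; the paper's actual weighting of constraints is not given here.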
Software development is transitioning from centralized version control systems (CVCSs) such as Subversion to distributed version control systems (DVCSs) such as Git, owing to the lower efficiency of the former in terms of branching, merging, time, space, offline commits and builds, and repository handling. Git holds a 77% share of total VCS usage, followed by Subversion with a 13.5% share, and the majority of software companies are migrating from Subversion to Git. Only a few migration tools are available in the software industry, and these lack many features: identifying empty directories as a pre-migration check, failover capabilities during migration in case of network failure or disk space issues, and detailed report generation as a post-migration step. In this work, a holistic, proactive, and novel approach is presented for pre-, during-, and post-migration validation from Subversion to Git. Many scripts have been developed and executed at run-time over various projects to overcome the limitations of existing migration tools. During pre-migration, none of the available migration tools can fetch the empty directories of Subversion, which results in an incomplete migration from Subversion to Git; scripts developed and executed for pre-migration validation and migration preparation overcome this problem of incomplete migration. Experimentation was conducted in the SRLC Software Research Lab, Chicago, USA. During the migration process, if the migration stops or breaks due to loss of network connection or any other reason, the available migration tools cannot resume from the point where they left off; scripts have been developed and executed to keep the migration revision history in an elastic cache so the migration can restart from the point where it stopped due to connection failure. During post-migration, none of the available version control migration tools generates a detailed report giving information about the total size of the source Subversion repositories, the total volume of data migrated to the destination Git repositories, the total number of pools migrated, the time taken for migration, the number of Subversion users with email notification, etc. Scripts have been developed and executed for this purpose during the post-migration process.
Diabetes Mellitus is one of the most severe diseases, and many studies have been conducted to anticipate diabetes. This research aimed to develop an intelligent machine learning-based mobile application that determines whether a user is diabetic, pre-diabetic, or non-diabetic without the assistance of a physician or medical tests. The methodology of this study is divided into two parts: the Diabetes Prediction Approach and the Proposed System Architecture Design. The Diabetes Prediction Approach uses a novel approach, the Light Gradient Boosting Machine (LightGBM), to ensure a faster diagnosis. The Proposed System Architecture Design combines seven modules; the Answering Question Module is a natural-language-processing chatbot that can answer all kinds of questions related to diabetes, and the Doctor Consultation Module ensures free treatment related to diabetes. In this research, 90% accuracy was obtained by performing K-fold cross-validation on top of the K-nearest neighbors algorithm (KNN) and LightGBM. To evaluate the model's performance, the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC) were applied, with values of 0.948 and 0.936, respectively. This manuscript also presents some exploratory data analysis, including a correlation matrix and a survey report. Moreover, the proposed solution can be adjusted to the daily activities of a diabetic patient.
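The evaluation protocol (K-fold cross-validation over a KNN classifier) can be sketched on synthetic data; the real diabetes feature table is not disclosed in the abstract, so the data, sizes, and label rule below are placeholders:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the diabetes feature table (real features undisclosed)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy label rule, two informative columns

# 5-fold cross-validated accuracy of a 5-nearest-neighbors classifier
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
mean_acc = scores.mean()
```

Reporting the mean over folds, as the paper's 90% figure does, is less sensitive to a lucky train/test split than a single hold-out accuracy.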
The recent unprecedented threat from COVID-19 and past epidemics, such as SARS, AIDS, and Ebola, has affected millions of people in multiple countries. Countries have shut their borders, and their nationals have been advised to self-quarantine. The variety of responses to the pandemic has given rise to data privacy concerns. Infection prevention and control strategies, as well as disease control measures, especially real-time contact tracing for COVID-19, require the identification of people exposed to COVID-19. Such tracing frameworks use mobile apps and geolocation to trace individuals. However, while the motive may be well intended, the limitations and security issues associated with such technology are a serious cause for concern. There are growing concerns regarding the privacy of an individual's location and personally identifiable information (PII) being shared with governments and/or health agencies. This study presents a real-time, trust-based contact tracing framework that operates without the use of an individual's PII, location sensing, or GPS logs. The focus of the proposed framework is to ensure real-time privacy by using the Bluetooth range of individuals to determine others within range. The research validates the trust-based framework using Bluetooth as practical and privacy-aware. Using our proposed methodology, personal information, health logs, and location data remain secure and cannot be abused. This research analyzes 100,000 tracing dataset records from 150 mobile devices to identify infected users and active users.
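The privacy claim rests on a general pattern used by Bluetooth-based tracing: devices broadcast rotating pseudonymous tokens instead of PII or GPS, record only the tokens they hear, and check exposure locally by set intersection after an infection report. A hedged sketch of that pattern (the token scheme below is illustrative, not the paper's exact protocol):

```python
import hashlib

def daily_token(device_secret: bytes, date: str) -> str:
    """Pseudonymous broadcast token: rotates daily and reveals no identity or
    location; only the holder of device_secret can link tokens across days."""
    return hashlib.sha256(device_secret + date.encode()).hexdigest()[:16]

def exposed(tokens_heard, infected_tokens) -> bool:
    """Local, on-device exposure check -- nothing about healthy users is
    ever uploaded; only infected users publish their own tokens."""
    return not set(tokens_heard).isdisjoint(infected_tokens)
```

Because the match happens on the user's own device against published tokens, neither governments nor health agencies learn the contact graph of uninfected users.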
Depression is a crippling affliction that affects millions of individuals around the world. In general, physicians screen patients for mental health disorders on a regular basis and treat them in collaboration with psychologists and other mental health experts, which results in lower costs and improved patient outcomes. However, this strategy can necessitate buy-in from a large number of people, as well as additional training and logistical considerations. Thus, machine learning algorithms were utilized to analyze and predict depression in patients based on information generally present in a medical file. The methodology of this study is divided into six parts: Proposed Research Architecture (PRA), Data Pre-processing Approach (DPA), Research Hypothesis Testing (RHT), Concentrated Algorithm Pipeline (CAP), Loss Optimization Stratagem (LOS), and Model Deployment Architecture (MDA). A null hypothesis and an alternative hypothesis are applied to test the RHT. In addition, an Ensemble Learning Approach (ELA) and Frequent Model Retraining (FMR) have been utilized to optimize the loss function, and feature importance interpretation is also delineated in this research. These forecasts could help individuals connect with expert mental health specialists more quickly and easily. According to the findings, 71% of people with depression and 80% of those without depression can be appropriately diagnosed. This study obtained 91% and 92% accuracy with the Random Forest (RF) and Extra Trees classifiers, respectively. After applying the Receiver Operating Characteristic (ROC) curve, however, 79% accuracy was found for RF, 81% for Extra Trees, and 82% for the eXtreme Gradient Boosting (XGBoost) algorithm. Several predictive factors for depression are also identified through statistical data analysis. Though additional effort is needed to develop a more accurate model, this model can be adapted in the healthcare sector for diagnosing depression.
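The ROC/feature-importance evaluation the paper describes can be reproduced in outline with a random forest on synthetic data. The clinical features are not listed in the abstract, so columns 0 and 2 below are arbitrary stand-ins for informative predictors and column 1 for a non-predictive one:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] - X[:, 2] > 0).astype(int)   # toy label: two informative columns

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])   # threshold-free metric
importances = clf.feature_importances_                   # per-feature contribution
```

AUC complements plain accuracy because it is threshold-free, which is why the paper's accuracy and ROC-derived figures differ; the importance vector is the kind of output behind its feature-importance interpretation.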
Sensors and physical-activity evaluation are quite limited in motion-based commercial devices: often only the smartwatch accelerometer is utilized, and only walking is investigated. A combination can perform better in terms of sensors, using sensors on both the smartwatch and the phone, i.e., accelerometer and gyroscope. For biometric efficiency, a diverse set of daily routine activities has been evaluated, along with biometric authentication. The results show that combining the computing capabilities of phone and watch can provide suitable biometric output based on the mentioned activities, indicating the high feasibility of continuous biometric analysis over average daily routine activities. In this research, sets of rules with real-valued attributes are evolved using a genetic algorithm. Real-valued genes encode the real-valued attributes, and new methods are presented for representing "don't cares" in the rules. Finding rule sets that maximize the number of correct classifications of the supervised inputs is treated as an optimization problem. We use the Pitt approach to machine learning (ML) with a genetic-based system that includes a resolution mechanism among competing rules within the same rule set. This enhances the efficiency of the overall system, as shown in the research.
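The genetic search over real-valued rules can be sketched as evolving interval rules (a lower and upper bound per attribute) with truncation selection and Gaussian mutation. This is a toy illustration of the general idea, not the paper's encoding: population size, mutation scale, and the synthetic data are assumptions, and the "don't care" and Pitt-style rule-set machinery is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def rule_fitness(rule, X, y):
    """A rule is a (2, d) array of interval bounds; it predicts class 1 when
    every attribute falls inside its interval. Fitness = accuracy."""
    lo = np.minimum(rule[0], rule[1])
    hi = np.maximum(rule[0], rule[1])
    pred = ((X >= lo) & (X <= hi)).all(axis=1).astype(int)
    return float((pred == y).mean())

def evolve_rules(X, y, pop_size=30, generations=40, mut_scale=0.1):
    d = X.shape[1]
    pop = rng.normal(size=(pop_size, 2, d))          # random initial intervals
    for _ in range(generations):
        fit = np.array([rule_fitness(r, X, y) for r in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]  # keep the best half
        children = parents + rng.normal(scale=mut_scale, size=parents.shape)
        pop = np.concatenate([parents, children])
    fit = np.array([rule_fitness(r, X, y) for r in pop])
    best = int(np.argmax(fit))
    return pop[best], float(fit[best])
```

In the Pitt approach each individual is a whole rule set rather than the single rule shown here, and a conflict-resolution mechanism arbitrates between competing rules inside one individual.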
Low back pain (LBP) is a morbid condition that afflicts many citizens in Europe. It has negatively impacted the European economy through man-days lost, with bed rest and forced inactivity being the usual LBP care and management steps. Direct models, which incorporate various regression analyses, have been applied to investigate this premise due to their simplicity of interpretation. However, such linear models fail to fully capture the interaction effects arising from a mix of nonlinear relationships and independent factors. In this paper, we discuss a system that aids decision-making regarding the best-suited support system for LBP, allowing individuals to obtain reinforcement and improvement in their self-management. Activities are monitored with the help of a wearable sensor, which supports their detection and their classification as activities that soothe or aggravate LBP and hence should or should not be performed. The system helps patients set their own boundaries and milestones with respect to suitable activities, and it also performs windowing and feature extraction. The present study is an empirical and comparative analysis of the most suitable activities that patients suffering from low back pain can select. The evaluation shows that the system can effectively distinguish between nine common daily activities and helps self-monitor these activities for the efficient management of LBP.
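The windowing and feature-extraction step mentioned above follows the usual wearable-sensor pattern: fixed-length overlapping windows over the raw stream, with a handful of time-domain features per window. A minimal sketch (window and step sizes, and the feature set, are hypothetical choices):

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Segment a 1-D sensor stream into overlapping windows and compute
    simple time-domain features per window: mean, std, min, max, and mean
    absolute first difference (a rough jerkiness measure)."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.abs(np.diff(w)).mean()])
    return np.array(feats)
```

Each row of the resulting matrix describes one window and becomes one training example for the activity classifier that separates the nine daily activities.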
Cyber Threat Intelligence (CTI) has gained massive attention as a way to collect hidden knowledge for a better understanding of the various cyber-attacks, eventually paving the way for predicting future attacks. Information exchange and collaborative sharing through different platforms make a significant contribution towards a global solution. While CTI and information exchange can help greatly in focusing and prioritizing the use of the large volume of complex information shared among different organizations, there remains a great challenge in effectively processing the large number of different Indicators of Threat (IoT) that appear regularly, and that challenge can be solved only through a collaborative approach. Collaborative approaches and intelligence sharing have become mandatory elements of threat processing worldwide. To cover the complete need for a definite standard of information exchange, various initiatives have been taken in the form of Threat Information Sharing Platforms (TISPs) such as MISP and formats such as STIX. This paper proposes a scoring model to address the decay of information shared within a TISP. The scoring model is implemented for the use case of detecting threat indicators in a phishing data network. The proposed method calculates the rate of decay of an attribute, through which early entries are removed.
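The core of such a scoring model, decaying an indicator's score with age and pruning entries once they fall below a cutoff, can be sketched as exponential decay. The half-life and cutoff values here are illustrative; the paper computes its decay rate per attribute:

```python
def indicator_score(base_score, age_days, half_life_days=30.0, cutoff=0.2):
    """Exponentially decay a threat indicator's score with age. Returns the
    decayed score, or None when it drops below the cutoff, signaling that
    the stale entry should be removed from the sharing platform."""
    score = base_score * 0.5 ** (age_days / half_life_days)
    return score if score >= cutoff else None
```

Decay-based scoring keeps a shared platform like MISP from drowning analysts in stale indicators: a phishing domain reported months ago matters far less than one seen today.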
Today, security is a major challenge for computer network companies that cannot defend against cyber-attacks. Numerous vulnerable factors increase security risks and cyber-attacks, including viruses, the internet, communications, and hackers. Internet of Things (IoT) devices are increasingly effective, the number of devices connected to the internet is constantly growing, and governments and businesses use these technologies to perform business activities effectively. However, the increasing use of technology also increases risks such as password attacks, social engineering, and phishing attacks. Humans play a major role in the field of cybersecurity: it is observed that more than 39% of security risks are related to the human factor, and 95% of successful cyber-attacks are caused by human error, most of them insider threats. The major human-factor issue in cybersecurity is a lack of user awareness of cyber threats. This study focuses on the human factor by surveying vulnerabilities and reducing risk through attention to human nature and reactions to different situations. The study highlights that most participants are not experienced with cybersecurity threats or with protecting their personal information. Moreover, the lack of awareness of the top three human-factor vulnerabilities in cybersecurity, namely phishing attacks, password attacks, and social engineering, is a major problem that needs to be addressed and reduced through proper awareness and training.
Web applications have become a widely accepted method of supporting the internet over the past decade. Since they have been successfully installed in business activities and there is a requirement for advanced functionalities, their configuration is growing and becoming more complicated. The growing demand and complexity also make these web applications a preferred target for intruders on the internet. Even with the support of security specialists, they remain highly problematic due to the complexity of penetration and code-reviewing methods, which require considering different testing patterns in both code reviewing and penetration testing. As a result, the number of hacked websites is increasing day by day. Most of these vulnerabilities occur due to incorrect input validation and a lack of output validation arising from poor programming practices or coding errors. Vulnerability scanners for web applications can detect some vulnerabilities through a dynamic approach; these are quite easy to use, but they often miss some of the unique critical vulnerabilities that only a static approach can find. Although static approaches are time-consuming, they can find complex vulnerabilities and improve developer knowledge of coding best practices. Many scanners combine dynamic and static approaches, and developers can select them based on their requirements and conditions. This research explores and details SQL injection, operating system command injection, path traversal, and cross-site scripting vulnerabilities through dynamic and static approaches. It also examines various security measures in web applications and selects five tools, based on their features, for scanning PHP and Java code, focusing on SQL injection, cross-site scripting, path traversal, and operating system command injection. Moreover, this research discusses how a cybersecurity tester or security developer can find vulnerabilities through dynamic and static approaches using manual and automated web vulnerability scanners.
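A static scanner's SQL-injection check often reduces to flagging queries assembled by string concatenation with untrusted input (`+` in Java, `.` in PHP). A deliberately naive sketch of that idea follows; the pattern and function names are illustrative, and real static analyzers parse the code rather than regex-match it:

```python
import re

# flag query/execute calls whose SQL string literal is immediately
# concatenated with something else -- a classic injection smell
SQLI_SMELL = re.compile(
    r"""(query|execute)\s*\(\s*["'][^"']*["']\s*[+.]""",
    re.IGNORECASE,
)

def scan_lines(lines):
    """Return 1-based line numbers whose source matches the injection smell."""
    return [i + 1 for i, line in enumerate(lines) if SQLI_SMELL.search(line)]
```

Note what the sketch misses: concatenation built up across several lines, prepared statements misused with dynamic table names, and injections reachable only at runtime; that gap is exactly why the paper pairs static review with dynamic penetration testing.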
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number (RGP2/367/46). This research is also supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding: Supported through Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R508), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Unmanned Aerial Vehicles (UAVs) have become indispensable for intelligent traffic monitoring, particularly in low-light conditions, where traditional surveillance systems struggle. This study presents a novel deep learning-based framework for nighttime aerial vehicle detection and classification that addresses the critical challenges of poor illumination, noise, and occlusions. Our pipeline integrates MSRCR enhancement with OPTICS segmentation to overcome low-light challenges, while YOLOv10 enables accurate vehicle localization. The framework employs GLOH and Dense-SIFT for discriminative feature extraction, optimized using the Whale Optimization Algorithm to enhance classification performance. A Swin Transformer-based classifier provides the final categorization, leveraging hierarchical attention mechanisms for robust performance. Extensive experimentation validates our approach, achieving detection mAP@0.5 scores of 91.5% (UAVDT) and 89.7% (VisDrone), alongside classification accuracies of 95.50% and 92.67%, respectively. These results outperform state-of-the-art methods by up to 5.10% in accuracy and 4.2% in mAP, demonstrating the framework’s effectiveness for real-time aerial surveillance and intelligent traffic management in challenging nighttime environments.
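MSRCR builds on the Retinex principle: subtract (in log space) a smoothed estimate of the illumination from the image, so dark regions are lifted toward the reflectance signal. A minimal multi-scale sketch, without the color-restoration term and with illustrative sigma values rather than the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30.0):
    """Core Retinex step underlying MSRCR: log(image) minus log(illumination),
    where the illumination is estimated with a Gaussian blur."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def msr(img, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: average the single-scale outputs."""
    return sum(single_scale_retinex(img, s) for s in sigmas) / len(sigmas)

dark = np.full((64, 64), 20.0)                  # uniformly dark nighttime frame
dark[20:40, 20:40] = 60.0                       # a brighter vehicle-like patch
out = msr(dark)
print(out.shape)  # (64, 64)
```

After the transform, the vehicle-like patch stands out from the background regardless of the overall illumination level, which is what makes the subsequent segmentation and detection stages viable at night.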
Funding: Supported through Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R508), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The research team thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama’a program, with the project code NU/GP/SERC/13/18-5.
Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly employed in traffic surveillance, urban planning, and infrastructure monitoring due to their cost-effectiveness, flexibility, and high-resolution imaging. However, vehicle detection and classification in aerial imagery remain challenging due to scale variations from fluctuating UAV altitudes, frequent occlusions in dense traffic, and environmental noise, such as shadows and lighting inconsistencies. Traditional methods, including sliding-window searches and shallow learning techniques, struggle with computational inefficiency and robustness under dynamic conditions. To address these limitations, this study proposes a six-stage hierarchical framework integrating radiometric calibration, deep learning, and classical feature engineering. The workflow begins with radiometric calibration to normalize pixel intensities and mitigate sensor noise, followed by Conditional Random Field (CRF) segmentation to isolate vehicles. YOLOv9, equipped with a bi-directional feature pyramid network (BiFPN), ensures precise multi-scale object detection. Hybrid feature extraction employs Maximally Stable Extremal Regions (MSER) for stable contour detection, Binary Robust Independent Elementary Features (BRIEF) for texture encoding, and Affine-SIFT (ASIFT) for viewpoint invariance. Quadratic Discriminant Analysis (QDA) enhances feature discrimination, while a Probabilistic Neural Network (PNN) performs Bayesian probability-based classification. Tested on the Roundabout Aerial Imagery (15,474 images, 985K instances) and AU-AIR (32,823 instances, 7 classes) datasets, the model achieves state-of-the-art accuracy of 95.54% and 94.14%, respectively. Its superior performance in detecting small-scale vehicles and resolving occlusions highlights its potential for intelligent traffic systems. Future work will extend testing to nighttime and adverse weather conditions while optimizing real-time UAV inference.
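The QDA stage fits one Gaussian per class, each with its own covariance, and classifies by Bayes' rule. A toy sketch with made-up two-dimensional "hybrid features" (the real pipeline would feed in MSER/BRIEF/ASIFT descriptors):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Illustrative 2-D features for two vehicle classes; not the paper's data.
rng = np.random.default_rng(0)
cars   = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
trucks = rng.normal([2.0, 2.0], 0.3, size=(50, 2))
X = np.vstack([cars, trucks])
y = np.array([0] * 50 + [1] * 50)

# QDA learns a quadratic decision boundary from per-class Gaussians.
qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(qda.score(X, y))  # well-separated toy classes -> accuracy near 1.0
```

Unlike linear discriminant analysis, QDA does not assume the classes share a covariance matrix, which is why it can sharpen discrimination between vehicle classes whose feature spreads differ.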
Funding: Supported by the IITP (Institute of Information & Communications Technology Planning & Evaluation)-ICAN (ICT Challenge and Advanced Network of HRD) grant (IITP-2025-RS-2022-00156326, 33) funded by the Korea government (Ministry of Science and ICT); by the Deanship of Research and Graduate Studies at King Khalid University, which funded this work through the Large Group Project under grant number (RGP2/568/45); and by the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, which funded this research work through the Project Number "NBU-FFR-2025-231-03".
Abstract: Remote sensing plays a pivotal role in environmental monitoring, disaster relief, and urban planning, where accurate scene classification of aerial images is essential. However, conventional convolutional neural networks (CNNs) struggle with long-range dependencies and preserving high-resolution features, limiting their effectiveness in complex aerial image analysis. To address these challenges, we propose a Hybrid HRNet-Swin Transformer model that synergizes the strengths of HRNet-W48 for high-resolution segmentation and the Swin Transformer for global feature extraction. This hybrid architecture ensures robust multi-scale feature fusion, capturing fine-grained details and broader contextual relationships in aerial imagery. Our methodology begins with preprocessing steps, including normalization, histogram equalization, and noise reduction, to enhance input data quality. The HRNet-W48 backbone maintains high-resolution feature maps throughout the network, enabling precise segmentation, while the Swin Transformer leverages hierarchical self-attention to model long-range dependencies efficiently. By integrating these components, our model achieves superior performance in segmentation and classification tasks compared to traditional CNNs and standalone transformer models. We evaluate our approach on two benchmark datasets: UC Merced and WHU-RS19. Experimental results demonstrate that the proposed hybrid model outperforms existing methods, achieving state-of-the-art accuracy while maintaining computational efficiency. Specifically, it excels in preserving fine spatial details and contextual understanding, critical for applications like land-use classification and disaster assessment.
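The histogram-equalization preprocessing step mentioned above can be sketched with a standard CDF-based lookup table; the tiny low-contrast image below is an assumption for illustration, not data from the paper.

```python
import numpy as np

def hist_equalize(img):
    """CDF-based histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the cumulative distribution becomes linear.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# An 8x8 image whose values span only 100..115 (very low contrast).
low_contrast = (np.arange(64).reshape(8, 8) // 4 + 100).astype(np.uint8)
eq = hist_equalize(low_contrast)
print(eq.min(), eq.max())  # 0 255
```

The equalized image spans the full 0..255 range, which is the contrast stretch that helps downstream feature extractors see fine detail.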
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number (RGP.2/568/45), and to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through the Project Number “NBU-FFR-2025-231-04”.
Abstract: Inertial Sensor-based Daily Activity Recognition (IS-DAR) requires adaptable, data-efficient methods for effective multi-sensor use. This study presents an advanced detection system using body-worn sensors to accurately recognize activities. A structured pipeline enhances IS-DAR by applying signal preprocessing, feature extraction and optimization, followed by classification. Before segmentation, a Chebyshev filter removes noise, and Blackman windowing improves signal representation. Discriminative features, namely a Gaussian Mixture Model (GMM) with Mel-Frequency Cepstral Coefficients (MFCC), spectral entropy, quaternion-based features, and Gammatone Cepstral Coefficients (GCC), are fused to expand the feature space. Unlike existing approaches, the proposed IS-DAR system uniquely integrates diverse handcrafted features using a novel fusion strategy combined with Bayesian-based optimization, enabling more accurate and generalized activity recognition. The key contribution lies in the joint optimization and fusion of features via Bayesian-based subset selection, resulting in a compact and highly discriminative feature representation. These features are then fed into a Convolutional Neural Network (CNN) to effectively detect spatial-temporal patterns in activity signals. Testing on two public datasets, IM-WSHA and ENABL3S, achieved accuracy levels of 93.0% and 92.0%, respectively. The integration of advanced feature extraction methods with fusion and optimization techniques significantly enhanced detection performance, surpassing traditional methods. The obtained results establish the effectiveness of the proposed IS-DAR system for deployment in real-world activity recognition applications.
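The Chebyshev denoising step might be sketched as follows; the filter order, passband ripple, cutoff, and sampling rate are illustrative guesses rather than the paper's settings.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

# Order-4 type-I Chebyshev low-pass: 1 dB passband ripple, 5 Hz cutoff,
# for a hypothetical 50 Hz inertial-sensor stream.
fs = 50.0
b, a = cheby1(N=4, rp=1.0, Wn=5.0 / (fs / 2), btype="low")

t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t)                 # 1 Hz "activity" component
noisy = signal + 0.5 * np.sin(2 * np.pi * 20.0 * t)  # 20 Hz sensor noise
clean = filtfilt(b, a, noisy)                        # zero-phase filtering

print(round(float(np.std(noisy - signal)), 3),
      round(float(np.std(clean - signal)), 3))
```

`filtfilt` runs the filter forward and backward, so the activity waveform keeps its timing, which matters when the filtered signal is later cut into windows for feature extraction.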
Funding: Supported by the Ongoing Research Funding Program (No. ORF-2025-1160), King Saud University, Riyadh, Saudi Arabia.
Abstract: AIM: To investigate short-term changes in choroidal thickness in response to peripheral myopic defocus induced by two designs of multifocal corneal gas permeable contact lenses (MFGPCL) in young adults. METHODS: Seventeen participants, with a mean age of 24.5±4 y, underwent choroidal thickness and vascularity index measurements using enhanced depth imaging optical coherence tomography (EDI-OCT) at baseline, one day, and one week following MFGPCL wear. Two center-distance MFGPCL designs with similar center zone diameters of 3.0 mm but different peripheral add powers (low add: +1.5 D and high add: +3.0 D) were tested. Each participant was randomly assigned to wear one of the two MFGPCL designs. Measurements of total, luminal, and stromal choroid thickness were obtained in five eccentric regions (6 mm towards the periphery) in all quadrants. RESULTS: Significant thickening in total choroidal thickness was observed after one week of wearing both high add (+10±6 µm) and low add (+7±5 µm) MFGPCLs, with no statistically significant difference between the two groups (P=0.42). Choroidal thickening was consistent across eccentric regions and quadrants, with no significant differences based on eccentricity or quadrant (all P>0.05). Both lens designs induced choroidal thickening, with no significant difference between them in total choroidal thickness (P=0.18 for quadrants, P=0.51 for eccentric regions). CONCLUSION: Peripheral myopic defocus induced by MFGPCLs leads to significant choroidal thickening, including total, luminal, and stromal components. This study highlights the need for future research to explore the dose-response relationship between peripheral myopic defocus and choroidal thickening, utilizing the choroidal response as a potential biomarker.
Funding: The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number No-R-2021-137.
Abstract: A Smart Grid is a power grid that improves flexibility, reliability, and efficiency through smart meters. Due to extensive data exchange over the Internet, the smart grid faces many security challenges that have led to data loss, data compromise, and high power consumption. Moreover, the lack of hardware protection and physical attacks reduce the overall performance of the smart grid network. We propose the BLIDSE model (Blockchain-based secure quantum key distribution and Intrusion Detection System in Edge-enabled Smart Grid Network) to address these issues. The proposed model includes five phases. The first phase is blockchain-based secure user authentication, where all smart meters are first registered in the blockchain, and then the blockchain generates a secret key. During authentication, the blockchain verifies that the user ID and secret key match those authorized to access the network. The secret key is shared during transmission through secure quantum key distribution (SQKD). The second phase is lightweight data encryption, for which we use a lightweight symmetric encryption algorithm named Camellia. The third phase is multi-constraint-based edge selection; the data are transmitted to the control center through the edge server, which is also authenticated by the blockchain to enhance security during data transmission. We propose a perfect matching algorithm for selecting the optimal edge. The fourth phase is a dual intrusion detection system, which acts as a firewall used to drop irrelevant packets; data packets are classified into normal, physical errors, and attacks by a Double Deep Q-Network (DDQN). The last phase is optimal user privacy management. In this phase, smart meter updates and revocations are done, for which we propose Forensic-based Investigation Optimization (FBI), which improves the security of the smart grid network. The simulation is performed using the network simulator NS3.26, which evaluates the performance in terms of computational complexity, accuracy, false detection, and false alarm rate. The proposed BLIDSE model effectively mitigates cyber-attacks, thereby contributing to improved security in the network.
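The perfect-matching edge-selection step is not detailed in the abstract, but a standard way to realize a minimum-cost perfect matching between smart meters and edge servers is the Hungarian algorithm; the cost matrix below is a made-up blend of hypothetical latency, load, and trust constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are smart meters, columns are edge servers; each entry is a single
# cost combining the multiple constraints (values are illustrative).
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.5],
])

# Hungarian algorithm: a perfect matching minimizing the total cost.
meters, edges = linear_sum_assignment(cost)
print(list(zip(meters.tolist(), edges.tolist())), cost[meters, edges].sum())
# [(0, 1), (1, 0), (2, 2)] with total cost 5.5
```

Note that the greedy choice (every meter picking its cheapest edge) would send meters 0 and 1 to the same server; the matching formulation avoids that overload by optimizing globally.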
Funding: The Deanship of Scientific Research at Majmaah University funded this work under Project No. (RGP-2019-26).
Abstract: Software development is transitioning from centralized version control systems (CVCSs), like Subversion, to decentralized version control systems (DVCSs), like Git, due to the lower efficiency of the former in terms of branching, fusion, time, space, merging, offline commits and builds, repositories, etc. Git holds a 77% share of total VCS usage, followed by Subversion with a share of 13.5%. The majority of software companies are migrating from Subversion to Git. Only a few migration tools are available in the software industry, and these lack many features, such as identifying empty directories as a pre-migration check, failover capabilities during migration in case of network failure or disk space issues, and detailed report generation as a post-migration step. In this work, a holistic, proactive, and novel approach is presented for pre-, during-, and post-migration validation from Subversion to Git. Many scripts have been developed and executed at run time over various projects to overcome the limitations of existing migration tools for a Subversion-to-Git migration. During pre-migration, none of the available migration tools can fetch the empty directories of Subversion, which results in an incomplete migration from Subversion to Git. Many scripts have been developed and executed for pre-migration validation and migration preparation, which overcomes the problem of incomplete migration. Experimentation was conducted in the SRLC Software Research Lab, Chicago, USA. During the migration process, if migration stops or breaks due to loss of network connection or any other reason, the available migration tools cannot resume from the point where they left off. Various scripts have been developed and executed to keep the migration revision history in a cache (elastic cache) so that migration can restart from the point where it stopped due to connection failure. During post-migration, none of the available version control migration tools generates a detailed report giving information about the total size of the source Subversion repositories, the total volume of data migrated to destination repositories in Git, the total number of pools migrated, the time taken for migration, the number of Subversion users with email notification, etc. Various scripts have been developed and executed for this purpose during the post-migration process.
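The empty-directory pre-migration check can be sketched in a few lines: Subversion versions directories directly, while Git only tracks files, so empty directories must be found (and typically given a `.gitkeep` placeholder) before migration. This sketch scans a working-copy tree; it is an illustration, not the authors' script.

```python
import os
import tempfile

def find_empty_dirs(root):
    """Pre-migration check: list directories with no files anywhere below
    them. Subversion tracks such directories; Git would silently drop them."""
    empty = []
    # topdown=False visits children first, so subdirectory status is known.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        has_content = bool(filenames) or any(
            os.path.join(dirpath, d) not in empty for d in dirnames
        )
        if not has_content:
            empty.append(dirpath)
    return empty

# Demo tree: trunk/empty1 has nothing, trunk/src holds a file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "trunk", "empty1"))
os.makedirs(os.path.join(root, "trunk", "src"))
open(os.path.join(root, "trunk", "src", "a.c"), "w").close()
print([os.path.relpath(p, root) for p in find_empty_dirs(root)])
```

Directories whose only contents are other empty directories are reported too, since they would also vanish from the Git side of the migration.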
Abstract: Diabetes Mellitus is one of the most severe diseases, and many studies have been conducted to anticipate diabetes. This research aimed to develop an intelligent mobile application based on machine learning to determine whether a user is diabetic, pre-diabetic, or non-diabetic without the assistance of any physician or medical tests. This study’s methodology was divided into two parts: the Diabetes Prediction Approach and the Proposed System Architecture Design. The Diabetes Prediction Approach uses a novel technique, the Light Gradient Boosting Machine (LightGBM), to ensure a faster diagnosis. The Proposed System Architecture Design comprises seven modules; the Answering Question Module is a natural language processing chatbot that can answer all kinds of questions related to diabetes, and the Doctor Consultation Module ensures free treatment related to diabetes. In this research, 90% accuracy was obtained by performing K-fold cross-validation on top of the K-Nearest Neighbors algorithm (KNN) and LightGBM. To evaluate the model’s performance, the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC) were applied, with values of 0.948 and 0.936, respectively. This manuscript presents some exploratory data analysis, including a correlation matrix and survey report. Moreover, the proposed solution can be adjusted to the daily activities of a diabetic patient.
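The K-fold validation step can be sketched with scikit-learn; to keep the example dependency-light it uses KNN (also part of the paper's pipeline) on synthetic stand-in data rather than LightGBM on the actual diabetes survey dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for tabular patient features (illustrative only).
X, y = make_classification(n_samples=300, n_features=8, random_state=42)

# 5-fold cross-validation: each fold is held out once for evaluation,
# so the reported accuracy is not inflated by memorizing training rows.
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print(round(scores.mean(), 2))
```

Averaging accuracy across folds, as done here, is what distinguishes the paper's 90% K-fold figure from a single optimistic train/test split.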
Funding: The author would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2021-131.
Abstract: The recent unprecedented threat from COVID-19 and past epidemics, such as SARS, AIDS, and Ebola, has affected millions of people in multiple countries. Countries have shut their borders, and their nationals have been advised to self-quarantine. The variety of responses to the pandemic has given rise to data privacy concerns. Infection prevention and control strategies, as well as disease control measures, especially real-time contact tracing for COVID-19, require the identification of people exposed to COVID-19. Such tracing frameworks use mobile apps and geolocations to trace individuals. However, while the motive may be well intended, the limitations and security issues associated with using such technology are a serious cause of concern. There are growing concerns regarding the privacy of an individual’s location and personally identifiable information (PII) being shared with governments and/or health agencies. This study presents a real-time, trust-based contact-tracing framework that operates without the use of an individual’s PII, location sensing, or gathering of GPS logs. The focus of the proposed contact-tracing framework is to ensure real-time privacy by using the Bluetooth range of individuals to determine others within range. The research validates the trust-based framework using Bluetooth as practical and privacy-aware. Using our proposed methodology, personal information, health logs, and location data will be secure and not abused. This research analyzes 100,000 tracing dataset records from 150 mobile devices to identify infected users and active users.
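The paper's exact protocol is not given, but privacy-preserving Bluetooth tracing of this kind is commonly built on rotating anonymous tokens, as in this hypothetical sketch: devices broadcast short-lived derived tokens instead of identities, and exposure is checked locally against published tokens.

```python
import hashlib
import secrets

def daily_token(seed: bytes, day: int) -> str:
    """Derive a rotating anonymous broadcast token from a local secret seed.
    No PII or GPS is involved; only tokens heard over Bluetooth are stored."""
    return hashlib.sha256(seed + day.to_bytes(4, "big")).hexdigest()[:16]

# Device A broadcasts its token; device B logs every token it hears nearby.
seed_a = secrets.token_bytes(32)
heard_by_b = {daily_token(seed_a, day=100)}

# If A later tests positive, A publishes its recent tokens (not its
# identity); B checks its local log for a match, entirely on-device.
published = [daily_token(seed_a, day=d) for d in (99, 100, 101)]
exposed = any(tok in heard_by_b for tok in published)
print(exposed)  # True
```

Because the seed never leaves device A and tokens rotate, no central party can link the published tokens back to a person or a location history.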
Abstract: Depression is a crippling affliction that affects millions of individuals around the world. In general, physicians screen patients for mental health disorders on a regular basis and treat patients in collaboration with psychologists and other mental health experts, which results in lower costs and improved patient outcomes. However, this strategy can necessitate a lot of buy-in from a large number of people, as well as additional training and logistical considerations. Thus, utilizing machine learning algorithms, patients with depression were analyzed and predicted based on information generally present in a medical file. The methodology of this proposed study is divided into six parts: Proposed Research Architecture (PRA), Data Pre-processing Approach (DPA), Research Hypothesis Testing (RHT), Concentrated Algorithm Pipeline (CAP), Loss Optimization Stratagem (LOS), and Model Deployment Architecture (MDA). The Null Hypothesis and Alternative Hypothesis are applied to test the RHT. In addition, an Ensemble Learning Approach (ELA) and Frequent Model Retraining (FMR) have been utilized for optimizing the loss function. The Feature Importance Interpretation is also delineated in this research. These forecasts could help individuals connect with expert mental health specialists more quickly and easily. According to the findings, 71% of people with depression and 80% of those who do not have depression can be appropriately diagnosed. This study obtained 91% and 92% accuracy through the Random Forest (RF) and Extra Tree classifiers. However, after applying the Receiver Operating Characteristic (ROC) curve, 79% accuracy was found for RF, 81% for Extra Tree, and 82% for the eXtreme Gradient Boosting (XGBoost) algorithm. Several factors are also identified in terms of predicting depression through statistical data analysis. Though additional effort is needed to develop a more accurate model, this model can be adjusted for diagnosing depression in the healthcare sector.
Funding: The Deanship of Scientific Research at Majmaah University supported this work under Project Number RGP-2019-26.
Abstract: Sensors and physical activity evaluation are quite limited in motion-based commercial devices. Sometimes the accelerometer of a smartwatch is utilized and walking is investigated. A combination of sensors can perform better, determined by sensors on both the smartwatch and the phone, i.e., the accelerometer and gyroscope. For biometric efficiency, some of the diverse activities of the daily routine have been evaluated, along with biometric authentication. The results show that using the different computing techniques in phones and watches for biometrics can provide suitable output based on the mentioned activities. This indicates the high feasibility of continuous biometric analysis in terms of average daily routine activities. In this research, a set of rules with real-valued attributes is evolved using a genetic algorithm. With real-valued genes, the real-valued attributes can be encoded, and new methods are presented for representing don't-cares in the rules. Finding the rule sets that maximize the number of accurate classifications of inputs under supervised classification is viewed as an optimization problem. The Pitt approach to Machine Learning (ML) and a genetic-based system that includes a resolution mechanism among competing rules within the same rule set are utilized. This enhances the efficiency of the overall system, as shown in the research.
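A don't-care gene and a minimal mutate-and-keep-improvements loop illustrate the flavor of evolving real-valued rules; the encoding, fitness, and (1+1) evolution scheme below are simplified assumptions, not the paper's Pitt-style system.

```python
import random

DONT_CARE = None  # gene value meaning "this attribute is not tested"

def matches(rule, sample):
    """A rule is one (low, high) bound or DONT_CARE per attribute."""
    return all(g is DONT_CARE or g[0] <= x <= g[1] for g, x in zip(rule, sample))

def fitness(rule, data):
    """Reward matching class-1 samples, penalize matching class-0 samples."""
    return sum((1 if label == 1 else -1) for x, label in data if matches(rule, x))

random.seed(0)
data = [((random.uniform(0, 1),), 1) for _ in range(20)] + \
       [((random.uniform(2, 3),), 0) for _ in range(20)]

# Tiny (1+1) evolution loop: mutate the real-valued bounds, keep improvements.
rule = [(0.0, 5.0)]                      # starts by matching everything
for _ in range(200):
    mutant = [(max(0.0, rule[0][0] + random.gauss(0, 0.2)),
               rule[0][1] + random.gauss(0, 0.2))]
    if fitness(mutant, data) >= fitness(rule, data):
        rule = mutant
print(fitness(rule, data) >= fitness([(0.0, 5.0)], data))  # True
```

The don't-care value lets a rule ignore uninformative attributes entirely, which keeps evolved rule sets compact as the number of sensor features grows.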
Funding: The Deanship of Scientific Research at Majmaah University funded this work under Project No. RGP-2019-26.
Abstract: Low back pain (LBP) is a morbid condition that has afflicted several citizens in Europe. It has negatively impacted the European economy due to several man-days lost, with bed rest and forced inactivity being the usual LBP care and management steps. Direct models, which incorporate various regression analyses, have been executed for the investigation of this premise due to their simplicity of interpretation. However, such linear models fail to fully consider the effect of interactions brought about by a mix of nonlinear connections and independent factors. In this paper, we discuss a system that aids decision-making regarding the best-suited support system for LBP, allowing the individual to avail of reinforcement and improvement in its self-management. Activities are monitored with the help of a wearable sensor that helps in their detection and their classification as those that soothe or aggravate LBP and hence should or should not be performed. This system helps patients set their own boundaries and milestones with respect to suitable activities. The system also performs windowing and feature extraction. The present study is an empirical and comparative analysis of the most suitable activities that patients suffering from low back pain can select. The evaluation shows that the system can distinguish between nine common daily activities effectively and helps self-monitor these activities for the efficient management of LBP.
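The windowing and feature-extraction stage can be sketched as a sliding window over a sensor stream with simple per-window statistics; the window length, step, and feature set here are illustrative choices, not the paper's configuration.

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Split a 1-D sensor stream into fixed-length overlapping windows."""
    return np.array([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def window_features(w):
    """Simple per-window features: mean, std, and mean absolute magnitude."""
    return [w.mean(), w.std(), np.abs(w).mean()]

stream = np.sin(np.linspace(0, 8 * np.pi, 400))   # stand-in accelerometer axis
wins = sliding_windows(stream, size=100, step=50)  # 50% overlap
feats = np.array([window_features(w) for w in wins])
print(wins.shape, feats.shape)  # (7, 100) (7, 3)
```

Each feature row then represents one time slice of movement, so a classifier can label individual slices of the day as soothing or aggravating rather than judging whole recordings.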
Funding: The author extends their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under Project No. 1439-48.
Abstract: Cyber Threat Intelligence (CTI) has gained massive attention as a way to collect hidden knowledge for a better understanding of various cyber-attacks, eventually paving the way for predicting the future of such attacks. Information exchange and collaborative sharing through different platforms make a significant contribution towards a global solution. While CTI and information exchange can help a lot in focusing and prioritizing the use of the large volume of complex information among different organizations, there exists a great challenge in the effective processing of the large count of different Indicators of Threat (IoT) that appear regularly, which can be solved only through a collaborative approach. A collaborative approach and intelligence sharing have become mandatory elements in the entire world of threat processing. In order to cover the complete needs of a definite standard of information exchange, various initiatives have been taken by means of threat information sharing platforms (TISPs) such as MISP and formats such as STIX. This paper proposes a scoring model to address the decay of information shared within a TISP. The scoring model is implemented using the use case of detecting threat indicators in a phishing data network. The proposed method calculates the rate of decay of an attribute, through which early entries are removed.
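An exponential half-life decay is one plausible realization of such a scoring model; the functional form, half-life, and pruning threshold below are assumptions for illustration, not the paper's exact model.

```python
def decayed_score(base_score, age_days, half_life_days=30.0):
    """An indicator's score halves every half_life_days (illustrative model)."""
    return base_score * 0.5 ** (age_days / half_life_days)

def prune(indicators, threshold=10.0):
    """Drop indicators whose decayed score has fallen below the threshold."""
    return [i for i in indicators
            if decayed_score(i["score"], i["age_days"]) >= threshold]

# Hypothetical feed entries: same base score, very different ages.
feed = [
    {"ioc": "phish-domain-a", "score": 80.0, "age_days": 5},
    {"ioc": "phish-domain-b", "score": 80.0, "age_days": 120},
]
print([i["ioc"] for i in prune(feed)])  # ['phish-domain-a']
```

Pruning on decayed rather than raw scores is what lets a sharing platform retire stale phishing indicators automatically while keeping fresh ones actionable.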
Funding: The Deanship of Scientific Research at Majmaah University supported this work under Project Number No-R-14xx-4x.
Abstract: Today, security is a major challenge for computer network companies that cannot defend against cyber-attacks. Numerous vulnerable factors increase security risks and cyber-attacks, including viruses, the internet, communications, and hackers. Internet of Things (IoT) devices are becoming more effective, the number of devices connected to the internet is constantly increasing, and governments and businesses are also using these technologies to perform business activities effectively. However, the increasing use of technology also increases risks, such as password attacks, social engineering, and phishing attacks. Humans play a major role in the field of cybersecurity. It is observed that more than 39% of security risks are related to the human factor, and 95% of successful cyber-attacks are caused by human error, with most of them being insider threats. The major human-factor issue in cybersecurity is a lack of user awareness of cyber threats. This study focuses on the human factor by surveying vulnerabilities and reducing risk by focusing on human nature and reactions to different situations. The study highlights that most participants are not experienced with cybersecurity threats or with protecting their personal information. Moreover, the lack of awareness of the top three human-factor vulnerabilities in cybersecurity, namely phishing attacks, password attacks, and social engineering, is a major problem that needs to be addressed and reduced through proper awareness and training.
Funding: The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number No-R-14xx-4x.
Abstract: Web applications have become a widely accepted method to support the internet over the past decade. Since they have been successfully installed in business activities and there is a requirement for advanced functionalities, configurations are growing and becoming more complicated. The growing demand and complexity also make these web applications a preferred target for intruders on the internet. Even with the support of security specialists, they remain highly problematic due to the complexity of penetration and code reviewing methods. This requires considering different testing patterns in both code reviewing and penetration testing. As a result, the number of hacked websites is increasing day by day. Most of these vulnerabilities also occur due to incorrect input validation and lack of result validation arising from lousy programming practices or coding errors. Vulnerability scanners for web applications can detect a few vulnerabilities in a dynamic approach. These are quite easy to use; however, they often miss some of the unique critical vulnerabilities that a different, static approach can find. Although static approaches are time-consuming, they can find complex vulnerabilities and improve developer knowledge in coding and best practices. Many scanners combine dynamic and static approaches, and developers can select them based on their requirements and conditions. This research explores and provides details of SQL injection, operating system command injection, path traversal, and cross-site scripting vulnerabilities through dynamic and static approaches. It also examines various security measures in web applications and selects five tools, based on their features, for scanning PHP and Java code, focusing on SQL injection, cross-site scripting, path traversal, and operating system command injection. Moreover, this research discusses the approach of a cybersecurity tester or security developer finding vulnerabilities through dynamic and static approaches using manual and automated web vulnerability scanners.
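The SQL-injection class of vulnerability discussed above comes down to mixing untrusted input into query text; a minimal demonstration with Python's built-in sqlite3 shows the vulnerable pattern next to the parameterized fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

payload = "' OR '1'='1"   # classic injection input

# Vulnerable: string concatenation lets the payload rewrite the query,
# turning a single-user lookup into "WHERE name = '' OR '1'='1'".
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: a parameterized query treats the payload as plain data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()

print(len(unsafe), len(safe))  # 2 0 -> the injection returned every row
```

This is exactly the incorrect-input-validation failure mode the abstract describes: a static scanner flags the concatenated query pattern in source code, while a dynamic scanner finds it by sending payloads like the one above.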