Medical images play a crucial role in diagnosis, treatment procedures and overall healthcare. Nevertheless, they also pose substantial risks to patient confidentiality and safety. Safeguarding the confidentiality of patients' data has become an urgent and practical concern. We present a novel approach for reversible data hiding in colour medical images. In a hybrid domain, we employ AlexNet, tuned with the watershed transform (WST) and L-shaped fractal Tromino encryption. Our approach commences by constructing the host image's feature vector using a pre-trained AlexNet model. Next, we use the watershed transform to convert the extracted feature vector into a topographic-map vector, which we then encrypt using an L-shaped fractal Tromino cryptosystem. We embed the secret image in the transformed image vector using a histogram-based embedding strategy to enhance payload and visual fidelity. In the absence of attacks, RDHNet exhibits robust performance, can be fully reversed to the original image and maintains a visually appealing stego image, with an average PSNR of 73.14 dB, an SSIM of 0.9999 and perfect values of NC = 1 and BER = 0. The proposed RDHNet demonstrates a robust ability to withstand detrimental geometric and noise-adding attacks as well as various steganalysis methods. Furthermore, our RDHNet method demonstrates efficacy in tackling contemporary confidentiality issues.
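The abstract names a histogram-based embedding strategy without detailing it. As a hedged illustration only, the following minimal sketch shows textbook histogram-shifting reversible embedding, not necessarily RDHNet's exact strategy; the bit capacity equals the pixel count of the peak gray level, and an empty histogram bin is assumed to exist:

```python
import numpy as np

def embed_histogram_shift(cover, bits):
    # cover: 2-D uint8 array; bits: 0/1 payload, at most hist[peak] bits;
    # assumes an empty histogram bin exists (needed for exact reversibility)
    hist = np.bincount(cover.ravel(), minlength=256)
    peak = int(hist.argmax())                    # bin that carries the payload
    zero = int(hist.argmin())                    # empty bin absorbing the shift
    stego = cover.astype(np.int16)
    if peak < zero:
        stego[(stego > peak) & (stego < zero)] += 1   # free the bin at peak+1
        step = 1
    else:
        stego[(stego > zero) & (stego < peak)] -= 1   # free the bin at peak-1
        step = -1
    ys, xs = np.nonzero(cover == peak)           # each peak pixel holds one bit
    for (y, x), b in zip(zip(ys, xs), bits):
        if b:
            stego[y, x] += step                  # peak +/- 1 encodes a 1
    return stego.astype(np.uint8), peak, zero    # (peak, zero) is the side info
```

Extraction reverses the steps: pixels at peak ± 1 decode a 1, pixels at the peak decode a 0, and the shifted range is moved back, restoring the cover exactly.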
The continual growth of the use of technological appliances during the COVID-19 pandemic has resulted in a massive volume of data flow on the Internet, as many employees have transitioned to working from home. Furthermore, with the increase in the adoption of encrypted data transmission by many people who tend to use a Virtual Private Network (VPN) or the Tor Browser (dark web) to keep their data private and hidden, network traffic encryption is rapidly becoming a universal approach. This affects and complicates the quality of service (QoS), traffic monitoring, and network security provided by Internet Service Providers (ISPs), particularly for analysis and anomaly detection approaches based on the nature of network traffic. Categorizing encrypted traffic is one of the most challenging issues introduced by VPNs, which are used to bypass censorship as well as to gain access to geo-locked services. Therefore, an efficient approach is needed that can identify encrypted network traffic and extract and select valuable features, improving quality of service and network management as well as oversight of overall performance. In this paper, the classification of network traffic into VPN and non-VPN traffic is studied based on the efficiency of time-based features extracted from network packets. The paper suggests two machine learning models that categorize network traffic into encrypted and non-encrypted traffic. The proposed models utilize statistical features (SF), Pearson Correlation (PC), and a Genetic Algorithm (GA), preprocessing the traffic samples into net-flow traffic to accomplish the experiment's objectives. The GA-based method utilizes a stochastic search inspired by natural genetics and biological evolution to extract essential features, while the PC-based method performs well in removing highly correlated features of network traffic. With a microsecond per-packet prediction time, the best model achieved an accuracy of more than 95.02 percent on the most demanding traffic classification task, a drop in accuracy of only 2.37 percent in comparison to the full statistical-feature-based machine learning approach. This is extremely promising for the development of real-time traffic analyzers.
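As a rough sketch of the GA-based feature selection described above, a generic wrapper around a classifier, with `X` (time-based flow features) and `y` (VPN/non-VPN labels) as assumed inputs and population/mutation settings chosen arbitrarily:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # score a feature subset by cross-validated accuracy
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5            # random bit masks
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        order = scores.argsort()[::-1]
        parents = population[order[: pop // 2]]        # truncation selection
        cuts = rng.integers(1, n, size=pop // 2)       # one-point crossover
        kids = np.array([np.concatenate([a[:c], b[c:]])
                         for a, b, c in zip(parents, np.roll(parents, 1, 0), cuts)])
        kids ^= rng.random(kids.shape) < p_mut         # bit-flip mutation
        population = np.vstack([parents, kids])
    return max(population, key=lambda m: fitness(m, X, y))
```

The returned boolean mask selects the feature columns that survive the evolutionary search.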
The defense in depth methodology was popularized in the early 2000s amid growing concerns for information security; this paper will address the shortcomings of early implementations. In the last two years, many supporters of the defense in depth security methodology have changed their allegiance to an offshoot method dubbed the defense in breadth methodology. A substantial portion of this paper's body will be devoted to comparing real-world usage scenarios and discussing the flaws in each method. A major goal of this publication will be to assist readers in selecting a method that will best benefit their personal environment. Scenarios certainly exist where one method may be clearly favored; this article will help identify the factors that make one method a clear choice over another. This paper will strive not only to highlight key strengths and weaknesses for the two strategies listed, but also to provide the evaluation techniques necessary for readers to apply to other popular methodologies in order to make the most appropriate personal determinations.
This paper presents an investigation to evaluate the reading speed and reading comprehension of non-native English-speaking students by presenting a simple analytical model. For this purpose, various readability software tools were used to estimate the average grade level of the given texts. The relationship between the score obtained by the students and their reading speed for texts at average grade levels 9 and 14, using font sizes 12 and 14, is presented. The experimental results show that the relationship between reading speed and score may be explained by a linear regression: reading speed decreases as the score decreases, and students with a higher reading speed scored better marks. More importantly, we find that the reading speed of our students is lower than that of native English speakers. This approach of modeling the readability in linear form significantly simplifies the readability analysis.
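A minimal sketch of the linear model described, with made-up (speed, score) pairs standing in for the measured data:

```python
import numpy as np

# hypothetical (reading speed in words per minute, test score) pairs
speed = np.array([110., 125., 140., 155., 170., 185.])
score = np.array([52., 58., 63., 70., 74., 81.])

# least-squares fit: score ~ a * speed + b
a, b = np.polyfit(speed, score, 1)
print(f"score ~ {a:.3f} * speed + {b:.2f}")

pred = a * 150 + b       # predicted score for a 150-wpm reader
print(f"predicted score at 150 wpm: {pred:.1f}")
```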
Healthcare systems nowadays depend on IoT sensors for sending data over the internet as a common practice. Encryption of medical images is very important to secure patient information, but encrypting these images consumes a lot of time on edge computing; therefore, the use of an auto-encoder for compression before encoding solves such a problem. In this paper, we use an auto-encoder to compress a medical image before encryption, and the encrypted output (vector) is sent out over the network. On the other side, a decoder is used to reproduce the original image after the vector is received and decrypted. Two convolutional neural networks were built to evaluate our proposed approach: the first is the auto-encoder, which is utilized to compress and encrypt the images, and the other assesses the classification accuracy of the image after decryption and decoding. Different hyperparameters of the encoder were tested, followed by classification of the image to verify that no critical information was lost and to test the encryption and encoding resolution. In this approach, sixteen hyperparameter permutations are utilized, but this research discusses three main cases in detail. The first case shows that the combination of Mean Squared Logarithmic Error (MSLE), Adagrad, two layers for the auto-encoder, and ReLU had the best auto-encoder results, with a Mean Absolute Error (MAE) of 0.221 after 50 epochs and 75% classification accuracy, the best result for the classification algorithm. The second case shows the reflection of auto-encoder results on the classification results: the combination of Mean Squared Error (MSE), RMSprop, three layers for the auto-encoder, and ReLU had the best classification accuracy of 65%, while the auto-encoder gives an MAE of 0.31 after 50 epochs. The third case is the worst: the combination of the hinge loss, RMSprop, three layers for the auto-encoder, and ReLU, providing an accuracy of 20% and an MAE of 0.485.
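A minimal Keras sketch of the compress-then-encrypt pipeline under the abstract's best-case settings (MSLE loss, Adagrad, two encoder layers, ReLU); the input size and layer widths are assumptions, not the paper's values:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two-layer ReLU auto-encoder; 64x64 grayscale input is an assumption.
inp = keras.Input(shape=(64 * 64,))
h = layers.Dense(256, activation="relu")(inp)
code = layers.Dense(64, activation="relu")(h)        # compressed vector to encrypt
h2 = layers.Dense(256, activation="relu")(code)
out = layers.Dense(64 * 64, activation="sigmoid")(h2)

autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adagrad",
                    loss="mean_squared_logarithmic_error")

encoder = keras.Model(inp, code)   # produces the vector sent over the network
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=32)
# z = encoder.predict(x)  ->  encrypt z, transmit, decrypt, then decode
```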
The advent of the COVID-19 pandemic has adversely affected the entire world and has put forth high demand for techniques that remotely manage crowd-related tasks. Video surveillance and crowd management using video analysis techniques have significantly impacted today's research, and numerous applications have been developed in this domain. This research proposed an anomaly detection technique applied to Umrah videos in the Kaaba during the COVID-19 pandemic through sparse crowd analysis. Managing the Kaaba rituals is crucial since the crowd gathers from around the world and requires proper analysis during these days of the pandemic. The Umrah videos are analyzed, and a system is devised that can track and monitor the crowd flow in the Kaaba. The crowd in these videos is sparse due to the pandemic, and we have developed a technique to track the maximum crowd flow and detect any object (person) moving in a direction unlike that of the major flow. We detect abnormal movement by creating histograms for the vertical and horizontal flows and applying thresholds to identify the non-majority flow. Our algorithm aims to analyze the crowd through video surveillance and detect any abnormal activity in a timely manner to maintain a smooth crowd flow in the Kaaba during the pandemic.
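A hedged sketch of the flow-histogram idea: dense optical flow between consecutive frames, histograms of its horizontal and vertical components, and a threshold that flags motion far from the majority flow (the flow estimator and threshold here are assumptions, not the paper's settings):

```python
import cv2
import numpy as np

def abnormal_mask(prev_gray, gray, frac=0.1):
    # dense optical flow between consecutive grayscale frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]
    # histograms of the horizontal and vertical flow components
    hx, ex = np.histogram(fx, bins=16)
    hy, ey = np.histogram(fy, bins=16)
    # the peak bins represent the majority flow direction
    major_fx = ex[hx.argmax()]
    major_fy = ey[hy.argmax()]
    # flag pixels whose motion deviates strongly from the majority flow
    dev = np.hypot(fx - major_fx, fy - major_fy)
    return dev > frac * dev.max()        # threshold choice is an assumption
```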
Enhancing the security of Wireless Sensor Networks (WSNs) improves the usability of their applications. Therefore, finding solutions to various attacks, such as the blackhole attack, is crucial for the success of WSN applications. This paper proposes an enhanced version of the AODV (Ad Hoc On-Demand Distance Vector) protocol capable of detecting blackholes and malfunctioning benign nodes in WSNs, thereby avoiding them when delivering packets. The proposed version employs a network-based reputation system to select the best and most secure path to a destination. To achieve this goal, the proposed version utilizes the Watchdogs/Pathrater mechanisms in AODV to gather and broadcast reputations to all network nodes to build the network-based reputation system. To minimize the network overhead of the proposed approach, the paper uses reputation aggregator nodes only for forwarding reputation tables. Moreover, to reduce the overhead of updating reputation tables, the paper proposes three mechanisms, which are the prompt broadcast, the regular broadcast, and the light broadcast approaches. The proposed enhanced version has been designed to perform effectively in dynamic environments such as mobile WSNs, where nodes, including blackholes, move continuously, which is considered a challenge for other protocols. Using the proposed enhanced protocol, a node evaluates the security of different routes to a destination and can select the most secure routing path. The paper provides an algorithm that explains the proposed protocol in detail and demonstrates a case study that shows the operations of calculating and updating reputation values when nodes move across different zones. Furthermore, the paper discusses the proposed approach's overhead analysis to prove the proposed enhancement's correctness and applicability.
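A simplified sketch of Watchdog/Pathrater-style reputation handling, not the paper's exact broadcast mechanisms: each node smooths observed forwarding behavior into a score and rates a candidate route by its weakest hop:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    nid: int
    reputation: dict = field(default_factory=dict)   # neighbor id -> score in [0, 1]

    def observe(self, neighbor, forwarded, alpha=0.2):
        # Watchdog-style update: reward observed forwarding, punish drops
        old = self.reputation.get(neighbor, 0.5)
        self.reputation[neighbor] = (1 - alpha) * old + alpha * (1.0 if forwarded else 0.0)

def path_score(reputation, path):
    # Pathrater-style score: a route is only as trustworthy as its worst hop
    return min(reputation.get(hop, 0.5) for hop in path)

def best_route(node, routes):
    # choose the most secure of several candidate AODV routes
    return max(routes, key=lambda p: path_score(node.reputation, p))
```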
Related to the growth of data sharing on the Internet and the widespread use of digital media, multimedia security and copyright protection have become of broad interest. Visual cryptography (VC) is a method of sharing a secret image between a group of participants, where certain groups of participants are defined as qualified and may combine their shares of the image to obtain the original, while certain other groups are defined as prohibited, and even if they combine knowledge of their parts, they cannot obtain any information on the secret image. Visual cryptography is one of the techniques used to transmit a secret image under a cover picture, and its decryption relies on the human visual system. Originally, black-and-white images were used as hidden images. To achieve the owner's copyright security based on visual cryptography, a watermarking algorithm is presented. We suggest an approach in this paper to hide multiple images in a video by meaningful shares using one binary share. With a common share, which we refer to as a smart key, we can decrypt several images simultaneously; depending on a given share, the smart key decrypts several hidden images. The smart key is printed on a transparency, the shares are embedded in the video, and decryption is performed by physically superimposing the transparency on the video. We test the proposed method using binary, grayscale, and color images.
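For illustration, a minimal implementation of the classic Naor-Shamir (2,2) visual cryptography scheme with 2x2 subpixel expansion; the paper's multi-image smart-key construction builds on the same stacking principle but is not reproduced here:

```python
import numpy as np

# the two complementary 2x2 subpixel patterns of the (2,2) scheme
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
rng = np.random.default_rng(1)

def make_shares(secret):
    # secret: 2-D array of 0 (white) / 1 (black)
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            # white pixel -> identical blocks; black -> complementary blocks
            s2[2*y:2*y+2, 2*x:2*x+2] = p if secret[y, x] == 0 else 1 - p
    return s1, s2

def stack(s1, s2):
    # physical superposition of transparencies is a pixel-wise OR
    return s1 | s2
```

Stacking the shares makes black secret pixels fully black and white ones half-black, so the secret becomes visible to the eye while each share alone is random noise.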
This empirical study focused on investigating perceived trust, surrogated by a number of hypothesized factors, and its effect on the choice of method of payment. The data were collected using a questionnaire as the instrument for primary data collection, with a total of 214 responses collected from customers of MarkaVIP. A structural equation modeling technique was used to fully analyze the data in order to determine the level of relationship between the constituting factors of perceived trust and the method of payment. The main findings relate to testing the seven main hypotheses of the research, which examined whether certain factors were important in forming customers' perceived trust. Four factors (reputation, security, familiarity, and ease of use) were found to have a positive effect, while the remaining three (privacy, size, and usefulness) were not. In addition, having perceived trust meant no preference for any method of payment by the customers.
Mapping and assessment of the mangrove environment are crucial since the mangrove has an important role in the process of human-environment interaction. Indonesia alone holds 25% of South East Asia's mangroves, and these are under threat. Recognizing the availability and capability of new Landsat sensors, this study investigates the use of Landsat 7 ETM+ and Landsat 8 data, acquired in 2002 and 2013 respectively, for assessing the extent of mangroves along the South Sulawesi coastline. For each year, a supervised classification of the mangrove was performed using the open-source GRASS GIS software. The resulting maps were then compared to quantify the change. Fieldwork activities were conducted and confirmed the changes that occurred in the study area. Considering the accuracy assessment, our study shows that RGB composite color supervised classification performs better than band-ratio supervised classification. By linking the open-source software with Landsat data and the Google Earth satellite imagery available in the public domain, mangrove forest conversion and change can be observed remotely. Ground-truth surveys confirmed that the decrease in mangrove forest is due to the expansion of fishpond activity. This technique could potentially allow rapid, inexpensive remote monitoring of cascading, indirect effects of human activities on mangrove forests.
As the trend to use the latest machine learning models to automate requirements engineering processes continues, security requirements classification is turning into one of the most researched fields in the software engineering community. Previous literature studies have proposed numerous models for the classification of security requirements. However, adopting those models is constrained by the lack of essential datasets permitting the repetition and generalization of studies employing more advanced machine learning algorithms. Moreover, most researchers focus only on the classification of requirements containing security keywords; they do not consider other non-functional requirements (NFRs) directly or indirectly related to security. This has been identified as a significant research gap in security requirements engineering. The major objective of this study is to propose a security requirements classification model that categorizes security and other security-relevant requirements. We use PROMISE_exp and DOSSPRE, the two most commonly used datasets in the software engineering community. The proposed methodology consists of two steps. In the first step, we analyze all the non-functional requirements and their relation to security requirements; we found 10 NFRs that have a strong relationship with security requirements. In the second step, we categorize those NFRs in the security requirements category. Our proposed methodology is a hybrid model based on the Convolutional Neural Network (CNN) and Extreme Gradient Boosting (XGBoost) models. Moreover, we evaluate the model by replacing the requirement-type column in the dataset with a binary classification column to classify the requirements into security and non-security categories. Performance is evaluated using four metrics: recall, precision, accuracy, and F1 score, with 20 and 28 epochs and a batch size of 32 for the PROMISE_exp and DOSSPRE datasets, achieving 87.3% and 85.3% accuracy, respectively. The proposed study shows an improvement in metric values compared to previous literature studies and serves as a proof of concept for systematizing the evaluation of security recognition in software systems during the early phases of software development.
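A hedged sketch of a CNN+XGBoost hybrid of the kind described: a small convolutional network turns token sequences into feature vectors, and XGBoost classifies them as security or non-security (vocabulary size, sequence length, and layer sizes are assumptions, not the paper's configuration):

```python
from tensorflow import keras
from tensorflow.keras import layers
from xgboost import XGBClassifier

# CNN feature extractor over padded token-id sequences
inp = keras.Input(shape=(100,))
x = layers.Embedding(input_dim=10000, output_dim=64)(inp)
x = layers.Conv1D(128, 5, activation="relu")(x)
feats = layers.GlobalMaxPooling1D()(x)
extractor = keras.Model(inp, feats)

def train_hybrid(X_tokens, y):
    # extract features with the CNN (randomly initialized here; the paper
    # presumably trains it first), then boost on those features
    F = extractor.predict(X_tokens)
    clf = XGBClassifier(n_estimators=200, max_depth=4)
    clf.fit(F, y)          # binary: security vs. non-security requirement
    return clf
```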
Surveillance systems can take various forms, but gait-based surveillance is emerging as a powerful approach due to its ability to identify individuals without requiring their cooperation. In existing studies, several approaches have been suggested for gait recognition; nevertheless, the performance of existing systems is often degraded in real-world conditions by covariate factors such as occlusions, clothing changes, walking speed, and varying camera viewpoints. Furthermore, most existing research focuses on single-person gait recognition; counting, tracking, detecting, and recognizing individuals in dual-subject settings with occlusions remains a challenging task. Therefore, this research proposes a variant of an automated gait model for occluded dual-subject walk scenarios. More precisely, in the proposed method we have designed a deep learning (DL)-based dual-subject gait model (DSG) involving three modules. The first module handles silhouette segmentation, localization, and counting (SLC) using Mask-RCNN with MobileNetV2. The next stage uses a Convolutional Block Attention Module (CBAM)-based Siamese network for frame-level tracking with a modified gallery setting. Finally, region-based deep learning is proposed for dual-subject gait recognition. The proposed method, tested on the Shri Mata Vaishno Devi University (SMVDU) Multi-Gait and Single-Gait datasets, shows strong performance, with 94.00% segmentation, 58.36% tracking, and 63.04% gait recognition accuracy in dual-subject walk scenarios.
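As background for the tracking stage, a minimal sketch of a CBAM block (channel attention followed by spatial attention) as commonly defined in the literature; how the paper wires it into its Siamese network is not specified in the abstract:

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, ratio=8):
    # Convolutional Block Attention Module: channel attention, then spatial
    ch = x.shape[-1]
    # channel attention: shared MLP over avg- and max-pooled descriptors
    mlp = tf.keras.Sequential([layers.Dense(ch // ratio, activation="relu"),
                               layers.Dense(ch)])
    avg = mlp(tf.reduce_mean(x, axis=[1, 2]))
    mx = mlp(tf.reduce_max(x, axis=[1, 2]))
    x = x * tf.sigmoid(avg + mx)[:, None, None, :]
    # spatial attention: 7x7 conv over channel-wise avg and max maps
    s = tf.concat([tf.reduce_mean(x, -1, keepdims=True),
                   tf.reduce_max(x, -1, keepdims=True)], axis=-1)
    attn = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(s)
    return x * attn

feat = cbam(tf.random.normal((2, 56, 56, 64)))   # e.g., a backbone feature map
```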
Race classification is a long-standing challenge in the field of face image analysis. The investigation of salient facial features is an important task to avoid processing all face parts. Face segmentation strongly benefits several face analysis tasks, including ethnicity and race classification. We propose a race-classification algorithm using a prior face segmentation framework. A deep convolutional neural network (DCNN) was used to construct a face segmentation model. For training the DCNN, we label face images according to seven different classes, that is, nose, skin, hair, eyes, brows, back, and mouth. The DCNN model developed in the first phase was used to create segmentation results. The probabilistic classification method is used, and probability maps (PMs) are created for each semantic class. We investigated five salient facial features from among the seven that help in race classification. Features are extracted from the PMs of five classes, and a new model is trained based on the DCNN. We assessed the performance of the proposed race classification method on four standard face datasets, reporting superior results compared with previous studies.
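A small sketch of the probability-map step: a per-pixel softmax over the seven-class segmentation logits, then simple statistics of selected class maps as classification features (which five classes are salient is the paper's finding; the indices and statistics here are assumptions):

```python
import numpy as np

def probability_maps(logits):
    # logits: (H, W, 7) per-pixel scores from the segmentation DCNN
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # one PM per semantic class

def pm_features(pms, classes=(0, 1, 2, 3, 4)):
    # simple per-class statistics as race-classification features
    return np.concatenate([[pms[..., c].mean(), pms[..., c].std()]
                           for c in classes])
```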
This paper proposes the Parallelized Linear Time-Variant Acceleration Coefficients and Inertial Weight Particle Swarm Optimization algorithm (PLTVACIW-PSO). Its design introduces the benefits of parallel computing into the combined power of TVAC (Time-Variant Acceleration Coefficients) and IW (Inertial Weight). The proposed algorithm has been tested against linear, non-linear, traditional, and multi-swarm-based optimization algorithms. An experimental study is performed in three phases to assess the proposed PLTVACIW-PSO. Phase I uses 12 recognized standard benchmark functions to evaluate the comparative performance of the proposed PLTVACIW-PSO vs. IW-based Particle Swarm Optimization (PSO) algorithms, TVAC-based PSO algorithms, traditional PSO, Genetic Algorithms (GA), Differential Evolution (DE), and, finally, Flower Pollination (FP) algorithms. In phase II, the proposed PLTVACIW-PSO uses the same 12 known benchmark functions to test its performance against the BAT (BA) and multi-swarm BAT algorithms. In phase III, the proposed PLTVACIW-PSO is employed on the feature selection problem for medical datasets. This experimental study shows that the planned PLTVACIW-PSO outpaces the performance of other comparable algorithms. Outcomes from the experiments show that the PLTVACIW-PSO is capable of outlining a feature subset that enhances classification efficiency and gives the minimal subset of the core features.
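The abstract does not give the update equations; the following sketch shows the standard linearly time-variant inertia weight and acceleration coefficients from the PSO literature, which PLTVACIW-PSO combines (the parallelization layer and the exact constants are assumptions):

```python
import numpy as np

def ltvaciw_update(v, x, pbest, gbest, t, T,
                   w_max=0.9, w_min=0.4,
                   c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    # Linearly time-variant inertia weight and acceleration coefficients:
    #   w(t)  = w_max - (w_max - w_min) * t / T
    #   c1(t) = (c1_f - c1_i) * t / T + c1_i   (cognitive term: decreases)
    #   c2(t) = (c2_f - c2_i) * t / T + c2_i   (social term: increases)
    w = w_max - (w_max - w_min) * t / T
    c1 = (c1_f - c1_i) * t / T + c1_i
    c2 = (c2_f - c2_i) * t / T + c2_i
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return v_new, x + v_new        # new velocity and position
```

Early iterations thus favor exploration (high inertia, strong cognitive pull), while later ones favor exploitation around the global best.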
Parkinson's disease (PD), one of whose symptoms is dysphonia, is a prevalent neurodegenerative disease. The use of outdated diagnosis techniques, which yield inaccurate and unreliable results, continues to represent an obstacle in early-stage detection and diagnosis for clinical professionals in the medical field. To solve this issue, the study proposes using machine learning and deep learning models to analyze processed speech signals of patients' voice recordings. Datasets of these processed speech signals were obtained and experimented on by random forest and logistic regression classifiers. Results were highly successful, with 90% accuracy produced by the random forest classifier and 81.5% by the logistic regression classifier. Furthermore, a deep neural network was implemented to investigate if such variation in method could add to the findings. It proved to be effective, as the neural network yielded an accuracy of nearly 92%. Such results suggest that it is possible to accurately diagnose early-stage PD through merely testing patients' voices. This research calls for a revolutionary diagnostic approach in decision support systems, and is the first step in a market-wide implementation of healthcare software dedicated to the aid of clinicians in early diagnosis of PD.
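A minimal scikit-learn sketch of the two baseline classifiers on precomputed acoustic features; the file names and feature set are placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: rows of acoustic features (e.g., jitter, shimmer, HNR) per recording,
# y: 1 = PD, 0 = healthy; both loaded from hypothetical files
X, y = np.load("features.npy"), np.load("labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            LogisticRegression(max_iter=1000)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```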
Because of the widespread availability of low-cost printers and scanners, document forgery has become extremely common. Watermarks or signatures are used to protect important papers such as certificates, passports, and identification cards. Identifying the origins of printed documents is helpful for criminal investigations and also for authenticating digital versions of a document in today's world. Source printer identification (SPI) has become increasingly popular for identifying fraud in printed documents. This paper provides an algorithm for identifying the source printer and categorizing a questioned document into one of the printer classes. A dataset of 1200 documents from 20 distinct printers (13 laser and 7 inkjet) achieved significant identification results. The algorithm is based on global features such as the Histogram of Oriented Gradients (HOG) and local features such as Local Binary Pattern (LBP) descriptors. For classification, Decision Trees (DT), k-Nearest Neighbors (k-NN), Random Forests, aggregate bootstrapping (bagging), adaptive boosting (boosting), Support Vector Machines (SVM), and mixtures of these classifiers have been employed. The proposed algorithm can accurately classify the questioned documents into their appropriate printer classes; the adaptive boosting classifier attained 96% accuracy. The proposed algorithm is compared to four recently published algorithms that used the same dataset and gives better classification accuracy.
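A hedged sketch of the HOG + LBP feature pipeline using scikit-image, with an AdaBoost classifier as in the best-performing case; the cell sizes, LBP parameters, and estimator counts are assumptions:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def printer_features(gray):
    # gray: 2-D float image of a scanned document patch
    g = hog(gray, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2))                      # global texture descriptor
    lbp = local_binary_pattern(gray, 8, 1, method="uniform")
    h, _ = np.histogram(lbp, bins=10, range=(0, 10),
                        density=True)                    # local texture histogram
    return np.concatenate([g, h])

# hypothetical training call on a list of patches and printer labels:
# clf = AdaBoostClassifier(n_estimators=200).fit(
#           np.stack([printer_features(im) for im in train_imgs]), train_labels)
```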
Big data is a vast amount of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction is the process of converting a huge data set into one of much smaller dimensionality that expresses the same information. These tactics are frequently utilized to improve classification or regression performance when dealing with machine learning problems. To achieve dimensionality reduction for huge data sets, this paper offers a hybrid particle swarm optimization-rough set (PSO-RS) and a Mayfly algorithm-rough set (MA-RS) approach. In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough sets (RS) is proposed. The performance of the novel hybrid MA-RS algorithm is evaluated by solving six different data sets from the literature. The simulation results and a comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach, as well as the hybrid optimization techniques PSO-RS and MA-RS, were applied to deal with the massive data problem. According to the experimental results and statistical testing studies, the hybrid MA-RS method beats other classic dimensionality reduction techniques.
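For concreteness, a small sketch of the rough-set dependency degree that such hybrids typically maximize: the fraction of objects whose equivalence class under an attribute subset B is pure with respect to the decision labels (the fitness weighting used by PSO-RS/MA-RS is an assumption):

```python
import numpy as np

def dependency_degree(X, y, subset):
    # X: discretized attribute table; subset: list of column indices (B)
    # gamma_B(D): fraction of objects whose B-equivalence class is
    # consistent (pure) with respect to the decision labels y
    keys = [tuple(row) for row in X[:, list(subset)]]
    labels_per_class = {}
    for k, label in zip(keys, y):
        labels_per_class.setdefault(k, set()).add(label)
    consistent = sum(1 for k in keys if len(labels_per_class[k]) == 1)
    return consistent / len(y)

def fitness(X, y, subset, alpha=0.9):
    # typical reduct fitness: high dependency, few attributes
    # (the exact weighting is an assumption, not the paper's formula)
    return alpha * dependency_degree(X, y, subset) + \
           (1 - alpha) * (1 - len(subset) / X.shape[1])
```

A swarm algorithm such as PSO or MA then searches over attribute subsets to maximize this fitness, yielding a compact reduct.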
Nowadays, an unprecedented number of users interact through social media platforms and generate a massive amount of content due to the explosion of online communication. However, because user-generated content is unregulated, it may contain offensive material such as fake news, insults, and harassment phrases. The identification of fake news and rumors and of their dissemination on social media has become a critical requirement; they have adverse effects on users, businesses, enterprises, and even political regimes and governments. The state of the art has tackled news in the English language and used feature-based algorithms. This paper proposes a model architecture to detect fake news in the Arabic language using only textual features. Machine learning and deep learning algorithms were used. The deep learning models are based on convolutional neural networks (CNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), CNN+LSTM, and CNN+BiLSTM. Three datasets were used in the experiments, each containing the textual content of Arabic news articles; one of them is real-life data. The results indicate that the BiLSTM model outperforms the other models in terms of accuracy when both simple data-split and recursive training modes are used in the training process.
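A minimal Keras sketch of the BiLSTM variant that the experiments favor, operating on tokenized text; the vocabulary size, sequence handling, and layer widths are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

# binary fake/real classifier over tokenized Arabic news text
model = keras.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.Bidirectional(layers.LSTM(64)),      # reads the sequence both ways
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```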
Classification of edge-on galaxies is important to astronomical studies because our Milky Way is an edge-on galaxy. Edge-on galaxies pose a problem for classification due to their lower overall brightness and smaller number of pixels. In the current work, a novel technique for the classification of edge-on galaxies has been developed. This technique is based on the mathematical treatment of galaxy brightness data from their images. A special treatment of galaxy brightness data is developed to enhance faint galaxies and eliminate the adverse effects of high-brightness backgrounds as well as of bright background stars. A novel slimness weighting factor is developed to classify edge-on galaxies based on their slimness. The technique has the capacity to be optimized for different catalogs with different brightness levels. In the current work, the developed technique is optimized for the EFIGI catalog and is trained using a set of 1800 galaxies from this catalog. Upon classification of the full set of 4458 galaxies from the EFIGI catalog, an accuracy of 97.5% has been achieved, with an average processing time of about 0.26 seconds per galaxy on an average laptop.
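The slimness weighting factor itself is not defined in the abstract; as one plausible proxy, apparent elongation can be computed from brightness-weighted second moments of the image, as in this hedged sketch:

```python
import numpy as np

def slimness(image):
    # brightness-weighted second moments give the apparent axis ratio;
    # a highly elongated (edge-on) galaxy has minor/major ratio near 0
    y, x = np.indices(image.shape)
    w = image / image.sum()
    mx, my = (x * w).sum(), (y * w).sum()
    cxx = (w * (x - mx) ** 2).sum()
    cyy = (w * (y - my) ** 2).sum()
    cxy = (w * (x - mx) * (y - my)).sum()
    cov = np.array([[cxx, cxy], [cxy, cyy]])
    major, minor = np.linalg.eigvalsh(cov)[::-1]   # descending variances
    return 1.0 - np.sqrt(minor / major)            # ~1 for very slim galaxies
```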