In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of ...In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion.Employees,however,have been exposed to different security risks because of working from home.Moreover,the rapid global spread of COVID-19 has increased the volume of data generated from various sources.Working from home depends mainly on cloud computing(CC)applications that help employees to efficiently accomplish their tasks.The cloud computing environment(CCE)is an unsung hero in the COVID-19 pandemic crisis.It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data.Despite the increase in the use of CC applications,there is an ongoing research challenge in the domains of CCE concerning data,guaranteeing security,and the availability of CC applications.This paper,to the best of our knowledge,is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE.Additionally,this paper also highlights the security risks of working from home during the COVID-19 pandemic.展开更多
Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patie...Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patients'data has become an urgent and practical concern.We present a novel approach for reversible data hiding for colour medical images.In a hybrid domain,we employ AlexNet,tuned with watershed transform(WST)and L-shaped fractal Tromino encryption.Our approach commences by constructing the host image's feature vector using a pre-trained AlexNet model.Next,we use the watershed transform to convert the extracted feature vector into a vector for a topographic map,which we then encrypt using an L-shaped fractal Tromino cryptosystem.We embed the secret image in the transformed image vector using a histogram-based embedding strategy to enhance payload and visual fidelity.When there are no attacks,the RDHNet exhibits robust performance,can be reversed to the original image and maintains a visually appealing stego image,with an average PSNR of 73.14 dB,an SSIM of 0.9999 and perfect values of NC=1 and BER=0 under normal conditions.The proposed RDHNet demonstrates a robust ability to withstand detrimental geometric and noise-adding attacks as well as various steganalysis methods.Furthermore,our RDHNet method initiative demonstrates efficacy in tackling contemporary confidentiality issues.展开更多
Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy...Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy consumption and the turnaround time delay.However,as a result of a hostile environment or in catastrophic zones with no network,it could be difficult to deploy such edge servers.Unmanned Aerial Vehicles(UAVs)can be employed in such scenarios.The edge servers mounted on these UAVs assist with task offloading.For the majority of IoT applications,the execution times of tasks are often crucial.Therefore,UAVs tend to have a limited energy supply.This study presents an approach to offload IoT user applications based on the usage of Voronoi diagrams to determine task delays and cluster IoT devices dynamically as a first step.Second,the UAV flies over each cluster to perform the offloading process.In addition,we propose a Graphics Processing Unit(GPU)-based parallelization of particle swarm optimization to balance the cluster sizes and identify the shortest path along these clusters while minimizing the UAV flying time and energy consumption.Some evaluation results are given to demonstrate the effectiveness of the presented offloading strategy.展开更多
The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of th...The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to nonlinearity,MDD diffusion,multi-variable nature,multi-stage processing and anisotropic properties of the considered material.Therefore,we propose a novel boundary element method(BEM)formulation for modeling and simulation of such system.The computational performance of the proposed technique has been investigated.The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components.The numerical results also demonstrate the validity,efficiency and accuracy of the proposed methodology.The findings and solutions of this study contribute to the further development of industrial applications and devices typically include micropolar-thermoelastic materials.展开更多
Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unma...Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unmanned Aerial Vehicles(UAVs),has captured considerable attention.One encouraging aspect is their combination with machine learning and deep learning algorithms,which have demonstrated remarkable outcomes in image classification.As a result of this powerful amalgamation,the adoption of spectral images has experienced exponential growth across various domains,with agriculture being one of the prominent beneficiaries.This paper presents an extensive survey encompassing multispectral and hyperspectral images,focusing on their applications for classification challenges in diverse agricultural areas,including plants,grains,fruits,and vegetables.By meticulously examining primary studies,we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use.Additionally,our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context.The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture.Nevertheless,we also shed light on the various issues and limitations of working with spectral images.This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.展开更多
In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is test...In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is tested.The dynamical behavior of this model is studied according to the change of the control parameters.We find that the complex dynamical behavior extends from a stable state to chaotic attractors.Finally,the analytical results are clarified by some numerical simulations.展开更多
Due to the fact that network space is becoming more limited,the implementation of ultra-dense networks(UDNs)has the potential to enhance not only network coverage but also network throughput.Unmanned Aerial Vehicle(UA...Due to the fact that network space is becoming more limited,the implementation of ultra-dense networks(UDNs)has the potential to enhance not only network coverage but also network throughput.Unmanned Aerial Vehicle(UAV)communications have recently garnered a lot of attention due to the fact that they are extremely versatile and may be applied to a wide variety of contexts and purposes.A cognitive UAV is proposed as a solution for the Internet of Things ground terminal’s wireless nodes in this article.In the IoT system,the UAV is utilised not only to determine how the resources should be distributed but also to provide power to the wireless nodes.The quality of service(QoS)offered by the cognitive node was interpreted as a price-based utility function,which was demonstrated in the form of a non-cooperative game theory in order to maximise customers’net utility functions.An energyefficient non-cooperative game theory power allocation with pricing strategy abbreviated as(EE-NGPAP)is implemented in this study with two trajectories Spiral and Sigmoidal in order to facilitate effective power management in Internet of Things(IoT)wireless nodes.It has also been demonstrated,theoretically and by the use of simulations,that the Nash equilibrium does exist and that it is one of a kind.The proposed energy harvesting approach was shown,through simulations,to significantly reduce the typical amount of power thatwas sent.This is taken into consideration to agree with the objective of 5G networks.In order to converge to Nash Equilibrium(NE),the method that is advised only needs roughly 4 iterations,which makes it easier to utilise in the real world,where things aren’t always the same.展开更多
In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine ...In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.展开更多
In recent years,there has been a rapid growth in Underwater Wireless Sensor Networks(UWSNs).The focus of research in this area is now on solving the problems associated with large-scale UWSN.One of the major issues in...In recent years,there has been a rapid growth in Underwater Wireless Sensor Networks(UWSNs).The focus of research in this area is now on solving the problems associated with large-scale UWSN.One of the major issues in such a network is the localization of underwater nodes.Localization is required for tracking objects and detecting the target.It is also considered tagging of data where sensed contents are not found of any use without localization.This is useless for application until the position of sensed content is confirmed.This article’s major goal is to review and analyze underwater node localization to solve the localization issues in UWSN.The present paper describes various existing localization schemes and broadly categorizes these schemes as Centralized and Distributed localization schemes underwater.Also,a detailed subdivision of these localization schemes is given.Further,these localization schemes are compared from different perspectives.The detailed analysis of these schemes in terms of certain performance metrics has been discussed in this paper.At the end,the paper addresses several future directions for potential research in improving localization problems of UWSN.展开更多
Price prediction of goods is a vital point of research due to how common e-commerce platforms are.There are several efforts conducted to forecast the price of items using classicmachine learning algorithms and statist...Price prediction of goods is a vital point of research due to how common e-commerce platforms are.There are several efforts conducted to forecast the price of items using classicmachine learning algorithms and statisticalmodels.These models can predict prices of various financial instruments,e.g.,gold,oil,cryptocurrencies,stocks,and second-hand items.Despite these efforts,the literature has no model for predicting the prices of seasonal goods(e.g.,Christmas gifts).In this context,we framed the task of seasonal goods price prediction as a regression problem.First,we utilized a real online trailer dataset of Christmas gifts and then we proposed several machine learningbased models and one statistical-based model to predict the prices of these seasonal products.Second,we utilized a real-life dataset of Christmas gifts for the prediction task.Then,we proposed support vector regressor(SVR),linear regression,random forest,and ridgemodels as machine learningmodels for price prediction.Next,we proposed an autoregressive-integrated-movingaverage(ARIMA)model for the same purpose as a statistical-based model.Finally,we evaluated the performance of the proposed models;the comparison shows that the best performing model was the random forest model,followed by the ARIMA model.展开更多
This study was undertaken to examine the options and feasibility of deploying new technologies for transforming the aquaculture sector with the objective of increasing the production efficiency.Selection o...This study was undertaken to examine the options and feasibility of deploying new technologies for transforming the aquaculture sector with the objective of increasing the production efficiency.Selection of technologies to obtain the expected outcome should,obviously,be consistent with the criteria of sustainable development.There is a range of technologies being suggested for driving change in aquaculture to enhance its contribution to food security.It is necessary to highlight the complexity of issues for systems approach that can shape the course of development of aquaculture so that it can live-up to the expected fish demand by 2030 in addition to the current quantity of 82.1 million tons.Some of the Fourth Industrial Revolution(IR4.0)technologies suggested to achieve this target envisage the use of real-time monitoring,integration of a constant stream of data from connected production systems and intelligent automation in controls.This requires application of mobile devices,internet of things(IoT),smart sensors,artificial intelligence(AI),big data analytics,robotics as well as augmented virtual and mixed reality.AI is receiving more attention due to many reasons.Its use in aquaculture can happen in many ways,for example,in detecting and mitigating stress on the captive fish which is considered critical for the success of aquaculture.While the technology intensification in aquaculture holds a great potential but there are constraints in deploying IR4.0 tools in aquaculture.Possible solutions and practical options,especially with respect to future food choices are highlighted in this paper.展开更多
This study focuses on testing and quality measurement and analysis of VoIPv6 performance. A client, server codes were developed using FreeBSD. This is a step before analyzing the Architectures of VoIPv6 in the current...This study focuses on testing and quality measurement and analysis of VoIPv6 performance. A client, server codes were developed using FreeBSD. This is a step before analyzing the Architectures of VoIPv6 in the current internet in order for it to cope with IPv6 traffic transmission requirements in general and specifically voice traffic, which is being attracting the efforts of research, bodes currently. These tests were conducted in the application level without looking into the network level of the network. VoIPv6 performance tests were conducted in the current tunneled and native IPv6 aiming for better end-to-end VoIPv6 performance. The results obtained in this study were shown in deferent codec's for different bit rates in Kilo bits per second, which act as an indicator for the better performance of G.711 compared with the rest of the tested codes.展开更多
The prevalence of melanoma skin cancer has increased in recent decades.The greatest risk from melanoma is its ability to broadly spread throughout the body by means of lymphatic vessels and veins.Thus,the early diagno...The prevalence of melanoma skin cancer has increased in recent decades.The greatest risk from melanoma is its ability to broadly spread throughout the body by means of lymphatic vessels and veins.Thus,the early diagnosis of melanoma is a key factor in improving the prognosis of the disease.Deep learning makes it possible to design and develop intelligent systems that can be used in detecting and classifying skin lesions from visible-light images.Such systems can provide early and accurate diagnoses of melanoma and other types of skin diseases.This paper proposes a new method which can be used for both skin lesion segmentation and classification problems.This solution makes use of Convolutional neural networks(CNN)with the architecture two-dimensional(Conv2D)using three phases:feature extraction,classification and detection.The proposed method is mainly designed for skin cancer detection and diagnosis.Using the public dataset International Skin Imaging Collaboration(ISIC),the impact of the proposed segmentation method on the performance of the classification accuracy was investigated.The obtained results showed that the proposed skin cancer detection and classification method had a good performance with an accuracy of 94%,sensitivity of 92%and specificity of 96%.Also comparing with the related work using the same dataset,i.e.,ISIC,showed a better performance of the proposed method.展开更多
Face mask detection has several applications,including real-time surveillance,biometrics,etc.Identifying face masks is also helpful for crowd control and ensuring people wear them publicly.With monitoring personnel,it...Face mask detection has several applications,including real-time surveillance,biometrics,etc.Identifying face masks is also helpful for crowd control and ensuring people wear them publicly.With monitoring personnel,it is impossible to ensure that people wear face masks;automated systems are a much superior option for face mask detection and monitoring.This paper introduces a simple and efficient approach for masked face detection.The architecture of the proposed approach is very straightforward;it combines deep learning and local binary patterns to extract features and classify themasmasked or unmasked.The proposed systemrequires hardware withminimal power consumption compared to state-of-the-art deep learning algorithms.Our proposed system maintains two steps.At first,this work extracted the local features of an image by using a local binary pattern descriptor,and then we used deep learning to extract global features.The proposed approach has achieved excellent accuracy and high performance.The performance of the proposed method was tested on three benchmark datasets:the realworld masked faces dataset(RMFD),the simulated masked faces dataset(SMFD),and labeled faces in the wild(LFW).Performancemetrics for the proposed technique weremeasured in terms of accuracy,precision,recall,and F1-score.Results indicated the efficiency of the proposed technique,providing accuracies of 99.86%,99.98%,and 100%for RMFD,SMFD,and LFW,respectively.Moreover,the proposed method outperformed state-of-the-art deep learning methods in the recent bibliography for the same problem under study and on the same evaluation datasets.展开更多
Super-resolution techniques are used to reconstruct an image with a high resolution from one or more low-resolution image(s).In this paper,we proposed a single image super-resolution algorithm.It uses the nonlocal mea...Super-resolution techniques are used to reconstruct an image with a high resolution from one or more low-resolution image(s).In this paper,we proposed a single image super-resolution algorithm.It uses the nonlocal mean filter as a prior step to produce a denoised image.The proposed algorithm is based on curvelet transform.It converts the denoised image into low and high frequencies(sub-bands).Then we applied a multi-dimensional interpolation called Lancozos interpolation over both sub-bands.In parallel,we applied sparse representation with over complete dictionary for the denoised image.The proposed algorithm then combines the dictionary learning in the sparse representation and the interpolated sub-bands using inverse curvelet transform to have an image with a higher resolution.The experimental results of the proposed super-resolution algorithm show superior performance and obviously better-recovering images with enhanced edges.The comparison study shows that the proposed super-resolution algorithm outperforms the state-of-the-art.The mean absolute error is 0.021±0.008 and the structural similarity index measure is 0.89±0.08.展开更多
University timetabling problems are a yearly challenging task and are faced repeatedly each semester.The problems are considered nonpolynomial time(NP)and combinatorial optimization problems(COP),which means that they...University timetabling problems are a yearly challenging task and are faced repeatedly each semester.The problems are considered nonpolynomial time(NP)and combinatorial optimization problems(COP),which means that they can be solved through optimization algorithms to produce the aspired optimal timetable.Several techniques have been used to solve university timetabling problems,and most of them use optimization techniques.This paper provides a comprehensive review of the most recent studies dealing with concepts,methodologies,optimization,benchmarks,and open issues of university timetabling problems.The comprehensive review starts by presenting the essence of university timetabling as NP-COP,defining and clarifying the two formed classes of university timetabling:University Course Timetabling and University Examination Timetabling,illustrating the adopted algorithms for solving such a problem,elaborating the university timetabling constraints to be considered achieving the optimal timetable,and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for the evaluation.It is noted that meta-heuristic methodologies are widely used in the literature.Additionally,recently,multi-objective optimization has been increasingly used in solving such a problem that can identify robust university timetabling solutions.Finally,trends and future directions in university timetabling problems are provided.This paper provides good information for students,researchers,and specialists interested in this area of research.The challenges and possibilities for future research prospects are also explored.展开更多
The Coronavirus Disease 2019(COVID-19)pandemic poses the worldwide challenges surpassing the boundaries of country,religion,race,and economy.The current benchmark method for the detection of COVID-19 is the reverse tr...The Coronavirus Disease 2019(COVID-19)pandemic poses the worldwide challenges surpassing the boundaries of country,religion,race,and economy.The current benchmark method for the detection of COVID-19 is the reverse transcription polymerase chain reaction(RT-PCR)testing.Nevertheless,this testing method is accurate enough for the diagnosis of COVID-19.However,it is time-consuming,expensive,expert-dependent,and violates social distancing.In this paper,this research proposed an effective multimodality-based and feature fusion-based(MMFF)COVID-19 detection technique through deep neural networks.In multi-modality,we have utilized the cough samples,breathe samples and sound samples of healthy as well as COVID-19 patients from publicly available COSWARA dataset.Extensive set of experimental analyses were performed to evaluate the performance of our proposed approach.Several useful features were extracted from the aforementioned modalities that were then fed as an input to long short-term memory recurrent neural network algorithms for the classification purpose.Extensive set of experimental analyses were performed to evaluate the performance of our proposed approach.The experimental results showed that our proposed approach outperformed compared to four baseline approaches published recently.We believe that our proposed technique will assists potential users to diagnose the COVID-19 without the intervention of any expert in minimum amount of time.展开更多
Distributed denial of service(DDoS)attack is the most common attack that obstructs a network and makes it unavailable for a legitimate user.We proposed a deep neural network(DNN)model for the detection of DDoS attacks...Distributed denial of service(DDoS)attack is the most common attack that obstructs a network and makes it unavailable for a legitimate user.We proposed a deep neural network(DNN)model for the detection of DDoS attacks in the Software-Defined Networking(SDN)paradigm.SDN centralizes the control plane and separates it from the data plane.It simplifies a network and eliminates vendor specification of a device.Because of this open nature and centralized control,SDN can easily become a victim of DDoS attacks.We proposed a supervised Developed Deep Neural Network(DDNN)model that can classify the DDoS attack traffic and legitimate traffic.Our Developed Deep Neural Network(DDNN)model takes a large number of feature values as compared to previously proposed Machine Learning(ML)models.The proposed DNN model scans the data to find the correlated features and delivers high-quality results.The model enhances the security of SDN and has better accuracy as compared to previously proposed models.We choose the latest state-of-the-art dataset which consists of many novel attacks and overcomes all the shortcomings and limitations of the existing datasets.Our model results in a high accuracy rate of 99.76%with a low false-positive rate and 0.065%low loss rate.The accuracy increases to 99.80%as we increase the number of epochs to 100 rounds.Our proposed model classifies anomalous and normal traffic more accurately as compared to the previously proposed models.It can handle a huge amount of structured and unstructured data and can easily solve complex problems.展开更多
Software cost estimation is a crucial aspect of software project management,significantly impacting productivity and planning.This research investigates the impact of various feature selection techniques on software c...Software cost estimation is a crucial aspect of software project management,significantly impacting productivity and planning.This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the CoCoMo NASA dataset,which comprises data from 93 unique software projects with 24 attributes.By applying multiple machine learning algorithms alongside three feature selection methods,this study aims to reduce data redundancy and enhance model accuracy.Our findings reveal that the principal component analysis(PCA)-based feature selection technique achieved the highest performance,underscoring the importance of optimal feature selection in improving software cost estimation accuracy.It is demonstrated that our proposed method outperforms the existing method while achieving the highest precision,accuracy,and recall rates.展开更多
In our today’s life, it is obvious that cloud computing is one of the new and most important innovations in the field of information technology which constitutes the ground for speeding up the development in great si...In our today’s life, it is obvious that cloud computing is one of the new and most important innovations in the field of information technology which constitutes the ground for speeding up the development in great size storage of data as well as the processing and distribution of data on the largest scale. In other words, the most important interests of any data owner nowadays are related to all of the security as well as the privacy of data, especially in the case of outsourcing private data on a cloud server publicly which has not been one of the well-trusted and reliable domains. With the aim of avoiding any leakage or disclosure of information, we will encrypt any information important or confidential prior to being uploaded to the server and this may lead to an obstacle which encounters any attempt to support any efficient keyword query to be and ranked with matching results on such encrypted data. Recent researches conducted in this area have focused on a single keyword query with no proper ranking scheme in hand. In this paper, we will propose a new model called Secure Model for Preserving Privacy Over Encrypted Cloud Computing (SPEC) to improve the performance of cloud computing and to safeguard privacy of data in comparison to the results of previous researches in regard to accuracy, privacy, security, key generation, storage capacity as well as trapdoor, index generation, index encryption, index update, and finally files retrieval depending on access frequency.展开更多
文摘In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion.Employees,however,have been exposed to different security risks because of working from home.Moreover,the rapid global spread of COVID-19 has increased the volume of data generated from various sources.Working from home depends mainly on cloud computing(CC)applications that help employees to efficiently accomplish their tasks.The cloud computing environment(CCE)is an unsung hero in the COVID-19 pandemic crisis.It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data.Despite the increase in the use of CC applications,there is an ongoing research challenge in the domains of CCE concerning data,guaranteeing security,and the availability of CC applications.This paper,to the best of our knowledge,is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE.Additionally,this paper also highlights the security risks of working from home during the COVID-19 pandemic.
文摘Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patients'data has become an urgent and practical concern.We present a novel approach for reversible data hiding for colour medical images.In a hybrid domain,we employ AlexNet,tuned with watershed transform(WST)and L-shaped fractal Tromino encryption.Our approach commences by constructing the host image's feature vector using a pre-trained AlexNet model.Next,we use the watershed transform to convert the extracted feature vector into a vector for a topographic map,which we then encrypt using an L-shaped fractal Tromino cryptosystem.We embed the secret image in the transformed image vector using a histogram-based embedding strategy to enhance payload and visual fidelity.When there are no attacks,the RDHNet exhibits robust performance,can be reversed to the original image and maintains a visually appealing stego image,with an average PSNR of 73.14 dB,an SSIM of 0.9999 and perfect values of NC=1 and BER=0 under normal conditions.The proposed RDHNet demonstrates a robust ability to withstand detrimental geometric and noise-adding attacks as well as various steganalysis methods.Furthermore,our RDHNet method initiative demonstrates efficacy in tackling contemporary confidentiality issues.
基金funded by the University of Jeddah,Saudi Arabia,under Grant No.(UJ-20-102-DR).
文摘Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy consumption and the turnaround time delay.However,as a result of a hostile environment or in catastrophic zones with no network,it could be difficult to deploy such edge servers.Unmanned Aerial Vehicles(UAVs)can be employed in such scenarios.The edge servers mounted on these UAVs assist with task offloading.For the majority of IoT applications,the execution times of tasks are often crucial.Therefore,UAVs tend to have a limited energy supply.This study presents an approach to offload IoT user applications based on the usage of Voronoi diagrams to determine task delays and cluster IoT devices dynamically as a first step.Second,the UAV flies over each cluster to perform the offloading process.In addition,we propose a Graphics Processing Unit(GPU)-based parallelization of particle swarm optimization to balance the cluster sizes and identify the shortest path along these clusters while minimizing the UAV flying time and energy consumption.Some evaluation results are given to demonstrate the effectiveness of the presented offloading strategy.
文摘The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to nonlinearity,MDD diffusion,multi-variable nature,multi-stage processing and anisotropic properties of the considered material.Therefore,we propose a novel boundary element method(BEM)formulation for modeling and simulation of such system.The computational performance of the proposed technique has been investigated.The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components.The numerical results also demonstrate the validity,efficiency and accuracy of the proposed methodology.The findings and solutions of this study contribute to the further development of industrial applications and devices typically include micropolar-thermoelastic materials.
文摘Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unmanned Aerial Vehicles(UAVs),has captured considerable attention.One encouraging aspect is their combination with machine learning and deep learning algorithms,which have demonstrated remarkable outcomes in image classification.As a result of this powerful amalgamation,the adoption of spectral images has experienced exponential growth across various domains,with agriculture being one of the prominent beneficiaries.This paper presents an extensive survey encompassing multispectral and hyperspectral images,focusing on their applications for classification challenges in diverse agricultural areas,including plants,grains,fruits,and vegetables.By meticulously examining primary studies,we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use.Additionally,our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context.The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture.Nevertheless,we also shed light on the various issues and limitations of working with spectral images.This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
基金the Deanship of Scientific Research at King Khalid University for funding this work through the Big Research Group Project under grant number(R.G.P2/16/40).
文摘In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is tested.The dynamical behavior of this model is studied according to the change of the control parameters.We find that the complex dynamical behavior extends from a stable state to chaotic attractors.Finally,the analytical results are clarified by some numerical simulations.
基金The authors are grateful to the Taif University Researchers Supporting Project number(TURSP-2020/36),Taif University,Taif,Saudi Arabia.
文摘Due to the fact that network space is becoming more limited,the implementation of ultra-dense networks(UDNs)has the potential to enhance not only network coverage but also network throughput.Unmanned Aerial Vehicle(UAV)communications have recently garnered a lot of attention due to the fact that they are extremely versatile and may be applied to a wide variety of contexts and purposes.A cognitive UAV is proposed as a solution for the Internet of Things ground terminal’s wireless nodes in this article.In the IoT system,the UAV is utilised not only to determine how the resources should be distributed but also to provide power to the wireless nodes.The quality of service(QoS)offered by the cognitive node was interpreted as a price-based utility function,which was demonstrated in the form of a non-cooperative game theory in order to maximise customers’net utility functions.An energyefficient non-cooperative game theory power allocation with pricing strategy abbreviated as(EE-NGPAP)is implemented in this study with two trajectories Spiral and Sigmoidal in order to facilitate effective power management in Internet of Things(IoT)wireless nodes.It has also been demonstrated,theoretically and by the use of simulations,that the Nash equilibrium does exist and that it is one of a kind.The proposed energy harvesting approach was shown,through simulations,to significantly reduce the typical amount of power thatwas sent.This is taken into consideration to agree with the objective of 5G networks.In order to converge to Nash Equilibrium(NE),the method that is advised only needs roughly 4 iterations,which makes it easier to utilise in the real world,where things aren’t always the same.
基金The research will be funded by the Multimedia University,Department of Information Technology,Persiaran Multimedia,63100,Cyberjaya,Selangor,Malaysia.
文摘In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.
文摘In recent years,there has been a rapid growth in Underwater Wireless Sensor Networks(UWSNs).The focus of research in this area is now on solving the problems associated with large-scale UWSN.One of the major issues in such a network is the localization of underwater nodes.Localization is required for tracking objects and detecting the target.It is also considered tagging of data where sensed contents are not found of any use without localization.This is useless for application until the position of sensed content is confirmed.This article’s major goal is to review and analyze underwater node localization to solve the localization issues in UWSN.The present paper describes various existing localization schemes and broadly categorizes these schemes as Centralized and Distributed localization schemes underwater.Also,a detailed subdivision of these localization schemes is given.Further,these localization schemes are compared from different perspectives.The detailed analysis of these schemes in terms of certain performance metrics has been discussed in this paper.At the end,the paper addresses several future directions for potential research in improving localization problems of UWSN.
文摘Price prediction of goods is a vital point of research due to how common e-commerce platforms are.There are several efforts conducted to forecast the price of items using classicmachine learning algorithms and statisticalmodels.These models can predict prices of various financial instruments,e.g.,gold,oil,cryptocurrencies,stocks,and second-hand items.Despite these efforts,the literature has no model for predicting the prices of seasonal goods(e.g.,Christmas gifts).In this context,we framed the task of seasonal goods price prediction as a regression problem.First,we utilized a real online trailer dataset of Christmas gifts and then we proposed several machine learningbased models and one statistical-based model to predict the prices of these seasonal products.Second,we utilized a real-life dataset of Christmas gifts for the prediction task.Then,we proposed support vector regressor(SVR),linear regression,random forest,and ridgemodels as machine learningmodels for price prediction.Next,we proposed an autoregressive-integrated-movingaverage(ARIMA)model for the same purpose as a statistical-based model.Finally,we evaluated the performance of the proposed models;the comparison shows that the best performing model was the random forest model,followed by the ARIMA model.
基金Aquaculture Flagship program of Universiti Malaysia Sabah.
文摘This study was undertaken to examine the options and feasibility of deploying new technologies for transforming the aquaculture sector with the objective of increasing the production efficiency.Selection of technologies to obtain the expected outcome should,obviously,be consistent with the criteria of sustainable development.There is a range of technologies being suggested for driving change in aquaculture to enhance its contribution to food security.It is necessary to highlight the complexity of issues for systems approach that can shape the course of development of aquaculture so that it can live-up to the expected fish demand by 2030 in addition to the current quantity of 82.1 million tons.Some of the Fourth Industrial Revolution(IR4.0)technologies suggested to achieve this target envisage the use of real-time monitoring,integration of a constant stream of data from connected production systems and intelligent automation in controls.This requires application of mobile devices,internet of things(IoT),smart sensors,artificial intelligence(AI),big data analytics,robotics as well as augmented virtual and mixed reality.AI is receiving more attention due to many reasons.Its use in aquaculture can happen in many ways,for example,in detecting and mitigating stress on the captive fish which is considered critical for the success of aquaculture.While the technology intensification in aquaculture holds a great potential but there are constraints in deploying IR4.0 tools in aquaculture.Possible solutions and practical options,especially with respect to future food choices are highlighted in this paper.
文摘This study focuses on testing and quality measurement and analysis of VoIPv6 performance. A client, server codes were developed using FreeBSD. This is a step before analyzing the Architectures of VoIPv6 in the current internet in order for it to cope with IPv6 traffic transmission requirements in general and specifically voice traffic, which is being attracting the efforts of research, bodes currently. These tests were conducted in the application level without looking into the network level of the network. VoIPv6 performance tests were conducted in the current tunneled and native IPv6 aiming for better end-to-end VoIPv6 performance. The results obtained in this study were shown in deferent codec's for different bit rates in Kilo bits per second, which act as an indicator for the better performance of G.711 compared with the rest of the tested codes.
基金The authors would like to thank the deanship of scientific research and Re-search Center for engineering and applied sciences,Majmaah University,Saudi Arabia,for their support and encouragementthe authors would like also to express deep thanks to our College(College of Science at Zulfi City,Majmaah University,AL-Majmaah 11952,Saudi Arabia)Project No.31-1439.
文摘The prevalence of melanoma skin cancer has increased in recent decades.The greatest risk from melanoma is its ability to broadly spread throughout the body by means of lymphatic vessels and veins.Thus,the early diagnosis of melanoma is a key factor in improving the prognosis of the disease.Deep learning makes it possible to design and develop intelligent systems that can be used in detecting and classifying skin lesions from visible-light images.Such systems can provide early and accurate diagnoses of melanoma and other types of skin diseases.This paper proposes a new method which can be used for both skin lesion segmentation and classification problems.This solution makes use of Convolutional neural networks(CNN)with the architecture two-dimensional(Conv2D)using three phases:feature extraction,classification and detection.The proposed method is mainly designed for skin cancer detection and diagnosis.Using the public dataset International Skin Imaging Collaboration(ISIC),the impact of the proposed segmentation method on the performance of the classification accuracy was investigated.The obtained results showed that the proposed skin cancer detection and classification method had a good performance with an accuracy of 94%,sensitivity of 92%and specificity of 96%.Also comparing with the related work using the same dataset,i.e.,ISIC,showed a better performance of the proposed method.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R442),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia。
文摘Face mask detection has several applications,including real-time surveillance,biometrics,etc.Identifying face masks is also helpful for crowd control and ensuring people wear them publicly.With monitoring personnel,it is impossible to ensure that people wear face masks;automated systems are a much superior option for face mask detection and monitoring.This paper introduces a simple and efficient approach for masked face detection.The architecture of the proposed approach is very straightforward;it combines deep learning and local binary patterns to extract features and classify themasmasked or unmasked.The proposed systemrequires hardware withminimal power consumption compared to state-of-the-art deep learning algorithms.Our proposed system maintains two steps.At first,this work extracted the local features of an image by using a local binary pattern descriptor,and then we used deep learning to extract global features.The proposed approach has achieved excellent accuracy and high performance.The performance of the proposed method was tested on three benchmark datasets:the realworld masked faces dataset(RMFD),the simulated masked faces dataset(SMFD),and labeled faces in the wild(LFW).Performancemetrics for the proposed technique weremeasured in terms of accuracy,precision,recall,and F1-score.Results indicated the efficiency of the proposed technique,providing accuracies of 99.86%,99.98%,and 100%for RMFD,SMFD,and LFW,respectively.Moreover,the proposed method outperformed state-of-the-art deep learning methods in the recent bibliography for the same problem under study and on the same evaluation datasets.
文摘Super-resolution techniques are used to reconstruct an image with a high resolution from one or more low-resolution image(s).In this paper,we proposed a single image super-resolution algorithm.It uses the nonlocal mean filter as a prior step to produce a denoised image.The proposed algorithm is based on curvelet transform.It converts the denoised image into low and high frequencies(sub-bands).Then we applied a multi-dimensional interpolation called Lancozos interpolation over both sub-bands.In parallel,we applied sparse representation with over complete dictionary for the denoised image.The proposed algorithm then combines the dictionary learning in the sparse representation and the interpolated sub-bands using inverse curvelet transform to have an image with a higher resolution.The experimental results of the proposed super-resolution algorithm show superior performance and obviously better-recovering images with enhanced edges.The comparison study shows that the proposed super-resolution algorithm outperforms the state-of-the-art.The mean absolute error is 0.021±0.008 and the structural similarity index measure is 0.89±0.08.
基金This research work was supported by the University Malaysia Sabah,Malaysia.
文摘University timetabling problems are a yearly challenging task and are faced repeatedly each semester.The problems are considered nonpolynomial time(NP)and combinatorial optimization problems(COP),which means that they can be solved through optimization algorithms to produce the aspired optimal timetable.Several techniques have been used to solve university timetabling problems,and most of them use optimization techniques.This paper provides a comprehensive review of the most recent studies dealing with concepts,methodologies,optimization,benchmarks,and open issues of university timetabling problems.The comprehensive review starts by presenting the essence of university timetabling as NP-COP,defining and clarifying the two formed classes of university timetabling:University Course Timetabling and University Examination Timetabling,illustrating the adopted algorithms for solving such a problem,elaborating the university timetabling constraints to be considered achieving the optimal timetable,and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for the evaluation.It is noted that meta-heuristic methodologies are widely used in the literature.Additionally,recently,multi-objective optimization has been increasingly used in solving such a problem that can identify robust university timetabling solutions.Finally,trends and future directions in university timetabling problems are provided.This paper provides good information for students,researchers,and specialists interested in this area of research.The challenges and possibilities for future research prospects are also explored.
文摘The Coronavirus Disease 2019(COVID-19)pandemic poses the worldwide challenges surpassing the boundaries of country,religion,race,and economy.The current benchmark method for the detection of COVID-19 is the reverse transcription polymerase chain reaction(RT-PCR)testing.Nevertheless,this testing method is accurate enough for the diagnosis of COVID-19.However,it is time-consuming,expensive,expert-dependent,and violates social distancing.In this paper,this research proposed an effective multimodality-based and feature fusion-based(MMFF)COVID-19 detection technique through deep neural networks.In multi-modality,we have utilized the cough samples,breathe samples and sound samples of healthy as well as COVID-19 patients from publicly available COSWARA dataset.Extensive set of experimental analyses were performed to evaluate the performance of our proposed approach.Several useful features were extracted from the aforementioned modalities that were then fed as an input to long short-term memory recurrent neural network algorithms for the classification purpose.Extensive set of experimental analyses were performed to evaluate the performance of our proposed approach.The experimental results showed that our proposed approach outperformed compared to four baseline approaches published recently.We believe that our proposed technique will assists potential users to diagnose the COVID-19 without the intervention of any expert in minimum amount of time.
文摘Distributed denial of service(DDoS)attack is the most common attack that obstructs a network and makes it unavailable for a legitimate user.We proposed a deep neural network(DNN)model for the detection of DDoS attacks in the Software-Defined Networking(SDN)paradigm.SDN centralizes the control plane and separates it from the data plane.It simplifies a network and eliminates vendor specification of a device.Because of this open nature and centralized control,SDN can easily become a victim of DDoS attacks.We proposed a supervised Developed Deep Neural Network(DDNN)model that can classify the DDoS attack traffic and legitimate traffic.Our Developed Deep Neural Network(DDNN)model takes a large number of feature values as compared to previously proposed Machine Learning(ML)models.The proposed DNN model scans the data to find the correlated features and delivers high-quality results.The model enhances the security of SDN and has better accuracy as compared to previously proposed models.We choose the latest state-of-the-art dataset which consists of many novel attacks and overcomes all the shortcomings and limitations of the existing datasets.Our model results in a high accuracy rate of 99.76%with a low false-positive rate and 0.065%low loss rate.The accuracy increases to 99.80%as we increase the number of epochs to 100 rounds.Our proposed model classifies anomalous and normal traffic more accurately as compared to the previously proposed models.It can handle a huge amount of structured and unstructured data and can easily solve complex problems.
文摘Software cost estimation is a crucial aspect of software project management,significantly impacting productivity and planning.This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the CoCoMo NASA dataset,which comprises data from 93 unique software projects with 24 attributes.By applying multiple machine learning algorithms alongside three feature selection methods,this study aims to reduce data redundancy and enhance model accuracy.Our findings reveal that the principal component analysis(PCA)-based feature selection technique achieved the highest performance,underscoring the importance of optimal feature selection in improving software cost estimation accuracy.It is demonstrated that our proposed method outperforms the existing method while achieving the highest precision,accuracy,and recall rates.
文摘In our today’s life, it is obvious that cloud computing is one of the new and most important innovations in the field of information technology which constitutes the ground for speeding up the development in great size storage of data as well as the processing and distribution of data on the largest scale. In other words, the most important interests of any data owner nowadays are related to all of the security as well as the privacy of data, especially in the case of outsourcing private data on a cloud server publicly which has not been one of the well-trusted and reliable domains. With the aim of avoiding any leakage or disclosure of information, we will encrypt any information important or confidential prior to being uploaded to the server and this may lead to an obstacle which encounters any attempt to support any efficient keyword query to be and ranked with matching results on such encrypted data. Recent researches conducted in this area have focused on a single keyword query with no proper ranking scheme in hand. In this paper, we will propose a new model called Secure Model for Preserving Privacy Over Encrypted Cloud Computing (SPEC) to improve the performance of cloud computing and to safeguard privacy of data in comparison to the results of previous researches in regard to accuracy, privacy, security, key generation, storage capacity as well as trapdoor, index generation, index encryption, index update, and finally files retrieval depending on access frequency.