Road Side Units (RSUs) are an essential component of vehicular communication, with the objective of improving safety and mobility in road transportation. RSUs are generally deployed at the roadside, and more specifically at intersections, in order to collect traffic information from the vehicles and disseminate alarms and messages in emergency situations to the neighboring vehicles cooperating with the network. However, the development of a predominant RSU placement algorithm for ensuring competent communication in VANETs is a challenging issue due to the hindrance of obstacles such as water bodies, trees and buildings. In this paper, Ruppert's Delaunay Triangulation Refinement Scheme (RDTRS) for optimal RSU placement is proposed for accurately estimating the optimal number of RSUs that can enhance the area of coverage during data communication. The RDTRS is proposed by considering the maximum number of factors, such as global coverage, intersection popularity, vehicle density and the obstacles present in the map, for optimal RSU placement, which is considered the core improvement over the existing optimal RSU placement strategies. It contributes to deploying the requisite RSUs with the essential transmission range for maximal coverage in the convex map, such that each position of the map can be effectively covered by at least one RSU in the presence of obstacles. Simulation experiments of the proposed RDTRS are conducted with complex road traffic environments. The results confirm its predominance in reducing end-to-end delay by 21.32% and packet loss by 9.38%, with an improved packet delivery rate of 10.68%, compared to the benchmarked schemes.
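As a rough illustration of the triangulation step such a placement scheme starts from (not Ruppert's refinement itself, which additionally inserts Steiner points to meet quality bounds), the following Python sketch triangulates a set of hypothetical intersection coordinates with scipy and keeps triangle circumcenters as candidate RSU sites; the coordinates, transmission range, and coverage test are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    # Circumcenter of the triangle (a, b, c) in 2-D.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

# Hypothetical road intersections (metres) and an assumed RSU transmission range.
intersections = np.array([[0, 0], [800, 0], [400, 600], [1200, 500], [900, 900]])
rsu_range = 500.0

tri = Delaunay(intersections)
candidates = np.array([circumcenter(*intersections[s]) for s in tri.simplices])

# Keep only candidate sites that actually cover at least one intersection.
dist = np.linalg.norm(intersections[None, :, :] - candidates[:, None, :], axis=2)
rsus = candidates[(dist <= rsu_range).any(axis=1)]
print(f"{len(rsus)} candidate RSU sites from {len(tri.simplices)} triangles")
```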
Side lobe level reduction (SLL) of antenna arrays significantly enhances the signal-to-interference ratio and improves the quality of service (QoS) in recent and future wireless communication systems, from 5G up to 7G. Furthermore, it improves the array gain and directivity, increasing the detection range and angular resolution of radar systems. This study proposes two highly efficient SLL reduction techniques. These techniques are based on the hybridization between either the single convolution or the double convolution algorithm and the genetic algorithm (GA), to develop the Conv/GA and DConv/GA, respectively. The convolution process determines the element excitations, while the GA optimizes the element spacing. For an M-element linear antenna array (LAA), the convolution of the excitation coefficient vector with itself provides a new vector of excitations of length N = 2M − 1. This new vector is divided into three different sets of excitations: the odd excitations, even excitations, and middle excitations, of lengths M, M − 1, and M, respectively. When the same element spacing as the original LAA is used, it is noticed that the odd and even excitations provide a much lower SLL than that of the LAA, but with a much wider half-power beamwidth (HPBW), while the middle excitations give the same HPBW as the original LAA with a relatively higher SLL. To mitigate the increased HPBW of the odd and even excitations, the element spacing is optimized using the GA. Thereby, the synthesized arrays have the same HPBW as the original LAA with a two-fold reduction in the SLL. Furthermore, for extreme SLL reduction, the DConv/GA is introduced. In this technique, the same procedure as the aforementioned Conv/GA technique is performed on the resultant even and odd excitation vectors. It provides a relatively wider HPBW than the original LAA with about a quad-fold reduction in the SLL.
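The convolution step described above is easy to reproduce numerically. The following numpy sketch convolves a uniform M-element excitation vector with itself, splits the resulting N = 2M − 1 coefficients into the odd, even, and middle subsets, and estimates the peak SLL of each from the array factor; the uniform excitation, half-wavelength spacing, and the null-search used for the SLL estimate are illustrative assumptions, and the GA spacing optimization is not included.

```python
import numpy as np

M = 8
w = np.ones(M)                    # original uniform excitations
conv = np.convolve(w, w)          # length N = 2M - 1

odd = conv[0::2]                                  # 1st, 3rd, ... coefficients (M values)
even = conv[1::2]                                 # 2nd, 4th, ... coefficients (M - 1 values)
middle = conv[(M - 1) // 2:(M - 1) // 2 + M]      # central M coefficients

def sidelobe_level_db(excitations, spacing=0.5):
    """Peak sidelobe level (dB relative to the main lobe) of a uniformly spaced LAA."""
    theta = np.linspace(0.0, np.pi, 4001)
    n = np.arange(len(excitations))
    af = np.abs(excitations @ np.exp(1j * 2 * np.pi * spacing * np.outer(n, np.cos(theta))))
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    peak = af_db.argmax()
    # Walk outwards from the main-lobe peak to its first nulls on both sides.
    left = peak
    while left > 0 and af_db[left - 1] < af_db[left]:
        left -= 1
    right = peak
    while right < len(af_db) - 1 and af_db[right + 1] < af_db[right]:
        right += 1
    sidelobes = np.concatenate([af_db[:left], af_db[right + 1:]])
    return sidelobes.max() if sidelobes.size else -np.inf

for name, exc in [("original", w), ("odd", odd), ("even", even), ("middle", middle)]:
    print(f"{name:8s} SLL ~ {sidelobe_level_db(exc):6.2f} dB")
```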
Authentication of digital images has received much attention in the digital revolution. Digital image authentication can be verified with image watermarking and image encryption schemes. These schemes are widely used to protect images against forgery attacks, and they are useful for protecting copyright and rightful ownership. Depending on the desired applications, several image encryption and watermarking schemes have been proposed to meet this need. This framework presents a new scheme that combines a Walsh Hadamard Transform (WHT)-based image watermarking scheme with an image encryption scheme based on Double Random Phase Encoding (DRPE). First, on the sender side, the secret medical image is encrypted using DRPE. Then the encrypted image is watermarked based on the WHT. The combination of watermarking and encryption increases the security and robustness of transmitting an image. The performance of the proposed scheme is evaluated using the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Normalized Cross-Correlation (NC), and Feature Similarity Index (FSIM).
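As a minimal sketch of the DRPE stage on the sender side, the following numpy code builds the two random phase masks, encrypts a stand-in image in the Fourier domain, and verifies that decryption with the same seeds recovers it; the image, the key seeds, and the normalization are illustrative assumptions, and the WHT watermarking stage is omitted.

```python
import numpy as np

def drpe_encrypt(img, seed1=1, seed2=2):
    """Double Random Phase Encoding of a grayscale image in [0, 1]."""
    rng1, rng2 = np.random.default_rng(seed1), np.random.default_rng(seed2)
    phase1 = np.exp(2j * np.pi * rng1.random(img.shape))   # spatial-domain mask
    phase2 = np.exp(2j * np.pi * rng2.random(img.shape))   # Fourier-domain mask
    return np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

def drpe_decrypt(cipher, seed1=1, seed2=2):
    rng1, rng2 = np.random.default_rng(seed1), np.random.default_rng(seed2)
    phase1 = np.exp(2j * np.pi * rng1.random(cipher.shape))
    phase2 = np.exp(2j * np.pi * rng2.random(cipher.shape))
    # Undo the Fourier-domain mask, then the spatial-domain mask.
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1))

img = np.random.default_rng(0).random((64, 64))   # stand-in for the secret medical image
cipher = drpe_encrypt(img)
recovered = drpe_decrypt(cipher)
print("max reconstruction error:", np.abs(recovered - img).max())
```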
Technological advancement in the field of transportation and communication has been happening at a faster pace in the past few decades. As the demand for high-speed transportation increases, the need for an improved seamless communication system to handle higher data traffic in a highly mobile environment becomes imperative. This paper proposes a novel scheme to enhance the quality of service in the high-speed railway (HSR) communication environment using the concept of torch nodes (TNs) and adaptive measurement aggregation (AMA). The system was modeled using an object-oriented discrete event simulator, and the performance was analyzed against the existing single-antenna scheme. The simulation results show that the proposed scheme, with its minimal implementation overhead, can efficiently perform seamless handover with reduced handover failure and communication interruption probability.
In the near future, there are expected to be billions of devices interconnected with each other. How to connect so many devices becomes a big issue. Machine-to-Machine (M2M) communications serve as the fundamental underlying technology to support such Internet of Things (IoT) applications. The characteristics and service requirements of machine type communication devices (MTCDs) are totally different from those of existing devices. Existing network technologies, ranging from personal area networks to wide area networks, are not well suited for M2M communications. Therefore, we first investigate the characteristics and service requirements of MTCDs. Recent advances in both cellular and capillary M2M communications are also discussed. Finally, we list some open issues and future research directions.
The hyperloop idea, which is one of the most eco-friendly, low-carbon-emission, and fossil-fuel-efficient modes of transportation, has recently become quite popular. In this study, a double-sided linear induction motor (LIM) with 500 W of output power, 60 N of thrust force and a 200 V/38.58 Hz supply voltage was designed to be used in the hyperloop development competition hosted by the Scientific and Technological Research Council of Turkey (TÜBİTAK) Rail Transportation Technologies Institute (RUTE). In contrast to the studies in the literature, concentrated winding is preferred instead of distributed winding due to mechanical constraints. The electromagnetic design of the LIM, whose mechanical and electrical requirements were determined considering the hyperloop development competition, was carried out by following certain steps. Then, the designed model was simulated and analyzed by the finite element method (FEM), and the necessary optimizations were performed to improve the motor characteristics. By examining the final model, the applicability of the concentrated-winding LIM for hyperloop technology has been investigated. Besides, the effects of the primary material, railway material, and mechanical air-gap length on LIM performance were also investigated. In the practical phase of the study, the designed LIM was prototyped and tested. The validation of the experimental results was achieved through good agreement with the finite element analysis results.
This article presents a compact crab-shaped reconfigurable antenna (CSRA) designed for 5G sub-6 GHz wireless applications. The antenna achieves enhanced gain in a miniaturized form factor by incorporating a hexagonal split-ring structure controlled via two radio frequency (RF) positive-intrinsic-negative (PIN) diodes (BAR64-02V). While the antenna is primarily designed to operate at 3.50 GHz for sub-6 GHz 5G applications, RF switching enables the CSRA to cover a broader frequency spectrum, including the S-band, X-band, and portions of the Ku-band. The proposed antenna offers several advantages: it is low-cost (fabricated on an FR-4 substrate), compact (achieving a 64.07% size reduction compared to conventional designs), and features both frequency and gain reconfigurability through digitally controlled PIN diode switching. The reflection coefficients of the antenna, both without diodes and across all four switching states, were experimentally validated in the laboratory using a Keysight FieldFox microwave analyzer (N9916A, 14 GHz). The simulated radiation patterns and gain characteristics closely matched the measured values, demonstrating excellent agreement. This study bridges the gap between traditional and next-generation antenna designs by offering a compact, cost-effective, and high-performance solution for multiband, reconfigurable wireless communication systems. The integration of double-split-ring resonators and dynamic reconfigurability makes the proposed antenna a strong candidate for various applications, including S-band and X-band systems, as well as the emerging lower 6G band (7.125 GHz–8.400 GHz).
This paper provides a comprehensive bibliometric exposition on deepfake research, exploring the intersection of artificial intelligence and deepfakes as well as international collaborations, prominent researchers, organizations, institutions, publications, and key themes. We performed a search on the Web of Science (WoS) database, focusing on Artificial Intelligence and Deepfakes, and filtered the results across 21 research areas, yielding 1412 articles. Using the VOSviewer visualization tool, we analyzed this WoS data through keyword co-occurrence graphs, emphasizing four prominent research themes. Compared with existing bibliometric papers on deepfakes, this paper proceeds to identify and discuss some of the highly cited papers within these themes: deepfake detection, feature extraction, face recognition, and forensics. The discussion highlights key challenges and advancements in deepfake research. Furthermore, this paper also discusses pressing issues surrounding deepfakes such as security, regulation, and datasets. We also provide an analysis of another exhaustive search on the Scopus database, focusing solely on Deepfakes (while not excluding AI), which reveals deep learning as the predominant keyword, underscoring AI's central role in deepfake research. This comprehensive analysis, encompassing over 500 keywords from 8790 articles, uncovered a wide range of methods, implications, applications, concerns, requirements, challenges, models, tools, datasets, and modalities related to deepfakes. Finally, a discussion of recommendations for policymakers, researchers, and other stakeholders is also provided.
Recently, a new worldwide race has emerged to achieve a breakthrough in designing and deploying massive ultra-dense low-Earth-orbit (LEO) satellite constellation (SatCon) networks, with the vision of providing everywhere Internet coverage from space. Several players have started the deployment phase at different scales. However, the implementation is in its infancy, and many investigations are needed. This work provides an overview of the state-of-the-art architectures, orbital patterns, top players, and potential applications of SatCon networks. Moreover, we discuss new open research directions and challenges for improving network performance. Finally, a case study highlights the benefits of integrating SatCon networks and non-orthogonal multiple access (NOMA) technologies for improving the achievable capacity of satellite end-users.
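The NOMA benefit mentioned in the case study can be illustrated with the standard two-user power-domain downlink rate expressions; the channel gains, power split, and transmit SNR below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Hypothetical two-user downlink NOMA over one satellite beam.
g_near, g_far = 1.0, 0.1          # normalized channel gains |h|^2 (near / far user)
p_near, p_far = 0.2, 0.8          # power allocation (the weaker user gets more power)
snr_total_db = 20.0
rho = 10 ** (snr_total_db / 10)   # total transmit SNR

# Far user decodes its own signal, treating the near user's signal as interference.
r_far = np.log2(1 + (p_far * rho * g_far) / (p_near * rho * g_far + 1))
# Near user removes the far user's signal via SIC, then decodes its own.
r_near = np.log2(1 + p_near * rho * g_near)

# OMA baseline: each user gets half of the orthogonal resources at full power.
r_far_oma = 0.5 * np.log2(1 + rho * g_far)
r_near_oma = 0.5 * np.log2(1 + rho * g_near)

print(f"NOMA sum rate: {r_near + r_far:.2f} bit/s/Hz vs OMA: {r_near_oma + r_far_oma:.2f} bit/s/Hz")
```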
Visual question answering (VQA) is a multimodal task, involving a deep understanding of the image scene and the question's meaning, and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images, in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks, namely ResNet-152 and Gated Recurrent Units (GRU), to semantically represent the given image and question in a fine-grained manner; (2) studying the role of the utilized multimodal bilinear pooling fusion technique in the trade-off between the model complexity and the overall model performance. Some fusion techniques can significantly increase the model complexity, which seriously limits their applicability for VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques, in terms of their ability to reduce the model complexity and improve the model performance in this class of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques improve the VQA model's performance, reaching a best performance of 89.25%. Further, experiments have proven that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing the model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique has shown the best balance between the model complexity and its performance, for VQA systems designed to answer yes/no questions.
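As a sketch of the kind of low-rank (Hadamard-product) bilinear pooling fusion compared in the paper, the following PyTorch module projects a ResNet-152-style image feature and a GRU-style question feature to a common rank, fuses them element-wise, and maps the result to a yes/no logit; the dimensions, rank, and classifier head are illustrative assumptions, and this is not the exact MLPB variant.

```python
import torch
import torch.nn as nn

class LowRankBilinearFusion(nn.Module):
    """Hadamard-product low-rank bilinear pooling of image and question features."""
    def __init__(self, img_dim=2048, q_dim=1024, rank=1200, out_dim=1):
        super().__init__()
        self.proj_img = nn.Linear(img_dim, rank)     # U^T v
        self.proj_q = nn.Linear(q_dim, rank)         # V^T q
        self.classifier = nn.Linear(rank, out_dim)   # yes/no logit

    def forward(self, img_feat, q_feat):
        joint = torch.tanh(self.proj_img(img_feat)) * torch.tanh(self.proj_q(q_feat))
        return self.classifier(joint)

# Stand-ins for ResNet-152 image features and GRU question features.
img_feat = torch.randn(4, 2048)
q_feat = torch.randn(4, 1024)
logits = LowRankBilinearFusion()(img_feat, q_feat)
print(logits.shape)   # torch.Size([4, 1])
```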
The rapid development of data communication in the modern era demands secure exchange of information. Steganography is an established method for hiding secret data from unauthorized access in a cover object in such a way that it is invisible to human eyes. The cover object can be an image, text, audio, or video. This paper proposes a secure steganography algorithm that hides a bitstream of the secret text in the least significant bits (LSBs) of the approximation coefficients of the integer wavelet transform (IWT) of grayscale images, as well as of each component of color images, to form stego-images. The embedding and extracting phases of the proposed steganography algorithm are performed using MATLAB. Invisibility, payload capacity, and security in terms of peak signal-to-noise ratio (PSNR) and robustness are the key challenges to steganography. The statistical distortion between the cover images and the stego-images is measured using the mean square error (MSE) and the PSNR, while the degree of closeness between them is evaluated using the normalized cross-correlation (NCC). The experimental results show that the proposed algorithm can hide the secret text with a large payload capacity, a high level of security, and higher invisibility. Furthermore, the proposed technique is computationally efficient, and better results for both PSNR and NCC are achieved compared with previous algorithms.
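A minimal sketch of the embedding idea, hiding message bits in the LSBs of the approximation coefficients of an integer wavelet transform, is shown below; for self-containment it uses a one-level integer Haar (S-transform) lifting applied to a single image row, which may differ from the IWT configuration used in the paper.

```python
import numpy as np

def haar_int_fwd(x):
    """One-level integer Haar (S-transform) lifting of an even-length integer vector."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b                      # detail coefficients
    s = b + (d >> 1)               # approximation = floor((a + b) / 2)
    return s, d

def haar_int_inv(s, d):
    b = s - (d >> 1)
    a = d + b
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

def embed(row, bits):
    s, d = haar_int_fwd(row)
    s[:len(bits)] = (s[:len(bits)] & ~1) | bits   # overwrite LSBs of approximation coefficients
    return haar_int_inv(s, d)

def extract(stego_row, n_bits):
    s, _ = haar_int_fwd(stego_row)
    return s[:n_bits] & 1

rng = np.random.default_rng(0)
cover_row = rng.integers(0, 256, size=64)            # one image row, for brevity
bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8)).astype(np.int64)
stego_row = embed(cover_row, bits)
recovered = extract(stego_row, len(bits))
assert np.array_equal(recovered, bits)               # lossless thanks to the integer lifting
print(bytes(np.packbits(recovered.astype(np.uint8))))   # b'hi'
```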
Studies have indicated that distributed compressed sensing based (DCS-based) channel estimation can effectively decrease the length of the reference signals. In block transmission, a unique word (UW) can be used as both a cyclic prefix and a reference signal. However, DCS-based channel estimation requires diversity sequences instead of the UW. In this paper, we propose a novel method that employs a training sequence (TS) whose duration is slightly longer than the maximum delay spread time. Based on the proposed TS, the DCS approach performs perfectly in multipath channel estimation. Meanwhile, a cyclic prefix construct can be formed, which directly reduces the complexity of the frequency domain equalization (FDE). Simulation results demonstrate that, by using the method of simultaneous orthogonal matching pursuit (SOMP), the required channel overhead is reduced thanks to the proposed TS.
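A small numpy sketch of simultaneous orthogonal matching pursuit (SOMP), the joint-sparse recovery routine named above, is given below; the random measurement matrix, sparsity level, and noiseless multichannel model are illustrative assumptions rather than the paper's TS design.

```python
import numpy as np

def somp(Y, A, sparsity):
    """Simultaneous OMP: recover a jointly row-sparse X from Y = A @ X."""
    residual = Y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the atom most correlated with the residual across all channels.
        corr = np.abs(A.conj().T @ residual).sum(axis=1)
        corr[support] = 0
        support.append(int(corr.argmax()))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=Y.dtype)
    X[support] = X_s
    return X, sorted(support)

rng = np.random.default_rng(1)
n_obs, n_taps, n_chan, k = 32, 128, 4, 3    # measurements, delay bins, channels, sparsity
A = (rng.standard_normal((n_obs, n_taps)) + 1j * rng.standard_normal((n_obs, n_taps))) / np.sqrt(2 * n_obs)
true_support = rng.choice(n_taps, size=k, replace=False)
X_true = np.zeros((n_taps, n_chan), dtype=complex)
X_true[true_support] = rng.standard_normal((k, n_chan)) + 1j * rng.standard_normal((k, n_chan))
Y = A @ X_true
X_hat, est_support = somp(Y, A, k)
print("true taps:", sorted(true_support), "estimated:", est_support)
```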
In this study, a 2 kHz Tonpilz projector was designed using Terfenol-D and modeled in ATILA. From the modeling studies, it was determined that the radiating head mass exhibits a better transmitting current response (TCR) at a 136 mm diameter, where the resonance occurs at 2.4 kHz and the peak value of 118 dB re 1 μPa/A at 1 m occurs at 12 kHz. Also, a bolt at a 46 mm distance from the center of the head mass offers resonance at 2.4 kHz, and the peak value of 115.3 dB re 1 μPa/A at 1 m occurs at 11.5 kHz. This optimized design was fabricated and molded with polyurethane of 3 mm thickness. The prototype was tested at the Acoustic Test Facility (ATF) of the National Institute of Ocean Technology (NIOT) for its underwater performance. Based on the results, the fundamental resonance was determined to be 2.18 kHz, and the peak value of the TCR of 182 dB re 1 μPa/A at 1 m occurs at 14 kHz. The maximum value of the receiving sensitivity (RS) was found to be -190 dB re 1 V/μPa at 1 m at a frequency of 2.1 kHz.
In the present scenario, cloud computing services provide on-request access to a collection of resources available in remote systems that can be shared by numerous clients. Resources are self-administered; consequently, clients can adjust their usage according to their requirements. Resource usage is metered, and clients pay according to their utilization. In the literature, existing methods describe the usage of various hardware assets. Quality of Service (QoS) needs to be considered for ascertaining the schedule and the access of resources. Adhering to the security arrangement, any additional code is forbidden, to ensure that the usage of resources complies with QoS; thus, all monitoring must be done from the hypervisor. To overcome these issues, a Robust Resource Allocation and Utilization (RRAU) approach is developed for optimizing the management of cloud resources. The work hosts as many virtual assets as could be expected under the circumstances, and it enforces a controlled degree of QoS. The asset assignment calculation is heuristic and based on experimental evaluations; the RRAU approach with the J48 prediction model reduces Job Completion Time (JCT) by 4.75 s, Make Span (MS) by 6.25, and Monetary Cost (MC) by 4.25 for 15, 25, 35 and 45 resources, compared to the conventional methodologies in the cloud environment.
Recently, many researchers have tried to develop a robust, fast, and accurate algorithm for eye tracking and detecting pupil position in many applications, such as head-mounted eye tracking, gaze-based human-computer interaction, medical applications (such as for deaf and diabetic patients), and attention analysis. Many real-world conditions challenge the eye appearance, such as illumination, reflections, and occlusions, as do individual differences in eye physiology and other sources of noise, such as contact lenses or make-up. The present work introduces a robust pupil detection algorithm with higher accuracy than previous attempts for real-time analytics applications. The proposed circular Hough transform with morphing Canny edge detection for pupillometry (CHMCEP) algorithm can handle even blurred or noisy images by using different filtering methods in the pre-processing or start phase to remove the blur and noise, and finally a second filtering process before the circular Hough transform for the center fitting, to ensure better accuracy. The performance of the proposed CHMCEP algorithm was tested against recent pupil detection methods. Simulations and results show that the proposed CHMCEP algorithm achieved detection rates of 87.11, 78.54, 58, and 78 on the Świrski, ExCuSe, ElSe, and Labeled Pupils in the Wild (LPW) data sets, respectively. These results show that the proposed approach performs better than the other pupil detection methods by a large margin, providing exact and robust pupil positions on challenging ordinary eye pictures.
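As a simplified stand-in for the filtering-plus-circular-Hough pipeline that CHMCEP builds on, the following OpenCV sketch median- and Gaussian-filters an eye image and runs the gradient-based circular Hough transform (which applies Canny internally, with param1 as its upper threshold); the file name, filter sizes, and Hough parameters are illustrative assumptions, not the tuned values of the paper.

```python
import cv2
import numpy as np

# Hypothetical eye image; replace with a real grayscale frame.
eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

# Pre-filtering to suppress blur, noise, and specular reflections.
smoothed = cv2.medianBlur(eye, 5)
smoothed = cv2.GaussianBlur(smoothed, (7, 7), 0)

# Gradient-based circular Hough transform (Canny edges computed internally).
circles = cv2.HoughCircles(
    smoothed, cv2.HOUGH_GRADIENT, 1, eye.shape[0] // 2,
    param1=100, param2=20, minRadius=10, maxRadius=80)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest circle = pupil estimate
    print(f"pupil centre ~ ({x}, {y}), radius ~ {r}")
else:
    print("no pupil candidate found")
```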
This paper presents a new adaptive mutation approach for accelerating the convergence of immune algorithms (IAs). This method is adopted to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the IA. In this method, the mutation rate (pm) is adaptively varied depending on the fitness values of the solutions. Solutions of high fitness are protected, while solutions with sub-average fitness are totally disrupted. A solution to the problem of deciding the optimal value of pm is obtained. Experiments are carried out to compare the proposed approach with the traditional one on a set of optimization problems, namely: 1) an exponential multi-variable function; 2) a rapidly varying multimodal function; and 3) the design of a second-order 2-D narrow-band recursive LPF. Simulation results show that the proposed method efficiently improves the IA's performance and prevents it from getting stuck at a local optimum.
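A small Python sketch of a fitness-dependent mutation rule of this kind is shown below: sub-average solutions receive the maximum mutation rate, while above-average solutions have their rate shrunk towards a minimum as they approach the best fitness; the linear schedule and the rate bounds are illustrative assumptions, not the paper's derived optimum.

```python
import numpy as np

def adaptive_pm(fitness, pm_min=0.001, pm_max=0.5):
    """Per-solution mutation rate: protect high-fitness solutions, disrupt sub-average ones."""
    f_avg, f_max = fitness.mean(), fitness.max()
    pm = np.full_like(fitness, pm_max, dtype=float)          # sub-average: maximum disruption
    above = fitness >= f_avg
    if f_max > f_avg:
        # Linearly shrink pm towards pm_min as fitness approaches the best solution.
        pm[above] = pm_min + (pm_max - pm_min) * (f_max - fitness[above]) / (f_max - f_avg)
    else:
        pm[above] = pm_min                                    # degenerate case: all fitness equal
    return pm

def mutate(population, fitness, rng):
    """Bit-flip mutation of a binary population with per-individual adaptive rates."""
    pm = adaptive_pm(fitness)
    flips = rng.random(population.shape) < pm[:, None]
    return np.where(flips, 1 - population, population)

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(6, 16))
fit = pop.sum(axis=1).astype(float)          # toy fitness: number of ones (OneMax)
print(adaptive_pm(fit))
print(mutate(pop, fit, rng).shape)
```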
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with seizure epilepsy can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset, namely the University of Bonn dataset, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain, so that the Hadamard coefficients of the EEG signals are obtained via the FWHT. Moreover, the FWHT contributes to generating an efficient derivation of seizure EEG recordings from non-seizure EEG recordings. Also, a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are 99.00%. The results of the SVM classifier in terms of accuracy, sensitivity, and specificity reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier provides better performance than the SVM in the classification of EEG signals. Overall, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
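The FWHT front end is compact enough to show in full; the following numpy implementation computes the Hadamard coefficients of a toy power-of-two-length EEG segment. The segment itself and the unnormalized transform convention are illustrative assumptions, and the LSTM/SVM classifiers are not shown.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b   # butterfly stage
        h *= 2
    return x

rng = np.random.default_rng(0)
segment = rng.standard_normal(4096)            # stand-in for one EEG window
coeffs = fwht(segment)

# The transform is (up to scaling) its own inverse: FWHT(FWHT(x)) = len(x) * x.
assert np.allclose(fwht(coeffs) / len(segment), segment)
print("first eight Hadamard coefficients:", np.round(coeffs[:8], 3))
```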
Real-time disease prediction has emerged as a main focus of study in the field of computerized medicine. An intelligent disease identification framework can assist medical practitioners in diagnosing disease in a way that is reliable, consistent, and timely, successfully lowering mortality rates, particularly during endemics and pandemics. To limit a pandemic's rapid and widespread transmission, it is vital to quickly identify, confine, and treat affected individuals, and the need for auxiliary computer-aided diagnostic (CAD) systems has grown. Numerous recent studies have indicated that radiological pictures contain critical information regarding the COVID-19 virus. Utilizing advanced convolutional neural network (CNN) architectures in conjunction with radiological imaging makes it possible to provide rapid, accurate, and extremely useful susceptibility classifications. This research work proposes a methodology for real-time detection of COVID-19 infections caused by the coronavirus. The purpose of this study is to offer a two-way COVID-19 (2WCD) diagnosis prediction deep learning system that is built on transfer learning methodologies (TLM) and features customized fine-tuning on top of fully connected, layered, pre-trained CNN architectures. 2WCD applies modifications to pre-trained models for better performance. It is designed and implemented to improve the generalization ability of the classifier for binary and multi-class models. Along with the ability to differentiate COVID-19 and No-Patient in the binary-class model, and COVID-19, No-Patient, and Pneumonia in the multi-class model, our framework is augmented with a critical add-on for visually demonstrating infection in any tested radiological image by highlighting the affected region in the patient's lung in a recognizable color pattern. The proposed system is shown to be extremely robust and reliable for real-time COVID-19 diagnostic prediction, and it can also be used to forecast other lung-related disorders. As the system can assist medical practitioners in diagnosing the greatest number of patients in the shortest amount of time, it can also be used or published online to assist any less-experienced individual in obtaining an accurate immediate screening for their radiological images.
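A short PyTorch/torchvision sketch of the general transfer-learning recipe described above, a pre-trained backbone with frozen features and a new fully connected head fine-tuned for the binary COVID-19/No-Patient task, is given below; the choice of ResNet-50, the head sizes, and the optimizer settings are illustrative assumptions rather than the exact 2WCD configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone; freeze the convolutional features for fine-tuning.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head: 2 classes (COVID-19 / No-Patient); use 3 for the multi-class model.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 2),
)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of chest X-ray tensors.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.4f}")
```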
Black fungus is a rare and dangerous mycosis that usually affects the brain and lungs and can be life-threatening in diabetic cases. Recently, some COVID-19 survivors, especially those with co-morbid diseases, have been susceptible to black fungus. Therefore, recovered COVID-19 patients should seek medical support when they notice mucormycosis symptoms. This paper proposes a novel ensemble deep-learning model that includes three pre-trained models: ResNet(50), VGG(19), and Inception. Our approach is medically intuitive and efficient compared to traditional deep learning models. An image dataset was aggregated from various resources and divided into two classes: a black fungus class and a skin infection class. To the best of our knowledge, our study is the first concerned with building black fungus detection models based on deep learning algorithms. The proposed approach can significantly improve the performance of the classification task and increase the generalization ability of such a binary classification task. According to the reported results, it has empirically achieved a sensitivity value of 0.9907, a specificity value of 0.9938, a precision value of 0.9938, and a negative predictive value of 0.9907.
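A compact sketch of a three-member soft-voting ensemble in the spirit described above is given below, using torchvision's ResNet-50, VGG-19, and Inception v3 with replaced two-class heads; the exact backbone variants, the untrained heads, and the averaging rule are illustrative assumptions rather than the paper's trained ensemble.

```python
import torch
import torch.nn as nn
from torchvision import models

def binary_head(in_features):
    return nn.Linear(in_features, 2)   # black fungus vs. skin infection

# Three pre-trained backbones with new two-class heads (fine-tuned in practice).
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = binary_head(resnet.fc.in_features)
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
vgg.classifier[6] = binary_head(vgg.classifier[6].in_features)
inception = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
inception.fc = binary_head(inception.fc.in_features)

def ensemble_predict(x):
    """Average the softmax outputs of the three members (soft voting)."""
    with torch.no_grad():
        for m in (resnet, vgg, inception):
            m.eval()
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in (resnet, vgg, inception)])
    return probs.mean(dim=0).argmax(dim=1)

x = torch.randn(2, 3, 299, 299)   # Inception v3 expects 299x299 inputs
print(ensemble_predict(x))
```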
Wireless sensor networks are energy-constrained networks. Energy efficiency, to prolong the network lifetime, is a critical issue for wireless sensor network protocols. Clustering protocols are energy-efficient approaches to extend the lifetime of the network. Intra-cluster communication is the main driving factor for the energy efficiency of clustering protocols. Intra-cluster energy consumption depends upon the position of the cluster head in the cluster; a wrongly positioned cluster head makes the cluster more energy consuming. In this paper, a simple and efficient cluster head selection scheme is proposed, named Smart Cluster Head Selection (SCHS), which can be implemented with any distributed clustering approach. In SCHS, the area is divided into two parts: the border area and the inner area. Only inner-area nodes are eligible for the cluster head role. SCHS reduces the intra-cluster communication distance and hence improves the energy efficiency of the cluster. The simulation results show that SCHS offers a significant improvement over LEACH in terms of network lifetime and data units gathered at the base station.
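A small Python sketch of the SCHS eligibility rule is given below: nodes lying in the border strip of the field are excluded from the cluster-head role, and a LEACH-style random election is run over the inner-area nodes only; the field size, border width, and election probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
field, border = 100.0, 15.0          # field side length and border-strip width (metres)
n_nodes, p_ch = 100, 0.1             # node count and desired cluster-head fraction

xy = rng.random((n_nodes, 2)) * field

# SCHS rule: only nodes inside the inner area are eligible to become cluster heads.
inner = np.all((xy > border) & (xy < field - border), axis=1)

# LEACH-style random election, restricted to eligible (inner-area) nodes.
heads = inner & (rng.random(n_nodes) < p_ch)
if not heads.any():
    heads[np.flatnonzero(inner)[0]] = True   # guarantee at least one head for the demo

# Every node joins the nearest elected head (intra-cluster distance to be minimised).
head_idx = np.flatnonzero(heads)
d = np.linalg.norm(xy[:, None, :] - xy[head_idx][None, :, :], axis=2)

print(f"{inner.sum()} eligible inner-area nodes, {heads.sum()} cluster heads elected")
print("cluster sizes:", np.bincount(d.argmin(axis=1), minlength=len(head_idx)))
print("mean intra-cluster distance:", round(d.min(axis=1).mean(), 2))
```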