Side lobe level (SLL) reduction of antenna arrays significantly enhances the signal-to-interference ratio and improves the quality of service (QoS) in recent and future wireless communication systems, from 5G up to 7G. Furthermore, it improves the array gain and directivity, increasing the detection range and angular resolution of radar systems. This study proposes two highly efficient SLL reduction techniques. These techniques are based on the hybridization of either the single-convolution or the double-convolution algorithm with the genetic algorithm (GA), yielding Conv/GA and DConv/GA, respectively. The convolution process determines the element excitations, while the GA optimizes the element spacing. For an M-element linear antenna array (LAA), the convolution of the excitation coefficient vector with itself provides a new vector of excitations of length N = 2M − 1. This new vector is divided into three different sets of excitations: the odd excitations, even excitations, and middle excitations, of lengths M, M − 1, and M, respectively. When the same element spacing as the original LAA is used, the odd and even excitations provide a much lower SLL than that of the LAA, but with a much wider half-power beamwidth (HPBW); the middle excitations give the same HPBW as the original LAA with a relatively higher SLL. To mitigate the increased HPBW of the odd and even excitations, the element spacing is optimized using the GA. Thereby, the synthesized arrays have the same HPBW as the original LAA with a two-fold reduction in the SLL. Furthermore, for extreme SLL reduction, the DConv/GA is introduced. In this technique, the same procedure as the aforementioned Conv/GA technique is performed on the resultant even and odd excitation vectors. It provides a relatively wider HPBW than the original LAA with about a four-fold reduction in the SLL.
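As a hedged illustration of the convolution step, the sketch below forms the self-convolved excitation vector with NumPy and splits it into the three sets. The lengths M, M − 1, and M follow directly from the interleaved indices; the exact centring convention for the middle set is our assumption (shown here for odd M).

```python
import numpy as np

def conv_excitations(w):
    """Self-convolve an excitation vector and split the result.

    For a length-M vector w, np.convolve(w, w) has length N = 2M - 1.
    Odd-indexed entries give M coefficients, even-indexed entries give
    M - 1, and (for odd M) the central M entries form the middle set.
    """
    c = np.convolve(w, w)                      # length 2M - 1
    odd = c[0::2]                              # M coefficients
    even = c[1::2]                             # M - 1 coefficients
    M = len(w)
    mid = c[(M - 1) // 2 : (M - 1) // 2 + M]   # centring convention assumed
    return odd, even, mid

# Example: uniform 7-element array
odd, even, mid = conv_excitations(np.ones(7))
print(len(odd), len(even), len(mid))           # 7 6 7
```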
This paper provides a comprehensive bibliometric exposition on deepfake research, exploring the intersection of artificial intelligence and deepfakes as well as international collaborations, prominent researchers, organizations, institutions, publications, and key themes. We performed a search on the Web of Science (WoS) database, focusing on Artificial Intelligence and Deepfakes, and filtered the results across 21 research areas, yielding 1412 articles. Using the VOSviewer visualization tool, we analyzed this WoS data through keyword co-occurrence graphs, emphasizing four prominent research themes. Going beyond existing bibliometric papers on deepfakes, this paper identifies and discusses some of the highly cited papers within these themes: deepfake detection, feature extraction, face recognition, and forensics. The discussion highlights key challenges and advancements in deepfake research. Furthermore, this paper discusses pressing issues surrounding deepfakes such as security, regulation, and datasets. We also provide an analysis of another exhaustive search, on the Scopus database, focusing solely on Deepfakes (while not excluding AI), which reveals deep learning as the predominant keyword, underscoring AI's central role in deepfake research. This comprehensive analysis, encompassing over 500 keywords from 8790 articles, uncovered a wide range of methods, implications, applications, concerns, requirements, challenges, models, tools, datasets, and modalities related to deepfakes. Finally, recommendations for policymakers, researchers, and other stakeholders are also provided.
Recently, a new worldwide race has emerged to achieve a breakthrough in designing and deploying massive ultra-dense low-Earth-orbit (LEO) satellite constellation (SatCon) networks, with the vision of providing ubiquitous Internet coverage from space. Several players have started the deployment phase at different scales. However, the implementation is in its infancy, and many investigations are needed. This work provides an overview of the state-of-the-art architectures, orbital patterns, top players, and potential applications of SatCon networks. Moreover, we discuss new open research directions and challenges for improving network performance. Finally, a case study highlights the benefits of integrating SatCon networks and non-orthogonal multiple access (NOMA) technologies for improving the achievable capacity of satellite end-users.
Visual question answering (VQA) is a multimodal task involving a deep understanding of the image scene and the question's meaning, and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images, in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks, namely ResNet-152 and Gated Recurrent Units (GRU), to semantically represent the given image and question in a fine-grained manner; (2) studying the role of the utilized multimodal bilinear pooling fusion technique in the trade-off between model complexity and overall model performance. Some fusion techniques significantly increase model complexity, which seriously limits their applicability to VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques, in terms of their ability to reduce model complexity and improve model performance for this class of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques improve the VQA model's performance, reaching a best accuracy of 89.25%. Further, experiments have proven that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique shows the best balance between model complexity and performance for VQA systems designed to answer yes/no questions.
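To make the fusion step concrete, here is a minimal PyTorch sketch of one common bilinear pooling variant: low-rank (factorized) bilinear fusion with power and L2 normalisation. It is not the paper's MLPB implementation; the feature dimensions, factor count k, and the two-way classifier head are illustrative assumptions (2048 matches a ResNet-152 feature, the GRU width is assumed).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedBilinearFusion(nn.Module):
    """Low-rank bilinear pooling sketch: project both modalities into a
    shared space, fuse with an element-wise product, sum-pool over k
    factors, then normalise before a yes/no classifier."""
    def __init__(self, img_dim=2048, q_dim=1024, joint_dim=512, k=4):
        super().__init__()
        self.k = k
        self.img_proj = nn.Linear(img_dim, joint_dim * k)
        self.q_proj = nn.Linear(q_dim, joint_dim * k)
        self.classifier = nn.Linear(joint_dim, 2)    # yes / no

    def forward(self, img_feat, q_feat):
        fused = self.img_proj(img_feat) * self.q_proj(q_feat)
        fused = fused.view(fused.size(0), -1, self.k).sum(dim=2)    # sum pooling
        fused = torch.sign(fused) * torch.sqrt(fused.abs() + 1e-8)  # power norm
        return self.classifier(F.normalize(fused, dim=1))           # L2 norm

logits = FactorizedBilinearFusion()(torch.randn(2, 2048), torch.randn(2, 1024))
print(logits.shape)  # torch.Size([2, 2])
```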
Road Side Units (RSUs) are an essential component of vehicular communication, with the objective of improving safety and mobility in road transportation. RSUs are generally deployed at the roadside, and more specifically at intersections, in order to collect traffic information from the vehicles and disseminate alarms and messages in emergency situations to the neighboring vehicles cooperating with the network. However, the development of a predominant RSU placement algorithm that ensures competent communication in VANETs is a challenging issue due to the hindrance of obstacles such as water bodies, trees, and buildings. In this paper, Ruppert's Delaunay Triangulation Refinement Scheme (RDTRS) for optimal RSU placement is proposed for accurately estimating the optimal number of RSUs, with the possibility of enhancing the area of coverage during data communication. The RDTRS considers a comprehensive set of factors (global coverage, intersection popularity, vehicle density, and the obstacles present in the map) for optimal RSU placement, which is the core improvement over existing optimal RSU placement strategies. It deploys the requisite RSUs with the essential transmission range for maximal coverage of the convex map, such that each position of the map is effectively covered by at least one RSU in the presence of obstacles. Simulation experiments on the proposed RDTRS are conducted with complex road-traffic environments. The results confirm its predominance in reducing the end-to-end delay by 21.32% and packet loss by 9.38%, with an improved packet delivery rate of 10.68%, compared to the benchmarked schemes.
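As a hedged sketch of the geometric core (a simplification, not the full RDTRS, which additionally refines triangle quality as in Ruppert's algorithm), the following uses SciPy's Delaunay triangulation of intersection points and flags the circumcenter of any triangle whose circumradius exceeds the RSU transmission range as a candidate site for an additional RSU. The range value and the gap criterion are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def rsu_candidates(intersections, rsu_range):
    """Candidate RSU sites from a Delaunay triangulation of road
    intersections: a triangle whose circumradius exceeds the RSU range
    marks a coverage gap, and its circumcenter becomes a candidate."""
    tri = Delaunay(intersections)
    candidates = []
    for simplex in tri.simplices:
        a, b, c = intersections[simplex]
        # circumcenter via the standard perpendicular-bisector formula
        d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        if abs(d) < 1e-12:           # skip degenerate (collinear) triangles
            continue
        ux = ((a@a)*(b[1]-c[1]) + (b@b)*(c[1]-a[1]) + (c@c)*(a[1]-b[1])) / d
        uy = ((a@a)*(c[0]-b[0]) + (b@b)*(a[0]-c[0]) + (c@c)*(b[0]-a[0])) / d
        center = np.array([ux, uy])
        if np.linalg.norm(center - a) > rsu_range:   # circumradius too large
            candidates.append(center)
    return np.array(candidates)

# Toy map: 25 intersections in a 1 km x 1 km area, 200 m RSU range
sites = rsu_candidates(np.random.default_rng(2).random((25, 2)) * 1000, 200.0)
```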
Authentication of digital images has attracted much attention in the digital revolution. Digital image authentication can be verified with image watermarking and image encryption schemes. These schemes are widely used to protect images against forgery attacks, and they are useful for protecting copyright and rightful ownership. Depending on the desired applications, several image encryption and watermarking schemes have been proposed to address this need. This framework presents a new scheme that combines a Walsh Hadamard Transform (WHT)-based image watermarking scheme with an image encryption scheme based on Double Random Phase Encoding (DRPE). First, on the sender side, the secret medical image is encrypted using DRPE. Then the encrypted image is watermarked based on the WHT. The combination of watermarking and encryption increases the security and robustness of image transmission. The performance of the proposed scheme is evaluated using the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Normalized Cross-Correlation (NC), and Feature Similarity Index (FSIM).
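The DRPE stage can be sketched compactly in NumPy: a random phase mask in the spatial domain, a second mask in the Fourier domain, and an inverse transform; decryption applies the conjugate masks in reverse. This is the textbook DRPE operation under the assumption of unit-magnitude masks reproduced from a shared seed, not the paper's full watermark-plus-encryption pipeline.

```python
import numpy as np

def drpe_encrypt(img, seed=0):
    """Classical DRPE: spatial-domain phase mask, FFT, Fourier-domain
    phase mask, inverse FFT.  The two masks are the secret keys."""
    rng = np.random.default_rng(seed)
    m1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
    m2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask
    return np.fft.ifft2(np.fft.fft2(img.astype(float) * m1) * m2)

def drpe_decrypt(cipher, seed=0):
    rng = np.random.default_rng(seed)                 # regenerate both keys
    m1 = np.exp(2j * np.pi * rng.random(cipher.shape))
    m2 = np.exp(2j * np.pi * rng.random(cipher.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))

img = np.random.default_rng(1).integers(0, 256, (64, 64))
restored = drpe_decrypt(drpe_encrypt(img, seed=7), seed=7)
assert np.allclose(restored, img)                     # lossless round trip
```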
Technological advancement in the field of transportation and communication has been happening at a rapid pace in the past few decades. As the demand for high-speed transportation increases, the need for an improved seamless communication system to handle higher data traffic in a highly mobile environment becomes imperative. This paper proposes a novel scheme to enhance the quality of service in the high-speed railway (HSR) communication environment using the concept of torch nodes (TNs) and adaptive measurement aggregation (AMA). The system was modeled using an object-oriented discrete event simulator, and the performance was analyzed against the existing single-antenna scheme. The simulation results show that the proposed scheme, with its minimal implementation overhead, can efficiently perform seamless handover with reduced handover failure and communication interruption probability.
In the near future, billions of devices are expected to be interconnected with each other. How to connect so many devices becomes a big issue. Machine-to-Machine (M2M) communications serve as the fundamental underlying technology to support such Internet of Things (IoT) applications. The characteristics and service requirements of machine-type communication devices (MTCDs) are totally different from those of existing devices. Existing network technologies, ranging from personal area networks to wide area networks, are not well suited for M2M communications. Therefore, we first investigate the characteristics and service requirements of MTCDs. Recent advances in both cellular and capillary M2M communications are then discussed. Finally, we list some open issues and future research directions.
As the field of autonomous driving evolves, real-time semantic segmentation has become a crucial part of computer vision tasks. However, most existing methods use lightweight convolution to reduce the computational effort, resulting in lower accuracy. To address this problem, we construct TBANet, a network with an encoder-decoder structure for efficient feature extraction. In the encoder part, the TBA module is designed to extract details and the ETBA module is used to learn semantic representations in a high-dimensional space. In the decoder part, we design a combination of multiple upsampling methods to aggregate features with less computational overhead. We validate the efficiency of TBANet on the Cityscapes dataset. It achieves 75.1% mean Intersection over Union (mIoU) with only 2.07 million parameters and can reach 90.3 Frames Per Second (FPS).
As 5th Generation (5G) and Beyond 5G (B5G) networks become increasingly prevalent, ensuring not only network security but also the security and reliability of the applications, the so-called network applications, becomes of paramount importance. This paper introduces a novel integrated model architecture, combining a network application validation framework with an AI-driven reactive system to enhance security in real time. The proposed model leverages machine learning (ML) and artificial intelligence (AI) to dynamically monitor and respond to security threats, effectively mitigating potential risks before they impact the network infrastructure. This dual approach not only validates the functionality and performance of network applications before their real deployment but also enhances the network's ability to adapt and respond to threats as they arise. The implementation of this model, in the shape of an architecture deployed at two distinct sites, demonstrates its practical viability and effectiveness. Integrating application validation with proactive threat detection and response, the proposed model addresses critical security challenges unique to 5G infrastructures. This paper details the design, implementation, and evaluation of this model and architecture, illustrating its potential to significantly improve network security management in 5G environments. Our findings highlight the architecture's capability to ensure both the operational integrity of network applications and the security of the underlying infrastructure, presenting a significant advancement in network security.
Facial beauty analysis is an important topic in human society. It may be used as guidance for face beautification applications such as cosmetic surgery. Deep neural networks (DNNs) have recently been adopted for facial beauty analysis and have achieved remarkable performance. However, most existing DNN-based models regard facial beauty analysis as a normal classification task. They ignore important prior knowledge from traditional machine learning models, which illustrates the significant contribution of geometric features to facial beauty analysis; specifically, landmarks of the whole face and of facial organs are introduced to extract geometric features for making the decision. Inspired by this, we introduce a novel dual-branch network for facial beauty analysis: one branch takes the Swin Transformer as the backbone to model the full face and global patterns, and the other branch focuses on the masked facial organs with a residual network to model the local patterns of certain facial parts. Additionally, the designed multi-scale feature fusion module can further facilitate our network to learn complementary semantic information between the two branches. In model optimisation, we propose a hybrid loss function in which geometric regularisation is introduced by regressing the facial landmarks; it forces the extracted features to convey facial geometric information. Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms state-of-the-art convolutional neural network models, which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network. To the best of our knowledge, this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
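A minimal sketch of such a hybrid objective, assuming a classification head and a landmark-regression head on the shared features; the weighting factor lam, the class count, and the 68-landmark convention are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, labels, pred_landmarks, gt_landmarks, lam=0.5):
    """Beauty-class cross-entropy plus a geometric regularisation term
    that regresses facial landmarks, forcing the shared features to
    encode facial geometry."""
    cls_loss = F.cross_entropy(logits, labels)
    geo_loss = F.mse_loss(pred_landmarks, gt_landmarks)
    return cls_loss + lam * geo_loss

# Shapes: 5 beauty classes and 68 (x, y) landmarks per face (assumed).
loss = hybrid_loss(torch.randn(4, 5), torch.randint(0, 5, (4,)),
                   torch.randn(4, 68, 2), torch.randn(4, 68, 2))
```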
Software Defined Networking (SDN) is programmable through the separation of forwarding and control and the centralization of the controller. The controller plays the role of the 'brain' that provides the intelligent part of SDN technology. Various versions of SDN controllers exist in response to the diverse demands and functions expected of them. Several SDN controllers are available in the open market besides a large number of commercial controllers; some are developed to meet carrier-grade service levels, and one of the recent trends in open-source SDN controllers is the Open Network Operating System (ONOS). This paper presents a comparative study between open-source SDN controllers: the Network Controller Platform (NOX), the Python-based Network Controller (POX), the component-based SDN framework Ryu, the Java-based OpenFlow controller Floodlight, OpenDayLight (ODL), and ONOS. The discussion is further extended to the ONOS architecture as well as the evolution of ONOS controllers. This article reviews use cases based on ONOS controllers in several application deployments. Moreover, the opportunities and challenges of open-source SDN controllers are discussed, exploring carrier-grade ONOS for future real-world deployments and ONOS's unique features, and identifying the suitable choice of SDN controller for service providers. In addition, we attempt to answer several critical questions concerning the implications of the open-source nature of SDN controllers for vendor lock-in, interoperability, and standards compliance. Similarly, real-world use cases of organizations using open-source SDN are highlighted, along with how the open-source community contributes to the development of SDN controllers. Furthermore, the challenges faced by open-source projects and the considerations when choosing an open-source SDN controller are underscored. The role of Artificial Intelligence (AI) and Machine Learning (ML) in the evolution of open-source SDN controllers is then indicated in light of recent research. In addition, we present the challenges and limitations associated with deploying open-source SDN controllers in production networks and how they can be mitigated, and finally how open-source SDN controllers handle network security and ensure that network configurations and policies are robust and resilient. Potential opportunities and challenges for future open SDN deployment are outlined to conclude the article.
In research on video-based violent behavior detection, the motion information in the video is vital for violence detection. How to highlight motion information in videos and integrate spatiotemporal information is an urgent problem that needs to be solved in violence detection. In this paper, we propose a deep learning architecture that integrates shallow features into deep features to strengthen the network's ability to express motion information at a deep level. To enhance the weight of motion information in the network, we design a downsampling module to extract shallow features, which are fused with the deep features extracted by MobileNet's blocks. Furthermore, we construct a channel attention module and introduce a Convolutional Long Short-Term Memory (ConvLSTM) module. These two modules redistribute network attention: the channel attention module focuses on channel-level information, and the ConvLSTM module emphasizes temporal aspects. Finally, we employ 3D convolution and global pooling to compress the feature maps, which are fed into fully connected layers to perform violence detection. Experiments are conducted on three publicly available standard datasets, achieving an accuracy of 91% on the surveillance video dataset RWF2000, 97.5% on the Hockey Fight dataset, and 100% on the Movies dataset. Overall, the proposed model demonstrates satisfactory performance in violence detection.
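As a hedged illustration of the channel attention idea, below is a squeeze-and-excitation style module in PyTorch: global average pooling summarises each channel, a bottleneck MLP produces per-channel weights, and the feature map is rescaled channel-wise. It is a representative implementation, not necessarily the paper's exact module; the reduction ratio is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze (global average pooling),
    excitation (bottleneck MLP with sigmoid), channel-wise rescaling."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: (B, C, 1, 1)
        return x * w                                # rescale each channel

out = ChannelAttention(64)(torch.randn(2, 64, 28, 28))
print(out.shape)  # torch.Size([2, 64, 28, 28])
```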
The rapid development of data communication in the modern era demands the secure exchange of information. Steganography is an established method for hiding secret data from unauthorized access in a cover object, in such a way that it is invisible to human eyes. The cover object can be image, text, audio, or video. This paper proposes a secure steganography algorithm that hides a bitstream of secret text in the least significant bits (LSBs) of the approximation coefficients of the integer wavelet transform (IWT) of grayscale images, as well as of each component of color images, to form stego-images. The embedding and extracting phases of the proposed steganography algorithms are performed using the MATLAB software. Invisibility, payload capacity, and security in terms of peak signal-to-noise ratio (PSNR) and robustness are the key challenges in steganography. The statistical distortion between the cover images and the stego-images is measured using the mean square error (MSE) and the PSNR, while the degree of closeness between them is evaluated using the normalized cross-correlation (NCC). The experimental results show that the proposed algorithms can hide the secret text with a large payload capacity, a high level of security, and high invisibility. Furthermore, the proposed technique is computationally efficient, and better results for both PSNR and NCC are achieved compared with previous algorithms.
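Although the paper's implementation is in MATLAB, the core embedding step can be sketched in a few lines of NumPy. The sketch below uses a one-level, one-dimensional integer Haar lifting transform as a stand-in for the paper's 2-D IWT (an assumption made for brevity): secret bits overwrite the LSBs of the approximation coefficients, the inverse lifting yields the stego signal, and the bits are recovered exactly because integer lifting is perfectly invertible.

```python
import numpy as np

def haar_iwt(x):
    """One-level 1-D integer Haar lifting: s = floor((x0+x1)/2), d = x0-x1."""
    s = (x[0::2] + x[1::2]) // 2
    d = x[0::2] - x[1::2]
    return s, d

def haar_iiwt(s, d):
    """Exact inverse of the integer lifting step."""
    x0 = s + (d + 1) // 2
    x1 = x0 - d
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = x0, x1
    return x

def embed(cover_row, bits):
    s, d = haar_iwt(cover_row.astype(np.int64))
    s[: len(bits)] = (s[: len(bits)] & ~1) | bits   # overwrite LSBs
    return haar_iiwt(s, d)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8))  # 16 secret bits
row = np.random.default_rng(1).integers(0, 256, 64)
stego = embed(row, bits)
s, _ = haar_iwt(stego)
assert np.array_equal(s[:16] & 1, bits)                     # exact recovery
print(f"PSNR: {psnr(row, stego):.1f} dB")
```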
Studies have indicated that distributed compressed sensing based (DCS-based) channel estimation can effectively decrease the length of the reference signals. In block transmission, a unique word (UW) can be used as a cyclic prefix and reference signal. However, DCS-based channel estimation requires diversity sequences instead of a UW. In this paper, we propose a novel method that employs a training sequence (TS) whose duration is slightly longer than the maximum delay spread time. Based on the proposed TS, the DCS approach performs well in multipath channel estimation. Meanwhile, a cyclic prefix structure can be formed, which directly reduces the complexity of the frequency domain equalization (FDE). Simulation results demonstrate that, by using the method of simultaneous orthogonal matching pursuit (SOMP), the required channel overhead is reduced thanks to the proposed TS.
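For reference, here is a minimal NumPy sketch of the SOMP recovery step used in joint-sparse estimation: at each iteration the atom with the largest correlation summed across all measurement channels is selected, followed by a joint least-squares update. The matrix sizes and toy support are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

def somp(A, Y, sparsity):
    """Simultaneous Orthogonal Matching Pursuit.

    A: (m, n) measurement matrix; Y: (m, L) measurements sharing a
    common sparse support; returns the estimated support and coefficients.
    """
    residual, support = Y.copy(), []
    for _ in range(sparsity):
        # atom most correlated with the residual, summed over all channels
        corr = np.abs(A.T @ residual).sum(axis=1)
        corr[support] = 0                        # never pick an atom twice
        support.append(int(np.argmax(corr)))
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X         # joint least-squares update
    return support, X

# Toy joint-sparse recovery: 3 channels sharing a 2-atom support
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
X_true = np.zeros((50, 3))
X_true[[7, 33]] = rng.standard_normal((2, 3))
support, X = somp(A, A @ X_true, sparsity=2)
print(sorted(support))  # expected support: [7, 33]
```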
In this study, a 2 kHz Tonpilz projector was designed using Terfenol-D and modeled in ATILA. From the modeling studies, it was determined that the radiating head mass exhibits a better transmitting current response (TCR) at a 136 mm diameter, where the resonance occurs at 2.4 kHz and the peak value of 118 dB re 1 μPa/A at 1 m occurs at 12 kHz. Also, a bolt at a 46 mm distance from the center of the head mass offers resonance at 2.4 kHz, and the peak value of 115.3 dB re 1 μPa/A at 1 m occurs at 11.5 kHz. This optimized design was fabricated and molded with polyurethane of 3 mm thickness. The prototype was tested at the Acoustic Test Facility (ATF) of the National Institute of Ocean Technology (NIOT) for its underwater performance. Based on the results, the fundamental resonance was determined to be 2.18 kHz, and the peak TCR value of 182 dB re 1 μPa/A at 1 m occurs at 14 kHz. The maximum value of the receiving sensitivity (RS) was found to be -190 dB re 1 V/μPa at 1 m at a frequency of 2.1 kHz.
In the present scenario, cloud computing services provide on-request access to a collection of resources available in remote systems that can be shared by numerous clients. Resources are self-administered; consequently, clients can adjust their usage according to their requirements. Resource usage is metered, and clients pay according to their utilization. In the literature, existing methods describe the usage of various hardware assets. Quality of Service (QoS) needs to be considered for ascertaining the schedule and the access of resources. To adhere to the security arrangement, any additional code is forbidden, so that resource usage complies with QoS; thus, all monitoring must be done from the hypervisor. To overcome these issues, the Robust Resource Allocation and Utilization (RRAU) approach is developed for optimizing the management of cloud resources. The approach hosts as many virtual assets as could be expected under the circumstances, and it enforces a controlled degree of QoS. The asset-assignment calculation is heuristic and based on experimental evaluations. The RRAU approach with the J48 prediction model reduces Job Completion Time (JCT) by 4.75 s, Makespan (MS) by 6.25, and Monetary Cost (MC) by 4.25 for 15, 25, 35, and 45 resources, compared to the conventional methodologies in a cloud environment.
Recently, many researchers have tried to develop a robust, fast, and accurate algorithm for eye tracking and detecting the pupil position in many applications, such as head-mounted eye tracking, gaze-based human-computer interaction, medical applications (for example, for deaf and diabetic patients), and attention analysis. Many real-world conditions challenge the eye appearance, such as illumination, reflections, and occlusions, as do individual differences in eye physiology and other sources of noise such as contact lenses or make-up. The present work introduces a robust pupil detection algorithm with higher accuracy than previous attempts, for real-time analytics applications. The proposed circular Hough transform with morphing Canny edge detection for Pupillometry (CHMCEP) algorithm can detect even blurred or noisy images by using different filtering methods in the pre-processing (start) phase to remove blur and noise, and a second filtering step before the circular Hough transform for the center fitting, to ensure better accuracy. The performance of the proposed CHMCEP algorithm was tested against recent pupil detection methods. Simulations and results show that the proposed CHMCEP algorithm achieved detection rates of 87.11, 78.54, 58, and 78 on the Świrski, ExCuSe, ElSe, and Labeled Pupils in the Wild (LPW) data sets, respectively. These results show that the proposed approach performs better than the other pupil detection methods by a large margin, providing exact and robust pupil positions on challenging ordinary eye pictures.
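As a hedged sketch of the detection pipeline (not the full CHMCEP algorithm, which adds its own filtering and morphing stages), the snippet below denoises an eye image and applies OpenCV's circular Hough transform, which runs Canny edge detection internally; all thresholds and radius bounds are illustrative assumptions that would need tuning per dataset.

```python
import cv2
import numpy as np

def detect_pupil(gray):
    """Denoise a grayscale eye image, then fit circles with the
    circular Hough transform and return the strongest candidate."""
    blurred = cv2.medianBlur(gray, 5)                 # suppress noise/blur
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
        param1=100,    # upper Canny threshold used internally
        param2=20,     # accumulator threshold: lower = more candidates
        minRadius=10, maxRadius=60)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)     # strongest circle
    return (x, y), r

# Usage (hypothetical file name):
# result = detect_pupil(cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE))
```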
This paper presents a new adaptive mutation approach for speeding up the convergence of immune algorithms (IAs). This method is adopted to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the IA. In this method, the mutation rate (pm) is adaptively varied depending on the fitness values of the solutions: solutions of high fitness are protected, while solutions with sub-average fitness are totally disrupted. A solution to the problem of deciding the optimal value of pm is obtained. Experiments are carried out to compare the proposed approach with the traditional one on a set of optimization problems, namely: 1) an exponential multi-variable function; 2) a rapidly varying multimodal function; and 3) the design of a second-order 2-D narrow-band recursive LPF. Simulation results show that the proposed method efficiently improves the IA's performance and prevents it from getting stuck at a local optimum.
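A minimal NumPy sketch of such a fitness-dependent mutation schedule (the linear decay form and the rate bounds are illustrative assumptions; the paper derives its own rule): sub-average solutions receive the maximum mutation rate, while the rate decays toward a small floor as fitness approaches the population best.

```python
import numpy as np

def adaptive_pm(fitness, pm_max=0.5, pm_min=0.005):
    """Per-solution mutation rates: full disruption at or below the
    population's average fitness, linear decay to pm_min at the best
    fitness, so good solutions are protected and poor ones explored."""
    f_avg, f_max = fitness.mean(), fitness.max()
    pm = np.full_like(fitness, pm_max, dtype=float)
    above = fitness > f_avg
    if f_max > f_avg:                           # avoid division by zero
        scale = (f_max - fitness[above]) / (f_max - f_avg)
        pm[above] = pm_min + (pm_max - pm_min) * scale
    return pm

print(adaptive_pm(np.array([1.0, 4.0, 9.0, 10.0])))
# [0.5  0.5  0.129 0.005]: unfit solutions mutate heavily, the best barely
```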
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers for seizure and non-seizure EEG signals: long short-term memory (LSTM) and support vector machine (SVM). These classifiers are applied to a public dataset, namely the University of Bonn dataset, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain; thus, Hadamard coefficients of the EEG signals are obtained via the FWHT. Moreover, the FWHT helps to efficiently separate seizure EEG recordings from non-seizure EEG recordings. Also, a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are each 99.00%. The results of the SVM classifier in terms of accuracy, sensitivity, and specificity reach 91%, 93.52%, and 91.3%, respectively. The computational times consumed for training the LSTM and SVM are 2000 and 2500 s, respectively. The results show that the LSTM classifier provides better performance than the SVM in the classification of EEG signals. Overall, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
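For reference, a self-contained NumPy sketch of the fast Walsh-Hadamard transform used as the feature extractor: the standard unnormalised O(n log n) butterfly. The power-of-two segment length in the example is an assumption; EEG segments would be truncated or zero-padded accordingly before the coefficients feed the LSTM/SVM classifiers described above.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalised) via the butterfly
    recursion; the input length must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

# A 4096-sample EEG segment maps to 4096 Hadamard coefficients.
segment = np.random.default_rng(0).standard_normal(4096)
coeffs = fwht(segment)
print(coeffs.shape)  # (4096,)
```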