In today's digital era, the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One of the pressing challenges stemming from this advancement is the increasing difficulty in discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in the field of contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, heavily reliant on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. The advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become an exceptionally challenging task. This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
Funding: supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1I1A3049788).
Recommendation systems (RSs) are crucial in personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges like the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only substantially enhances user privacy, a marked improvement over conventional models, but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RSs model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
Funding: funded by Soonchunhyang University (Grant Number 20241422) and BK21 FOUR (Fostering Outstanding Universities for Research, Grant Number 5199990914048).
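Several of the ranking metrics listed above have compact closed forms. As a hedged illustration (not the authors' evaluation code), the sketch below computes NDCG@k for a single ranked recommendation list with NumPy; the toy relevance scores are assumptions made for the example.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """NDCG for one ranked list; `relevances` are graded relevance scores
    in the order the model ranked the items."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.size == 0:
        return 0.0
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float(np.sum(rel * discounts))
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float(np.sum(ideal * discounts))
    return dcg / idcg if idcg > 0 else 0.0

# A ranked list where the 1st and 4th recommended items were relevant.
print(round(ndcg_at_k([1, 0, 0, 1, 0], k=5), 4))
```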
Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), which makes it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and data plane (DP). Likewise, heterogeneous 5th generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low latency communications (URLLC), must perform intelligent Quality-of-Service (QoS) Class Identifier (QCI) classification, while the CP entities suffer from the complexity of massive heterogeneous IoT (HIoT) applications. Moreover, the existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with the issues mentioned above, this paper presents software-defined mobile edge computing (SDMEC) combined with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time and resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the complementarity of SVM and SDMEC provides intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication for resource-constrained applications. Based on the E2E experimentation metrics, the proposed scheme shows remarkable outperformance over various strong reference methods on key performance indicator (KPI) QoS metrics, including communication reliability, latency, and communication throughput.
Funding: This work was funded by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2020R1I1A3066543), and supported by the Bio and Medical Technology Development Program of the NRF, funded by the Korean government (MSIT) (No. NRF-2019M3E5D1A02069073). In addition, this work was supported by the Soonchunhyang University Research Fund.
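As a purely illustrative sketch of the classification step described above (not the paper's dataset or implementation), the snippet below trains a linear SVM that maps simple per-flow statistics to a QCI-style traffic class; the feature layout and labels are assumptions made for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical flows: [packet_rate, mean_packet_size, required_latency_ms]
X = np.array([
    [50,   120,  10],   # URLLC-like flow
    [60,   100,  15],
    [5,    200, 300],   # mMTC-like flow
    [3,    180, 500],
    [800, 1400, 100],   # broadband-like flow
    [900, 1500, 120],
])
y = ["URLLC", "URLLC", "mMTC", "mMTC", "MBBC", "MBBC"]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# An SDN controller could map the predicted class to a priority resource queue.
print(clf.predict([[70, 110, 12]]))
```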
Federated learning (FL) activates distributed on-device computation techniques to improve model performance through the interaction of local model updates and global model distributions in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects that must be tackled. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are employed to enforce self-learning softwarization, optimize resource allocation policies, and guide computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate the expected long-term reward in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selections. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs to an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme identifies deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as the value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Funding: This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048) and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.
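A minimal sketch of the epsilon-greedy selection step mentioned above, with a plain NumPy function standing in for the trained DQN value approximator; the state encoding and action count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4   # e.g., joint (resource-allocation step, offloading target) choices
W = rng.normal(size=(3, N_ACTIONS))   # fixed stand-in weights for the "trained" DNN

def q_values(state):
    """Stand-in for the DNN value approximator: one Q-value per action."""
    return state @ W

def select_action(state, epsilon):
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))     # explore
    return int(np.argmax(q_values(state)))      # exploit: maximum Q-value

state = np.array([0.6, 0.2, 0.9])   # e.g., normalized CPU, bandwidth, queue load
print(select_action(state, epsilon=0.1))
```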
Ransomware, particularly crypto-ransomware, remains a significant cybersecurity challenge, encrypting victim data and demanding a ransom, often leaving the data irretrievable even if payment is made. This study proposes an early detection approach to mitigate such threats by identifying ransomware activity before the encryption process begins. It employs a two-tiered design: a signature-based method using hashing techniques to match known threats, and a dynamic behavior-based analysis leveraging Cuckoo Sandbox and machine learning algorithms. A critical feature is the integration of the most effective Application Programming Interface (API) call monitoring, which analyzes system-level interactions such as file encryption, key generation, and registry modifications. This enables the detection of both known and zero-day ransomware variants, overcoming the limitations of traditional methods. The proposed technique was evaluated using classifiers such as Random Forest, Support Vector Machine, and K-Nearest Neighbors, achieving a detection accuracy of 98% based on 26 key ransomware attributes with an 80:20 training-to-testing ratio and 10-fold cross-validation. By combining minimal feature sets with robust behavioral analysis, the proposed method outperforms existing solutions and addresses current challenges in ransomware detection, thereby enhancing cybersecurity resilience.
Funding: funded by the National University of Sciences and Technology (NUST) and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1IIA3049788).
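A hedged sketch of the evaluation protocol named above (80:20 split, 10-fold cross-validation, Random Forest); synthetic data stands in for the 26 ransomware attributes, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the 26-attribute behavioral dataset.
X, y = make_classification(n_samples=1000, n_features=26, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)               # 80:20 ratio

clf = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(clf, X_train, y_train, cv=10)  # 10-fold cross-validation
clf.fit(X_train, y_train)

print(f"CV accuracy: {scores.mean():.3f}, held-out accuracy: {clf.score(X_test, y_test):.3f}")
```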
Ransomware is malware that encrypts data without permission, demanding payment for access. Detecting ransomware on Android platforms is challenging due to evolving malicious techniques and diverse application behaviors. Traditional methods, such as static and dynamic analysis, suffer from polymorphism, code obfuscation, and high resource demands. This paper introduces a multi-stage approach to enhance behavioral analysis for Android ransomware detection, focusing on a reduced set of distinguishing features. The approach includes ransomware app collection, behavioral profile generation, dataset creation, feature identification, reduction, and classification. Experiments were conducted on approximately 3300 Android-based ransomware samples, despite the challenges posed by their evolving nature and complexity. The feature reduction strategy successfully reduced the features by 80%, with only a marginal loss of detection accuracy (0.59%). Different machine learning algorithms are employed for classification and achieve 96.71% detection accuracy. Additionally, 10-fold cross-validation demonstrated robustness, yielding an AUC-ROC of 99.3%. Importantly, latency and memory evaluations revealed that models using the reduced feature set achieved up to a 99% reduction in inference time and significant memory savings across classifiers. The proposed approach outperforms existing techniques by achieving high detection accuracy with a minimal feature set, making it suitable for deployment in resource-constrained environments. Future work may extend the datasets and include iOS-based ransomware applications.
Funding: supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3049788).
Recently, Network Functions Virtualization (NFV) has become a critical resource for optimizing capability utilization in the 5G/B5G era. NFV decomposes the network resource paradigm, demonstrating the efficient utilization of Network Functions (NFs) to enable configurable service priorities and resource demands. Telecommunications Service Providers (TSPs) face challenges in network utilization, as the vast amounts of data generated by the Internet of Things (IoT) overwhelm existing infrastructures. IoT applications, which generate massive volumes of diverse data and require real-time communication, contribute to bottlenecks and congestion. In this context, Multi-access Edge Computing (MEC) is employed to support resource- and priority-aware IoT applications by implementing Virtual Network Function (VNF) sequences within Service Function Chaining (SFC). This paper proposes the use of Deep Reinforcement Learning (DRL) combined with Graph Neural Networks (GNN) to enhance network processing, performance, and resource pooling capabilities. The GNN facilitates feature extraction through Message-Passing Neural Network (MPNN) mechanisms. Together with DRL, Deep Q-Networks (DQN) are utilized to dynamically allocate resources based on IoT network priorities and demands. Our focus is on minimizing delay times for VNF instance execution and ensuring effective resource placement and allocation in SFC deployments, offering flexibility to adapt to real-time changes in priority and workload. Simulation results demonstrate that our proposed scheme outperforms reference models in terms of reward, delay, delivery and service drop ratios, and average completion ratios, proving its potential for IoT applications.
Funding: supported by an Institute of Information & Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. RS-2022-00167197, Development of Intelligent 5G/6G Infrastructure Technology for the Smart City); in part by the National Research Foundation of Korea (NRF), Ministry of Education, through the Basic Science Research Program under Grant NRF-2020R1I1A3066543; in part by BK21 FOUR (Fostering Outstanding Universities for Research) under Grant 5199990914048; and in part by the Soonchunhyang University Research Fund.
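As a toy illustration of the message-passing feature extraction mentioned above (not the paper's GNN), the snippet below performs one MPNN-style update in which each node aggregates its neighbors' features and transforms the result; the graph, features, and weights are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1],        # adjacency of a tiny 3-node service/substrate graph
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))     # node features: 3 nodes, 4 dimensions
W = rng.normal(size=(4, 4))     # update weights (would be learned in practice)

messages = A @ H                        # each node sums its neighbors' features
H_next = np.tanh((H + messages) @ W)    # simple update function
print(H_next.shape)                     # (3, 4): updated node embeddings
```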
Smart learning environments are considered vital sources and essential needs in modern digital education systems. With the rapid proliferation of smart and assistive technologies, smart learning processes have become quite convenient, comfortable, and financially affordable. This shift has led to the emergence of pervasive computing environments, where users' intelligent behavior is supported by smart gadgets; however, this is becoming more challenging due to the inconsistent behavior of Artificial Intelligence (AI) assistive technologies, stemming from networking issues, slow user responses to technologies, and limited computational resources. This paper presents a context-aware, predictive-reasoning-based formalism for smart learning environments that helps students manage their academic as well as extra-curricular activities autonomously with limited human intervention. The system consists of a three-tier architecture that autonomously acquires contextualized information from the environment, models the system using the Web Ontology Rule Language (OWL 2 RL) and Semantic Web Rule Language (SWRL), and performs reasoning to infer the desired goals whenever and wherever needed. For contextual reasoning, we develop a non-monotonic, rule-based formalism to reason over contextual information. The focus is on distributed problem solving, where context-aware agents exchange information using rule-based reasoning and specify constraints to accomplish desired goals. To formally model-check and simulate the system behavior, we model a smart learning environment case study in the UPPAAL model checker and verify desired properties of the model, such as safety, liveness, and robustness, to reflect the overall correctness of the system, achieving a minimum analysis time of 0.002 s with 34,712 KB of memory utilization.
Funding: supported by the National Research Foundation (NRF), Republic of Korea, under project BK21 FOUR (4299990213939).
Subjective visual vertical (SVV) and subjective visual horizontal (SVH) tests can be used to evaluate the perception of verticality and horizontality, respectively, and can aid the diagnosis of otolith dysfunction in clinical practice. In this study, screen versions of the SVV and SVH tests are implemented using virtual reality (VR) equipment; the proposed test method promotes a more immersive feeling for the subject while using a simple equipment configuration and possessing excellent mobility. To verify the performance of the proposed VR-based SVV and SVH tests, a reliable comparison was made between the traditional screen-based SVV and SVH tests and the proposed method, based on 30 healthy subjects. The average results of our experimental tests on the VR-based binocular SVV and SVH equipment were −0.15° ± 1.74 and 0.60° ± 1.18, respectively. The proposed VR-based method satisfies the normal tolerance for horizontal or vertical lines, i.e., a ±3° error, as defined in previous studies, and it can be used to replace existing test methods.
Funding: supported by the Soonchunhyang University Research Fund and a 2018 Ulsan University Hospital Research Grant (UUH-2018-12) (Grantee: JYP, http://www.uuh.ulsan.kr). The authors are grateful for their support.
Recently, to build smart factories, research has been conducted on fault diagnosis and defect detection based on the vibration and noise signals generated when a mechanical system is driven, using deep-learning technology, a field of artificial intelligence. Most of the related studies apply various audio-feature extraction techniques to one-dimensional raw data to extract sound-specific features and then classify the sound by using the derived spectral image as a training dataset. However, compared to numerical raw data, learning based on image data has the disadvantage that creating a training dataset is very time-consuming. Therefore, we devised a two-step data preprocessing method that efficiently detects machine anomalies in numerical raw data. In the first preprocessing step, the sound signal information is analyzed to extract features, and in the second preprocessing step, the data are filtered by applying the proposed algorithm. An efficient dataset was built for model learning through these two data preprocessing steps. Models trained on either dataset showed excellent training accuracy, but the time required to build the image dataset was 203 s compared to 39 s for the proposed dataset, roughly 5.2 times longer.
Funding: supported by a National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2021R1C1C1013133); funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048); and supported by the Soonchunhyang University Research Fund.
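As a hedged example of the kind of lightweight numerical features a first preprocessing step can extract from a raw one-dimensional sound signal (the paper's exact features may differ), the sketch below computes per-frame RMS energy and zero-crossing rate with NumPy.

```python
import numpy as np

def frame_features(signal, frame_len=1024, hop=512):
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))                   # frame energy
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)   # zero-crossing rate
        feats.append((rms, zcr))
    return np.array(feats)

rng = np.random.default_rng(0)
sig = rng.normal(scale=0.1, size=16000)   # stand-in for 1 s of machine sound at 16 kHz
print(frame_features(sig).shape)          # (n_frames, 2) numerical feature rows
```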
Edge intelligence brings the deployment of applied deep learning (DL) models to edge computing systems to alleviate congestion on the core backbone network. The setup of programmable software-defined networking (SDN) control and elastic virtual computing resources within network functions virtualization (NFV) cooperate to enhance the applicability of intelligent edge softwarization. To advance multi-dimensional model task offloading in edge networks with SDN/NFV-based control softwarization, this study proposes a DL mechanism that recommends the optimal edge node selection from the primary features of congestion windows, link delays, and allocatable bandwidth capacities. An adaptive partial task offloading policy uses the DL-based recommendation to adjust virtual resource placement, minimizing the completion time and termination drop ratio. The optimization problem of resource placement is tackled by a deep reinforcement learning (DRL)-based policy following the Markov decision process (MDP). The agent observes the state space and applies the value-maximizing action over the available computation resources and adjustable resource allocation steps. The reward formulation primarily considers the task-required computing resources and the action-applied allocation properties. With the defined resource determination policies, the orchestration procedure is configured within each virtual network function (VNF) descriptor using the Topology and Orchestration Specification for Cloud Applications (TOSCA) by specifying the allocated properties. The simulation of the control rule installation is conducted using Mininet and the Ryu SDN controller. Average delay and task delivery/drop ratios are used as the key performance metrics.
Funding: This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048) and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.
In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large volumes of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost of data transmission is defined by the delay factor, and the computing cost in the edge cloud is defined as the waiting time determined by the computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs. That is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduced both the average delay and the service pause counts to about 25% of those of the existing method.
Funding: supported by a National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2021R1C1C1013133); by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea Government (MSIT) (RS-2022-00167197, Development of Intelligent 5G/6G Infrastructure Technology for The Smart City); and by the Soonchunhyang University Research Fund.
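A minimal sketch of the selection rule described above: pick the edge cloud that minimizes the sum of the communication cost (delay) and the computing cost (expected waiting time). The candidate names and numbers are placeholders, not measurements from the paper.

```python
def select_edge_cloud(clouds):
    """clouds: mapping of name -> (comm_delay_ms, expected_wait_ms)."""
    return min(clouds, key=lambda name: sum(clouds[name]))

candidates = {
    "edge-A": (12.0, 40.0),   # close, but heavily loaded
    "edge-B": (25.0, 10.0),   # farther, but lightly loaded
    "edge-C": (18.0, 30.0),
}
print(select_edge_cloud(candidates))   # "edge-B": lowest total cost (35.0 ms)
```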
In a vehicular ad hoc network (VANET), a massive quantity of data needs to be transmitted on a large scale within short time durations. At the same time, vehicles travel at high velocity, leading to frequent vehicle disconnections. Both of these characteristics result in unreliable data communication in a VANET. A vehicle clustering algorithm groups the vehicles in a VANET to enhance network scalability and connection reliability, and clustering is considered one of the possible solutions for attaining effective interaction in VANETs. One remaining difficulty, however, is keeping the number of clusters low as the number of transmitting nodes increases. This article introduces an Evolutionary Hide Objects Game Optimization based Distance Aware Clustering (EHOGO-DAC) scheme for VANETs. The major intention of the EHOGO-DAC technique is to partition the VANET into distinct sets of clusters by grouping vehicles. The EHOGO-DAC technique is based on the HOGO algorithm, which is inspired by old hide-and-seek games in which a searching agent tries to identify hidden objects in a given space. The EHOGO-DAC technique derives a fitness function for the clustering process that includes the total number of clusters and the Euclidean distance. The experimental assessment of the EHOGO-DAC technique was carried out under several aspects, and the comparison shows improved outcomes for the EHOGO-DAC technique relative to recent approaches.
Funding: This work was supported by the Ulsan City & Electronics and Telecommunications Research Institute (ETRI) grant funded by Ulsan City [22AS1600, the development of intelligentization technology for the main industry for manufacturing innovation and human-mobile-space autonomous collaboration intelligence technology development in industrial sites].
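A hedged sketch of a fitness function built from the two ingredients named above (the number of clusters and Euclidean distance); the equal weighting is an assumption for illustration, not the EHOGO-DAC formulation itself.

```python
import numpy as np

def clustering_fitness(positions, labels, centers, alpha=0.5):
    """Lower is better: prefer few clusters whose members sit close to their centers."""
    dists = [np.linalg.norm(positions[i] - centers[labels[i]])
             for i in range(len(positions))]
    return alpha * len(centers) + (1.0 - alpha) * float(np.mean(dists))

# Four vehicle positions grouped into two clusters.
positions = np.array([[0, 0], [1, 0], [10, 10], [11, 9]], dtype=float)
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.5, 0.0], [10.5, 9.5]])
print(round(clustering_fitness(positions, labels, centers), 3))
```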
Broadcasting gateway equipment generally uses a method of simply switching to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, problems such as a reduced equipment lifespan due to frequent switching, as well as interruption, delay, and stoppage of services, may occur. Therefore, a machine learning (ML) method is required that can automatically judge and classify network-related service anomalies and switch multi-input signals without dropping or altering them, by predicting or quickly determining when errors occur so that streams are switched smoothly when problems such as transmission errors arise. In this paper, we propose an intelligent packet switching method based on classification, one of the supervised learning methods, which estimates from data the risk level of abnormal multi-streams occurring in broadcasting gateway equipment. Furthermore, we subdivide the risk levels obtained from the classification technique into probabilities, derive vectorized representative values for each attribute of the collected input data, and continuously update them. The obtained reference vector is used for the switching judgment through its cosine similarity with the input data observed when a dangerous situation occurs. Broadcasting gateway equipment that applies the proposed method can perform more stable and smarter switching than before, mitigating equipment reliability problems and broadcasting accidents while maintaining stable video streaming.
Funding: This work was supported by a research grant from Seoul Women's University (2023-0183).
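A minimal sketch of the cosine-similarity switching judgment described above: compare the current stream's attribute vector against the continuously updated reference ("risk") vector and switch when the similarity exceeds a threshold. The attribute layout and threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = np.array([0.9, 0.8, 0.7])    # representative values learned from risky periods
incoming  = np.array([0.85, 0.75, 0.9])  # current stream attribute measurements

THRESHOLD = 0.95                          # assumed switching threshold
if cosine_similarity(reference, incoming) > THRESHOLD:
    print("switch to spare input stream")
else:
    print("keep main input stream")
```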
This paper proposes a new approach to counter cyberattacks using the increasingly diverse malware in cyber security. Traditional signature detection methods that utilize static and dynamic features face limitations due to the continuous evolution and diversity of new malware. Recently, machine learning-based malware detection techniques, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), have gained attention. While these methods demonstrate high performance by leveraging static and dynamic features, they are limited in detecting new malware or variants because they learn based on the characteristics of existing malware. To overcome these limitations, malware detection techniques employing One-Shot Learning and Few-Shot Learning have been introduced. Based on this, the Siamese Network, which can effectively learn from a small number of samples and perform predictions based on similarity rather than learning the characteristics of the input data, enables the detection of new malware or variants. We propose a dual Siamese network-based detection framework that utilizes byte images converted from malware binary data to grayscale, and opcode frequency-based images generated after extracting opcodes and converting them into 2-gram frequencies. The proposed framework integrates two independent Siamese network models, one learning from byte images and the other from opcode frequency-based images. The detection models, trained separately on the different kinds of images, apply the L1 distance measure to the output vectors the models generate, calculate the similarity, and then apply different weights to each model. Our proposed framework achieved malware detection accuracies of 95.9% and 99.83% in experiments using different malware datasets. The experimental results demonstrate that our malware detection model can effectively detect malware by utilizing two different types of features and employing the dual Siamese network-based model.
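A hedged sketch of the fusion step described above: each Siamese branch yields an L1-distance-based similarity between a pair of embeddings, and the two scores are combined with branch-specific weights. The embedding sizes and weights here are placeholders, not the trained models' outputs.

```python
import numpy as np

def l1_similarity(emb_a, emb_b):
    """Map an L1 distance between embeddings to a (0, 1] similarity score."""
    return 1.0 / (1.0 + float(np.sum(np.abs(emb_a - emb_b))))

rng = np.random.default_rng(3)
byte_a, byte_b = rng.normal(size=16), rng.normal(size=16)   # byte-image branch embeddings
op_a, op_b = rng.normal(size=16), rng.normal(size=16)       # opcode-frequency branch embeddings

w_byte, w_op = 0.6, 0.4                                     # assumed branch weights
score = w_byte * l1_similarity(byte_a, byte_b) + w_op * l1_similarity(op_a, op_b)
print("weighted pair similarity:", round(score, 4))
```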
In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity with Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a potential core technology to increase the capacity of Radio Networks (RN) and enable Multiple-Input and Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, the key challenge of unfair radio resource handling remains unsolved when massive numbers of requests occur concurrently. Imbalanced resource utilization is one of the main issues, occurring when connectivity to the closest RRH is overloaded with excessive requests. To handle this issue effectively, a Machine Learning (ML) algorithm plays an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve the communication Quality of Service (QoS) in real-time IoT networks. KNN supervises the model to classify and steer user requests to the optimal RRHs, which preserve higher power. The experimental dataset was generated using computer software, and the simulation results illustrate a remarkable outperformance of the proposed scheme over conventional methods in terms of multiple significant QoS parameters, including communication reliability, latency, and throughput.
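An illustrative sketch of the KNN steering idea (not the paper's dataset): classify which RRH gateway should serve a new request from simple distance and load features, where the feature layout and labels are assumptions made for the example.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical history: [dist_to_RRH1, dist_to_RRH2, RRH1_load, RRH2_load]
X = [[10, 40, 0.9, 0.2],   # close to RRH-1, but RRH-1 is overloaded
     [12, 38, 0.8, 0.3],
     [15, 45, 0.2, 0.4],   # close to RRH-1 and RRH-1 is lightly loaded
     [14, 42, 0.3, 0.5],
     [40, 12, 0.6, 0.2],   # close to RRH-2 and RRH-2 is lightly loaded
     [38, 10, 0.5, 0.1]]
y = ["RRH-2", "RRH-2", "RRH-1", "RRH-1", "RRH-2", "RRH-2"]  # gateway chosen in the past

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[20, 25, 0.7, 0.3]]))   # recommended gateway for a new request
```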
Recently, with the advancement of Information and Communications Technology (ICT), the Internet of Things (IoT) has been connected to the cloud and used in industrial sectors, medical environments, and smart grids. However, if data is transmitted in plain text when collecting data in an IoT-cloud environment, it can be exposed to various security threats, such as replay attacks and data forgery. Thus, digital signatures are required. Data integrity is ensured when a user (or a device) transmits data using a signature. In addition, the concept of data aggregation is important to efficiently collect data transmitted from multiple users (or devices) in an industrial IoT environment. However, pairing-based signatures compromise efficiency during aggregation as the number of signatories increases. Aggregate signature methods (e.g., identity-based and certificateless cryptography) have been studied, but both methods pose key escrow and key distribution problems. In order to solve these problems, the use of aggregate signatures in certificate-based cryptography is being studied, along with work to prevent signature forgery and address other security problems. In this paper, we propose a new lightweight signature scheme that uses a certificate-based aggregate signature and can generate and verify signed messages from IoT devices in an IoT-cloud environment. In the proposed method, key insulation mitigates the security threats that arise when keys are exposed through physical attacks such as side-channel attacks. This can be applied to create an environment in which data is collected safely and efficiently in IoT-cloud environments.
Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet of Things (IoT) devices. Such connectivity is highly energy-efficient in the context of massive IoT applications. Outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports both timeslot- and contention-based transmission. Timeslot-based transmission guarantees data delivery but does not scale to varying numbers of transmitting devices. If contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Photovoltaic (PV) systems are environmentally friendly, generate green energy, and receive support from policies and organizations. However, weather fluctuations make large-scale PV power integration and management challenging despite the economic benefits. Existing PV forecasting techniques (sequential and convolutional neural networks (CNN)) are sensitive to environmental conditions, reducing energy distribution system performance. To handle these issues, this article proposes an efficient, weather-resilient convolutional-transformer-based network (CT-NET) for accurate and efficient PV power forecasting. The network consists of three main modules. First, the acquired PV generation data are forwarded to the pre-processing module for data refinement. Next, to carry out data encoding, a CNN-based multi-head attention (MHA) module is developed, in which a single MHA is used to decode the encoded data. The encoder module is mainly composed of 1D convolutional and MHA layers, which extract local as well as contextual features, while the decoder part includes MHA and feedforward layers to generate the final prediction. Finally, the performance of the proposed network is evaluated using standard error metrics, including the mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). An ablation study and comparative analysis with several competitive state-of-the-art approaches revealed a lower error rate in terms of MSE (0.0471), RMSE (0.2167), and MAPE (0.6135) over publicly available benchmark data. In addition, it is demonstrated that our proposed model is less complex, with the lowest number of parameters (0.0135 M), size (0.106 MB), and inference time (2 ms/step), suggesting that it is easy to integrate into the smart grid.
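The three reported error metrics are easy to reproduce; the sketch below computes MSE, RMSE, and MAPE with NumPy on a small stand-in forecast (the values are illustrative, not the paper's results).

```python
import numpy as np

actual    = np.array([3.1, 2.8, 0.5, 4.5, 5.2])   # measured PV power (kW)
predicted = np.array([3.0, 2.9, 0.7, 4.3, 5.5])

mse  = np.mean((actual - predicted) ** 2)
rmse = np.sqrt(mse)
mape = np.mean(np.abs((actual - predicted) / actual)) * 100   # assumes no zero actuals

print(f"MSE={mse:.4f}  RMSE={rmse:.4f}  MAPE={mape:.2f}%")
```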
The primary goal of cloth simulation is to express object behavior in a realistic manner and achieve real-time performance by following the fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, hard-spring cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves a large sparse linear solve. Due to this large computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of the cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. Traditional methods that use a serial Central Processing Unit (CPU) process to solve the system every frame cannot handle complex cloth models in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with compute shaders in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation and compare the performance and behavior of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method.
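A simplified CPU-side sketch of the idea described above: compute a Hooke's-law spring force, and when the spring's strain exceeds a threshold, deactivate the spring force and flag the edge for the implicit constraint instead. The constants and the constraint handling itself are placeholder assumptions (the paper performs this work in GLSL compute shaders).

```python
import numpy as np

def spring_response(p_a, p_b, rest_length, stiffness=500.0, strain_threshold=0.1):
    delta = p_b - p_a
    length = float(np.linalg.norm(delta))
    strain = abs(length - rest_length) / rest_length
    if strain > strain_threshold:
        # Over-stretched or over-compressed: hand this spring to the constraint solver.
        return None, True
    force_on_a = stiffness * (length - rest_length) * (delta / length)
    return force_on_a, False

force, use_constraint = spring_response(
    np.array([0.0, 0.0, 0.0]), np.array([1.05, 0.0, 0.0]), rest_length=1.0)
print(force, use_constraint)   # small stretch -> spring force applied, no constraint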
基金supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(2021R1I1A3049788).
文摘In today’s digital era,the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation.Unfortunately,this progress has also given rise to the misuse of manipulated images across various domains.One of the pressing challenges stemming from this advancement is the increasing difficulty in discerning between unaltered and manipulated images.This paper offers a comprehensive survey of existing methodologies for detecting image tampering,shedding light on the diverse approaches employed in the field of contemporary image forensics.The methods used to identify image forgery can be broadly classified into two primary categories:classical machine learning techniques,heavily reliant on manually crafted features,and deep learning methods.Additionally,this paper explores recent developments in image forensics,placing particular emphasis on the detection of counterfeit colorization.Image colorization involves predicting colors for grayscale images,thereby enhancing their visual appeal.The advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become an exceptionally challenging task.This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age,with a specific focus on the detection of colorization forgery,presenting a comprehensive overview of methodologies in this critical field.
基金funded by Soonchunhyang University,Grant Numbers 20241422BK21 FOUR(Fostering Outstanding Universities for Research,Grant Number 5199990914048).
文摘Recommendation systems(RSs)are crucial in personalizing user experiences in digital environments by suggesting relevant content or items.Collaborative filtering(CF)is a widely used personalization technique that leverages user-item interactions to generate recommendations.However,it struggles with challenges like the cold-start problem,scalability issues,and data sparsity.To address these limitations,we develop a Graph Convolutional Networks(GCNs)model that captures the complex network of interactions between users and items,identifying subtle patterns that traditional methods may overlook.We integrate this GCNs model into a federated learning(FL)framework,enabling themodel to learn fromdecentralized datasets.This not only significantly enhances user privacy—a significant improvement over conventionalmodels but also reassures users about the safety of their data.Additionally,by securely incorporating demographic information,our approach further personalizes recommendations and mitigates the coldstart issue without compromising user data.We validate our RSs model using the openMovieLens dataset and evaluate its performance across six key metrics:Precision,Recall,Area Under the Receiver Operating Characteristic Curve(ROC-AUC),F1 Score,Normalized Discounted Cumulative Gain(NDCG),and Mean Reciprocal Rank(MRR).The experimental results demonstrate significant enhancements in recommendation quality,underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
基金This work was funded by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(NRF-2020R1I1A3066543)this research was supported by the Bio and Medical Technology Development Program of the National Research Foundation(NRF)funded by the Korean government(MSIT)(No.NRF-2019M3E5D1A02069073)In addition,this work was supported by the Soonchunhyang University Research Fund.
文摘Heterogeneous Internet of Things(IoT)applications generate a diversity of novelty applications and services in next-generation networks(NGN),which is essential to guarantee end-to-end(E2E)communication resources for both control plane(CP)and data plane(DP).Likewise,the heterogeneous 5th generation(5G)communication applications,including Mobile Broadband Communications(MBBC),massive Machine-Type Commutation(mMTC),and ultra-reliable low latency communications(URLLC),obligate to perform intelligent Quality-of-Service(QoS)Class Identifier(QCI),while the CP entities will be suffered from the complicated massive HIOT applications.Moreover,the existing management and orchestration(MANO)models are inappropriate for resource utilization and allocation in large-scale and complicated network environments.To cope with the issues mentioned above,this paper presents an adopted software-defined mobile edge computing(SDMEC)with a lightweight machine learning(ML)algorithm,namely support vector machine(SVM),to enable intelligent MANO for real-time and resource-constraints IoT applications which require lightweight computation models.Furthermore,the SVM algorithm plays an essential role in performing QCI classification.Moreover,the software-defined networking(SDN)controller allocates and configures priority resources according to the SVM classification outcomes.Thus,the complementary of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the perspectives of mission-critical communication with resource constraint applications.Based on the E2E experimentation metrics,the proposed scheme shows remarkable outperformance in key performance indicator(KPI)QoS,including communication reliability,latency,and communication throughput over the various powerful reference methods.
基金This work was funded by BK21 FOUR(Fostering Outstanding Universities for Research)(No.5199990914048)this research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(NRF-2020R1I1A3066543)In addition,this work was supported by the Soonchunhyang University Research Fund.
文摘Federated learning(FL)activates distributed on-device computation techniques to model a better algorithm performance with the interaction of local model updates and global model distributions in aggregation averaging processes.However,in large-scale heterogeneous Internet of Things(IoT)cellular networks,massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects to be tackled significantly.This paper introduces the system model of converging softwaredefined networking(SDN)and network functions virtualization(NFV)to enable device/resource abstractions and provide NFV-enabled edge FL(eFL)aggregation servers for advancing automation and controllability.Multi-agent deep Q-networks(MADQNs)target to enforce a self-learning softwarization,optimize resource allocation policies,and advocate computation offloading decisions.With gathered network conditions and resource states,the proposed agent aims to explore various actions for estimating expected longterm rewards in a particular state observation.In exploration phase,optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selections.Action-based virtual network functions(VNF)forwarding graph(VNFFG)is orchestrated to map VNFs towards eFL aggregation server with sufficient communication and computation resources in NFV infrastructure(NFVI).The proposed scheme indicates deficient allocation actions,modifies the VNF backup instances,and reallocates the virtual resource for exploitation phase.Deep neural network(DNN)is used as a value function approximator,and epsilongreedy algorithm balances exploration and exploitation.The scheme primarily considers the criticalities of FL model services and congestion states to optimize long-term policy.Simulation results presented the outperformance of the proposed scheme over reference schemes in terms of Quality of Service(QoS)performance metrics,including packet drop ratio,packet drop counts,packet delivery ratio,delay,and throughput.
基金funded by the National University of Sciences and Technology(NUST)supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF),funded by the Ministry of Education(2021R1IIA3049788).
文摘Ransomware,particularly crypto-ransomware,remains a significant cybersecurity challenge,encrypting victim data and demanding a ransom,often leaving the data irretrievable even if payment is made.This study proposes an early detection approach to mitigate such threats by identifying ransomware activity before the encryption process begins.The approach employs a two-tiered approach:a signature-based method using hashing techniques to match known threats and a dynamic behavior-based analysis leveraging Cuckoo Sandbox and machine learning algorithms.A critical feature is the integration of the most effective Application Programming Interface call monitoring,which analyzes system-level interactions such as file encryption,key generation,and registry modifications.This enables the detection of both known and zero-day ransomware variants,overcoming limitations of traditional methods.The proposed technique was evaluated using classifiers such as Random Forest,Support Vector Machine,and K-Nearest Neighbors,achieving a detection accuracy of 98%based on 26 key ransomware attributes with an 80:20 training-to-testing ratio and 10-fold cross-validation.By combining minimal feature sets with robust behavioral analysis,the proposed method outperforms existing solutions and addresses current challenges in ransomware detection,thereby enhancing cybersecurity resilience.
基金supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(2021R1I1A3049788).
文摘Ransomware is malware that encrypts data without permission,demanding payment for access.Detecting ransomware on Android platforms is challenging due to evolving malicious techniques and diverse application behaviors.Traditional methods,such as static and dynamic analysis,suffer from polymorphism,code obfuscation,and high resource demands.This paper introduces a multi-stage approach to enhance behavioral analysis for Android ransomware detection,focusing on a reduced set of distinguishing features.The approach includes ransomware app collection,behavioral profile generation,dataset creation,feature identification,reduction,and classification.Experiments were conducted on∼3300 Android-based ransomware samples,despite the challenges posed by their evolving nature and complexity.The feature reduction strategy successfully reduced features by 80%,with only a marginal loss of detection accuracy(0.59%).Different machine learning algorithms are employed for classification and achieve 96.71%detection accuracy.Additionally,10-fold cross-validation demonstrated robustness,yielding an AUC-ROC of 99.3%.Importantly,latency and memory evaluations revealed that models using the reduced feature set achieved up to a 99%reduction in inference time and significant memory savings across classifiers.The proposed approach outperforms existing techniques by achieving high detection accuracy with a minimal feature set,also suitable for deployment in resource-constrained environments.Future work may extend datasets and include iOS-based ransomware applications.
基金supported by Institute of Information&Communications Technology Planning and Evaluation(IITP)grant funded by the Korean government(MSIT)(No.RS-2022-00167197,Development of Intelligent 5G/6G Infrastructure Technology for the Smart City)in part by the National Research Foundation of Korea(NRF),Ministry of Education,through the Basic Science Research Program under Grant NRF-2020R1I1A3066543+1 种基金in part by BK21 FOUR(Fostering Outstanding Universities for Research)under Grant 5199990914048in part by the Soonchunhyang University Research Fund.
文摘Recently,Network Functions Virtualization(NFV)has become a critical resource for optimizing capability utilization in the 5G/B5G era.NFV decomposes the network resource paradigm,demonstrating the efficient utilization of Network Functions(NFs)to enable configurable service priorities and resource demands.Telecommunications Service Providers(TSPs)face challenges in network utilization,as the vast amounts of data generated by the Internet of Things(IoT)overwhelm existing infrastructures.IoT applications,which generate massive volumes of diverse data and require real-time communication,contribute to bottlenecks and congestion.In this context,Multiaccess Edge Computing(MEC)is employed to support resource and priority-aware IoT applications by implementing Virtual Network Function(VNF)sequences within Service Function Chaining(SFC).This paper proposes the use of Deep Reinforcement Learning(DRL)combined with Graph Neural Networks(GNN)to enhance network processing,performance,and resource pooling capabilities.GNN facilitates feature extraction through Message-Passing Neural Network(MPNN)mechanisms.Together with DRL,Deep Q-Networks(DQN)are utilized to dynamically allocate resources based on IoT network priorities and demands.Our focus is on minimizing delay times for VNF instance execution,ensuring effective resource placement,and allocation in SFC deployments,offering flexibility to adapt to real-time changes in priority and workload.Simulation results demonstrate that our proposed scheme outperforms reference models in terms of reward,delay,delivery,service drop ratios,and average completion ratios,proving its potential for IoT applications.
基金supported by the National Research Foundation(NRF),Republic of Korea,under project BK21 FOUR(4299990213939).
文摘Smart learning environments have been considered as vital sources and essential needs in modern digital education systems.With the rapid proliferation of smart and assistive technologies,smart learning processes have become quite convenient,comfortable,and financially affordable.This shift has led to the emergence of pervasive computing environments,where user’s intelligent behavior is supported by smart gadgets;however,it is becoming more challenging due to inconsistent behavior of Artificial intelligence(AI)assistive technologies in terms of networking issues,slow user responses to technologies and limited computational resources.This paper presents a context-aware predictive reasoning based formalism for smart learning environments that facilitates students in managing their academic as well as extra-curricular activities autonomously with limited human intervention.This system consists of a three-tier architecture including the acquisition of the contextualized information from the environment autonomously,modeling the system using Web Ontology Rule Language(OWL 2 RL)and Semantic Web Rule Language(SWRL),and perform reasoning to infer the desired goals whenever and wherever needed.For contextual reasoning,we develop a non-monotonic reasoning based formalism to reason with contextual information using rule-based reasoning.The focus is on distributed problem solving,where context-aware agents exchange information using rule-based reasoning and specify constraints to accomplish desired goals.To formally model-check and simulate the system behavior,we model the case study of a smart learning environment in the UPPAAL model checker and verify the desired properties in the model,such as safety,liveness and robust properties to reflect the overall correctness behavior of the system with achieving the minimum analysis time of 0.002 s and 34,712 KB memory utilization.
基金supported by the Soonchunhyang University Research Fund and 2018 Ulsan University Hospital Research Grant(UUH-2018-12)(Grantee:JYP,http://www.uuh.ulsan.kr).The authors are grateful for their supports.
文摘Subjective visual vertical(SVV)and subjective visual horizontal(SVH)tests can be used to evaluate the perception of verticality and horizontality,respectively,and can aid the diagnosis of otolith dysfunction in clinical practice.In this study,SVV and SVH screen version tests are implemented using virtual reality(VR)equipment;the proposed test method promotes a more immersive feeling for the subject while using a simple equipment configuration and possessing excellent mobility.To verify the performance of the proposed VR-based SVV and SVH tests,a reliable comparison was made between the traditional screen-based SVV and SVH tests and the proposed method,based on 30 healthy subjects.The average results of our experimental tests on the VR-based binocular SVV and SVH equipment were−0.15◦±1.74 and 0.60◦±1.18,respectively.The proposed VR-based method satisfies the normal tolerance for horizontal or vertical lines,i.e.,a±3◦error,as defined in previous studies,and it can be used to replace existing test methods.
基金supported by the National Research Foundation of Korea(NRF)grant funded by the Korea Government(MSIT)(No.2021R1C1C1013133)funded by BK21 FOUR(Fostering Outstanding Universities for Research)(No.5199990914048)supported by the Soonchunhyang University Research Fund.
文摘Recently,to build a smart factory,research has been conducted to perform fault diagnosis and defect detection based on vibration and noise signals generated when a mechanical system is driven using deep-learning technology,a field of artificial intelligence.Most of the related studies apply various audio-feature extraction techniques to one-dimensional raw data to extract sound-specific features and then classify the sound by using the derived spectral image as a training dataset.However,compared to numerical raw data,learning based on image data has the disadvantage that creating a training dataset is very time-consuming.Therefore,we devised a two-step data preprocessing method that efficiently detects machine anomalies in numerical raw data.In the first preprocessing process,sound signal information is analyzed to extract features,and in the second preprocessing process,data filtering is performed by applying the proposed algorithm.An efficient dataset was built formodel learning through a total of two steps of data preprocessing.In addition,both showed excellent performance in the training accuracy of the model that entered each dataset,but it can be seen that the time required to build the dataset was 203 s compared to 39 s,which is about 5.2 times than when building the image dataset.
基金This work was funded by BK21 FOUR(Fostering Outstanding Universities for Research)(No.5199990914048)this research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(NRF-2020R1I1A3066543).In addition,this work was supported by the Soonchunhyang University Research Fund.
文摘Edge intelligence brings the deployment of applied deep learning(DL)models in edge computing systems to alleviate the core backbone network congestions.The setup of programmable software-defined networking(SDN)control and elastic virtual computing resources within network functions virtualization(NFV)are cooperative for enhancing the applicability of intelligent edge softwarization.To offer advancement for multi-dimensional model task offloading in edge networks with SDN/NFV-based control softwarization,this study proposes a DL mechanism to recommend the optimal edge node selection with primary features of congestion windows,link delays,and allocatable bandwidth capacities.Adaptive partial task offloading policy considered the DL-based recommendation to modify efficient virtual resource placement for minimizing the completion time and termination drop ratio.The optimization problem of resource placement is tackled by a deep reinforcement learning(DRL)-based policy following the Markov decision process(MDP).The agent observes the state spaces and applies value-maximized action of available computation resources and adjustable resource allocation steps.The reward formulation primarily considers taskrequired computing resources and action-applied allocation properties.With defined policies of resource determination,the orchestration procedure is configured within each virtual network function(VNF)descriptor using topology and orchestration specification for cloud applications(TOSCA)by specifying the allocated properties.The simulation for the control rule installation is conducted using Mininet and Ryu SDN controller.Average delay and task delivery/drop ratios are used as the key performance metrics.
基金supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No.2021R1C1C1013133)supported by the Institute of Information and Communications Technology Planning and Evaluation (IITP)grant funded by the Korea Government (MSIT) (RS-2022-00167197,Development of Intelligent 5G/6G Infrastructure Technology for The Smart City)supported by the Soonchunhyang University Research Fund.
Abstract: In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large volumes of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost of data transmission is defined as the delay factor, and the computing cost in the edge cloud is defined as the waiting time associated with the computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data-processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduced both the average delay and the service pause count to about 25% of those of the existing method.
Funding: This work was supported by the Ulsan City & Electronics and Telecommunications Research Institute (ETRI) grant funded by Ulsan City [22AS1600, the development of intelligentization technology for the main industry for manufacturing innovation and human-mobile-space autonomous collaboration intelligence technology development in industrial sites].
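Since the selection rule above is essentially "pick the edge cloud with the smallest sum of communication cost (delay factor) and computing cost (waiting time)", it can be sketched directly; the concrete cost models and numbers below are placeholder assumptions, not the paper's exact formulas.

    def total_cost(delay_to_cloud: float, queue_len: int, service_rate: float) -> float:
        communication_cost = delay_to_cloud                    # delay factor of the routing path
        computing_cost = queue_len / max(service_rate, 1e-9)   # waiting time for the computing intensity
        return communication_cost + computing_cost

    def select_edge_cloud(clouds: dict) -> str:
        """clouds maps an edge-cloud name to (delay_to_cloud, queue_len, service_rate)."""
        return min(clouds, key=lambda name: total_cost(*clouds[name]))

    clouds = {"edge-A": (5.0, 12, 4.0), "edge-B": (9.0, 3, 5.0)}
    print(select_edge_cloud(clouds))  # the device routes its data toward the lower-cost edge cloud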
Abstract: In a vehicular ad hoc network (VANET), a massive quantity of data needs to be transmitted on a large scale within short time durations. At the same time, vehicles move at high velocity, leading to frequent vehicle disconnections. Both of these characteristics result in unreliable data communication in VANETs. A vehicle clustering algorithm groups vehicles in a VANET to enhance network scalability and connection reliability. Clustering is considered one possible solution for attaining effective interaction in VANETs, but a key difficulty is keeping the number of clusters low as the number of transmitting nodes increases. This article introduces an Evolutionary Hide Objects Game Optimization based Distance Aware Clustering (EHOGO-DAC) scheme for VANETs. The main intention of the EHOGO-DAC technique is to partition the VANET into distinct sets of clusters by grouping vehicles. The EHOGO-DAC technique is based on the HOGO algorithm, which is inspired by old games in which a searching agent tries to identify hidden objects in a given space. The EHOGO-DAC technique derives a fitness function for the clustering process that includes the total number of clusters and the Euclidean distance. The experimental assessment of the EHOGO-DAC technique was carried out under several aspects, and the comparison results show that it outperforms recent approaches.
Funding: This work was supported by a research grant from Seoul Women's University (2023-0183).
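The fitness function mentioned above combines two quantities, the total number of clusters and Euclidean distance. A minimal sketch of such a fitness term is shown below; the weighting, the cluster-head proxy, and the synthetic vehicle positions are assumptions for illustration, not the EHOGO-DAC formulation itself.

    import numpy as np

    def fitness(positions: np.ndarray, labels: np.ndarray, w1: float = 0.5, w2: float = 0.5) -> float:
        clusters = np.unique(labels)
        intra = 0.0
        for c in clusters:
            members = positions[labels == c]
            head = members.mean(axis=0)  # centroid used as a stand-in for the cluster head
            intra += np.linalg.norm(members - head, axis=1).sum()
        # Lower is better: fewer clusters and smaller total Euclidean distance to cluster heads
        return w1 * len(clusters) + w2 * intra

    positions = np.random.rand(20, 2) * 100     # 20 vehicles scattered over a 100 m x 100 m area
    labels = np.random.randint(0, 4, size=20)   # candidate clustering proposed by the optimizer
    print(fitness(positions, labels))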
Abstract: Broadcasting gateway equipment generally switches to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, frequent switching can shorten equipment lifespan and cause interruption, delay, and stoppage of services. Therefore, a machine learning (ML) method is required that can automatically judge and classify network-related service anomalies and switch among multiple input signals without dropping or altering them, by predicting or quickly determining when errors such as transmission faults occur, so that streams can be switched smoothly. In this paper, we propose an intelligent packet switching method based on supervised ML classification that estimates, from data, the risk level of abnormal multi-streams occurring in broadcasting gateway equipment. Furthermore, we subdivide the risk levels obtained from the classification into probabilities, derive vectorized representative values for each attribute of the collected input data, and continuously update them. The resulting reference vector is used for the switching decision through its cosine similarity with the input data observed when a dangerous situation occurs. In broadcasting gateway equipment to which the proposed method is applied, switching is more stable and smarter than before, addressing equipment reliability and broadcasting accidents while maintaining stable video streaming.
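The switching decision above reduces to a cosine-similarity test between the current stream's attribute vector and a continuously updated reference vector. The sketch below assumes a three-attribute vector, an exponential-moving-average update, and a 0.9 threshold; these specifics are illustrative rather than taken from the paper.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def update_reference(ref: np.ndarray, sample: np.ndarray, rate: float = 0.05) -> np.ndarray:
        """Keep the per-attribute representative values current with an exponential moving average."""
        return (1 - rate) * ref + rate * sample

    def should_switch(sample: np.ndarray, danger_ref: np.ndarray, threshold: float = 0.9) -> bool:
        return cosine_similarity(sample, danger_ref) >= threshold

    danger_ref = np.array([0.8, 0.9, 0.7])   # representative vector learned from risky streams
    sample = np.array([0.82, 0.88, 0.65])    # attributes of the currently observed input stream
    if should_switch(sample, danger_ref):
        print("switch to the spare input stream")
    danger_ref = update_reference(danger_ref, sample)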
Abstract: This paper proposes a new approach to countering cyberattacks that use the increasingly diverse malware seen in cyber security. Traditional signature detection methods that utilize static and dynamic features face limitations due to the continuous evolution and diversity of new malware. Recently, machine learning-based malware detection techniques, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), have gained attention. While these methods demonstrate high performance by leveraging static and dynamic features, they are limited in detecting new malware or variants because they learn from the characteristics of existing malware. To overcome these limitations, malware detection techniques employing One-Shot Learning and Few-Shot Learning have been introduced. Building on this, the Siamese Network, which can effectively learn from a small number of samples and make predictions based on similarity rather than on learned characteristics of the input data, enables the detection of new malware and variants. We propose a dual Siamese network-based detection framework that utilizes byte images converted from malware binary data to grayscale, and opcode frequency-based images generated after extracting opcodes and converting them into 2-gram frequencies. The proposed framework integrates two independent Siamese network models, one learning from byte images and the other from opcode frequency-based images. The detection models, trained separately on the two kinds of images, apply the L1 distance measure to the output vectors they generate, calculate the similarity, and then apply a different weight to each model. Our proposed framework achieved malware detection accuracies of 95.9% and 99.83% in experiments using different malware datasets. The experimental results demonstrate that our malware detection model can effectively detect malware by utilizing two different types of features and employing the dual Siamese network-based model.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543); this work was also supported by the Soonchunhyang University Research Fund.
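The fusion step of the dual Siamese framework above, where the L1 distance on each branch's output vectors is converted to a similarity and the two branches are combined with per-model weights, can be sketched as follows. The embeddings, the exp(-L1) mapping, and the 0.6/0.4 weights are placeholders; the real branches are trained networks over byte images and opcode 2-gram images.

    import numpy as np

    def siamese_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
        """Map the L1 distance between two output vectors to a similarity in (0, 1]."""
        l1 = np.abs(emb_a - emb_b).sum()
        return float(np.exp(-l1))

    def fused_score(byte_pair, opcode_pair, w_byte: float = 0.6, w_opcode: float = 0.4) -> float:
        s_byte = siamese_similarity(*byte_pair)      # branch trained on grayscale byte images
        s_opcode = siamese_similarity(*opcode_pair)  # branch trained on opcode 2-gram frequency images
        return w_byte * s_byte + w_opcode * s_opcode

    byte_pair = (np.random.rand(128), np.random.rand(128))    # stand-ins for branch output vectors
    opcode_pair = (np.random.rand(128), np.random.rand(128))
    same_family = fused_score(byte_pair, opcode_pair) > 0.5   # final detection decision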
Abstract: In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity from Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a potential core technology to increase the capacity of Radio Networks (RN) and enable Multiple-Input Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, the key challenge of unfair radio resource handling remains unsolved when massive requests occur concurrently. Imbalanced resource utilization is one of the main issues, occurring when the closest RRH is overloaded with excessive requests. To handle this issue effectively, a Machine Learning (ML) algorithm plays an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve the communication Quality of Service (QoS) in real-time IoT networks. KNN classifies and steers user requests to the optimal RRHs that retain higher power. The experimental dataset was generated using computer software, and the simulation results show that the proposed scheme markedly outperforms conventional methods on multiple significant QoS parameters, including communication reliability, latency, and throughput.
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1A2C1085718) and was supported by the Soonchunhyang University Research Fund.
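KNN-based steering, as described above, classifies each incoming request to an RRH according to capacity-related features. A hedged sketch using scikit-learn is shown below; the feature layout (distances and loads for two RRHs) and the synthetic labels are assumptions made only for illustration.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    # Training rows: [distance_to_rrh0, distance_to_rrh1, load_rrh0, load_rrh1] -> best RRH id
    X_train = rng.uniform(0, 1, size=(500, 4))
    y_train = np.where(X_train[:, 0] + X_train[:, 2] < X_train[:, 1] + X_train[:, 3], 0, 1)

    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)

    new_request = np.array([[0.2, 0.7, 0.9, 0.1]])  # near RRH0, but RRH0 is heavily loaded
    print("steer request to RRH", int(knn.predict(new_request)[0]))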
Abstract: Recently, with the advancement of Information and Communications Technology (ICT), the Internet of Things (IoT) has been connected to the cloud and used in industrial sectors, medical environments, and smart grids. However, if data is transmitted in plain text when collected in an IoT-cloud environment, it can be exposed to various security threats such as replay attacks and data forgery; thus, digital signatures are required. Data integrity is ensured when a user (or a device) transmits data using a signature. In addition, data aggregation is important for efficiently collecting data transmitted from multiple users (or devices) in an industrial IoT environment. However, pairing-based signatures compromise efficiency during aggregation as the number of signatories increases. Aggregate signature methods (e.g., identity-based and certificateless cryptography) have been studied, but both pose key escrow and key distribution problems. To solve these problems, aggregate signatures in certificate-based cryptography are being studied, along with ways to prevent signature forgery and address other security problems. In this paper, we propose a new lightweight signature scheme that uses a certificate-based aggregate signature and can generate and verify signed messages from IoT devices in an IoT-cloud environment. By providing key insulation, the proposed method addresses the security threats that arise when keys are exposed through physical attacks such as side channels. It can be applied to create an environment in which data is collected safely and efficiently in IoT-cloud environments.
Funding: The National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2021R1C1C1013133); the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543); and the Soonchunhyang University Research Fund.
Abstract: Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet-of-Things (IoT) devices. Such connectivity is highly energy-efficient in the context of massive IoT applications. Outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports both timeslot- and contention-based transmission. Timeslot-based transmission guarantees data delivery but does not scale to varying numbers of transmitting devices. If contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019M3F2A1073179).
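Controlling device activation by adjusting RF source power, as described above, can be pictured with a log-distance path-loss sketch: only devices whose received power exceeds a sensitivity floor are activated and contend for the channel. The path-loss exponent, reference loss, and sensitivity below are generic assumptions, not values from the paper.

    import math

    def received_power_dbm(tx_power_dbm: float, distance_m: float,
                           pl0_db: float = 40.0, n: float = 2.7) -> float:
        """Log-distance path-loss model referenced to 1 m."""
        return tx_power_dbm - pl0_db - 10.0 * n * math.log10(max(distance_m, 1.0))

    def active_devices(tx_power_dbm: float, distances_m, sensitivity_dbm: float = -60.0):
        return [d for d in distances_m if received_power_dbm(tx_power_dbm, d) >= sensitivity_dbm]

    distances = [5, 12, 25, 40, 80]              # device distances from the RF source (m)
    print(len(active_devices(20.0, distances)))  # higher source power -> more devices contend
    print(len(active_devices(10.0, distances)))  # lower source power -> fewer devices, fewer collisions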
Abstract: Photovoltaic (PV) systems are environmentally friendly, generate green energy, and receive support from policies and organizations. However, weather fluctuations make large-scale PV power integration and management challenging despite the economic benefits. Existing PV forecasting techniques (sequential and convolutional neural networks (CNN)) are sensitive to environmental conditions, reducing energy distribution system performance. To handle these issues, this article proposes an efficient, weather-resilient convolutional-transformer-based network (CT-NET) for accurate and efficient PV power forecasting. The network consists of three main modules. First, the acquired PV generation data are forwarded to the pre-processing module for data refinement. Next, to carry out data encoding, a CNN-based multi-head attention (MHA) module is developed, in which a single MHA is used to decode the encoded data. The encoder module is mainly composed of 1D convolutional and MHA layers, which extract local as well as contextual features, while the decoder part includes MHA and feedforward layers to generate the final prediction. Finally, the performance of the proposed network is evaluated using standard error metrics, including the mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). An ablation study and comparative analysis with several competitive state-of-the-art approaches revealed lower error rates in terms of MSE (0.0471), RMSE (0.2167), and MAPE (0.6135) on publicly available benchmark data. In addition, the proposed model is less complex, with the lowest number of parameters (0.0135 M), size (0.106 MB), and inference time (2 ms/step), suggesting that it is easy to integrate into the smart grid.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF-2019R1F1A1062752) funded by the Ministry of Education; funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048); and supported by the Soonchunhyang University Research Fund.
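The three evaluation metrics named above have standard definitions; a small sketch applied to placeholder PV forecasts is given below (the sample values are illustrative, not from the paper's benchmark).

    import numpy as np

    def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        return float(np.mean((y_true - y_pred) ** 2))

    def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        return float(np.sqrt(mse(y_true, y_pred)))

    def mape(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-9) -> float:
        return float(np.mean(np.abs((y_true - y_pred) / (y_true + eps))) * 100.0)

    y_true = np.array([3.2, 4.1, 5.0, 4.6])  # measured PV output (kW), illustrative
    y_pred = np.array([3.0, 4.3, 4.8, 4.9])  # model forecast
    print(mse(y_true, y_pred), rmse(y_true, y_pred), mape(y_true, y_pred))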
Abstract: The primary goal of cloth simulation is to express object behavior realistically and achieve real-time performance by following fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, hard-spring cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves solving a large sparse linear system. Because of this heavy computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of the cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. A traditional method that solves the system serially on the Central Processing Unit (CPU) in every frame cannot handle a complex cloth model in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with compute shaders in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation and compare the performance and behavior of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method.
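The adaptive rule above, which treats a spring normally while its strain is small but deactivates it and replaces it with an implicit length constraint once stretch or compression exceeds a threshold, can be sketched per spring as below. The stiffness, threshold, and the position-projection used for the constraint branch are illustrative assumptions; the paper resolves constraints through a sparse linear solve on the GPU rather than per-spring projection.

    import numpy as np

    K_STIFFNESS = 50.0
    THRESHOLD = 0.10  # allow +/-10% strain before switching the spring to a constraint

    def spring_or_constraint(p_a: np.ndarray, p_b: np.ndarray, rest_len: float):
        d = p_b - p_a
        length = np.linalg.norm(d)
        strain = (length - rest_len) / rest_len
        if abs(strain) <= THRESHOLD:
            # Active spring: standard Hooke force on particle a (negated for particle b)
            force = K_STIFFNESS * (length - rest_len) * (d / length)
            return ("spring", force)
        # Deactivate the spring: project both endpoints back to the allowed length
        target = rest_len * (1 + np.sign(strain) * THRESHOLD)
        correction = 0.5 * (length - target) * (d / length)
        return ("constraint", (p_a + correction, p_b - correction))

    p_a, p_b = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.25])
    print(spring_or_constraint(p_a, p_b, rest_len=1.0))  # 25% strain -> constraint branch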