Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), making it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th-generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low-latency communications (URLLC), require intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while the CP entities suffer under the complexity of massive heterogeneous IoT (HIoT) applications. Moreover, the existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with the issues mentioned above, this paper presents software-defined mobile edge computing (SDMEC) combined with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time and resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC provides intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance over various powerful reference methods in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput.
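As a rough illustration of the classification-to-configuration flow described in this abstract, the sketch below pairs a pre-trained linear SVM decision function with a priority-queue lookup that an SDN controller could apply. The weights, bias, feature layout, class names, and queue numbers are all invented for the example; they are not taken from the paper.

```python
# Hypothetical sketch: a pre-trained linear SVM decision function classifies a
# flow into a QCI class, and the SDN controller maps that class to a priority
# queue. All numeric values below are illustrative, not trained parameters.

def svm_decision(features, weights, bias):
    """Signed distance to the separating hyperplane: w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def classify_qci(features, weights, bias):
    """Binary QCI split: latency-critical (URLLC-like) vs. massive (mMTC-like)."""
    return "URLLC" if svm_decision(features, weights, bias) > 0 else "mMTC"

# Assumed priority mapping applied by the SDN controller (0 = highest priority).
PRIORITY_QUEUE = {"URLLC": 0, "mMTC": 7}

# Feature vector: (inverse latency budget, normalized packet rate) - assumed.
WEIGHTS, BIAS = [2.0, -0.5], -0.25

flow = [0.9, 0.3]
qci = classify_qci(flow, WEIGHTS, BIAS)
queue = PRIORITY_QUEUE[qci]
```

In a real deployment the controller would translate the queue index into flow-table or QoS-policy entries; here the lookup table stands in for that step.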
Federated learning (FL) activates distributed on-device computation techniques to improve model performance through the interaction of local model updates and global model distributions in aggregation-averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects to be tackled. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are targeted to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate expected long-term rewards in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network functions (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs onto an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme identifies deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value-function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy. Simulation results show the outperformance of the proposed scheme over reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
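The exploration/exploitation balance this abstract describes can be sketched in a few lines: epsilon-greedy action selection over Q-values, plus the one-step Q-learning update that moves an estimate toward its bootstrapped target. The tabular form, state/action shapes, and hyperparameters here are simplifying assumptions; the paper uses a DNN approximator rather than a table.

```python
import random

def epsilon_greedy(q_row, epsilon, rng=random):
    """With probability epsilon explore a random action, else exploit max-Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

def q_update(q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One-step Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = reward + gamma * max(q[s_next])
    q[s][a] += alpha * (target - q[s][a])
```

In the DQN setting the table lookup `q[s]` becomes a forward pass through the value network, but the target computation is the same.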
Software reverse engineering is the process of analyzing a software system to extract its design and implementation details. Reverse engineering provides the source code of an application, an inside view of the architecture, and the third-party dependencies. From a security perspective, it is mostly used for finding vulnerabilities and attacking or cracking an application. The process is carried out either by obtaining the code in plaintext or by reading it through the binaries or mnemonics. Nowadays, reverse engineering is widely used against mobile applications and is considered a security risk. The Open Web Application Security Project (OWASP), a leading security research forum, has included reverse engineering in its top-10 list of mobile application vulnerabilities. Mobile applications are used in many sectors, e.g., banking, education, and health. In particular, banking applications are critical in terms of security as they are used for financial transactions. A security breach of such applications can result in huge financial losses for the customers as well as the banks. Various tools exist for reverse engineering of mobile applications; however, they have deficiencies, e.g., complex configurations and a lack of detailed analysis reports. In this research work, we perform an analysis of the available tools for reverse engineering of mobile applications. Our dataset consists of the mobile banking applications of the banks providing services in Pakistan. Our results indicate that none of the existing tools can carry out the complete reverse engineering process as a standalone tool. In addition, we observe significant differences in the execution time and the number of files generated by each tool for the same input file.
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies do not generalize efficiently when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on the bug identification problem. The methods involve feature selection, which reduces the dimensionality and redundancy of features and selects only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning is transferred to other datasets; and an ensemble method, which explores the increase in performance obtained by combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance with better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that using an amalgam of techniques such as those in this study, namely feature selection, transfer learning, and ensemble methods, helps optimize software bug prediction models and provides a high-performing, useful end model.
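The abstract does not specify how its classifiers are combined, so the following is only one plausible reading: a hard-voting ensemble that merges per-classifier labels by majority vote. The classifier callables are placeholders, not the paper's trained models.

```python
from collections import Counter

def majority_vote(labels):
    """Pick the label predicted by most base classifiers (ties: first seen wins)."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(classifiers, sample):
    """Run every base classifier on one sample and combine by majority vote."""
    return majority_vote([clf(sample) for clf in classifiers])
```

Soft voting (averaging predicted probabilities) or stacking would follow the same structure with the vote replaced by an average or a meta-learner.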
Recently, Network Functions Virtualization (NFV) has become a critical resource for optimizing capability utilization in the 5G/B5G era. NFV decomposes the network resource paradigm, demonstrating the efficient utilization of Network Functions (NFs) to enable configurable service priorities and resource demands. Telecommunications Service Providers (TSPs) face challenges in network utilization, as the vast amounts of data generated by the Internet of Things (IoT) overwhelm existing infrastructures. IoT applications, which generate massive volumes of diverse data and require real-time communication, contribute to bottlenecks and congestion. In this context, Multi-access Edge Computing (MEC) is employed to support resource- and priority-aware IoT applications by implementing Virtual Network Function (VNF) sequences within Service Function Chaining (SFC). This paper proposes the use of Deep Reinforcement Learning (DRL) combined with Graph Neural Networks (GNN) to enhance network processing, performance, and resource pooling capabilities. The GNN facilitates feature extraction through Message-Passing Neural Network (MPNN) mechanisms. Together with DRL, Deep Q-Networks (DQN) are utilized to dynamically allocate resources based on IoT network priorities and demands. Our focus is on minimizing delay times for VNF instance execution and ensuring effective resource placement and allocation in SFC deployments, offering the flexibility to adapt to real-time changes in priority and workload. Simulation results demonstrate that our proposed scheme outperforms reference models in terms of reward, delay, delivery and service drop ratios, and average completion ratios, proving its potential for IoT applications.
Fog computing is a key enabling technology of 6G systems, as it provides the quick and reliable computing and data storage services required by several 6G applications. Artificial Intelligence (AI) algorithms will be an integral part of 6G systems, and efficient task offloading techniques using fog computing will improve their performance and reliability. In this paper, the focus is on the scenario of Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks and processed in parallel, hence expediting task completion. However, POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH scenarios results in dynamic preference profiles of helping devices due to changing subtask sizes, yielding a difficult-to-solve externalities problem. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation within POMH scenarios. Additionally, we propose a new time-efficient preference-profiling technique that further enhances time optimization in POMH scenarios. The performance of the proposed technique is thoroughly evaluated in comparison to alternative baseline schemes, revealing many advantages of the proposed approach. The simulation findings show that the proposed matching-based offloading technique outperforms existing methodologies in the literature, yielding a remarkable 52% reduction in task latency, particularly under high workloads.
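The "all subtasks finish simultaneously" condition mentioned above has a simple closed form when subtask size is the only variable: give each helper a share proportional to its processing rate. The sketch below shows only that proportional split, not the paper's full matching algorithm; units and names are assumptions.

```python
def split_task(total_bits, cpu_rates):
    """Give helper i a share proportional to its rate r_i, so every subtask
    finishes at the same instant t = total_bits / sum(rates)."""
    total_rate = sum(cpu_rates)
    return [total_bits * r / total_rate for r in cpu_rates]
```

For example, a 100-unit task split across helpers with rates 1 and 4 yields shares of 20 and 80, and both helpers finish after 20 time units.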
Recommendation systems (RSs) are crucial in personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges like the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only significantly enhances user privacy, a marked improvement over conventional models, but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RSs model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
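The federated aggregation step in such a framework is commonly a size-weighted parameter average (FedAvg-style). The sketch below shows that averaging on flat parameter lists; real GCN weight tensors, secure aggregation, and the paper's exact protocol are abstracted away, so treat this as an assumed baseline rather than the authors' implementation.

```python
def federated_average(client_params, client_sizes):
    """FedAvg-style aggregation: average each parameter across clients,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]
```

With equal client sizes this reduces to a plain mean; a client holding three times the data pulls the global model three times as hard.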
Recently, the fifth generation (5G) of mobile networks has been deployed, and a wide range of mobile services has been provided. The 5G mobile network supports improved mobile broadband, ultra-low latency, and densely deployed massive devices. It allows multiple radio access technologies and interworks them for services. 5G mobile systems employ traffic steering techniques to use multiple radio access technologies efficiently. However, conventional traffic steering techniques do not consider dynamic network conditions efficiently. In this paper, we propose a network-aided traffic steering technique in the 5G mobile network architecture. 5G mobile systems monitor network conditions and learn from network data. Through a machine learning algorithm such as a feed-forward neural network, the system recognizes dynamic network conditions and then performs traffic steering. The proposed scheme controls traffic for multiple radio access technologies according to the ratio of measured throughput, and can thus be expected to improve traffic steering efficiency. The performance of the proposed traffic steering scheme is evaluated using extensive computer simulations.
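The steering rule stated above, splitting traffic across radio access technologies in proportion to measured throughput, is easy to make concrete. The function below is only that proportional split; in the paper the throughput estimates come from the learned network model, whereas here they are plain inputs.

```python
def steer_traffic(total_load_mbps, measured_throughput_mbps):
    """Distribute offered load across radio access links in proportion to
    each link's recently measured throughput."""
    total = sum(measured_throughput_mbps)
    return [total_load_mbps * t / total for t in measured_throughput_mbps]
```

A link currently delivering 70% of the aggregate throughput therefore receives 70% of the offered load.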
As an extension of traditional encryption technology, information hiding has been increasingly used in the fields of communication and network media, and covert communication technology has gradually developed. Blockchain technology, which has emerged in recent years, has the characteristics of decentralization and tamper resistance, which can effectively alleviate the disadvantages and problems of traditional covert communication. However, its combination with covert communication thus far has been mostly at the theoretical level. The BLOCCE method, an early result of combining blockchain and covert communication technology, has the problems of low information-embedding efficiency, the use of too many Bitcoin addresses, low communication efficiency, and high costs. The present research improves on this method and designs V-BLOCCE, which uses Base58 to encode the plaintext and reuses the addresses generated by Vanitygen multiple times to embed information. This greatly improves the efficiency of information embedding and decreases the number of Bitcoin addresses used. Under the premise of preserving ordering, the Bitcoin transaction OP_RETURN field is used to store the information required to restore the plaintext, and the transactions are issued at the same time to improve the information transmission efficiency. Thus, a more efficient and feasible method for applying covert communication on the blockchain is proposed. In addition, this paper also provides a more feasible scheme and theoretical support for covert communication in blockchain.
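Base58, as used by Bitcoin, is an encoding (not a cipher): it drops the visually ambiguous characters 0, O, I, and l and represents leading zero bytes as '1'. The sketch below is a standard pure-Python implementation of that encoding for reference; it says nothing about how V-BLOCCE packs the encoded data into addresses.

```python
# Bitcoin's Base58 alphabet: no 0, O, I, or l.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Encode bytes as Base58, preserving leading zero bytes as '1' chars."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def base58_decode(s: str) -> bytes:
    """Inverse of base58_encode."""
    n = 0
    for ch in s.lstrip("1"):
        n = n * 58 + ALPHABET.index(ch)
    pad = len(s) - len(s.lstrip("1"))
    body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
    return b"\x00" * pad + body
```

Because the most significant Base58 digit is never zero, stripping leading '1's during decoding is unambiguous.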
In the digital era, the electronic medical record (EMR) has become a major way for hospitals to store patients' medical data. Traditional centralized medical systems and semi-trusted cloud storage struggle to achieve a dynamic balance between privacy protection and data sharing. The storage capacity of a blockchain is limited, and single-blockchain schemes have poor scalability and low throughput. To address these issues, we propose a secure and efficient medical data storage and sharing scheme based on a double blockchain. In our scheme, we encrypt the original EMR and store it in the cloud. The storage blockchain stores the index of the complete EMR, and the shared blockchain stores the index of the shared part of the EMR. Users with different attributes can make requests to different blockchains to share different parts according to their own permissions. Experiments showed that cloud storage combined with blockchain not only solves the problem of the blockchain's limited storage capacity but also greatly reduces the risk of leakage of the original EMR. Content Extraction Signature (CES) combined with the double-blockchain technology realizes the separation of the private part and the shared part of the original EMR. Symmetric encryption combined with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) not only ensures the safe storage of data in the cloud but also achieves consistency and convenience of data updates, avoiding redundant backups. Security analysis and performance analysis verify the feasibility and effectiveness of our scheme.
In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity with Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a potential core technology to increase the capacity of Radio Networks (RN) and enable the Multiple-Input Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, key issues in unfair radio resource handling remain unsolved when massive requests occur concurrently. The imbalance of resource utilization is one of the main issues, occurring when the closest RRH is overloaded with exceeding requests. To handle this issue effectively, Machine Learning (ML) algorithms play an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve the communication Quality of Service (QoS) in real-time IoT networks. KNN supervises the model to classify and recommend user requests to optimal RRHs that preserve higher power. The experimental dataset was generated using computer software, and the simulation results illustrate a remarkable outperformance of the proposed scheme over conventional methods in terms of multiple significant QoS parameters, including communication reliability, latency, and throughput.
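KNN classification of the kind described above fits in a few lines: label a new request with the RRH chosen by the majority of its k nearest training samples. The feature vectors and labels below are placeholders; the paper's actual features (e.g., signal or capacity measurements) are not specified here.

```python
import math
from collections import Counter

def knn_steer(train, query, k=3):
    """Recommend an RRH for `query` by majority label among its k nearest
    training samples; `train` is a list of (feature_vector, rrh_label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Sorting the whole training set is O(n log n) per query; a production steerer would use a spatial index, but the lightweight character of KNN (no training phase) is what the abstract emphasizes.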
The Internet of Things (IoT) has enabled various intelligent services, and the IoT service range has been steadily extended through long-range wide-area communication technologies, which enable very long-distance wireless data transmission. End-nodes are connected to a gateway with a single hop. They consume very low power and use a very low data rate to deliver data. Since a long transmission time is consequently needed for each data packet in long-range wide-area networks, data transmission should be performed efficiently. Therefore, this paper proposes a multicast uplink data transmission mechanism, particularly for bad network conditions. Transmission delay increases if only retransmissions are used under bad network conditions. However, employing multicast techniques in bad network conditions can significantly increase the packet delivery rate; thus, retransmissions can be reduced and transmission efficiency increased. Therefore, the proposed method adopts multicast uplink after network condition prediction. To predict network conditions, the proposed method uses a deep neural network algorithm. The proposed method's performance was verified by comparison with uplink unicast transmission only, confirming significantly improved performance.
Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet-of-Things (IoT) devices. Connectivity is highly energy-efficient in the context of massive IoT applications. Outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports timeslot- and contention-based transmission. Timeslot-based transmission ensures data transmission but does not scale to different numbers of transmitting devices. If contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Physical contamination of food occurs when it comes into contact with foreign objects. Foreign objects can be introduced into food at any time during food delivery and packaging and can cause serious problems such as broken teeth or choking. Therefore, a preventive method that can detect and remove foreign objects in advance is required. Several studies have attempted to detect defective products using deep learning networks. Because it is difficult to obtain foreign-object-containing food data from industry, most studies on industrial anomaly detection have used unsupervised learning methods. This paper proposes a new method for real-time anomaly detection in packaged food products using a supervised learning network. In this study, a realistic X-ray image training dataset was constructed by augmenting normal product images with foreign objects in a cut-paste manner. Based on the augmented training dataset, we trained YOLOv4, a real-time object detection network, and detected foreign objects in the test data. We evaluated this method on images of pasta, snacks, pistachios, and red beans under the same conditions. The results show that normal and defective products were classified with an accuracy of at least 94% for all packaged foods. For foreign objects that are typically difficult to detect using unsupervised learning and traditional methods, the proposed method achieved high-performance real-time anomaly detection. In addition, to eliminate the loss in high-resolution X-ray images, the false positive rate could be lowered to 5% with patch-based training and a new post-processing algorithm.
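The core of cut-paste augmentation is copying a foreign-object patch into a normal product image at a chosen location. The toy version below works on plain lists of pixel rows; the paper's pipeline would additionally handle X-ray intensity blending and bounding-box label generation, which are omitted here.

```python
def cut_paste(image, patch, top, left):
    """Return a copy of `image` (a list of pixel rows) with `patch` pasted
    at (top, left), simulating a foreign object in a normal product image."""
    out = [row[:] for row in image]  # deep-copy rows; inputs stay untouched
    for i, patch_row in enumerate(patch):
        out[top + i][left:left + len(patch_row)] = patch_row
    return out
```

Each pasted patch also yields a free ground-truth box, (top, left, height, width), which is what makes supervised training possible without real defect data.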
The use of electronic communication has increased significantly over the last few decades. Email is one of the most well-known means of electronic communication. Traditional email applications are widely used by a large population; however, illiterate and semi-illiterate people face challenges in using them. A major portion of Pakistan's population is illiterate and has little or no experience of computer usage. In this paper, we investigate the challenges faced by illiterate and semi-illiterate people in using email applications. In addition, we propose a solution by developing an application tailored to their needs. Research shows that illiterate people are good at learning designs that convey information with pictures instead of text only, and focus better on one object/action at a time. Our proposed solution is based on designing user interfaces that consist of icons and vocal/audio instructions instead of text. Further, we use background voice/audio, which is more helpful than flooding a picture with a lot of information. We tested our application with a large number of users of various skill levels (from no computer knowledge to expert). The results of our usability tests indicate that the application can be used by illiterate people without any training or third-party help.
Fingerprint security technology has attracted a great deal of attention in recent years because of its unique biometric information, which does not change over an individual's lifetime and is a highly reliable and secure way to identify individuals. AFIS (Automated Fingerprint Identification System) is a system used by the Korean police for identifying a specific person by fingerprint. The AFIS system, however, only selects a list of possible candidates through fingerprints; the exact individual must then be found by fingerprint experts. In this paper, we designed a deep learning system using deep convolutional networks to categorize fingerprints as coming from either the left or right hand. We applied the classic CNN (Convolutional Neural Network), AlexNet, ResNet50 (Residual Network), VGG-16, and YOLO (You Only Look Once) networks to this problem; these are deep learning architectures that have been widely used in image analysis research. We used a total of 9,080 fingerprint images for training and 1,000 fingerprint images to test the performance of the proposed model. As a result of our tests, we found that the ResNet50 network performed best at determining whether an input fingerprint image came from the left or right hand, with an accuracy of 96.80%.
The primary goal of cloth simulation is to express object behavior in a realistic manner and achieve real-time performance by following the fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, hard-spring cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves a large sparse linear solving operation. Due to this large computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of the cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. A traditional method that uses a serial process on the Central Processing Unit (CPU) to solve the system in every frame cannot handle a complex cloth model structure in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with a compute shader in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation, and experiment on the performance and behavior comparison of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method.
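The activation rule described above, switching a spring from Hooke's-law force to an implicit constraint once its deformation crosses a threshold, can be stated per spring as follows. The 10% threshold and scalar (1-D) formulation are illustrative assumptions; the actual simulator works on 3-D spring vectors inside a GPU compute shader.

```python
def stretch_ratio(length, rest_length):
    """Relative elongation (+) or compression (-) of a spring."""
    return (length - rest_length) / rest_length

def needs_constraint(length, rest_length, threshold=0.1):
    """Adaptive activation: switch to an implicit constraint once the
    deformation magnitude exceeds the threshold."""
    return abs(stretch_ratio(length, rest_length)) > threshold

def spring_force(length, rest_length, stiffness):
    """Hooke's-law restoring force magnitude while the spring stays active."""
    return -stiffness * (length - rest_length)
```

Springs below the threshold keep their cheap explicit force; only the over-stretched ones enter the sparse constraint solve, which is what bounds the cost of the linear system.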
Specific medical data is limited in that the number of samples is small and the data are not standardized. To overcome these limitations, it is necessary to study how to process such limited amounts of data efficiently. In this paper, deep learning methods for automatically determining cardiovascular diseases are described, and an effective preprocessing method for CT images that can be applied to improve the performance of deep learning is presented. Cardiac CT images include several parts of the body, such as the heart, lungs, spine, and ribs. The preprocessing step proposed in this paper divides the CT image data into regions of interest and other regions using K-means clustering and the GrabCut algorithm. We compared the deep learning performance of the original data, data using only K-means clustering, and data using both K-means clustering and the GrabCut algorithm. All data used in this paper were collected at Soonchunhyang University Cheonan Hospital in Korea, and the experimental test proceeded with IRB approval. The training was conducted using ResNet50, VGG, and Inception-ResNet-V2 models, and ResNet50 had the best accuracy in validation and testing. Through the preprocessing process proposed in this paper, the accuracy of the deep learning models improved significantly, by at least 10% and up to 40%.
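K-means on pixel intensities is the simplest form of the clustering step mentioned above. The toy 1-D version below separates two intensity populations (e.g., dark background vs. bright tissue); real CT preprocessing would run on 2-D/3-D intensities and feed the resulting mask to GrabCut, which is not reproduced here.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: returns final centroids and per-value cluster labels."""
    # Spread initial centroids across the sorted value range.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

After convergence, pixels in the cluster with the higher centroid can be kept as the candidate region of interest.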
Movies are among the most popular sources of entertainment. Every year, a large number of movies are released, and people comment on them in the form of reviews after watching them. Since it is difficult to read all of the reviews for a movie, summarizing them helps viewers decide without spending time reading every review. Opinion mining, also known as sentiment analysis, is the process of extracting subjective information from textual data: it involves identifying and extracting the opinions of individuals, which can be positive, neutral, or negative. Sentiment analysis is performed here to understand people's emotions and attitudes in movie reviews. Movie reviews are an important source of opinion data because they provide insight into the general public's opinion of a particular movie, and a summary of all reviews can give a general idea about it. This study compares baseline techniques (Logistic Regression, Random Forest Classifier, Decision Tree, K-Nearest Neighbor, Gradient Boosting Classifier, and Passive Aggressive Classifier) with Linear Support Vector Machines and Multinomial Naïve Bayes on the IMDB Dataset of 50K reviews and the Sentiment Polarity Dataset Version 2.0. Before applying these classifiers, both datasets are cleaned in pre-processing: duplicate data is dropped and chat words are treated for better results. On the IMDB Dataset of 50K reviews, Linear Support Vector Machines achieve the highest accuracy of 89.48%, and after hyperparameter tuning, the Passive Aggressive Classifier achieves the highest accuracy of 90.27%; on the Sentiment Polarity Dataset Version 2.0, Multinomial Naïve Bayes achieves the highest accuracy of 70.69%, and 71.04% after hyperparameter tuning. This study highlights the importance of sentiment analysis as a tool for understanding the emotions and attitudes in movie reviews and predicts the performance of a movie based on the average sentiment of all the reviews.
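The "chat word" treatment mentioned in the pre-processing step can be sketched as a simple token replacement; the mapping below is a hypothetical example for illustration, not the study's actual dictionary.

```python
# Hypothetical sketch of the "chat word" treatment described in the
# pre-processing: informal shorthand tokens are expanded to full words
# before the reviews are vectorized. The mapping is illustrative only.
CHAT_WORDS = {"gr8": "great", "b4": "before", "imo": "in my opinion"}

def expand_chat_words(review: str) -> str:
    """Replace each known chat word with its expanded form."""
    return " ".join(CHAT_WORDS.get(tok.lower(), tok) for tok in review.split())
```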
In recent years, the significant growth in Internet of Things (IoT) technology has brought a lot of attention to the information and communication industry. IoT paradigms like the Internet of Vehicle Things (IoVT) and the Internet of Health Things (IoHT) create massive volumes of data every day, which consume a lot of bandwidth and storage. However, to process such large volumes of data, existing cloud computing platforms offer limited resources due to their distance from IoT devices; consequently, cloud computing systems produce intolerable latency for latency-sensitive real-time applications. A newer paradigm called fog computing therefore makes use of computing nodes, including mobile devices, that process real-time IoT device data within milliseconds. This paper proposes workload-aware efficient resource allocation and load balancing in the fog computing environment for the IoHT. The proposed algorithmic framework consists of the following components: task sequencing, dynamic resource allocation, and load balancing. We consider electrocardiography (ECG) sensors for patients' critical tasks to achieve maximum load balancing among fog nodes, and measure end-to-end delay, energy consumption, network consumption, and average throughput. The proposed algorithm has been evaluated using the iFogSim tool and compared against an existing approach. The experimental results show that the proposed technique achieves a 45% decrease in delay, a 37% reduction in energy consumption, and a 25% decrease in network bandwidth consumption compared to existing studies.
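The task-sequencing component can be pictured as a priority queue over incoming tasks; the (priority, name) encoding, and the convention that a lower number means a more urgent ECG task, are assumptions made for illustration.

```python
import heapq

# Hedged sketch of task sequencing: critical tasks (lower priority value)
# are served before routine ones. The encoding is an illustrative assumption.
def sequence_tasks(tasks):
    """tasks: iterable of (priority, name) pairs; returns names in order."""
    heap = list(tasks)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```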
Funding: This work was funded by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543); this research was supported by the Bio and Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. NRF-2019M3E5D1A02069073). In addition, this work was supported by the Soonchunhyang University Research Fund.
Abstract: Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), making it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low-latency communications (URLLC), must perform intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while CP entities suffer from complicated massive heterogeneous IoT (HIoT) applications. Moreover, existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with these issues, this paper presents software-defined mobile edge computing (SDMEC) combined with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time and resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC provides intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
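At its core, the SVM-based QCI classification reduces to evaluating a decision function over traffic features; the sketch below assumes a pre-trained linear model, with placeholder weights rather than the paper's actual parameters.

```python
# Illustrative linear-SVM decision function: sign(w.x + b) maps traffic
# features (e.g., latency budget, packet rate) to a priority class.
# Weights and bias are placeholders, not trained values from the paper.
def svm_decision(features, weights, bias):
    """Return +1 (priority class, e.g., URLLC) or -1 (best effort)."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1
```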
Funding: This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048); this research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.
Abstract: Federated learning (FL) applies distributed on-device computation to improve algorithm performance through the interaction of local model updates and global model distribution in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are significant challenges to be tackled. This paper introduces a system model converging software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstraction and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are targeted to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate the expected long-term reward in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs to an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme identifies deficient allocation actions, modifies the VNF backup instances, and reallocates virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy. Simulation results demonstrate that the proposed scheme outperforms reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
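The epsilon-greedy balance and Q-value update behind the MADQN agent can be sketched in tabular form; the paper itself uses a DNN as the value-function approximator, so the Q-table, learning rates, and rewards below are illustrative assumptions.

```python
import random

# Tabular sketch of the agent's two core steps. The paper uses a deep
# network as value-function approximator; a Q-table row stands in here.
def epsilon_greedy(q_row, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the best-known one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))                  # explore
    return max(range(len(q_row)), key=q_row.__getitem__)  # exploit

def q_update(q_sa, reward, q_row_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha*(r + gamma*max Q(s',.) - Q(s,a))."""
    return q_sa + alpha * (reward + gamma * max(q_row_next) - q_sa)
```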
Funding: The authors acknowledge the support of the Security Testing-Innovative Secured Systems Lab (ISSL) established at the University of Engineering & Technology, Peshawar, Pakistan under the Higher Education Commission initiative of the National Center for Cyber Security (Grant No. 2(1078)/HEC/M&E/2018/707).
Abstract: Software reverse engineering is the process of analyzing a software system to extract its design and implementation details. Reverse engineering provides the source code of an application, an inside view of the architecture, and the third-party dependencies. From a security perspective, it is mostly used for finding vulnerabilities and attacking or cracking an application. The process is carried out either by obtaining the code in plaintext or by reading it through the binaries or mnemonics. Nowadays, reverse engineering is widely applied to mobile applications and is considered a security risk. The Open Web Application Security Project (OWASP), a leading security research forum, has included reverse engineering in its top 10 list of mobile application vulnerabilities. Mobile applications are used in many sectors, e.g., banking, education, and health. In particular, banking applications are critical in terms of security as they are used for financial transactions; a security breach of such applications can result in huge financial losses for the customers as well as the banks. There exist various tools for reverse engineering of mobile applications; however, they have deficiencies, e.g., complex configurations and a lack of detailed analysis reports. In this research work, we perform an analysis of the available tools for reverse engineering of mobile applications. Our dataset consists of the mobile banking applications of the banks providing services in Pakistan. Our results indicate that none of the existing tools can carry out the complete reverse engineering process as a standalone tool. In addition, we observe significant differences in terms of the execution time and the number of files generated by each tool for the same file.
Funding: This research is funded by Researchers Supporting Project Number (RSPD2024R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, existing studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on the bug identification problem. The methods involve feature selection, which reduces the dimensionality and redundancy of features and selects only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning transfers to other datasets; and ensemble methods, which explore the performance gains from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. The study reveals that an amalgam of techniques such as feature selection, transfer learning, and ensemble methods helps optimize software bug prediction models and provides a high-performing, useful end model.
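The ensemble step, combining several classifiers' per-module predictions, can be as simple as a majority vote; the labels below are illustrative stand-ins for the trained models' outputs, not results from the study.

```python
from collections import Counter

# Sketch of a majority-vote ensemble over per-module bug predictions.
def majority_vote(predictions):
    """predictions: one label per base classifier for a single module."""
    return Counter(predictions).most_common(1)[0][0]
```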
Funding: Supported by an Institute of Information & Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. RS-2022-00167197, Development of Intelligent 5G/6G Infrastructure Technology for the Smart City); in part by the National Research Foundation of Korea (NRF), Ministry of Education, through the Basic Science Research Program under Grant NRF-2020R1I1A3066543; in part by BK21 FOUR (Fostering Outstanding Universities for Research) under Grant 5199990914048; and in part by the Soonchunhyang University Research Fund.
Abstract: Recently, Network Functions Virtualization (NFV) has become a critical resource for optimizing capability utilization in the 5G/B5G era. NFV decomposes the network resource paradigm, demonstrating the efficient utilization of Network Functions (NFs) to enable configurable service priorities and resource demands. Telecommunications Service Providers (TSPs) face challenges in network utilization, as the vast amounts of data generated by the Internet of Things (IoT) overwhelm existing infrastructures. IoT applications, which generate massive volumes of diverse data and require real-time communication, contribute to bottlenecks and congestion. In this context, Multi-access Edge Computing (MEC) is employed to support resource- and priority-aware IoT applications by implementing Virtual Network Function (VNF) sequences within Service Function Chaining (SFC). This paper proposes the use of Deep Reinforcement Learning (DRL) combined with Graph Neural Networks (GNN) to enhance network processing, performance, and resource pooling capabilities. The GNN facilitates feature extraction through Message-Passing Neural Network (MPNN) mechanisms. Together with DRL, Deep Q-Networks (DQN) are utilized to dynamically allocate resources based on IoT network priorities and demands. Our focus is on minimizing delay times for VNF instance execution and ensuring effective resource placement and allocation in SFC deployments, offering flexibility to adapt to real-time changes in priority and workload. Simulation results demonstrate that our proposed scheme outperforms reference models in terms of reward, delay, delivery, service drop ratios, and average completion ratios, proving its potential for IoT applications.
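The MPNN-style feature extraction boils down to repeated neighbor aggregation; this toy step uses scalar features and sum aggregation, whereas the actual model learns message and update functions. The graph and values are illustrative assumptions.

```python
# Toy message-passing step: each node's new feature is its own feature
# plus the sum of its neighbors'. Graph and values are illustrative.
def message_pass(features, adjacency):
    """features: {node: value}; adjacency: {node: [neighbor, ...]}."""
    return {n: features[n] + sum(features[m] for m in adjacency[n])
            for n in features}
```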
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23082).
Abstract: Fog computing is a key enabling technology of 6G systems, as it provides the quick and reliable computing and data storage services required by several 6G applications. Artificial Intelligence (AI) algorithms will be an integral part of 6G systems, and efficient task offloading techniques using fog computing will improve their performance and reliability. In this paper, the focus is on the scenario of Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks and processed in parallel, hence expediting task completion. However, POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH scenarios results in dynamic preference profiles of helping devices due to changing subtask sizes, leading to a difficult-to-solve externalities problem. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation within POMH scenarios. Additionally, we propose a new time-efficient preference profiling technique that further enhances time optimization in POMH scenarios. The performance of the proposed technique is thoroughly evaluated in comparison to alternate baseline schemes, revealing many advantages of the proposed approach. The simulation findings show that the proposed matching-based offloading technique outperforms existing methodologies in the literature, yielding a remarkable 52% reduction in task latency, particularly under high workloads.
Funding: Funded by Soonchunhyang University (Grant Number 20241422) and BK21 FOUR (Fostering Outstanding Universities for Research, Grant Number 5199990914048).
Abstract: Recommendation systems (RSs) are crucial in personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges like the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only significantly enhances user privacy, a significant improvement over conventional models, but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RSs model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2015-0-00403) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation); this work was also supported by the Soonchunhyang University Research Fund.
Abstract: Recently, the fifth generation (5G) of mobile networks has been deployed and a wide range of mobile services has been provided. The 5G mobile network supports improved mobile broadband, ultra-low latency, and densely deployed massive devices. It allows multiple radio access technologies and interworks them for services. 5G mobile systems employ traffic steering techniques to use multiple radio access technologies efficiently. However, conventional traffic steering techniques do not consider dynamic network conditions efficiently. In this paper, we propose a network-aided traffic steering technique in the 5G mobile network architecture. 5G mobile systems monitor network conditions and learn from network data. Through a machine learning algorithm such as a feed-forward neural network, the system recognizes dynamic network conditions and then performs traffic steering. The proposed scheme distributes traffic across multiple radio access technologies according to the ratio of measured throughput, and can thus be expected to improve traffic steering efficiency. The performance of the proposed traffic steering scheme is evaluated using extensive computer simulations.
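The steering rule, splitting traffic in proportion to measured throughput, can be sketched directly; the throughput values below are illustrative, not measurements from the paper.

```python
# Sketch of throughput-ratio traffic steering across radio access
# technologies: each RAT receives traffic proportional to its share
# of the total measured throughput.
def steering_ratios(throughputs):
    total = sum(throughputs)
    return [t / total for t in throughputs]
```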
Funding: This work is sponsored by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LC2016024, the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 17KJB520044, and the Six Talent Peaks Project in Jiangsu Province under No. XYDXX-108.
Abstract: As an extension of traditional encryption technology, information hiding has been increasingly used in the fields of communication and network media, and covert communication technology has gradually developed. Blockchain technology, which has emerged in recent years, has the characteristics of decentralization and tamper resistance, which can effectively alleviate the disadvantages and problems of traditional covert communication. However, its combination with covert communication has thus far been mostly at the theoretical level. The BLOCCE method, an early result of combining blockchain and covert communication technology, suffers from low information embedding efficiency, the use of too many Bitcoin addresses, low communication efficiency, and high costs. The present research improves on this method and designs V-BLOCCE, which uses Base58 to encode the plaintext and reuses the addresses generated by Vanitygen multiple times to embed information. This greatly improves the efficiency of information embedding and decreases the number of Bitcoin addresses used. While preserving ordering, the Bitcoin transaction OP_RETURN field is used to store the information required to restore the plaintext, and the transactions are issued at the same time to improve information transmission efficiency. Thus, a more efficient and feasible method for applying covert communication on the blockchain is proposed. In addition, this paper provides a more feasible scheme and theoretical support for covert communication in blockchain.
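The Base58 step referenced above is standard Bitcoin Base58, shown here without a checksum; this sketch illustrates only the encoding, not V-BLOCCE's full embedding pipeline.

```python
# Bitcoin's Base58 alphabet (omits 0, O, I, l to avoid visual ambiguity).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Plain Base58 encoding (no checksum) of a byte string."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Leading zero bytes are conventionally encoded as '1' characters.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out
```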
Funding: Supported by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LC2016024, the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 17KJB520044, and the Six Talent Peaks Project in Jiangsu Province under No. XYDXX-108.
Abstract: In the digital era, the electronic medical record (EMR) has become a major way for hospitals to store patients' medical data. The traditional centralized medical system and semi-trusted cloud storage struggle to achieve a dynamic balance between privacy protection and data sharing. The storage capacity of a blockchain is limited, and single-blockchain schemes have poor scalability and low throughput. To address these issues, we propose a secure and efficient medical data storage and sharing scheme based on a double blockchain. In our scheme, we encrypt the original EMR and store it in the cloud. The storage blockchain stores the index of the complete EMR, and the shared blockchain stores the index of the shared part of the EMR. Users with different attributes can make requests to different blockchains to share different parts according to their own permissions. Through experiments, it was found that cloud storage combined with blockchain not only solves the problem of the blockchain's limited storage capacity but also greatly reduces the risk of leakage of the original EMR. Content Extraction Signature (CES) combined with the double-blockchain technology realizes the separation of the private part and the shared part of the original EMR. Symmetric encryption combined with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) not only ensures the safe storage of data in the cloud but also achieves consistency and convenience of data updates, avoiding redundant backups of data. Security analysis and performance analysis verified the feasibility and effectiveness of our scheme.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543); this work was also supported by the Soonchunhyang University Research Fund.
Abstract: In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity with Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a potential core technology to increase the capacity of Radio Networks (RN) and enable Multiple-Input and Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, challenging key issues in unfair radio resource handling remain unsolved when massive requests occur concurrently. Imbalanced resource utilization is one of the main issues, occurring when there is overloaded connectivity to the closest RRH, which receives exceeding requests. To handle this issue effectively, Machine Learning (ML) algorithms play an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve the communication Quality of Service (QoS) in real-time IoT networks. KNN supervises the model to classify and recommend user requests to optimal RRHs that preserve higher power. The experimental dataset was generated using computer software, and the simulation results illustrate a remarkable outperformance of the proposed scheme over conventional methods in terms of multiple significant QoS parameters, including communication reliability, latency, and throughput.
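The KNN steering decision can be sketched as classifying a request by the RRH label most common among its k nearest training samples; the feature encoding (e.g., position or load metrics) and the labels are assumptions for illustration.

```python
import math
from collections import Counter

# Illustrative KNN step: recommend the RRH gateway whose label dominates
# among the k nearest samples. Features and labels are assumptions.
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, rrh_label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```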
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2015-0-00403) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation); this work was also supported by the Soonchunhyang University Research Fund.
Abstract: The Internet of Things (IoT) has enabled various intelligent services, and the IoT service range has been steadily extended through long-range wide-area communication technologies, which enable very long distance wireless data transmission. End-nodes are connected to a gateway with a single hop. They consume very low power and use a very low data rate to deliver data. Since a long transmission time is consequently needed for each data packet transmission in long-range wide-area networks, data transmission should be performed efficiently. Therefore, this paper proposes a multicast uplink data transmission mechanism, particularly for bad network conditions. Transmission delay increases if only retransmissions are used under bad network conditions. However, employing multicast techniques in bad network conditions can significantly increase the packet delivery rate; thus, retransmission can be reduced and transmission efficiency increased. Therefore, the proposed method adopts multicast uplink after network condition prediction. To predict network conditions, the proposed method uses a deep neural network algorithm. The proposed method's performance was verified by comparison with uplink unicast transmission only, confirming significantly improved performance.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2021R1C1C1013133); by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543); and by the Soonchunhyang University Research Fund.
Abstract: Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet of Things (IoT) devices. Connectivity is highly energy-efficient in the context of massive IoT applications. Outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports timeslot- and contention-based transmission. Timeslot-based transmission ensures data transmission but is not scalable to different numbers of transmission devices. If contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Funding: Supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education (grant number 2020R1A6A1A03040583, Kangjik Kim, www.nrf.re.kr); this research was also supported by the Soonchunhyang University Research Fund.
Abstract: Physical contamination of food occurs when it comes into contact with foreign objects. Foreign objects can be introduced to food at any time during food delivery and packaging and can cause serious concerns such as broken teeth or choking. Therefore, a preventive method that can detect and remove foreign objects in advance is required. Several studies have attempted to detect defective products using deep learning networks. Because it is difficult to obtain foreign-object-containing food data from industry, most studies on industrial anomaly detection have used unsupervised learning methods. This paper proposes a new method for real-time anomaly detection in packaged food products using a supervised learning network. In this study, a realistic X-ray image training dataset was constructed by augmenting foreign objects onto normal product images in a cut-paste manner. Based on the augmented training dataset, we trained YOLOv4, a real-time object detection network, and detected foreign objects in the test data. We evaluated this method on images of pasta, snacks, pistachios, and red beans under the same conditions. The results show that normal and defective products were classified with an accuracy of at least 94% for all packaged foods. For detecting foreign objects that are typically difficult to detect using unsupervised learning and traditional methods, the proposed method achieved high-performance real-time anomaly detection. In addition, to eliminate the loss in high-resolution X-ray images, the false positive rate could be lowered to 5% with patch-based training and a new post-processing algorithm.
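The cut-paste augmentation amounts to pasting a small foreign-object patch into a copy of a normal image; here images are plain 2-D lists of pixel intensities, an illustrative simplification of the X-ray data.

```python
# Sketch of cut-paste augmentation: overlay a defect patch onto a copy
# of a normal product image at a given position. Values are illustrative.
def cut_paste(image, patch, top, left):
    out = [row[:] for row in image]       # keep the original image intact
    for r, patch_row in enumerate(patch):
        out[top + r][left:left + len(patch_row)] = patch_row
    return out
```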
Funding: This work is supported by the Security Testing Lab established at the University of Engineering & Technology, Peshawar under the funded project National Center for Cyber Security of the Higher Education Commission (HEC), Pakistan.
Abstract: The use of electronic communication has increased significantly over the last few decades. Email is one of the most well-known means of electronic communication. Traditional email applications are widely used by a large population; however, illiterate and semi-literate people face challenges in using them. A major portion of Pakistan's population is illiterate and has little or no practice with computer usage. In this paper, we investigate the challenges illiterate and semi-literate people face in using email applications. In addition, we propose a solution by developing an application tailored to their needs. Research shows that illiterate people are good at learning designs that convey information with pictures instead of text only, and focus more on one object or action at a time. Our proposed solution is based on designing user interfaces that consist of icons and vocal/audio instructions instead of text. Further, we use background voice/audio, which is more helpful than flooding a picture with a lot of information. We tested our application with a large number of users of various skill levels (from no computer knowledge to experts). The results of our usability tests indicate that the application can be used by illiterate people without any training or third-party help.
Funding: This work was supported by the Technology Development Program (S2688148) funded by the Ministry of SMEs and Startups (MSS, Korea) and by the Soonchunhyang University Research Fund.
Abstract: Fingerprint security technology has attracted a great deal of attention in recent years because of its unique biometric information, which does not change over an individual's lifetime and is a highly reliable and secure way to identify individuals. AFIS (Automated Fingerprint Identification System) is a system used by the Korean police for identifying a specific person by fingerprint. The AFIS system, however, only selects a list of possible candidates through fingerprints; the exact individual must be found by fingerprint experts. In this paper, we designed a deep learning system using a deep convolutional network to categorize fingerprints as coming from either the left or right hand. We applied the classic CNN (Convolutional Neural Network), AlexNet, ResNet50 (Residual Network), VGG-16, and YOLO (You Only Look Once) networks to this problem; these are deep learning architectures that have been widely used in image analysis research. We used a total of 9,080 fingerprint images for training and 1,000 fingerprint images to test the performance of the proposed model. As a result of our tests, we found that the ResNet50 network performed the best at determining whether an input fingerprint image came from the left or right hand, with an accuracy of 96.80%.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF-2019R1F1A1062752) funded by the Ministry of Education; funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048); and supported by the Soonchunhyang University Research Fund.
Abstract: The primary goal of cloth simulation is to express object behavior in a realistic manner and achieve real-time performance by following the fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, hard-spring cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves a large sparse linear solving operation. Due to this large computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. A traditional method that uses a serial process on the Central Processing Unit (CPU) to solve the system in every frame cannot handle a complex cloth model structure in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with a compute shader in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation, and experiment on the performance and behavior comparison of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method.
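The adaptive switch between spring force and constraint enforcement can be sketched for a single spring; the 10% elongation threshold below is an illustrative assumption, not the paper's tuned value, and the paper performs this in parallel on the GPU rather than per spring in Python.

```python
import math

# Per-spring sketch: within the threshold, apply Hooke's-law force;
# beyond it, flag the spring so an implicit constraint takes over.
def spring_step(p0, p1, rest_len, stiffness, threshold=1.1):
    """Return (force_on_p0, constraint_active) for one spring."""
    d = [b - a for a, b in zip(p0, p1)]
    length = math.sqrt(sum(c * c for c in d))
    if length > rest_len * threshold or length < rest_len / threshold:
        return (0.0,) * len(p0), True     # hand off to constraint solver
    magnitude = stiffness * (length - rest_len)
    return tuple(magnitude * c / length for c in d), False
```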
Funding: This research was supported under the framework of an international cooperation program managed by the National Research Foundation of Korea (NRF-2019K1A3A1A20093097), by the National Key Research and Development Program of China (2019YFE0107800), and by the Soonchunhyang University Research Fund.
Abstract: Specific medical data are limited in that they are few in number and not standardized, so it is necessary to study how to process such limited amounts of data efficiently. In this paper, deep learning methods for automatically detecting cardiovascular diseases are described, and an effective preprocessing method for CT images that improves deep learning performance is proposed. Cardiac CT images include several parts of the body, such as the heart, lungs, spine, and ribs. The preprocessing step proposed in this paper divides CT image data into regions of interest and other regions using K-means clustering and the GrabCut algorithm. We compared the deep learning performance of the original data, data using only K-means clustering, and data using both K-means clustering and the GrabCut algorithm. All data used in this paper were collected at Soonchunhyang University Cheonan Hospital in Korea, and the experiments proceeded with IRB approval. Training was conducted using the ResNet-50, VGG, and Inception-ResNet-v2 models, and ResNet-50 achieved the best validation and test accuracy. Through the preprocessing process proposed in this paper, the accuracy of the deep learning models improved significantly, by at least 10% and up to 40%.
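The first stage of the preprocessing pipeline described above can be sketched as a K-means clustering of pixel intensities that yields a rough region-of-interest mask. This is a from-scratch, simplified illustration (the cluster count, iteration budget, and "brightest cluster = ROI" heuristic are assumptions); the paper additionally refines the segmentation with the GrabCut algorithm, which is not reproduced here.

```python
import numpy as np

def kmeans_roi_mask(img, k=2, iters=10, seed=0):
    """Cluster pixel intensities with K-means and return a binary mask
    of the brightest cluster as a rough region of interest.
    Illustrative only; the paper refines this mask with GrabCut."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1, 1).astype(float)
    # Initialize centers from observed intensity values.
    centers = rng.choice(x.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(x - centers.T), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    roi_cluster = int(np.argmax(centers))   # assume ROI is the brighter tissue
    return (labels == roi_cluster).reshape(img.shape)
```

Masking the CT slice with the result removes most of the background before training, which is the mechanism behind the accuracy gains the abstract reports.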
Abstract: Movies are a popular source of entertainment, and a great number are released every year. People comment on movies in the form of reviews after watching them. Since it is difficult to read all of the reviews for a movie, summarizing them helps viewers decide without wasting time. Opinion mining, also known as sentiment analysis, is the process of extracting subjective information from textual data: identifying and extracting the opinions of individuals, which can be positive, neutral, or negative. Here it is performed to understand people's emotions and attitudes in movie reviews, an important source of opinion data because they provide insight into the general public's opinion of a particular movie; the summary of all reviews can give a general idea of the movie. This study compares baseline techniques (Logistic Regression, Random Forest Classifier, Decision Tree, K-Nearest Neighbor, Gradient Boosting Classifier, and Passive Aggressive Classifier) with Linear Support Vector Machines and Multinomial Naïve Bayes on the IMDB Dataset of 50K reviews and the Sentiment Polarity Dataset Version 2.0. Before applying these classifiers, both datasets are cleaned in preprocessing: duplicate data is dropped and chat words are normalized for better results. On the IMDB Dataset of 50K reviews, Linear Support Vector Machines achieve the highest accuracy of 89.48%, and after hyperparameter tuning, the Passive Aggressive Classifier achieves the highest accuracy of 90.27%; Multinomial Naïve Bayes achieves the highest accuracy of 70.69% (71.04% after hyperparameter tuning) on the Sentiment Polarity Dataset Version 2.0. This study highlights the importance of sentiment analysis as a tool for understanding the emotions and attitudes in movie reviews and predicts the performance of a movie based on the average sentiment of all its reviews.
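One of the baselines the study compares, Multinomial Naïve Bayes, can be sketched from scratch as follows. This is a minimal illustration with Laplace smoothing and whitespace tokenization (the toy reviews and the bag-of-words setup are assumptions for demonstration); the study itself uses standard library implementations and full preprocessing on the IMDB and Sentiment Polarity datasets.

```python
import math
from collections import Counter

class MultinomialNaiveBayes:
    """Minimal Multinomial Naive Bayes with Laplace smoothing,
    a from-scratch sketch of one baseline sentiment classifier."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc.lower().split())
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, doc):
        V = len(self.vocab)
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            # Log prior plus smoothed log likelihood of each token.
            lp = math.log(self.prior[c])
            for w in doc.lower().split():
                lp += math.log((self.counts[c][w] + 1) / (total + V))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Averaging such per-review predictions over all reviews of a movie is the mechanism behind the study's idea of predicting a movie's reception from the average sentiment of its reviews.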
Funding: This research is supported and funded by King Khalid University of Saudi Arabia under Grant Number R.G.P.1/365/42.
Abstract: In recent years, the significant growth of Internet of Things (IoT) technology has brought much attention to the information and communication industry. IoT paradigms such as the Internet of Vehicle Things (IoVT) and the Internet of Health Things (IoHT) create massive volumes of data every day, consuming a great deal of bandwidth and storage. However, existing cloud computing platforms offer limited resources for processing such large volumes of data because of their distance from IoT devices; consequently, cloud computing systems produce intolerable latency for latency-sensitive real-time applications. A newer paradigm called fog computing makes use of computing nodes in the form of mobile devices, which process real-time IoT device data within milliseconds. This paper proposes workload-aware efficient resource allocation and load balancing in the fog computing environment for the IoHT. The proposed algorithmic framework consists of the following components: task sequencing, dynamic resource allocation, and load balancing. We consider electrocardiography (ECG) sensors for patients' critical tasks to achieve maximum load balancing among fog nodes and measure end-to-end delay, energy consumption, network consumption, and average throughput. The proposed algorithm has been evaluated using the iFogSim tool, and comparisons with the existing approach have been conducted. The experimental results show that the proposed technique achieves a 45% decrease in delay, a 37% reduction in energy consumption, and a 25% decrease in network bandwidth consumption compared to existing studies.
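The task sequencing and load balancing components named above can be sketched as a greedy allocator. This is a hypothetical simplification written only to show the interaction of the two steps (the priority ordering and least-loaded-node heuristic are assumptions, not the paper's exact algorithm, which is evaluated in iFogSim): tasks are first ordered by criticality, then each is assigned to the fog node with the least accumulated load.

```python
import heapq

def balance_tasks(tasks, nodes):
    """Greedy workload-aware allocation sketch: sort tasks by priority
    (task sequencing), then assign each to the currently least-loaded
    fog node, tracked with a min-heap (load balancing).
    tasks: list of (priority, cpu_demand); lower priority = more critical.
    nodes: list of fog node identifiers."""
    ordered = sorted(tasks)                      # task sequencing step
    heap = [(0.0, n) for n in nodes]             # (accumulated load, node id)
    heapq.heapify(heap)
    assignment = {n: [] for n in nodes}
    for prio, demand in ordered:
        load, node = heapq.heappop(heap)         # least-loaded node first
        assignment[node].append((prio, demand))
        heapq.heappush(heap, (load + demand, node))
    return assignment
```

Critical ECG tasks (low priority value) are therefore placed first, while the heap keeps node loads as even as possible, which is the behavior behind the delay and energy reductions the abstract reports.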