Journal Articles
2,864 articles found
1. A Robust 3-D Medical Watermarking Based on Wavelet Transform for Data Protection (Cited by: 71)
Authors: Xiaorui Zhang, Wenfang Zhang, Wei Sun, Xingming Sun, Sunil Kumar Jha. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 6, pp. 1043-1056 (14 pages).
In a telemedicine diagnosis system, the emergence of 3D imaging enables doctors to make clearer judgments, and its accuracy directly affects the diagnosis of disease. To ensure the safe transmission and storage of medical data, a 3D medical watermarking algorithm based on the wavelet transform is proposed in this paper. The proposed algorithm employs the principal component analysis (PCA) transform to reduce the data dimension, which minimizes the error between the extracted components and the original data in the mean-square sense. In particular, the algorithm builds a bacterial foraging model based on particle swarm optimization (BF-PSO), by which the optimal wavelet coefficient is found for embedding and is used as the absolute feature of watermark embedding, thereby achieving the optimal balance between embedding capacity and imperceptibility. A series of experimental results from MATLAB software on the standard MRI brain volume dataset demonstrate that the proposed algorithm is strongly robust and causes only small deformation of the 3D model after the watermark is embedded.
Keywords: 3-D medical watermarking, robust watermarking, PCA, BF-PSO
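A minimal NumPy sketch of the PCA step described in this abstract: projecting vertex data onto its leading principal components gives the reconstruction that is optimal in the mean-square sense. The variable names and the choice of component count are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components.

    Returns the scores, the components, and the column means,
    so the data can be approximately reconstructed later.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data; rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]             # (k, n_features)
    scores = Xc @ components.T      # (n_samples, k)
    return scores, components, mu

def pca_reconstruct(scores, components, mu):
    """Minimum mean-square-error reconstruction from k components."""
    return scores @ components + mu

# Toy example: 3-D vertex coordinates of a mesh, reduced to 2 components.
rng = np.random.default_rng(0)
vertices = rng.normal(size=(500, 3))
scores, comps, mu = pca_reduce(vertices, k=2)
approx = pca_reconstruct(scores, comps, mu)
print("mean-square error:", np.mean((vertices - approx) ** 2))
```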
2. Faster RCNN Target Detection Algorithm Integrating CBAM and FPN (Cited by: 8)
Authors: Wenshun Sheng, Xiongfeng Yu, Jiayan Lin, Xin Chen. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1549-1569 (21 pages).
Small targets and occluded targets inevitably appear in images during shooting because of the influence of angle, distance, complex scenes, illumination intensity, and other factors. Such targets have few effective pixels, few features, and no distinctive features, which makes extracting efficient features difficult and easily leads to false detections, missed detections, and repeated detections, degrading the performance of target detection models. An improved Faster region convolutional neural network (RCNN) algorithm (CF-RCNN), integrating the convolutional block attention module (CBAM) and feature pyramid networks (FPN), is proposed to improve the detection and recognition accuracy of small, occluded, or truncated objects in complex scenes. Firstly, the CBAM mechanism is integrated into the feature extraction network to improve the detection of occluded or truncated objects. Secondly, the FPN feature pyramid structure is introduced to obtain high-resolution and semantically strong features to enhance the detection of small objects. The experimental results show that the mean average precision of the improved algorithm on PASCAL VOC2012 reaches 76.1%, which is 13.8 percentage points higher than that of the commonly used Faster RCNN and other algorithms. Furthermore, it outperforms commonly used small-sample target detection algorithms.
Keywords: target detection, attention mechanism, CBAM, FPN, CF-RCNN
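As a rough illustration of the CBAM block mentioned in this abstract, the PyTorch sketch below follows the standard formulation (channel attention from pooled descriptors, then spatial attention from channel-wise statistics); the reduction ratio and kernel size are common defaults, not values reported by the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over concatenated channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        channel_att = torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        x = x * channel_att
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        spatial_att = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * spatial_att

# Example: refine a backbone feature map before the region proposal network.
feat = torch.randn(2, 256, 32, 32)
print(CBAM(256)(feat).shape)  # torch.Size([2, 256, 32, 32])
```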
3. CNN Based Automated Weed Detection System Using UAV Imagery (Cited by: 7)
Authors: Mohd Anul Haq. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 8, pp. 837-849 (13 pages).
The problem of weeds in crops is a natural problem for farmers. Machine Learning (ML), Deep Learning (DL), and Unmanned Aerial Vehicles (UAV) are among the advanced technologies that should be used to reduce the use of pesticides while also protecting the environment and ensuring crop safety. Deep learning-based crop and weed identification systems have the potential to save money while also reducing environmental stress. The accuracy of ML/DL models has been limited in the past by a variety of factors, including the selection of an efficient wavelength, the spatial resolution, and the selection and tuning of hyperparameters. The purpose of the current research is to develop a new automated weed detection system that uses Convolutional Neural Network (CNN) classification on a real dataset of 4400 UAV pictures with 15336 segments. Snapshots were used to choose the optimal parameters for the proposed CNN LVQ model. The soil class achieved a user accuracy of 100% with the proposed CNN LVQ model, followed by soybean (99.79%), grass (98.58%), and broadleaf (98.32%). The developed CNN LVQ model showed an overall accuracy of 99.44% after rigorous hyperparameter tuning for weed detection, significantly higher than previously reported studies.
Keywords: CNN, weed, detection, classification, UAV
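The abstract pairs a CNN with learning vector quantization (LVQ) but does not spell out the combination; the sketch below shows only the classical LVQ1 update rule on feature vectors (of the kind a CNN might produce), with the learning rate and prototype count chosen arbitrarily for illustration.

```python
import numpy as np

def lvq1_train(X, y, n_prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    """Classical LVQ1: pull the winning prototype toward same-class samples,
    push it away from different-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.where(y == c)[0], n_prototypes_per_class, replace=False)
        protos.append(X[idx].copy())
        proto_labels.extend([c] * n_prototypes_per_class)
    protos = np.vstack(protos)
    proto_labels = np.array(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = np.argmin(d)                     # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, proto_labels

def lvq1_predict(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy feature vectors standing in for CNN-extracted segment features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
protos, labels = lvq1_train(X, y)
print("accuracy:", np.mean(lvq1_predict(X, protos, labels) == y))
```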
4. Aero-Engine Surge Fault Diagnosis Using Deep Neural Network (Cited by: 6)
Authors: Kexin Zhang, Bin Lin, Jixin Chen, Xinlong Wu, Chao Lu, Desheng Zheng, Lulu Tian. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 7, pp. 351-360 (10 pages).
Deep learning techniques have outstanding performance in feature extraction and model fitting. In the field of aero-engine fault diagnosis, the introduction of deep learning technology is of great significance. The aero-engine is the heart of the aircraft, and its stable operation is the primary guarantee of the aircraft. In order to ensure the normal operation of the aircraft, it is necessary to study and diagnose the faults of the aero-engine. Among the many engine failures, one that occurs frequently and is particularly hazardous is the surge, which often poses a great threat to flight safety. On the basis of analyzing the mechanism of aero-engine surge, an aero-engine surge fault diagnosis method based on deep learning technology is proposed. In this paper, key sensor data are obtained by analyzing different engine sensor data. An aero-engine surge dataset acquisition algorithm (ASDA) is proposed to sample the fault and normal points to generate the training set, validation set, and test set. Based on neural network models such as the one-dimensional convolutional neural network (1D-CNN), the recurrent neural network (RNN), and the long short-term memory network (LSTM), different neural network optimization algorithms are selected to achieve fault diagnosis and classification. The experimental results show that deep learning techniques are effective in aero-engine surge fault diagnosis. The aero-engine surge fault diagnosis network (ASFDN) proposed in this paper achieves better results. Through training, the network achieves more than 99% classification accuracy on the test set.
Keywords: aero-engine, fault diagnosis, surge, vibration signal classification, deep learning
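A minimal PyTorch sketch of a 1D-CNN classifier of the kind named in this abstract, operating on fixed-length windows of sensor signals; the layer sizes, window length, and two-class output are illustrative assumptions rather than the paper's ASFDN architecture.

```python
import torch
import torch.nn as nn

class Surge1DCNN(nn.Module):
    """Small 1D-CNN for classifying sensor windows as normal vs. surge."""
    def __init__(self, in_channels=4, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# One training step on random stand-in data (8 windows, 4 sensors, 256 samples each).
model = Surge1DCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 4, 256)
y = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
print("loss:", loss.item())
```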
5. A Novel Post-Quantum Blind Signature for Log System in Blockchain (Cited by: 5)
Authors: Gang Xu, Yibo Cao, Shiyuan Xu, Ke Xiao, Xin Liu, Xiubo Chen, Mianxiong Dong. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 6, pp. 945-958 (14 pages).
In recent decades, log system management has been widely studied for data security management. System abnormalities or illegal operations can be found in time by analyzing the log, providing evidence of intrusions. To ensure the integrity of the log in current systems, many researchers have designed log systems based on blockchain. However, the emerging blockchain is facing significant security challenges with the growth of quantum computers. An attacker equipped with a quantum computer can extract the user's private key from the public key to generate a forged signature, destroy the structure of the blockchain, and threaten the security of the log system. Thus, a blind signature on the lattice in a post-quantum blockchain brings new security features to log systems. In our paper, to address these issues, we first propose a novel log system based on a post-quantum blockchain that can resist quantum computing attacks. Secondly, we utilize a post-quantum blind signature on the lattice to ensure both the security and the blindness of the log system, which preserves the privacy of log information to a large extent. Lastly, we enhance the security level of the lattice-based blind signature under the random oracle model, and the signature size grows slowly compared with others. We also implement our protocol and conduct an extensive analysis to prove the ideas. The results show that our scheme's signature size edges up only subtly compared with others as the security level improves.
Keywords: log system, post-quantum blockchain, lattice, blind signature, privacy protection
6. Facial Expression Recognition Using Enhanced Convolution Neural Network with Attention Mechanism (Cited by: 5)
Authors: K. Prabhu, S. SathishKumar, M. Sivachitra, S. Dineshkumar, P. Sathiyabama. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 4, pp. 415-426 (12 pages).
Facial Expression Recognition (FER) has been an interesting area of research wherever there is human-computer interaction. Human psychology, emotions, and behaviors can be analyzed through FER. Classifiers used in FER have performed well on normal faces but have been found to be constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions. An expression is the most natural and powerful way of communicating non-verbally. Systems that form communication between the two are termed Human Machine Interaction (HMI) systems. FER can improve HMI systems, as human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention mechanism) to recognize seven types of human emotions, with satisfying results in its experiments. The proposed EECNN achieved 89.8% accuracy in classifying the images.
Keywords: facial expression recognition, linear discriminant analysis, animal migration optimization, regions of interest, enhanced convolution neural network with attention mechanism
7. Lightweight and Secure Mutual Authentication Scheme for IoT Devices Using CoAP Protocol (Cited by: 5)
Authors: S. Gladson Oliver, T. Purusothaman. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 5, pp. 767-780 (14 pages).
The Internet of Things enables every real-world object to be seamlessly integrated with the traditional Internet. Heterogeneous real-world objects are enhanced with the capability to communicate, with computing capabilities, and with standards to interoperate with the existing network, yet these entities are resource-constrained and vulnerable to various security attacks. A huge number of research works are being carried out to analyze possible attacks and to propose standards for securing communication between devices in the Internet of Things (IoT). In this article, a robust and lightweight scheme for mutual authentication between client and server using the Constrained Application Protocol (CoAP) is proposed. The IoT enables devices with different characteristics and capabilities to be integrated with the Internet, and these heterogeneous devices should interoperate with each other to accumulate, process, and transmit data for facilitating smart services. The growth of IoT applications leads to rapid growth in the number of IoT devices joining the global network and in network traffic over the traditional network. The proposed scheme greatly reduces the authentication overhead between devices by reducing the packet size of messages, the number of messages transmitted, and the processing overhead on communicating devices. The efficiency of this authentication scheme against attacks such as DoS (denial of service), replay attacks, and resource-exhaustion attacks is also examined. Message transmission time is reduced by up to 50% using the proposed techniques.
Keywords: IoT, CoAP, AES, encryption, message transmission
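The abstract does not give the message flow, so the sketch below shows one generic way to realize AES-based mutual authentication with a pre-shared key: each side authenticated-encrypts the peer's nonce challenge and is verified by decryption. It uses the `cryptography` package's AES-GCM primitive; the message layout and field names are assumptions for illustration, not the paper's CoAP protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Pre-shared 128-bit key, provisioned on both the device (client) and the server.
key = AESGCM.generate_key(bit_length=128)

def respond_to_challenge(key, challenge, own_nonce):
    """Prove knowledge of the key by authenticated-encrypting the peer's challenge."""
    aes = AESGCM(key)
    iv = os.urandom(12)
    return iv, aes.encrypt(iv, challenge + own_nonce, b"auth")

def verify_response(key, iv, ciphertext, expected_challenge):
    aes = AESGCM(key)
    plaintext = aes.decrypt(iv, ciphertext, b"auth")  # raises if tampered
    ok = plaintext[:len(expected_challenge)] == expected_challenge
    return ok, plaintext[len(expected_challenge):]

# --- Handshake sketch ---
client_nonce = os.urandom(8)     # client -> server: challenge
server_nonce = os.urandom(8)

# Server answers the client's challenge and issues its own nonce.
iv1, server_resp = respond_to_challenge(key, client_nonce, server_nonce)
ok_server, returned_nonce = verify_response(key, iv1, server_resp, client_nonce)

# Client answers the server's nonce, completing mutual authentication.
iv2, client_resp = respond_to_challenge(key, returned_nonce, b"")
ok_client, _ = verify_response(key, iv2, client_resp, server_nonce)

print("server authenticated:", ok_server, "| client authenticated:", ok_client)
```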
8. Real-time Safety Helmet-wearing Detection Based on Improved YOLOv5 (Cited by: 5)
Authors: Yanman Li, Jun Zhang, Yang Hu, Yingnan Zhao, Yi Cao. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 12, pp. 1219-1230 (12 pages).
Safety helmet-wearing detection is an essential part of intelligent monitoring systems. To improve the speed and accuracy of detection, especially for small targets and occluded objects, this paper presents a novel and efficient detector model. The underlying core algorithm of this model adopts the YOLOv5 (You Only Look Once version 5) network, which has the best overall detection performance. It is improved by adding an attention mechanism, the CIoU (Complete Intersection over Union) loss function, and the Mish activation function. First, it applies the attention mechanism in feature extraction, so the network can learn the weight of each channel independently and enhance information dissemination between features. Second, it adopts the CIoU loss function to achieve accurate bounding-box regression. Third, it utilizes the Mish activation function to improve detection accuracy and generalization ability. A safety helmet-wearing detection dataset containing more than 10,000 images collected from the Internet was built and preprocessed. On this self-made helmet-wearing test dataset, the average accuracy of helmet detection with the proposed algorithm is 96.7%, which is 1.9% higher than that of the YOLOv5 algorithm. It meets the accuracy requirements of helmet-wearing detection in construction scenarios.
Keywords: safety helmet wearing detection, object detection, deep learning, YOLOv5, attention mechanism
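To make two of the ingredients named in this abstract concrete, here is a hedged PyTorch sketch of the CIoU loss and the Mish activation as they are usually defined in the literature; the box format (x1, y1, x2, y2) and the epsilon constant are assumptions, and this is not the authors' exact implementation.

```python
import math
import torch

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(torch.nn.functional.softplus(x))

def ciou_loss(pred, target, eps=1e-7):
    """Complete-IoU loss for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    # Intersection and union.
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Center distance normalized by the enclosing box diagonal.
    cx_p = (pred[:, 0] + pred[:, 2]) / 2;   cy_p = (pred[:, 1] + pred[:, 3]) / 2
    cx_t = (target[:, 0] + target[:, 2]) / 2; cy_t = (target[:, 1] + target[:, 3]) / 2
    enc_w = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    enc_h = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    c2 = enc_w ** 2 + enc_h ** 2 + eps

    # Aspect-ratio consistency term.
    w_p = pred[:, 2] - pred[:, 0]; h_p = pred[:, 3] - pred[:, 1]
    w_t = target[:, 2] - target[:, 0]; h_t = target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps)) - torch.atan(w_p / (h_p + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()

pred = torch.tensor([[10., 10., 50., 60.]])
target = torch.tensor([[12., 8., 48., 62.]])
print("CIoU loss:", ciou_loss(pred, target).item(), "| mish(1):", mish(torch.tensor(1.0)).item())
```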
9. A Storage Optimization Scheme for Blockchain Transaction Databases (Cited by: 7)
Authors: Jingyu Zhang, Siqi Zhong, Jin Wang, Xiaofeng Yu, Osama Alfarraj. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 3, pp. 521-535 (15 pages).
As typical peer-to-peer distributed networks, blockchain systems require each node to keep a complete copy of the transaction database, so that new transactions can be verified independently. In a blockchain system (e.g., the Bitcoin system), a node does not rely on any central organization, and every node keeps an entire copy of the transaction database. However, this feature means that the size of the blockchain transaction database grows rapidly. Therefore, with continuous system operation, node memory also needs to be expanded to keep the system running. Especially in the big data era, increasing network traffic leads to a faster transaction growth rate. This paper analyzes blockchain transaction databases and proposes a storage optimization scheme. The proposed scheme divides the blockchain transaction database into a cold zone and a hot zone using an expiration recognition method based on the Least Recently Used (LRU) algorithm. It achieves storage optimization by moving unspent transaction outputs out of the in-memory transaction database. We present a theoretical analysis of the optimization method to validate its effectiveness. Extensive experiments show that our proposed method outperforms the current mechanism for blockchain transaction databases.
Keywords: blockchain, distributed systems, transaction databases
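A small Python sketch of the hot/cold split idea described above: an LRU-ordered in-memory "hot zone" of unspent transaction outputs whose least-recently-used entries expire into a "cold zone". The capacity limit and the dictionary standing in for cold storage are illustrative assumptions, not details from the paper.

```python
from collections import OrderedDict

class UTXOStore:
    """Hot zone: in-memory LRU cache of UTXOs. Cold zone: slower bulk storage."""
    def __init__(self, hot_capacity=1000):
        self.hot = OrderedDict()   # utxo_id -> output data, most recently used at the end
        self.cold = {}             # stand-in for an on-disk key-value store
        self.hot_capacity = hot_capacity

    def put(self, utxo_id, output):
        self.hot[utxo_id] = output
        self.hot.move_to_end(utxo_id)
        # Expire the least recently used entries into the cold zone.
        while len(self.hot) > self.hot_capacity:
            old_id, old_out = self.hot.popitem(last=False)
            self.cold[old_id] = old_out

    def get(self, utxo_id):
        if utxo_id in self.hot:
            self.hot.move_to_end(utxo_id)       # refresh recency
            return self.hot[utxo_id]
        if utxo_id in self.cold:                # promote back into the hot zone
            output = self.cold.pop(utxo_id)
            self.put(utxo_id, output)
            return output
        raise KeyError(utxo_id)

store = UTXOStore(hot_capacity=2)
for i in range(4):
    store.put(f"tx{i}:0", {"value": i})
print(len(store.hot), "hot /", len(store.cold), "cold")   # 2 hot / 2 cold
```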
10. Deep Learning for Object Detection: A Survey (Cited by: 6)
Authors: Jun Wang, Tingjuan Zhang, Yong Cheng, Najla Al-Nabhan. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 8, pp. 165-182 (18 pages).
Object detection is one of the most important and challenging branches of computer vision. It has been widely applied in people's lives, for example in security monitoring and autonomous driving, with the purpose of locating instances of semantic objects of a certain class. With the rapid development of deep learning algorithms for detection tasks, the performance of object detectors has been greatly improved. In order to understand the main development status of object detection, a comprehensive literature review of object detection and an overall discussion of the closely related works are presented in this paper. Various object detection methods, including one-stage and two-stage detectors, are systematically summarized, and the datasets and evaluation criteria used in object detection are introduced. In addition, the development of object detection technology is reviewed. Finally, based on the understanding of the current development of object detection, we discuss the main research directions for the future.
Keywords: object detection, convolutional neural network, computer vision
11. Computer Vision and Deep Learning-enabled Weed Detection Model for Precision Agriculture (Cited by: 4)
Authors: R. Punithavathi, A. Delphin Carolina Rani, K. R. Sughashinir, Chinnarao Kurangit, M. Nirmala, Hasmath Farhana Thariq Ahmed, S. P. Balamurugan. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 2759-2774 (16 pages).
Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished through computer vision (CV) approaches. Weeds play a vital role in limiting crop productivity, and full-coverage chemical herbicide spraying increases waste and pollutes the farmland's natural atmosphere. Since properly distinguishing weeds from crops helps reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate plants from weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted using the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique on a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
Keywords: precision agriculture, smart farming, weed detection, computer vision, deep learning
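For readers unfamiliar with the extreme learning machine (ELM) used as the classifier here, the NumPy sketch below shows the standard construction: a random, fixed hidden layer and output weights solved in closed form with a pseudoinverse. The hidden-layer size and activation are arbitrary illustrative choices; in the paper they would be tuned by the FFO algorithm, which is not reproduced here.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine for classification."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Random input weights and biases stay fixed; only beta is learned.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # hidden activations
        T = np.eye(n_classes)[y]                        # one-hot targets
        self.beta = np.linalg.pinv(H) @ T               # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy stand-in for features of crop vs. weed image regions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(2, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
elm = ELM(n_hidden=64).fit(X, y)
print("training accuracy:", np.mean(elm.predict(X) == y))
```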
12. RDA-CNN: Enhanced Super Resolution Method for Rice Plant Disease Classification (Cited by: 3)
Authors: K. Sathya, M. Rajalakshmi. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 7, pp. 33-47 (15 pages).
In the field of agriculture, the development of an early-warning diagnostic system is essential for the timely detection and accurate diagnosis of diseases in rice plants. This research focuses on identifying plant diseases and detecting them promptly through advancements in the field of computer vision. Images obtained from in-field farms typically carry little visual information, and the lack of high-resolution crop images has a significant impact on classification accuracy in disease diagnosis. We propose a novel Reconstructed Disease Aware Convolutional Neural Network (RDA-CNN), inspired by recent CNN architectures, that integrates image super resolution and classification into a single model for rice plant disease classification. This network takes low-resolution images of rice crops as input and employs super resolution layers to transform them into super-resolution images, recovering appearance details such as spots, rot, and lesions on different parts of the rice plants. Extensive experimental results indicate that the proposed RDA-CNN performs well under diverse conditions, generating visually pleasing images and outperforming other conventional Super Resolution (SR) methods. Furthermore, these super-resolution images are subsequently passed through deep classification layers for disease classification. The results demonstrate that the RDA-CNN significantly boosts classification performance by nearly 4-6% compared with the baseline architectures.
Keywords: super-resolution, deep learning, interpolation, convolutional neural network, agriculture, rice plant disease classification
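The RDA-CNN architecture itself is not spelled out in the abstract; the PyTorch sketch below only illustrates the general pattern it describes, a small sub-pixel (PixelShuffle) super-resolution head feeding a classification head, with all layer sizes and the 2x scale factor chosen arbitrarily.

```python
import torch
import torch.nn as nn

class SRThenClassify(nn.Module):
    """Toy pipeline: upscale a low-resolution image, then classify the result."""
    def __init__(self, scale=2, n_classes=4):
        super().__init__()
        # Super-resolution head: features -> sub-pixel upsampling back to RGB.
        self.sr = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Classification head operating on the reconstructed image.
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, lr_image):
        sr_image = self.sr(lr_image)           # e.g. 64x64 -> 128x128
        return sr_image, self.classifier(sr_image)

model = SRThenClassify()
lr = torch.randn(2, 3, 64, 64)
sr, logits = model(lr)
print(sr.shape, logits.shape)   # torch.Size([2, 3, 128, 128]) torch.Size([2, 4])
```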
13. Future Event Prediction Based on Temporal Knowledge Graph Embedding (Cited by: 4)
Authors: Zhipeng Li, Shanshan Feng, Jun Shi, Yang Zhou, Yong Liao, Yangzhao Yang, Yangyang Li, Nenghai Yu, Xun Shao. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 2411-2423 (13 pages).
Accurate prediction of future events brings great benefits and reduces losses for society in many domains, such as civil unrest, pandemics, and crime. The knowledge graph is a general language for describing and modeling complex systems. Different types of events continually occur and are often related to historical and concurrent events. In this paper, we formalize future event prediction as a temporal knowledge graph reasoning problem. Most existing studies either conduct reasoning on static knowledge graphs or assume that the knowledge graphs of all timestamps are available during training. As a result, they cannot effectively reason over temporal knowledge graphs and predict events happening in the future. To address this problem, some recent works learn to infer future events from historical event-based temporal knowledge graphs. However, these methods do not comprehensively consider the latent patterns and influences behind historical events and concurrent events simultaneously. This paper proposes a new graph representation learning model, namely the Recurrent Event Graph ATtention Network (RE-GAT), based on a novel historical- and concurrent-events attention-aware mechanism that models the event knowledge graph sequence recurrently. More specifically, our RE-GAT uses an attention-based historical events embedding module to encode past events and employs an attention-based concurrent events embedding module to model the associations of events at the same timestamp. A translation-based decoder module and a learning objective are developed to optimize the embeddings of entities and relations. We evaluate our proposed method on four benchmark datasets. Extensive experimental results demonstrate the superiority of our RE-GAT model over various baselines, which proves that our method can more accurately predict which events are going to happen.
Keywords: event prediction, temporal knowledge graph, graph representation learning, knowledge embedding
14. TLSmell: Direct Identification on Malicious HTTPs Encryption Traffic with Simple Connection-Specific Indicators (Cited by: 4)
Authors: Zhengqiu Weng, Timing Chen, Tiantian Zhu, Hang Dong, Dan Zhou, Osama Alfarraj. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 4, pp. 105-119 (15 pages).
Internet traffic encryption is a very common traffic protection method. Most internet traffic is protected by the encryption protocol called Transport Layer Security (TLS). Although traffic encryption can ensure the security of communication, it also enables malware to hide its information and avoid being detected. At present, most malicious traffic detection methods are aimed at unencrypted traffic. The detection of encrypted traffic suffers from problems such as a high false positive rate, difficulty in feature extraction, and insufficient practicability, and the accuracy and effectiveness of existing methods need to be improved. In this paper, we present TLSmell, a framework that detects malicious encrypted HTTPs traffic with simple connection-specific indicators by using different classifiers based on online training. We perform deep packet analysis of encrypted traffic through data pre-processing to extract effective features, and then an online training algorithm is used for training and prediction. Without decrypting the original traffic, high-precision malicious traffic detection and analysis are realized, which guarantees user privacy and communication security. At the same time, since there is no need to decrypt traffic in advance, the efficiency of detecting malicious HTTPs traffic is greatly improved. Combined with traditional detection and analysis methods, malicious HTTPs traffic is screened, and suspicious traffic is further analyzed by experts through the context of suspicious behaviors, thereby improving the overall performance of malicious encrypted traffic detection.
Keywords: cyber security, malware detection, TLS, feature engineering
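The abstract mentions classifiers trained online on simple per-connection indicators; the scikit-learn sketch below shows one generic way to do that with `SGDClassifier.partial_fit` on mini-batches of hand-picked features. The feature set and the synthetic data are illustrative assumptions, not the indicators actually used by TLSmell.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical per-connection indicators: duration, bytes sent, bytes received,
# packet count, mean inter-arrival time (5 features per connection).
def synthetic_batch(n, rng, malicious_rate=0.3):
    y = (rng.random(n) < malicious_rate).astype(int)
    X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 5))
    return X, y

scaler = StandardScaler()
clf = SGDClassifier(random_state=0)

# Online training: the model is updated batch by batch as traffic arrives.
rng = np.random.default_rng(0)
for step in range(20):
    X, y = synthetic_batch(256, rng)
    Xs = scaler.partial_fit(X).transform(X)
    clf.partial_fit(Xs, y, classes=[0, 1])

X_test, y_test = synthetic_batch(1000, rng)
acc = clf.score(scaler.transform(X_test), y_test)
print("held-out accuracy:", round(acc, 3))
```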
15. A Data Security Framework for Cloud Computing Services (Cited by: 3)
Authors: Luis-Eduardo Bautista-Villalpando, Alain Abran. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 5, pp. 203-218 (16 pages).
Cyberattacks are difficult to prevent because the targeted companies and organizations often rely on new and fundamentally insecure cloud-based technologies, such as the Internet of Things. With increasing industry adoption and the migration of traditional computing services to the cloud, one of the main challenges in cybersecurity is to provide mechanisms to secure these technologies. This work proposes a Data Security Framework for cloud computing services (CCS) that evaluates and improves CCS data security from a software engineering perspective, by evaluating the levels of security within the cloud computing paradigm using engineering methods and techniques applied to CCS. This framework is developed by means of a methodology based on a heuristic theory that incorporates knowledge generated by existing works as well as the experience of their implementation. The paper presents the design details of the framework, which consists of three stages: identification of data security requirements, management of data security risks, and evaluation of data security performance in CCS.
Keywords: cloud computing, services, computer security, data security, data security requirements, data risk, data security measurement
16. Deep-BERT: Transfer Learning for Classifying Multilingual Offensive Texts on Social Media (Cited by: 4)
Authors: Md. Anwar Hussen Wadud, M. F. Mridha, Jungpil Shin, Kamruddin Nur, Aloke Kumar Saha. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 2, pp. 1775-1791 (17 pages).
Offensive messages on social media have recently been frequently used to harass and criticize people. In recent studies, many promising algorithms have been developed to identify offensive texts. Most algorithms analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture the semantic and contextual information in sentences. In addition, there are many separate models for identifying offensive texts in monolingual or multilingual settings, but few models can detect both monolingual and multilingual offensive texts. In this study, a detection system has been developed for both monolingual and multilingual offensive texts by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media that are used to harass others. This paper explores a variety of ways to deal with multilingualism, including collaborative multilingual and translation-based approaches. Deep-BERT is then tested on Bengali and English datasets, including different bidirectional encoder representations from transformers (BERT) pre-trained word-embedding techniques, and the proposed Deep-BERT is found to outperform all existing offensive text classification algorithms, reaching an accuracy of 91.83%. The proposed model is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
Keywords: offensive text classification, deep convolutional neural network (DCNN), bidirectional encoder representations from transformers (BERT), natural language processing (NLP)
17. Energy Efficient QoS Aware Cluster Based Multihop Routing Protocol for WSN (Cited by: 3)
Authors: M. S. Maharajan, T. Abirami. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 6, pp. 1173-1189 (17 pages).
Wireless sensor networks (WSN) have become a hot research area owing to their unique characteristics and applicability in diverse application areas. Clustering and routing can be considered an NP-hard optimization problem, which can be addressed by metaheuristic optimization algorithms. With this motivation, this study presents a chaotic sandpiper optimization algorithm based clustering with groundwater flow optimization based routing technique (CSPOC-GFLR). The goal of the CSPOC-GFLR technique is to cluster the sensor nodes in a WSN and elect an optimal set of routes, with the intention of achieving energy efficiency and maximizing network lifetime. The CSPOC algorithm is derived by incorporating concepts from chaos theory to boost the global optimization capability of the SPOC algorithm. The CSPOC technique elects an optimal set of cluster heads (CH), and the other sensors are allocated to the nearest CH. Extensive experimentation portrayed the promising performance of the CSPOC-GFLR technique, achieving reduced energy utilization, improved lifetime, and prolonged stability compared with existing techniques.
Keywords: clustering, routing, wireless sensor networks, energy efficiency, network lifetime, metaheuristics
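The abstract says chaos theory is folded into the sandpiper optimizer but gives no equations; the NumPy sketch below shows the common generic trick of seeding a metaheuristic's population with a logistic chaotic map instead of uniform random numbers. The map parameter, population size, and bounds are illustrative assumptions, and the sandpiper update rules themselves are not reproduced.

```python
import numpy as np

def logistic_chaotic_sequence(length, x0=0.7, r=4.0):
    """Logistic map x_{t+1} = r * x_t * (1 - x_t); with r = 4 it behaves chaotically."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_population(pop_size, dim, lower, upper):
    """Initialize candidate solutions from a chaotic sequence mapped into [lower, upper]."""
    seq = logistic_chaotic_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + seq * (upper - lower)

# Example: 20 candidate solutions encoded as 5-dimensional positions in a 100 m field.
pop = chaotic_population(pop_size=20, dim=5, lower=0.0, upper=100.0)
print(pop.shape, round(pop.min(), 2), round(pop.max(), 2))
```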
18. A Quantum Spatial Graph Convolutional Network for Text Classification (Cited by: 3)
Authors: Syed Mustajar Ahmad Shah, Hongwei Ge, Sami Ahmed Haider, Muhammad Irshad, Sohail M. Noman, Jehangir Arshad, Asfandeyar Ahmad, Talha Younas. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 2, pp. 369-382 (14 pages).
Applications of data generated from non-Euclidean domains and of its graphical representation (with complex relationships and object interdependence) have seen exponential growth. The sophistication of graph data has posed consequential obstacles to existing machine learning algorithms. In this study, we consider a revamped version of a semi-supervised learning algorithm for graph-structured data to address the issue of extending deep learning approaches to represent graph data. Additionally, quantum information theory is applied through Graph Neural Networks (GNNs) to generate Riemannian metrics in closed form for several graph layers. Furthermore, to pre-process the adjacency matrix of graphs, a new formulation is established to incorporate high-order proximities. The proposed scheme shows outstanding improvements in overcoming the deficiencies of the Graph Convolutional Network (GCN), particularly information loss and imprecise information representation, with acceptable computational overhead. Moreover, the proposed Quantum Graph Convolutional Network (QGCN) significantly strengthens the GCN on semi-supervised node classification tasks. In parallel, it expands the generalization process by making small random perturbations of the graph during the training process. Evaluation results are provided on three benchmark datasets, namely Citeseer, Cora, and PubMed, which distinctly delineate the superiority of the proposed model in terms of accuracy against the state-of-the-art GCN and three other methods based on the same algorithms in the existing literature.
Keywords: text classification, deep learning, graph convolutional networks, semi-supervised learning, GPUs, performance improvements
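Since the abstract builds on the standard GCN, the short NumPy sketch below shows the baseline GCN propagation rule H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W) that the proposed QGCN extends; the quantum-inspired metric and the high-order proximity preprocessing are not reproduced, and the toy graph and dimensions are assumptions.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN layer: symmetric normalization of A with self-loops, then propagation."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # D^{-1/2} (A + I) D^{-1/2}
    return activation(A_norm @ H @ W)

# Toy graph with 4 nodes and 3 input features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 2))   # e.g., 2 output classes

H1 = gcn_layer(A, H0, W1)
logits = gcn_layer(A, H1, W2, activation=lambda x: x)
print(logits.shape)            # (4, 2) -- per-node class scores
```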
19. A Review of Dynamic Resource Management in Cloud Computing Environments (Cited by: 3)
Authors: Mohammad Aldossary. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 3, pp. 461-476 (16 pages).
In a cloud environment, Virtual Machine (VM) consolidation and resource provisioning are used to address the issue of workload fluctuations. VM consolidation aims to move VMs from one host to another in order to reduce the number of active hosts and save power, whereas resource provisioning attempts to provide additional resource capacity to the VMs as needed in order to meet Quality of Service (QoS) requirements. However, these techniques have a set of limitations in terms of the additional costs related to migration and scaling time, and the energy overhead, which need further consideration. Therefore, this paper presents a comprehensive literature review on dynamic resource management (i.e., VM consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of the closely related works. The outcomes of this research can be used to enhance the development of predictive resource management techniques, by considering awareness of performance variation, energy consumption, and cost to efficiently manage cloud resources.
Keywords: cloud computing, resource management, VM consolidation, live migration, resource provisioning, auto-scaling
20. Data Security Storage Model of the Internet of Things Based on Blockchain (Cited by: 3)
Authors: Pingshui Wang, Willy Susilo. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 1, pp. 213-224 (12 pages).
With the development of information technology, the Internet of Things (IoT) has gradually become the third wave of the worldwide information industry revolution, after the computer and the Internet. The application of the IoT has brought great convenience to people's production and daily life. However, the potential information security problems in various IoT applications are gradually being exposed and are attracting more attention. The traditional centralized data storage and management model of the IoT is prone to transmission delay, single points of failure, privacy disclosure, and other problems, which eventually lead to unpredictable system behavior. Blockchain technology can effectively improve the operation and data security of the IoT. Referring to the storage model of the Fabric blockchain project, this paper designs a data security storage model suitable for IoT systems. The simulation results show that the model is not only effective and extensible, but also better protects the data security of the Internet of Things.
Keywords: Internet of Things (IoT), blockchain, data security, digital signatures, encryption, model
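As background for the blockchain-backed storage model above, the Python sketch below shows the basic tamper-evidence mechanism a ledger of IoT records relies on: each block stores the hash of its predecessor, so altering any record invalidates every later link. The block fields and the use of SHA-256 are generic illustrative choices; the Fabric-style model referenced in the paper is not reproduced.

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev_hash, records):
    return {"timestamp": time.time(), "prev_hash": prev_hash, "records": records}

def verify_chain(chain):
    """Check that every block's prev_hash matches the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny ledger of IoT sensor records.
genesis = new_block(prev_hash="0" * 64, records=[{"device": "sensor-1", "temp": 21.5}])
b1 = new_block(block_hash(genesis), [{"device": "sensor-2", "temp": 22.1}])
b2 = new_block(block_hash(b1), [{"device": "sensor-1", "temp": 21.7}])
chain = [genesis, b1, b2]
print("chain valid:", verify_chain(chain))

# Tampering with an earlier record breaks the chain.
chain[1]["records"][0]["temp"] = 99.9
print("after tampering:", verify_chain(chain))
```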