The rapid advancement of 6G communication technologies and generative artificial intelligence (AI) is catalyzing a new wave of innovation at the intersection of networking and intelligent computing. On the one hand, 6G envisions a hyper-connected environment that supports ubiquitous intelligence through ultra-low latency, high throughput, massive device connectivity, and integrated sensing and communication. On the other hand, generative AI, powered by large foundation models, has emerged as a powerful paradigm capable of creating…
Intelligent techniques foster the dissemination of new discoveries and novel technologies that advance the ability of robots to assist and support humans. The human-centered intelligent robot has become an important research field that spans all robot capabilities, including navigation, intelligent control, pattern recognition, and human-robot interaction. This paper surveys recent achievements and existing work on human-centered robots, reviews the recent development of the human-centered intelligent robot, and discusses the open issues and challenges in the field.
In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we propose open challenges and present future research directions for beyond-5G networks, such as 6G.
Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished with computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, and blanket spraying of chemical herbicides wastes resources and pollutes the farmland environment. Since properly distinguishing weeds from crops helps reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique aims to properly discriminate crop plants from weeds. It involves two processes, namely multiscale Faster R-CNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted using the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique on a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
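The ELM classifier at the core of CVDL-WDC admits a compact closed-form training step: a random hidden layer followed by a least-squares solve for the output weights. The sketch below is a minimal generic ELM on synthetic two-class data, not the paper's tuned model; the FFO parameter tuning is omitted, and the hidden-layer size and data are illustrative assumptions.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=50, seed=0):
    """Basic extreme learning machine: random hidden layer,
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class data standing in for crop/weed feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2.0, size=(40, 4)),
               rng.normal(loc=+2.0, size=(40, 4))])
y = np.array([0] * 40 + [1] * 40)
T = np.eye(2)[y]                                  # one-hot targets
W, b, beta = elm_train(X, T)
acc = (elm_predict(X, W, b, beta) == y).mean()
```

In a full pipeline, the FFO algorithm would search over the random-layer configuration rather than leaving it fixed as here.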
Oceanic autonomous surface vehicles (ASVs) are autonomous marine robots that offer energy savings and flexible deployment. Nowadays, ASVs play an important role in marine science, the maritime industry, and national defense: they can improve the efficiency of oceanic data collection, ensure marine transportation safety, and protect national security. One of the core challenges for ASVs is how to autonomously plan safe navigation in complex ocean environments. Based on the type of marine vehicle, ASVs can be divided into two categories: autonomous sailboats and autonomous vessels. In this article, we review the challenges and related solutions of ASVs' autonomous navigation, including modeling analysis, path planning, and implementation. Finally, we summarize all of these in four tables and discuss future research directions.
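Path planning is one of the navigation components reviewed above. As a minimal illustration of grid-based planning (a deliberate simplification of ocean navigation that ignores vehicle dynamics, currents, and collision regulations), a breadth-first search over an occupancy grid finds a shortest obstacle-free route:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest obstacle-free path on a 4-connected occupancy grid
    (0 = free water, 1 = obstacle). Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct path by walking back-pointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                   # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
```

Practical ASV planners layer cost functions (currents, traffic rules) and smoothing on top of such a discrete search.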
The 14-3-3 protein family is among the most extensively studied, yet still largely mysterious, protein families in mammals to date. As 14-3-3 proteins are well recognized for their roles in apoptosis, cell cycle regulation, and proliferation in healthy cells, aberrant 14-3-3 expression has unsurprisingly emerged as instrumental in the development of many cancers and in prognosis. Interestingly, while the seven known 14-3-3 isoforms in humans have many similar functions across cell types, evidence of isoform-specific functions and localization has been observed in both healthy and diseased cells. The strikingly high similarity among 14-3-3 isoforms has made it difficult to delineate isoform-specific functions and to achieve isoform-specific targeting. Here, we review our knowledge of the 14-3-3 interactome(s) generated by high-throughput techniques, bioinformatics, structural genomics, and chemical genomics, and point out that integrating this information with molecular dynamics (MD) simulations may open new opportunities for the design of isoform-specific inhibitors, which can serve not only as powerful research tools for delineating the distinct interactomes of individual 14-3-3 isoforms, but also as potential new anti-cancer drugs that selectively target aberrant 14-3-3 isoforms.
Sparse representation plays an important role in face recognition research. As a deformable-sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions have become key factors affecting recognition accuracy. Essentially, there may be significant differences between different image samples of the same face, which makes image classification very difficult. Therefore, how to build a robust virtual image representation becomes a vital issue. To solve these problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated, respectively. After obtaining these two types of coefficients, the distances from the test sample to the original sample and to the virtual sample are computed and converted into distance scores. Finally, a simple and effective weight fusion scheme fuses the classification scores of the original image and the virtual image, and the fused score determines the final classification result. Experimental results show that the proposed method outperforms other typical sparse representation classification methods.
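The final fusion step can be illustrated compactly. The distance-to-score conversion and the 0.6/0.4 weighting below are assumed stand-ins, not the paper's exact scheme:

```python
import numpy as np

def distances_to_scores(d):
    """Map class-wise residual distances to similarity scores in [0, 1]
    (smaller distance -> larger score); one common choice, assumed here."""
    d = np.asarray(d, dtype=float)
    return (d.max() - d) / (d.max() - d.min())

def fuse(scores_orig, scores_virt, w=0.6):
    """Weighted fusion of original-image and virtual-image scores."""
    return w * np.asarray(scores_orig) + (1 - w) * np.asarray(scores_virt)

# Distances from a test sample to each of 3 classes, per representation.
d_orig = [10.0, 4.0, 8.0]   # original samples favour class 1
d_virt = [9.0, 5.0, 6.0]    # virtual samples also favour class 1
fused = fuse(distances_to_scores(d_orig), distances_to_scores(d_virt))
label = int(np.argmax(fused))
```

In the full method, the distances would come from sparse-representation residuals rather than being given directly.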
The Estrada index of a graph G on n vertices is defined by EE(G) = ∑_{i=1}^{n} e^{λ_i}, where λ_1, λ_2, …, λ_n are the adjacency eigenvalues of G. We define two general types of dynamic graphs evolving according to continuous-time Markov processes, with their stationary distributions matching the Erdős-Rényi random graph and the random graph with given expected degrees, respectively. We formulate some new estimates and upper and lower bounds for the Estrada indices of these dynamic graphs.
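The definition above can be evaluated directly from the adjacency spectrum; a minimal sketch:

```python
import math
import numpy as np

def estrada_index(A):
    """Estrada index EE(G) = sum_i exp(lambda_i), where lambda_i are the
    adjacency eigenvalues of an undirected graph with adjacency matrix A."""
    eigvals = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # symmetric solver
    return float(np.sum(np.exp(eigvals)))

# Complete graph K3: eigenvalues 2, -1, -1, so EE = e^2 + 2/e.
A_k3 = [[0, 1, 1],
        [1, 0, 1],
        [1, 1, 0]]
ee = estrada_index(A_k3)
```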
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we developed and evaluated advanced deep learning models that leverage the strengths of both CNNs and transformers. Our methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared to related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
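The two headline metrics are straightforward to compute from binary masks; a minimal sketch (toy 2x2 masks, not the paper's data):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())   # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                        # |A∩B| / |A∪B|
    return float(dice), float(jaccard)

pred = [[1, 1],
        [0, 0]]
gt = [[1, 0],
      [0, 0]]
dice, jacc = dice_jaccard(pred, gt)
```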
Every application in a smart city environment, such as the smart grid, health monitoring, security, and surveillance, generates non-stationary data streams. Due to this nature, the statistical properties of the data change over time, leading to class imbalance and concept drift issues, both of which degrade model performance. Most current work has focused on developing an ensemble strategy that trains a new classifier on the latest data to resolve the issue. These techniques suffer while training the new classifier if the data is imbalanced. Also, the class imbalance ratio may change greatly from one input stream to another, making the problem more complex. Existing solutions for the combined issue of class imbalance and concept drift lack an understanding of how one problem correlates with the other. This work studies the association between concept drift and the class imbalance ratio and then demonstrates how changes in the class imbalance ratio, together with concept drift, affect classifier performance. We analyzed the effect of both issues on the minority and majority classes individually. To do this, we conducted experiments on benchmark datasets using state-of-the-art classifiers specifically designed for data stream classification. Precision, recall, F1 score, and geometric mean were used to measure performance. Our findings show that when the class imbalance and concept drift problems occur together, performance can decrease by up to 15%. Our results also show that an increase in the imbalance ratio can cause a 10% to 15% decrease in the precision scores of both minority and majority classes. These findings may help in designing intelligent and adaptive solutions that can cope with the challenges of non-stationary data streams such as concept drift and class imbalance.
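The evaluation metrics named above can be computed per class directly from predictions; a minimal sketch using a toy label stream (the data and class set are illustrative):

```python
import math

def per_class_metrics(y_true, y_pred, positive):
    """Precision, recall and F1 for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def g_mean(y_true, y_pred, classes=(0, 1)):
    """Geometric mean of per-class recalls, robust under class imbalance."""
    recalls = [per_class_metrics(y_true, y_pred, c)[1] for c in classes]
    return math.sqrt(recalls[0] * recalls[1])

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
p1, r1, f1 = per_class_metrics(y_true, y_pred, positive=1)
gm = g_mean(y_true, y_pred)
```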
The use of mobile phone technologies in the education sector is receiving more attention nowadays. This is due to the advancement of the technologies equipped in the majority of mobile phones, which makes the devices more capable of supporting learning and teaching activities. Mobile learning (m-learning) is a learning tool that runs on mobile devices and can be considered an enhancement of electronic learning (e-learning). M-learning overcomes several limitations of e-learning, especially in terms of mobility: it provides a more independent way of learning whereby learners can use the application to carry out learning activities at any time and any place. However, as with other learning and teaching applications, applications developed for mobile learning must be based on certain learning theories and guidelines in order to be effective as well as usable. Therefore, in this paper, the development process of a mobile learning course content application called Mobile System Analysis and Design (MOSAD), intended as a revision tool, is shared, and the conduct and results of its testing are presented and discussed. MOSAD was developed with the content of a topic from the System Analysis and Design (SAD) course conducted at Universiti Teknologi PETRONAS (UTP). A heuristic test involving 5 experts in Human Computer Interaction (HCI) was conducted after the first version of MOSAD was completed to strengthen its functionality and usability, followed by a post-test quasi-experimental design conducted with 116 UTP second-year students who took the SAD course to test the effectiveness and usability of the revised MOSAD. In the post-test, the students who used MOSAD as their revision tool (66 of the 116 students) obtained a mean score of 7.7576 on ten quiz questions, compared with 5.160 for the other group (50 of the 116 students) who used traditional revision methods. In addition, a usability test covering consistency, learnability, flexibility, minimal action, and minimal memory load gave results above 3.5 for each metric on a rating scale of 1 to 5. Both results indicate that MOSAD is effective and usable as a revision tool for higher education students.
A novel example-based process for Automated Colorization of grayscale images using Texture Descriptors (ACTD), requiring no human intervention, is proposed. By analyzing a set of sample color images, coherent regions of homogeneous textures are extracted. A multi-channel filtering technique is used for texture-based image segmentation, combined with a modified Fuzzy C-means (FCM) clustering algorithm. This modified FCM clustering algorithm includes both the local spatial information from neighboring pixels and the spatial Euclidean distance to the cluster's center of gravity. For each area of interest, state-of-the-art texture descriptors are then computed and stored, along with the corresponding color information. These texture descriptors and the color information are used to colorize a grayscale image with similar textures. Given a grayscale image to be colorized, the segmentation and feature extraction processes are repeated. The texture descriptors are used to perform Content-Based Image Retrieval (CBIR), and the colorization itself is performed by chroma replacement. This approach has numerous applications, ranging from classic film restoration and enhancement to adding valuable information to medical and satellite imaging; it can also be used to enhance the detection of objects in X-ray images at airports.
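The chroma-replacement step can be sketched in a few lines: keep the grayscale image's luma and transplant the chroma of matched reference pixels. The BT.601 RGB-to-YCbCr matrix below is a standard choice assumed here, not necessarily the one used in ACTD:

```python
import numpy as np

# BT.601 RGB -> YCbCr transform (offsets omitted in this float sketch).
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
M_inv = np.linalg.inv(M)

def colorize_pixelwise(gray, ref_rgb):
    """Chroma replacement: keep the grayscale image's luma (Y) and
    transplant Cb/Cr from matched reference pixels."""
    ref_ycc = ref_rgb @ M.T          # reference pixels in YCbCr space
    ycc = np.empty_like(ref_ycc)
    ycc[..., 0] = gray               # luma from the target image
    ycc[..., 1:] = ref_ycc[..., 1:]  # chroma from the reference
    return ycc @ M_inv.T             # back to RGB

gray = np.array([[0.2, 0.8]])                          # 1x2 grayscale target
ref = np.array([[[0.9, 0.1, 0.1], [0.1, 0.1, 0.9]]])   # matched reference colors
out = colorize_pixelwise(gray, ref)
luma_out = out @ M.T                                   # luma is preserved
```

In the full pipeline, the reference pixel for each region would be selected by the CBIR texture match rather than given directly.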
In this paper, we propose a new concept, depth of drowsiness, which describes drowsiness more precisely than the existing binary description. A set of effective markers for drowsiness, the normalized band norms, was successfully developed. These markers are invariant to the voltage amplitude of brain waves, eliminating the need to calibrate the voltage output of brain-computer interface devices. A new polling algorithm was designed and implemented for computing the depth of drowsiness. The time cost of data acquisition and processing for each estimate is about one second, which is well suited for real-time applications. Test results with a portable brain-computer interface device show that the depth of drowsiness computed by our method is generally invariant to the age of the test subject and to the sensor channel (P3 and C4). The comparison between experimental and computed results indicates that the new method is noticeably more accurate at predicting drowsiness than one of the recent methods.
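The amplitude-invariance property of a normalized band measure is easy to demonstrate: dividing band power by total power cancels any constant gain applied to the recording. The band definitions and synthetic signal below are illustrative assumptions, not the paper's exact marker:

```python
import numpy as np

def normalized_band_power(signal, fs, bands):
    """Relative power per frequency band, normalized by total power so the
    result is invariant to the overall voltage amplitude of the recording."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spectrum.sum()
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

fs = 256
t = np.arange(fs * 2) / fs
# Synthetic "EEG": a 6 Hz theta component plus a weaker 10 Hz alpha component.
x = 1.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
bands = {"theta": (4, 8), "alpha": (8, 13)}
p1 = normalized_band_power(x, fs, bands)
p2 = normalized_band_power(3.7 * x, fs, bands)   # same signal, higher gain
```

Because the gain scales every spectral bin by the same factor, p1 and p2 agree, which is the calibration-free property claimed above.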
This study compares websites using live data and search engine optimization (SEO). SEO is a series of steps that can help a website rank highly in search engine results. Websites are of two types: static and dynamic. Static websites require programming expertise compatible with SEO, whereas dynamic websites can use readily available plugins/modules. The fundamental issues for all website owners are low page rank, congestion, utilization, and exposure of the website on search engines. Here, the authors studied the live data of four websites, as real-time data indicates how an SEO strategy may be applied to website page rank, page difficulty removal, brand query, and so on. It is also necessary to choose relevant keywords for any website: the right keywords can help increase brand queries while lowering page difficulty both on and off the page. To calculate Off-page SEO, On-page SEO, and SEO Difficulty, the authors examined live data from four well-known Indian university and institute websites: www.caluniv.ac.in, www.jnu.ac.in, www.iima.ac.in, and www.iitb.ac.in. It was found that the Off-page SEO of www.caluniv.ac.in is lower than that of www.jnu.ac.in, www.iima.ac.in, and www.iitb.ac.in by 9%, 7%, and 7%, respectively, while its On-page SEO is 4%, 1%, and 1% higher. Every university has maintained its own brand query. Additionally, www.caluniv.ac.in has slightly lower SEO Difficulty than the other websites. The final computed results are displayed and compared.
With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, accurately segmenting melanomas in dermoscopic images remains challenging because of objects that can interfere with observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net framework for skin melanoma segmentation. In our proposed architecture, we introduce several innovative components that aim to enhance the performance and capabilities of the traditional U-Net. First, we establish a novel framework that links two simplified U-Nets, enabling more comprehensive information exchange and feature integration throughout the network. Second, after cascading the second U-Net, we introduce a skip connection between the decoder and encoder networks and incorporate a modified receptive field block (MRFB) designed to capture multi-scale spatial information. Third, to further enhance feature representation, we add a multi-path convolution block attention module (MCBAM) to the first two layers of the first U-Net encoder and integrate a new squeeze-and-excitation (SE) mechanism with residual connections in the second U-Net. To illustrate the performance of the proposed model, we conducted comprehensive experiments on widely recognized skin datasets. On the ISIC-2017 dataset, the IoU of the proposed model increased from 0.6406 to 0.6819 and the Dice coefficient from 0.7625 to 0.8023. On the ISIC-2018 dataset, the IoU improved from 0.7138 to 0.7709, while the Dice coefficient increased from 0.8285 to 0.8665. Furthermore, generalization experiments on the jaw cyst dataset from Quzhou People's Hospital further verified the outstanding segmentation performance of the proposed model. These findings collectively affirm the potential of our approach as a valuable tool for supporting clinical decision-making in skin cancer detection and for advancing research in medical image analysis.
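The squeeze-and-excitation mechanism with a residual connection mentioned above can be sketched as a short NumPy forward pass (illustrative weight shapes and random weights, not the paper's trained module):

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Squeeze-and-excitation channel attention with a residual connection:
    global-average-pool -> FC -> ReLU -> FC -> sigmoid -> rescale + skip.
    x has shape (channels, height, width); w1, w2 are FC weight matrices."""
    z = x.mean(axis=(1, 2))                  # squeeze: per-channel statistic
    s = np.maximum(w1 @ z, 0.0)              # excitation hidden layer (ReLU)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # per-channel gates in (0, 1)
    return x + x * gate[:, None, None]       # rescale channels, keep residual

rng = np.random.default_rng(0)
C, H, W = 4, 5, 5
x = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(2, C))    # reduction to C // 2 hidden units
w2 = rng.normal(size=(C, 2))
y = squeeze_excitation(x, w1, w2)
```

Because each gate lies in (0, 1) and the input is added back, every output element keeps the sign of its input with magnitude between 1x and 2x.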
Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
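The lexical half of the hybrid features can be sketched with a minimal TF-IDF (raw term count times log(N/df), a simplified variant of the usual weighting), concatenated with a placeholder vector standing in for the Word2Vec/BERT embeddings:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Minimal TF-IDF over whitespace tokens: tf(w, d) * log(N / df(w)).
    A simplified variant of the lexical features used in the hybrid model."""
    vocab = sorted({w for d in docs for w in d.split()})
    df = Counter(w for d in docs for w in set(d.split()))  # document frequency
    n = len(docs)
    vectors = []
    for d in docs:
        tf = Counter(d.split())
        vectors.append([tf[w] * math.log(n / df[w]) for w in vocab])
    return vocab, vectors

docs = ["you are mean", "have a nice day", "mean mean words"]
vocab, vecs = tfidf_vectors(docs)

# A hybrid feature vector could then concatenate TF-IDF with an embedding;
# the embedding values here are placeholders, not real Word2Vec/BERT output.
embedding = [0.1, -0.2]
hybrid = vecs[2] + embedding
```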
Since the beginning of the 21st century, modern medical technology has advanced rapidly, and cryomedicine has seen significant progress. Notable developments include the application of cryomedicine in assisted reproduction, the cryopreservation of sperm, eggs, and embryos, and the preservation of skin, fingers, and other isolated tissues. However, the cryopreservation of large and complex tissues or organs remains highly challenging. Beyond the damage caused by the freezing and rewarming processes and the inherent complexity of tissues and organs, there is an urgent need to address issues related to damage detection and the investigation of injury mechanisms. This review provides a retrospective analysis of existing methods for assessing tissue and organ viability. Although current techniques can detect damage to some extent, they tend to be relatively simple, time-consuming, and limited in their ability to provide timely and comprehensive viability assessments. By summarizing and evaluating these approaches, our study aims to contribute to the improvement of viability detection methods and to promote further development in this critical area.
In this work, we consider an Unmanned Aerial Vehicle (UAV) aided covert edge computing architecture, where multiple sensors are scattered at certain distances on the ground. Each sensor can execute several computation tasks, but in emergency scenarios the computational capabilities of sensors are often limited, as in vehicular networks or Internet of Things (IoT) networks. The UAV can be utilized to undertake part of the computation tasks, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system in which multiple ground sensors offload part of their computation tasks to the UAV while a nearby warden (Willie) attempts to detect the transmissions. Since the transmit power of the sensors, their offloading proportions, and the hovering height of the UAV jointly affect the covert performance, we propose a deep reinforcement learning framework to optimize them jointly. The proposed algorithm minimizes the average task processing delay while guaranteeing that the sensors' transmissions are not detected by Willie under the covertness constraint. Extensive simulations verify that the proposed algorithm decreases the average task processing delay compared with other algorithms.
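The delay side of the trade-off can be illustrated with a toy partial-offloading model: the local share computes on the sensor while the offloaded share is transmitted to and computed at the UAV in parallel. The model, parameter values, and grid search below are illustrative assumptions, a crude stand-in for the paper's DRL optimization (the covertness constraint is omitted):

```python
def processing_delay(rho, task_bits, cycles_per_bit, f_sensor, f_uav, rate):
    """Task processing delay for offloading proportion rho in [0, 1]:
    the local part runs on the sensor CPU; the offloaded part is sent
    at `rate` bit/s and computed on the UAV; the two run in parallel."""
    local = (1 - rho) * task_bits * cycles_per_bit / f_sensor
    offload = rho * task_bits / rate + rho * task_bits * cycles_per_bit / f_uav
    return max(local, offload)

# Exhaustive search over rho as a crude stand-in for a learned policy.
params = dict(task_bits=1e6, cycles_per_bit=100,
              f_sensor=1e8, f_uav=5e9, rate=1e7)
grid = [i / 100 for i in range(101)]
best_rho = min(grid, key=lambda r: processing_delay(r, **params))
best = processing_delay(best_rho, **params)
```

With these assumed parameters the optimum is interior: offloading everything is limited by the link rate, offloading nothing by the slow sensor CPU.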
The growing sophistication of cyberthreats, among them Distributed Denial of Service (DDoS) attacks, has exposed limitations in traditional rule-based Security Information and Event Management systems. While machine learning based intrusion detection systems can capture complex network behaviours, their "black-box" nature often limits trust and actionable insight for security operators. This study introduces a novel approach that integrates Explainable Artificial Intelligence (xAI) with the Random Forest classifier to derive human-interpretable rules, thereby enhancing the detection of DDoS attacks. The proposed framework combines traditional static rule formulation with advanced xAI techniques, SHapley Additive exPlanations and Scoped Rules, to extract decision criteria from a fully trained model. The methodology was validated on two benchmark datasets, CICIDS2017 and WUSTL-IIOT-2021. Extracted rules were evaluated against conventional Security Information and Event Management rules with metrics such as precision, recall, accuracy, balanced accuracy, and the Matthews Correlation Coefficient. Experimental results demonstrate that xAI-derived rules consistently outperform traditional static rules. Notably, the most refined xAI-generated rule achieved near-perfect performance, with significantly improved detection of DDoS traffic while maintaining high accuracy in classifying benign traffic across both datasets.
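The rule-evaluation step can be sketched as a small scoring harness that works for any candidate rule, static or xAI-derived. The flow features and thresholds below are illustrative, not extracted from the paper's models:

```python
import math

def evaluate_rule(rule, records, labels):
    """Score a detection rule (a predicate over flow features) with
    precision, recall and the Matthews Correlation Coefficient."""
    tp = fp = tn = fn = 0
    for rec, is_attack in zip(records, labels):
        fired = rule(rec)
        if fired and is_attack:
            tp += 1
        elif fired:
            fp += 1
        elif is_attack:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return precision, recall, mcc

# Toy flow records; features and thresholds are hypothetical examples.
flows = [{"pps": 9000, "syn_ratio": 0.9}, {"pps": 8000, "syn_ratio": 0.8},
         {"pps": 120, "syn_ratio": 0.1},  {"pps": 300, "syn_ratio": 0.7},
         {"pps": 7000, "syn_ratio": 0.2}, {"pps": 150, "syn_ratio": 0.05}]
labels = [1, 1, 0, 0, 1, 0]
rule = lambda f: f["pps"] > 5000 and f["syn_ratio"] > 0.5
prec, rec, mcc = evaluate_rule(rule, flows, labels)
```

The same harness would let an analyst compare a hand-written SIEM rule against an xAI-extracted one on identical traffic.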
Funding (human-centered intelligent robot survey): supported in part by the National Natural Science Foundation of China (61573147, 91520201, 61625303, 61522302, 61761130080); Guangzhou Research Collaborative Innovation Projects (2014Y2-00507); Guangdong Science and Technology Research Collaborative Innovation Projects (20138010102010, 20148090901056, 20158020214003); Guangdong Science and Technology Plan Project (Application Technology Research Foundation) (2015B020233006); and the National High-Tech Research and Development Program of China (863 Program) (2015AA042303).
Funding: Supported in part by the National Key R&D Program of China (2019YFE0196400), the Key Research and Development Program of Shaanxi (2022KWZ09), the National Natural Science Foundation of China (61771358, 61901317, 62071352), the Fundamental Research Funds for the Central Universities (JB190104), the Joint Education Project between China and Central-Eastern European Countries (202005), and the 111 Project (B08038).
Abstract: In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we propose open challenges and present future research directions for beyond-5G networks, such as 6G.
Abstract: Presently, precision agriculture processes such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished through computer vision (CV) approaches. Weeds strongly influence crop productivity, and full-coverage chemical herbicide spraying wastes resources and pollutes the farmland's natural environment. Since proper discrimination of weeds from crops helps reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate crop plants from weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted using the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique against a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
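The ELM classifier at the heart of the pipeline trains in closed form: hidden-layer weights are drawn at random and fixed, and only the output weights are solved for via a pseudo-inverse. A minimal sketch follows; the hidden-layer size, activation, and data are illustrative assumptions, and the FFO parameter tuning described above is omitted:

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Extreme learning machine: random fixed hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to a single linear solve, ELM is cheap to retrain, which is convenient when an outer optimizer such as FFO repeatedly re-evaluates candidate parameter settings.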
Funding: Partially supported by the National Key R&D Program (No. 2016YFC1401900), the China Postdoctoral Science Foundation (No. 2017M620293), the Fundamental Research Funds for the Central Universities (No. 201713016), the Qingdao National Laboratory for Marine Science and Technology Open Research Project (No. QNLM2016ORP0405), the Natural Science Foundation of Shandong (No. ZR2018BF006), the National Natural Science Foundation of China (No. 61572347), and the U.S. Department of Transportation Center for Advanced Multimodal Mobility Solutions and Education (No. 69A3351747133).
Abstract: Oceanic autonomous surface vehicles (ASVs) are autonomous marine robots that save energy and are flexible to use. Nowadays, ASVs play an important role in marine science, the maritime industry, and national defense: they can improve the efficiency of oceanic data collection, ensure marine transportation safety, and protect national security. One of the core challenges for ASVs is how to plan safe navigation autonomously in complicated ocean environments. Based on the type of marine vehicle, ASVs can be divided into two categories: autonomous sailboats and autonomous vessels. In this article, we review the challenges and related solutions of ASVs' autonomous navigation, including modeling analysis, path planning, and implementation. Finally, we summarize all of these in four tables and discuss future research directions.
Abstract: The 14-3-3 protein family is among the most extensively studied, yet still largely mysterious, protein families in mammals to date. As they are well recognized for their roles in apoptosis, cell cycle regulation, and proliferation in healthy cells, aberrant 14-3-3 expression has unsurprisingly emerged as instrumental in the development of many cancers and in prognosis. Interestingly, while the seven known 14-3-3 isoforms in humans have many similar functions across cell types, evidence of isoform-specific functions and localization has been observed in both healthy and diseased cells. The strikingly high similarity among 14-3-3 isoforms has made it difficult to delineate isoform-specific functions and to achieve isoform-specific targeting. Here, we review our knowledge of the 14-3-3 interactome(s) generated by high-throughput techniques, bioinformatics, structural genomics, and chemical genomics, and point out that integrating this information with molecular dynamics (MD) simulations may bring new opportunities for the design of isoform-specific inhibitors, which can serve not only as powerful research tools for delineating the distinct interactomes of individual 14-3-3 isoforms, but also as potential new anti-cancer drugs that selectively target aberrant 14-3-3 isoforms.
Funding: Supported by the Research Foundation for Advanced Talents of Guizhou University under Grant (2016) No. 49, the Key Disciplines of Guizhou Province Computer Science and Technology (ZDXK[2018]007), the Research Projects of Innovation Group of Education (QianJiaoHeKY[2021]022), and the National Natural Science Foundation of China (62062023).
Abstract: Sparse representation plays an important role in face recognition research. As a deformable-sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions have become key factors that affect recognition accuracy. Essentially, there may be significant differences between different image samples of the same face, which makes image classification very difficult. Therefore, how to build a robust virtual image representation becomes a vital issue. To solve these problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated, respectively. After obtaining these two types of coefficients, the distance between the original sample and the test sample and the distance between the virtual sample and the test sample are calculated, and both are converted into distance scores. Finally, a simple and effective weight-fusion scheme is adopted to fuse the classification scores of the original image and the virtual image; the fused score determines the final classification result. Experimental results show that the proposed method outperforms other typical sparse representation classification methods.
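The final fusion step can be sketched in a few lines. The abstract does not specify the distance-to-score mapping or the fusion weight, so the reciprocal mapping and the weight w below are assumptions:

```python
import numpy as np

def fuse_and_classify(d_orig, d_virt, w=0.6):
    """d_orig, d_virt: per-class residual distances (smaller = better match)."""
    # convert distances to scores so that a smaller distance yields a larger score
    s_orig = 1.0 / (1.0 + np.asarray(d_orig, dtype=float))
    s_virt = 1.0 / (1.0 + np.asarray(d_virt, dtype=float))
    fused = w * s_orig + (1.0 - w) * s_virt   # weighted score fusion
    return int(np.argmax(fused)), fused
```

The convex weight lets the original-sample evidence dominate while the virtual-sample evidence breaks ties caused by lighting or expression changes.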
Funding: Supported by a starting grant of Northumbria University.
Abstract: The Estrada index of a graph G on n vertices is defined by EE(G) = Σ_{i=1}^{n} e^{λ_i}, where λ_1, λ_2, …, λ_n are the adjacency eigenvalues of G. We define two general types of dynamic graphs evolving according to continuous-time Markov processes, with their stationary distributions matching the Erdős–Rényi random graph and the random graph with given expected degrees, respectively. We formulate new estimates and upper and lower bounds for the Estrada indices of these dynamic graphs.
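For a static graph, the definition above computes directly from the adjacency spectrum; a minimal sketch:

```python
import numpy as np

def estrada_index(A):
    """EE(G) = sum_i exp(lambda_i) over the adjacency eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(A)   # symmetric adjacency matrix
    return float(np.exp(eigenvalues).sum())
```

For example, the single-edge graph K2 has eigenvalues +1 and -1, so EE(K2) = e + 1/e ≈ 3.086, and an empty graph on n vertices has EE = n.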
Funding: Supported by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number RG-23137.
Abstract: The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we developed and evaluated advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared to related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
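The two headline overlap metrics used in the evaluation are easy to state precisely; a minimal sketch for binary segmentation masks (returning values in [0, 1]):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard
```

The two are monotonically related (Dice = 2J/(1+J)), so ranking models by one usually ranks them by the other; reporting both mainly aids comparison with prior work.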
Funding: The authors would like to extend their gratitude to Universiti Teknologi PETRONAS (Malaysia) for funding this research through grant number 015LA0-037.
Abstract: Every application in a smart city environment, such as the smart grid, health monitoring, security, and surveillance, generates non-stationary data streams. Due to this nature, the statistical properties of the data change over time, leading to class imbalance and concept drift issues. Both of these issues cause model performance degradation. Most current work has focused on developing an ensemble strategy that resolves the issue by training a new classifier on the latest data. These techniques suffer while training the new classifier if the data is imbalanced. Also, the class imbalance ratio may change greatly from one input stream to another, making the problem more complex. The existing solutions proposed for addressing the combined issue of class imbalance and concept drift lack an understanding of how one problem correlates with the other. This work studies the association between concept drift and the class imbalance ratio and then demonstrates how changes in the class imbalance ratio, together with concept drift, affect classifier performance. We analyzed the effect of both issues on the minority and majority classes individually. To do this, we conducted experiments on benchmark datasets using state-of-the-art classifiers especially designed for data stream classification. Precision, recall, F1 score, and geometric mean were used to measure performance. Our findings show that when the class imbalance and concept drift problems occur together, performance can decrease by up to 15%. Our results also show that an increase in the imbalance ratio can cause a 10% to 15% decrease in the precision scores of both the minority and majority classes. These findings may help in designing intelligent and adaptive solutions that can cope with the challenges of non-stationary data streams, such as concept drift and class imbalance.
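The four evaluation measures named above follow directly from confusion-matrix counts; a minimal sketch for the binary case, treating the minority class as positive (the G-mean here is the usual sqrt of recall times specificity):

```python
import math

def stream_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and geometric mean from binary confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    gmean = math.sqrt(recall * specificity)   # punishes ignoring either class
    return precision, recall, f1, gmean
```

Unlike raw accuracy, the G-mean collapses toward zero if either class is sacrificed, which is why it is favored for imbalanced streams.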
Abstract: The use of mobile phone technologies in the education sector is receiving more attention nowadays. This is due to the advancement of the technologies equipped in the majority of mobile phones, which makes the devices more capable of supporting learning and teaching activities. Mobile learning (m-learning) is a learning tool that runs on mobile devices and can be considered an enhancement of electronic learning (e-learning). M-learning overcomes several limitations of e-learning, especially in terms of mobility: it provides a more independent way of learning whereby learners can carry out learning activities at any time and in any place. However, as with other learning and teaching applications, applications developed for mobile learning must be based on appropriate learning theories and guidelines in order to be effective as well as usable. Therefore, this paper shares the development process of a mobile learning course content application called Mobile System Analysis and Design (MOSAD), a revision tool, and presents and discusses how it was tested and the results obtained. MOSAD was developed with the content of a topic from the System Analysis and Design (SAD) course conducted at Universiti Teknologi PETRONAS (UTP). A heuristic test involving 5 experts in the area of Human Computer Interaction (HCI) was conducted after the first version of MOSAD was completed to strengthen its functionality and usability, followed by a post-test quasi-experimental design administered to 116 UTP second-year students taking the SAD course to test the effectiveness and usability of the revised MOSAD. In the post test, the students who used MOSAD as their revision tool (66 of the 116 students) obtained a mean score of 7.7576 on ten quiz questions, compared with 5.160 for the group (50 of the 116) who used traditional revision methods. In addition, the usability test, which examined the consistency, learnability, flexibility, minimal action, and minimal memory load of MOSAD, gave results above 3.5 for each metric on a rating scale of 1 to 5. Both results indicate that MOSAD is effective and usable as a revision tool for higher-education students.
Abstract: A novel example-based process for the Automated Colorization of grayscale images using Texture Descriptors (ACTD), without any human intervention, is proposed. By analyzing a set of sample color images, coherent regions of homogeneous textures are extracted. A multi-channel filtering technique is used for texture-based image segmentation, combined with a modified Fuzzy C-means (FCM) clustering algorithm. This modified FCM clustering algorithm includes both the local spatial information from neighboring pixels and the spatial Euclidean distance to the cluster's center of gravity. For each area of interest, state-of-the-art texture descriptors are then computed and stored, along with the corresponding color information. These texture descriptors and the color information are used for the colorization of a grayscale image with similar textures. Given a grayscale image to be colorized, the segmentation and feature-extraction processes are repeated. The texture descriptors are used to perform Content-Based Image Retrieval (CBIR), and the colorization itself is performed by chroma replacement. This research finds numerous applications, ranging from classic film restoration and enhancement to adding valuable information to medical and satellite imaging. It can also be used to enhance the detection of objects in x-ray images at airports.
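Plain FCM, which the modified algorithm above extends, alternates membership and center updates; a minimal numpy sketch with fuzzifier m = 2 and a simple deterministic initialization (the paper's spatial-neighborhood and center-of-gravity terms are omitted here):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50):
    """Fuzzy C-means: returns soft memberships U (rows sum to 1) and cluster centers."""
    # deterministic init: spread initial centers across the sample index range
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # membership kernel 1/d^(2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True) # normalize memberships per sample
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy-weighted means
    return U, centers
```

The soft memberships, rather than hard assignments, are what make it natural to mix in spatial evidence from neighboring pixels, as the modified FCM above does.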
Abstract: In this paper, we propose a new concept, the depth of drowsiness, which describes drowsiness more precisely than the existing binary description. A set of effective markers for drowsiness, the normalized band norms, was developed. These markers are invariant to the voltage amplitude of brain waves, eliminating the need to calibrate the voltage output of brain-computer interface devices. A new polling algorithm was designed and implemented for computing the depth of drowsiness. The time cost of data acquisition and processing for each estimate is about one second, which is well suited to real-time applications. Test results with a portable brain-computer interface device show that the depth of drowsiness computed by our method is generally invariant to the age of the test subject and to the sensor channel (P3 or C4). The comparison between experimental and computed results indicates that the new method is noticeably more accurate at predicting drowsiness than one of the recent methods.
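The abstract does not define the normalized band norm exactly; assuming it is a frequency band's spectral power divided by total spectral power, the claimed amplitude invariance follows immediately, since scaling the signal scales numerator and denominator equally:

```python
import numpy as np

def normalized_band_power(signal, fs, band):
    """Fraction of spectral power inside [band[0], band[1]) Hz; amplitude-invariant."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return psd[in_band].sum() / psd.sum()
```

This is only an assumed reading of the marker; the band edges and windowing used in the actual system are not given in the abstract.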
Abstract: This study compares websites using search engine optimization (SEO) metrics computed on live data. SEO is a series of steps that can help a website rank highly in search engine results. Websites are of two types: static and dynamic. Static websites require programming expertise compatible with SEO, whereas dynamic websites can use readily available plugins/modules. The fundamental issues for all website owners are low page rank, congestion, utilization, and exposure of the website on search engines. Here, the authors study the live data of four websites, as real-time data indicates how an SEO strategy may be applied to website page rank, page-difficulty removal, brand query, and so on. It is also necessary to choose relevant keywords for any website: the right keyword can help increase the brand query while lowering page difficulty both on and off the page. To calculate off-page SEO, on-page SEO, and SEO difficulty, the authors examined live data from four well-known Indian university and institute websites: www.caluniv.ac.in, www.jnu.ac.in, www.iima.ac.in, and www.iitb.ac.in. The results show that the off-page SEO of www.caluniv.ac.in is lower than that of www.jnu.ac.in, www.iima.ac.in, and www.iitb.ac.in by 9%, 7%, and 7%, respectively, while its on-page SEO is 4%, 1%, and 1% higher. Every university has maintained its own brand query, and www.caluniv.ac.in has slightly lower SEO difficulty than the other websites. The final computed results are displayed and compared.
Funding: Funded by the Zhejiang Basic Public Welfare Research Project (grant number LZY24E060001), Guangzhou Development Zone Science and Technology (2021GH10, 2020GH10, 2023GH02), the University of Macao (MYRG2022-00271-FST), and the Science and Technology Development Fund (FDCT) of Macao (0032/2022/A).
Abstract: With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, significant challenges remain in accurately segmenting melanomas in dermoscopic images due to objects that can interfere with observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net framework for skin melanoma segmentation. In our architecture, we introduce several innovative components that enhance the performance and capabilities of the traditional U-Net. First, we establish a novel framework that links two simplified U-Nets, enabling more comprehensive information exchange and feature integration throughout the network. Second, after cascading the second U-Net, we introduce a skip connection between the decoder and encoder networks and incorporate a modified receptive field block (MRFB) designed to capture multi-scale spatial information. Third, to further enhance feature representation, we add a multi-path convolution block attention module (MCBAM) to the first two layers of the first U-Net encoder and integrate a new squeeze-and-excitation (SE) mechanism with residual connections into the second U-Net. To illustrate the performance of the proposed model, we conducted comprehensive experiments on widely recognized skin datasets. On the ISIC-2017 dataset, the IoU of the proposed model increased from 0.6406 to 0.6819 and the Dice coefficient from 0.7625 to 0.8023. On the ISIC-2018 dataset, the IoU improved from 0.7138 to 0.7709, while the Dice coefficient increased from 0.8285 to 0.8665. Furthermore, generalization experiments conducted on the jaw cyst dataset from Quzhou People's Hospital further verified the outstanding segmentation performance of the proposed model. These findings collectively affirm the potential of our approach as a valuable tool for supporting clinical decision-making in skin cancer detection, as well as for advancing research in medical image analysis.
Funding: Funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number RG-23092.
Abstract: Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories such as passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
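The TF-IDF component of the hybrid feature set can be sketched in pure Python. The weighting variant below (raw term frequency normalized by document length, plain log inverse document frequency) is an assumption, as the abstract does not state which variant was used:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors
```

Terms that appear in every document get weight zero, which is exactly the "lexical specificity" the fusion relies on: only distinctive terms contribute.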
Abstract: Since the beginning of the 21st century, modern medical technology has advanced rapidly, and cryomedicine has seen significant progress. Notable developments include the application of cryomedicine in assisted reproduction, with the cryopreservation of sperm, eggs, and embryos, as well as the preservation of skin, fingers, and other isolated tissues. However, the cryopreservation of large and complex tissues or organs remains highly challenging. In addition to the damage caused by the freezing and rewarming processes and the inherent complexity of tissues and organs, there is an urgent need to address issues related to damage detection and the investigation of injury mechanisms. This paper provides a retrospective analysis of existing methods for assessing tissue and organ viability. Although current techniques can detect damage to some extent, they tend to be relatively simple and time-consuming, and are limited in their ability to provide timely and comprehensive viability assessments. By summarizing and evaluating these approaches, our study aims to contribute to the improvement of viability detection methods and to promote further development in this critical area.
Funding: Co-supported by the National Natural Science Foundation of China (No. 62271093), the Natural Science Foundation of Chongqing, China (No. CSTB2023NSCQ-LZX0108), and the Chongqing Graduate Research Innovation Project, China (No. CYS23093).
Abstract: In this work, we consider an Unmanned Aerial Vehicle (UAV) aided covert edge computing architecture in which multiple sensors are scattered on the ground at certain distances. Each sensor can carry out several computation tasks. In emergency scenarios, the computational capabilities of sensors are often limited, as seen in vehicular networks or Internet of Things (IoT) networks, and a UAV can be utilized to undertake part of the computation, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV aided edge computing system. Parts of the computation tasks of multiple ground sensors are offloaded to the UAV, while a warden (Willie) nearby attempts to detect the transmissions. Since the transmit power of the sensors, their offloading proportions, and the hovering height of the UAV all affect the system's covert performance, we propose a deep reinforcement learning framework to jointly optimize them. The proposed algorithm minimizes the average task-processing delay while guaranteeing, under the covertness constraint, that the sensors' transmissions are not detected by Willie. Extensive simulations verify the effectiveness of the proposed algorithm in decreasing the average task-processing delay in comparison with other algorithms.
Funding: Funded under the Horizon Europe AI4CYBER Project, which has received funding from the European Union's Horizon Europe Research and Innovation Programme under grant agreement No. 101070450.
Abstract: The growing sophistication of cyberthreats, among them Distributed Denial of Service (DDoS) attacks, has exposed limitations in traditional rule-based Security Information and Event Management systems. While machine learning based intrusion detection systems can capture complex network behaviours, their "black-box" nature often limits trust and actionable insight for security operators. This study introduces a novel approach that integrates Explainable Artificial Intelligence (xAI) with a Random Forest classifier to derive human-interpretable rules, thereby enhancing the detection of DDoS attacks. The proposed framework combines traditional static rule formulation with advanced xAI techniques, SHapley Additive exPlanations and Scoped Rules, to extract decision criteria from a fully trained model. The methodology was validated on two benchmark datasets, CICIDS2017 and WUSTL-IIOT-2021. The extracted rules were evaluated against conventional Security Information and Event Management system rules with metrics such as precision, recall, accuracy, balanced accuracy, and the Matthews Correlation Coefficient. Experimental results demonstrate that the xAI-derived rules consistently outperform traditional static rules. Notably, the most refined xAI-generated rule achieved near-perfect performance, with significantly improved detection of DDoS traffic while maintaining high accuracy in classifying benign traffic on both datasets.
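Among the metrics listed, the Matthews Correlation Coefficient is the least standard to compute by hand; a minimal sketch for a binary DDoS-vs-benign confusion matrix:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0   # degenerate matrices conventionally map to 0
```

A rule that simply flags all traffic as DDoS can score high recall and, on a DDoS-heavy dataset, high accuracy, yet its MCC is zero, which is why MCC and balanced accuracy are reported alongside accuracy when comparing rules.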