BACKGROUND Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS Different grades of esophageal lesions were collected, and a total of 360 groups of Raman spectrum data were collected. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best: a 93.30% accuracy value, a 96.65% specificity value, a 93.30% sensitivity value, and a 93.17% F1 score were achieved. CONCLUSION Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. The combination of Raman spectroscopy and deep learning methods could significantly improve the accuracy of classification.
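The abstract does not describe the 1D-transformer architecture itself. As a rough, non-authoritative illustration of the scaled dot-product attention step that any transformer applies to a tokenized input, here is a pure-Python sketch; the idea of splitting a Raman spectrum into patch vectors, and all dimensions, are hypothetical assumptions, not the authors' design:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a sequence.

    query: a d-dimensional vector; keys/values: lists of such vectors.
    In a 1D-transformer, each "token" could be a patch of the spectrum.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # weighted sum of value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

With identity-like values, the output simply reproduces the attention weights, which always sum to one.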
Flexible electronics face critical challenges in achieving monolithic three-dimensional (3D) integration, including material compatibility, structural stability, and scalable fabrication methods. Inspired by the tactile sensing mechanism of the human skin, we have developed a flexible monolithic 3D-integrated tactile sensing system based on a holey MXene paste, where each vertical one-body unit simultaneously functions as a microsupercapacitor and pressure sensor. The in-plane mesopores of MXene significantly improve ion accessibility, mitigate the self-stacking of nanosheets, and allow the holey MXene to multifunctionally act as a sensing material, an active electrode, and a conductive interconnect, thus drastically reducing the interface mismatch and enhancing the mechanical robustness. Furthermore, we fabricate a large-scale device using a blade-coating and stamping method, which demonstrates excellent mechanical flexibility, low power consumption, rapid response, and stable long-term operation. As a proof-of-concept application, we integrate our sensing array into a smart access control system, leveraging deep learning to accurately identify users based on their unique pressing behaviors. This study provides a promising approach for designing highly integrated, intelligent, and flexible electronic systems for advanced human-computer interactions and personalized electronics.
Background: Deep Learning Algorithms (DLA) have become prominent as an application of Artificial Intelligence (AI) techniques since 2010. This paper introduces the DLA to predict the relationships between individual tree height (ITH) and the diameter at breast height (DBH). Methods: A set of 2024 pairs of individual height and diameter at breast height measurements, originating from 150 sample plots located in stands of even-aged and pure Anatolian Crimean Pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in Konya Forest Enterprise, was used. The present study primarily investigated the capability and usability of DLA models for predicting the relationships between the ITH and the DBH sampled from stands with different growth structures. Eighty different DLA models, involving different alternatives for the numbers of hidden layers and neurons, were trained and compared to determine the optimum and best predictive DLA network structure. Results: The DLA model with 9 layers and 100 neurons was the best predictive network model compared with other DLA, Artificial Neural Network, Nonlinear Regression, and Nonlinear Mixed Effect models. The alternative of 100 neurons and 9 hidden layers in the deep learning algorithms produced the best predictive ITH values, with root mean squared error (RMSE, 0.5575), percent root mean squared error (RMSE%, 4.9504%), Akaike information criterion (AIC, -998.9540), Bayesian information criterion (BIC, 884.6591), fit index (FI, 0.9436), average absolute error (AAE, 0.4077), maximum absolute error (max. AE, 2.5106), bias (0.0057), and percent bias (Bias%, 0.0502%). In addition, these predictive results with DLAs were further validated by equivalence tests, which showed that the DLA models successfully predicted the tree height in the independent dataset. Conclusion: This study has emphasized the capability of the DLA models, a novel artificial intelligence technique, for predicting the relationships between individual tree height and the diameter at breast height, which can be required information for the management of forests.
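The goodness-of-fit statistics listed above (RMSE, RMSE%, AAE, max. AE, bias, Bias%) are standard and can be computed directly from paired observations and predictions. A minimal sketch; the function name and interface are illustrative, not from the paper:

```python
import math

def fit_statistics(observed, predicted):
    """Common regression fit statistics from paired samples."""
    n = len(observed)
    resid = [o - p for o, p in zip(observed, predicted)]
    mean_obs = sum(observed) / n
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    bias = sum(resid) / n
    return {
        "RMSE": rmse,
        "RMSE%": 100.0 * rmse / mean_obs,       # RMSE relative to the mean
        "AAE": sum(abs(r) for r in resid) / n,  # average absolute error
        "max.AE": max(abs(r) for r in resid),   # maximum absolute error
        "Bias": bias,
        "Bias%": 100.0 * bias / mean_obs,
    }
```

For example, observations [10, 10] against predictions [9, 11] give RMSE = 1.0, AAE = 1.0, and zero bias.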
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated vegetation indices of Sentinel 2A and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with corresponding R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Estimated biomass based on 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. It provides a guideline for estimating biomass of maize using remote sensing technology and the DNN algorithm in this region.
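The TBWI formula is not given in the abstract, so it is not reproduced here. As a hedged illustration, a classic two-band vegetation index (NDVI) and the coefficient of determination R2 used to score the regressions might be computed as follows:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared
    # and red reflectance (a representative index, not TBWI)
    return (nir - red) / (nir + red)

def r_squared(observed, predicted):
    """Coefficient of determination R2 for a set of predictions."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction yields R2 = 1, while predicting the mean everywhere yields R2 = 0.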
Due to the inconsistency of rice varieties, the agricultural industry faces an important challenge in rice grading and classification by the traditional grading system. The existing grading system is manual, which introduces stress and strain on humans due to visual inspection. Automated rice grading system development has been proposed as a promising research area in computer vision. In this study, an accurate deep learning-based, non-contact, and cost-effective rice grading system was developed based on rice appearance and characteristics. The proposed system provided real-time processing by using an NI myRIO with a high-resolution camera and a user interface. We first trained the network on a public rice dataset to extract discriminative rice features. Second, by using transfer learning, the pre-trained network was used to locate the region by extracting a feature map. The proposed deep learning model was tested using two public standard datasets and a prototype real-time scanning system. Using the AlexNet architecture, we obtained an average accuracy of 98.2% with 97.6% sensitivity and 96.4% specificity. To validate the real-time performance of the proposed rice grading classification system, various performance indices were calculated and compared with the existing classifier. Both simulation and real-time experimental evaluations confirmed the robustness and reliability of the proposed rice grading system.
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed numbers and death cases. The real-time data used have been collected from the World Health Organization (WHO). In the proposed research, we have considered three countries to test the proposed model, namely Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures, namely Mean Squared Error (MSE), Root Mean Squared Error (RMSE), mean error, and correlation, are employed to estimate the results of the proposed models. The empirical results of the LSTM, using the correlation metrics, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. As for the results of the LSTM model in predicting the number of deaths from Covid-19, they are 99.86%, 98.876%, and 99.16% with respect to Saudi Arabia, Italy, and Spain, respectively. Similarly, the experimental results of the Holt-trend model in predicting the number of confirmed cases of Covid-19, using the correlation metrics, are 99.06%, 99.96%, and 99.94%, whereas the results of the Holt-trend model in predicting the number of death cases are 99.80%, 99.96%, and 99.94% with respect to Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of the Covid-19 pandemic in general. The results were obtained by applying time series models, which need to be considered for the sake of saving the lives of many people.
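Holt's linear trend method itself is a standard exponential smoothing technique. A minimal pure-Python sketch; the smoothing parameters and the simple initialization used here are illustrative choices, not the paper's settings:

```python
def holt_forecast(series, alpha, beta, horizon):
    """Holt's linear trend forecasting.

    alpha smooths the level, beta smooths the trend; forecasts
    extrapolate the final level plus h steps of the final trend.
    """
    level = series[0]
    trend = series[1] - series[0]  # naive initial trend estimate
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]
```

On a perfectly linear series such as 1, 2, 3, 4, 5, the method recovers the trend exactly and forecasts 6, 7, 8.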
The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, because inter-area oscillation modes persist and destabilize power transmission networks. This effect is more noticeable in smart grid-connected systems, since the smart grid infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses the Deep Belief Network (DBN). The network weights are updated based on real-time data from phasor measurement units. A resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under a two-channel attack as well. The results also offer robust management of power system resilience and timely control of the operating conditions.
At present, the proportion of new energy in the power grid is increasing, and the random fluctuations in power output increase the risk of cascading failures in the power grid. In this paper, we propose a method for identifying high-risk scenarios of interlocking faults in new energy power grids based on a deep embedding clustering (DEC) algorithm and apply it in a risk assessment of cascading failures in different operating scenarios for new energy power grids. First, considering the real-time operation status and system structure of new energy power grids, the scenario cascading failure risk indicator is established. Based on this indicator, the risk of cascading failure is calculated for the scenario set, the scenarios are clustered based on the DEC algorithm, and the scenarios with the highest indicators are selected as the significant risk scenario set. The results of simulations with an example power grid show that our method can effectively identify scenarios with a high risk of cascading failures from a large number of scenarios.
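Deep embedding clustering (in the formulation of Xie et al.) softly assigns each embedded point to cluster centroids using a Student's t kernel. A pure-Python sketch of that assignment step; the embedding itself, produced by an autoencoder in DEC, and the paper's risk indicators are omitted:

```python
def dec_soft_assignment(z, centroids):
    """Soft cluster assignment q_j for one embedded point z.

    Uses the Student's t-distribution kernel (one degree of freedom),
    as in the DEC algorithm: q_j ∝ (1 + ||z - mu_j||^2)^(-1).
    """
    weights = []
    for mu in centroids:
        d2 = sum((zi - mi) ** 2 for zi, mi in zip(z, mu))
        weights.append(1.0 / (1.0 + d2))
    total = sum(weights)
    return [w / total for w in weights]  # normalized to sum to 1
```

A point lying on a centroid receives the largest share of the assignment mass, which is how the highest-risk scenario cluster would attract nearby operating scenarios.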
The unmanned aerial vehicle (UAV) swarm technology is one of the research hotspots of recent years. With the continuous improvement of the autonomous intelligence of UAVs, swarm technology will become one of the main trends of UAV development in the future. This paper studies the behavior decision-making process of the UAV swarm rendezvous task based on the double deep Q-network (DDQN) algorithm. We design a guided reward function to effectively solve the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-period tasks. We also propose the concept of a temporary storage area, optimizing the memory playback unit of the traditional DDQN algorithm, improving the convergence speed of the algorithm, and speeding up the training process. Different from the traditional task environment, this paper establishes a continuous state-space task environment model to improve the authenticity of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm in different task scenarios are trained. The experimental results validate that the DDQN algorithm is efficient in training the UAV swarm to complete the given collaborative tasks while meeting the requirements of the UAV swarm for decentralization and autonomy, and improving the intelligence of UAV swarm collaborative task execution. The simulation results show that, after training, the proposed UAV swarm can carry out the rendezvous task well, and the success rate of the mission reaches 90%.
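The guided reward function and the temporary storage area are specific to the paper and are not reproduced here. The standard double-DQN target update that the DDQN algorithm builds on can, however, be sketched directly (plain lists of Q-values stand in for network outputs):

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma, done):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, which reduces overestimation
    compared with vanilla DQN.
    """
    if done:
        return reward
    # argmax over the online network's Q-values for the next state
    best = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    return reward + gamma * next_q_target[best]
```

For example, with reward 1.0, online Q-values [0.2, 0.9], target Q-values [5.0, 2.0], and gamma 0.9, the online net picks action 1 and the target evaluates it, giving 1.0 + 0.9 * 2.0 = 2.8 rather than the vanilla-DQN value 1.0 + 0.9 * 5.0.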
With the advent of Machine and Deep Learning algorithms, medical image diagnosis has a new perception of diagnosis and clinical treatment. Regrettably, medical images are more susceptible to capturing noise despite the peak in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and clinical treatment processes. The existing intelligent methods suffer from a deficiency in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network which learns from the substantial extent of noise in medical data samples to alleviate this challenge. The proposed deep learning architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Various performance measures such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) are compared with those of the existing deep learning architectures. Additionally, a comprehensive analysis of the individual algorithms is presented. The experimental results prove that the proposed model has outperformed the other existing algorithms by a substantial margin and proved its supremacy over the other learning models.
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, false positives, and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and respective graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%). In the case of the CNN, the mean accuracy gained was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method has outperformed existing algorithms, including the CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty concerning its performance and its particular activation function. This proposed method can be utilized effectively for brain tumor detection in a precise and accurate manner. This algorithm would apply to brain tumor diagnosis and could be involved in various medical diagnoses after modification. If the quantity of dataset records is enormous, then the method's computational power has to be updated.
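The four measures reported above all follow from a binary confusion matrix. A minimal sketch of how they are computed (the per-iteration averaging and the EDLA itself are not reproduced):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification measures from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives.
    """
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),           # true positive rate
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }
```

For instance, 9 true positives, 1 false negative, 8 true negatives, and 2 false positives give an accuracy of 0.85, sensitivity 0.9, false positive rate 0.2, and false negative rate 0.1.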
Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, and real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models, such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM), in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is the synthesis of real-time deployment challenges, including latency, edge-cloud tradeoffs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions, including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
In hybrid beamforming design using the conventional gradient projection (GP) algorithm, it is common to use a fixed step size, which results in a slow convergence rate and unsatisfactory achievable rate performance. This paper employs a deep unfolding algorithm within a small fixed number of iterations to tackle the hybrid beamforming optimization problem. The optimal step size is obtained by combining the conventional GP algorithm with the deep learning technique, and every step in the deep learning is explainable. Simulation results show that the proposed deep unfolding algorithm demonstrates a lower computational time and superior achievable rate performance compared with the conventional GP algorithm.
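The paper's network and step-size training procedure are not reproduced here. The core idea of deep unfolding, one GP iteration per "layer" with its own learned step size instead of a single fixed constant, can be sketched for the unit-modulus (analog phase-shifter) constraint as follows; the gradient function and step values are hypothetical assumptions:

```python
def gp_step(w, grad, step):
    """One gradient step followed by projection onto the unit-modulus set.

    w, grad: lists of complex entries (an analog beamformer and the
    gradient of the objective w.r.t. it). The projection simply
    renormalizes each entry to magnitude one.
    """
    updated = [wi + step * gi for wi, gi in zip(w, grad)]
    return [u / abs(u) if abs(u) > 0 else complex(1.0) for u in updated]

def unfolded_gp(w, grad_fn, steps):
    # Each "layer" of the unfolded network is one GP iteration whose
    # step size would be learned offline rather than fixed.
    for step in steps:
        w = gp_step(w, grad_fn(w), step)
    return w
```

With a fixed step size, `steps` would be a constant list; deep unfolding instead trains one scalar per iteration, which is what keeps every step interpretable.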
Cervical cancer is a severe threat to women's health. The majority of cervical cancer cases occur in developing countries. The WHO has proposed screening 70% of women with high-performance tests between 35 and 45 years of age by 2030 to accelerate the elimination of cervical cancer. Due to inadequate health infrastructure and organized screening strategies, most low- and middle-income countries are still far from achieving this goal. As part of the efforts to increase the performance of cervical cancer screening, it is necessary to investigate the most accurate, efficient, and effective methods and strategies. Artificial intelligence (AI) is rapidly expanding its application in cancer screening and diagnosis, and deep learning algorithms have offered human-like interpretation capabilities on various medical images. AI will soon have a more significant role in improving the implementation of cervical cancer screening, management, and follow-up. This review aims to report the state of AI with respect to cervical cancer screening. We discuss the primary AI applications and the development of AI technology for image recognition applied to the detection of abnormal cytology and cervical neoplastic diseases, as well as the challenges that we anticipate in the future.
With the continuous development and utilization of marine resources, underwater target detection has gradually become a popular research topic in the field of underwater robot operations and target detection. However, it is difficult for detection algorithms to combine the environmental semantic information with the semantic information of targets at different scales due to the complex underwater environment. In this paper, a cascade model based on the UGC-YOLO network structure with high detection accuracy is proposed. The YOLOv3 convolutional neural network is employed as the baseline structure. By fusing the global semantic information between two residual stages in the parallel structure of the feature extraction network, the perception of underwater targets is improved and the detection rate of hard-to-detect underwater objects is raised. Furthermore, deformable convolution is applied to capture long-range semantic dependencies, and PPM pooling is introduced in the highest layer of the network for aggregating semantic information. Finally, a multi-scale weighted fusion approach is presented for learning semantic information at different scales. Experiments are conducted on an underwater test dataset, and the results demonstrate that our proposed algorithm can detect aquatic targets in complex degraded underwater images. Compared with the baseline network algorithm, the Common Objects in Context (COCO) evaluation metric has been improved by 4.34%.
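The COCO evaluation metric cited above is built on intersection-over-union (IoU) between predicted and ground-truth boxes, averaged over IoU thresholds. A minimal sketch of the IoU computation itself:

```python
def iou(box_a, box_b):
    """Intersection over union for axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```

Two unit-offset 2x2 boxes overlap in a 1x1 square, giving IoU 1/7; disjoint boxes give 0.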
With the recent increase in the utilization of logistics and courier services, it is time for research on logistics systems fused with the fourth industrial sector. Algorithm studies related to object recognition have been actively conducted in convergence with the emerging artificial intelligence field, but so far, algorithms suitable for automatic unloading devices that need to identify a number of unstructured cargoes require further development. In this study, the object recognition algorithm of the automatic loading device for cargo was selected as the subject of the study, and a cargo object recognition algorithm applicable to the automatic loading device is proposed to improve amorphous cargo identification performance. The fuzzy convergence algorithm applies Fuzzy C-Means to existing algorithm forms that fuse YOLO (You Only Look Once) and Mask R-CNN (Regions with Convolutional Neural Networks). Experiments conducted using the fuzzy convergence algorithm showed an average of 33 FPS (frames per second) and a recognition rate of 95%. In addition, there were significant improvements in the range of actual box recognition. The results of this study can contribute to improving the performance of identifying amorphous cargoes in automatic loading devices.
With the rapid development of sports, the number of sports images has increased dramatically. Intelligent and automatic processing and analysis of moving images are significant, as they can not only help users quickly search and access moving images but also help staff store and manage moving image data, contributing to the intellectual development of the sports industry. In this paper, a method of table tennis identification and positioning based on a convolutional neural network is proposed, which solves the problem that identification and positioning methods based on color features and contour features are not adaptable in various environments. At the same time, the learning methods and techniques of table tennis detection, positioning, and trajectory prediction are studied. A deep learning framework for recognition learning of rotating, flying table tennis balls is put forward. The mechanism and methods of positioning, trajectory prediction, and intelligent automatic processing of moving images are studied, and the self-built data sets are trained and verified.
The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to achieve autonomous decisions in continuous state and action spaces. In this paper, a cooperative defense with DDPG via swarms of unmanned aerial vehicles (UAVs) is developed and validated, which has shown promising practical value in defense effectiveness. We solve the sparse rewards problem of reinforcement learning in a long-term task by building the reward function of UAV swarms and optimizing the learning process of the artificial neural network based on the DDPG algorithm to reduce the vibration in the learning process. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the requirements of a UAV swarm for non-centralization and autonomy, and promoting the intelligent development of UAV swarms as well as the decision-making process.
Funding: Supported by Beijing Hospitals Authority Youth Programme, No. QML20200505.
Funding: Supported by the National Natural Science Foundation of China (52272177, 12204010); the Foundation for the Introduction of High-Level Talents of Anhui University (S020118002/097); the University Synergy Innovation Program of Anhui Province (GXXT-2023-066); and the Scientific Research Project of Anhui Provincial Higher Education Institution (2023AH040008).
Abstract: Flexible electronics face critical challenges in achieving monolithic three-dimensional (3D) integration, including material compatibility, structural stability, and scalable fabrication methods. Inspired by the tactile sensing mechanism of human skin, we have developed a flexible monolithic 3D-integrated tactile sensing system based on a holey MXene paste, where each vertical one-body unit simultaneously functions as a microsupercapacitor and a pressure sensor. The in-plane mesopores of MXene significantly improve ion accessibility, mitigate the self-stacking of nanosheets, and allow the holey MXene to act multifunctionally as a sensing material, an active electrode, and a conductive interconnect, thus drastically reducing interface mismatch and enhancing mechanical robustness. Furthermore, we fabricate a large-scale device using a blade-coating and stamping method, which demonstrates excellent mechanical flexibility, low power consumption, rapid response, and stable long-term operation. As a proof-of-concept application, we integrate our sensing array into a smart access control system, leveraging deep learning to accurately identify users based on their unique pressing behaviors. This study provides a promising approach for designing highly integrated, intelligent, and flexible electronic systems for advanced human-computer interactions and personalized electronics.
Abstract: Background: Deep Learning Algorithms (DLA) have become prominent as an application of Artificial Intelligence (AI) techniques since 2010. This paper introduces DLA to predict the relationships between individual tree height (ITH) and diameter at breast height (DBH). Methods: A set of 2024 pairs of individual height and diameter at breast height measurements was collected, originating from 150 sample plots located in stands of even-aged, pure Anatolian Crimean Pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in Konya Forest Enterprise. The present study primarily investigated the capability and usability of DLA models for predicting the relationships between ITH and DBH sampled from stands with different growth structures. Eighty different DLA models, involving different alternatives for the numbers of hidden layers and neurons, were trained and compared to determine the best predictive DLA network structure. Results: The DLA model with 9 hidden layers and 100 neurons was the best predictive network model compared with other DLA, Artificial Neural Network, Nonlinear Regression, and Nonlinear Mixed Effect models. The alternative of 100 neurons and 9 hidden layers resulted in the best predicted ITH values, with root mean squared error (RMSE, 0.5575), percent root mean squared error (RMSE%, 4.9504%), Akaike information criterion (AIC, -998.9540), Bayesian information criterion (BIC, 884.6591), fit index (FI, 0.9436), average absolute error (AAE, 0.4077), maximum absolute error (max. AE, 2.5106), bias (0.0057), and percent bias (Bias%, 0.0502%). In addition, these predictive results were further validated by equivalence tests, which showed that the DLA models successfully predicted tree height in the independent dataset. Conclusion: This study has emphasized the capability of DLA models, a novel artificial intelligence technique, for predicting the relationships between individual tree height and diameter at breast height, information that can be required for the management of forests.
Funding: Supported by the National Natural Science Foundation of China (41601369) and the Young Talents Program of the Institute of Crop Sciences, Chinese Academy of Agricultural Sciences (S2019YC04).
Abstract: Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated vegetation indices of Sentinel 2A and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with corresponding R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Biomass estimated from 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. This study provides a guideline for estimating maize biomass in this region using remote sensing technology and the DNN algorithm.
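The accuracy statistics quoted in this abstract (RMSE in t ha−1 and relative RMSE as a percentage of the mean observation) can be computed as follows; this is a generic sketch of the metrics, not the authors' code:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error, in the units of the target (t/ha here)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rrmse(y_true, y_pred):
    # Relative RMSE (%): RMSE normalized by the mean observed value
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(y_true))
```

With these definitions, an RRMSE of 38.22% simply means the RMSE is 38.22% of the mean measured biomass.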
Funding: The authors thank the Indian National Academy of Science, New Delhi, for providing a research fellowship in the Department of Electrical Engineering, Indian Institute of Technology, New Delhi, and the Department of Electrical and Electronics Engineering, Mepco Schlenk Engineering College, Sivakasi, India, for providing the necessary research facilities.
Abstract: Due to the inconsistency of rice varieties, the agricultural industry faces an important challenge in rice grading and classification under the traditional grading system. The existing grading system is manual, which imposes stress and strain on humans due to visual inspection. The development of an automated rice grading system has been proposed as a promising research area in computer vision. In this study, an accurate deep learning-based, non-contact, and cost-effective rice grading system was developed based on rice appearance and characteristics. The proposed system provided real-time processing by using an NI myRIO with a high-resolution camera and a user interface. We first trained the network on a public rice dataset to extract discriminative rice features. Second, using transfer learning, the pre-trained network was used to locate the region of interest by extracting a feature map. The proposed deep learning model was tested using two public standard datasets and a prototype real-time scanning system. Using the AlexNet architecture, we obtained an average accuracy of 98.2% with 97.6% sensitivity and 96.4% specificity. To validate the real-time performance of the proposed rice grading classification system, various performance indices were calculated and compared with the existing classifier. Both simulation and real-time experimental evaluations confirmed the robustness and reliability of the proposed rice grading system.
Abstract: The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed numbers and death cases. The real-time data used have been collected from the World Health Organization (WHO). In the proposed research, we have considered three countries to test the proposed model, namely Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. The standard performance measures mean squared error (MSE), root mean squared error (RMSE), mean error, and correlation are employed to evaluate the results of the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. As for the results of the LSTM model in predicting the number of deaths from Covid-19, they are 99.86%, 98.876%, and 99.16% with respect to Saudi Arabia, Italy, and Spain, respectively. Similarly, the results of the Holt-trend model in predicting the number of confirmed cases of Covid-19, using the correlation metric, are 99.06%, 99.96%, and 99.94%, whereas the results of the Holt-trend model in predicting the number of death cases are 99.80%, 99.96%, and 99.94% with respect to Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of the Covid-19 pandemic in general. The results were obtained by applying time series models, which need to be considered for the sake of saving the lives of many people.
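Holt's linear trend method, the baseline compared against the LSTM above, maintains an exponentially smoothed level and trend and extrapolates them forward. A pure-Python sketch (the smoothing constants here are illustrative, not those fitted in the study):

```python
def holt_trend(series, alpha=0.8, beta=0.2, horizon=7):
    """Holt's linear trend forecast.

    alpha smooths the level, beta smooths the trend; returns `horizon`
    forecasts beyond the end of `series`.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# On perfectly linear data the method continues the line
forecast = holt_trend([1, 2, 3, 4, 5], horizon=3)
```

For epidemic case curves, the series would be the daily cumulative confirmed or death counts per country.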
Abstract: The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, since inter-area oscillation modes delay and destabilize power transmission networks. This fact is more noticeable in smart grid-connected systems, as the smart grid infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses a Deep Belief Network (DBN). The network weights are updated based on real-time data from phasor measurement units. Resilience assessment, based on failure probability, financial impact, and time-series data in grid failure management, determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations for the two-channel attack as well. The results also offer robust management of power system resilience and timely control of the operating conditions.
Funding: Funded by the State Grid Limited Science and Technology Project of China, Grant Number SGSXDK00DJJS2200144.
Abstract: At present, the proportion of new energy in the power grid is increasing, and the random fluctuations in power output increase the risk of cascading failures in the power grid. In this paper, we propose a method for identifying high-risk scenarios of cascading faults in new energy power grids based on a deep embedding clustering (DEC) algorithm and apply it in a risk assessment of cascading failures in different operating scenarios for new energy power grids. First, considering the real-time operation status and system structure of new energy power grids, a scenario cascading failure risk indicator is established. Based on this indicator, the risk of cascading failure is calculated for the scenario set, the scenarios are clustered using the DEC algorithm, and the scenarios with the highest indicators are selected as the significant risk scenario set. The results of simulations with an example power grid show that our method can effectively identify scenarios with a high risk of cascading failures from a large number of scenarios.
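The clustering step in DEC soft-assigns each embedded scenario to a cluster with a Student's-t kernel and then sharpens that assignment into a self-training target. A numpy sketch of those two distributions (the embedding network itself is omitted, and the toy points are illustrative):

```python
import numpy as np

def soft_assign(z, centers, alpha=1.0):
    # Student's t-kernel soft assignment q_ij used in DEC
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    # Sharpened target p_ij emphasizing high-confidence assignments;
    # DEC minimizes KL(p || q) to refine both embedding and centers
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

# Toy embedded scenarios and two cluster centers (illustrative)
z = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
q = soft_assign(z, centers)
p = target_distribution(q)
```

In the paper's setting, z would be the encoder's embedding of each operating scenario, and the cluster with the highest risk indicator forms the significant risk scenario set.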
Funding: Supported by the Aeronautical Science Foundation (2017ZC53033).
Abstract: Unmanned aerial vehicle (UAV) swarm technology has been one of the research hotspots in recent years. With the continuous improvement of the autonomous intelligence of UAVs, swarm technology will become one of the main trends in UAV development in the future. This paper studies the behavior decision-making process of a UAV swarm rendezvous task based on the double deep Q-network (DDQN) algorithm. We design a guided reward function to effectively solve the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-period tasks. We also propose the concept of a temporary storage area, optimizing the memory replay unit of the traditional DDQN algorithm, improving the convergence speed of the algorithm, and speeding up the training process. Unlike the traditional task environment, this paper establishes a continuous state-space task environment model to improve the verification process of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm in different task scenarios are trained. The experimental results validate that the DDQN algorithm is efficient in training the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy, and improving the intelligence of collaborative task execution. The simulation results show that, after training, the proposed UAV swarm can carry out the rendezvous task well, and the mission success rate reaches 90%.
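The core change Double DQN makes to the standard Q-learning target is to let the online network choose the next action while the target network evaluates it, which reduces overestimation. A numpy sketch of that target computation (batch shapes and values are illustrative):

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap targets for a batch of transitions.

    next_q_online / next_q_target: (batch, n_actions) Q-values of the
    next state from the online and target networks respectively.
    """
    # Online network selects the greedy next action...
    a_star = next_q_online.argmax(axis=1)
    # ...and the target network scores that action
    bootstrap = next_q_target[np.arange(len(a_star)), a_star]
    # Terminal transitions (done=1) get no bootstrap term
    return rewards + gamma * (1.0 - dones) * bootstrap

rewards = np.array([1.0, 0.0])
q_on = np.array([[0.1, 0.9], [0.5, 0.2]])
q_tg = np.array([[0.3, 0.4], [0.6, 0.7]])
dones = np.array([0.0, 1.0])
targets = ddqn_targets(rewards, q_on, q_tg, dones, gamma=0.5)
```

The paper's guided reward function would replace the sparse `rewards` vector here with shaped intermediate rewards for the rendezvous task.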
Abstract: With the advent of machine and deep learning algorithms, medical image diagnosis has a new perception of diagnosis and clinical treatment. Regrettably, medical images are more susceptible to capturing noise despite the peak in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and clinical treatment processes. The existing intelligent methods suffer from a deficiency in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network that learns from the substantial extent of noise in medical data samples to alleviate this challenge. The proposed deep learning architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) are compared with those of existing deep learning architectures. Additionally, a comprehensive analysis of individual algorithms is provided. The experimental results prove that the proposed model has outperformed the other existing algorithms by a substantial margin and proved its supremacy over the other learning models.
Funding: Supported by Project No. R-2023-23 of the Deanship of Scientific Research at Majmaah University.
Abstract: At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, and false positive and false negative rates, to improve the brain tumor prediction system symmetrically. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and measures performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with a Convolutional Neural Network (CNN) using the SPSS tool, and respective graphical illustrations are shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%). In the case of the CNN, the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including the CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty concerning its performance and particular activation function. The proposed method can be utilized effectively for brain tumor detection in a precise and accurate manner, and could be applied to various medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational power would have to be updated.
Abstract: Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, and real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models, such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM), in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is the synthesis of real-time deployment challenges, including latency, edge-cloud tradeoffs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions, including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
Funding: Supported by the STU Scientific Research Foundation for Talents under Grant NTF21048.
Abstract: In hybrid beamforming design using the conventional gradient projection (GP) algorithm, it is common to use a fixed step size, which results in a slow convergence rate and unsatisfactory achievable-rate performance. This paper employs a deep unfolding algorithm within a small, fixed number of iterations to tackle the hybrid beamforming optimization problem. The optimal step size is obtained by combining the conventional GP algorithm with deep learning, and every step in the deep learning process is explainable. Simulation results show that the proposed deep unfolding algorithm achieves lower computational time and superior achievable-rate performance compared to the conventional GP algorithm.
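Deep unfolding treats each projected-gradient iteration as a network layer whose step size is learned offline rather than fixed. The sketch below shows that iteration structure on a toy unit-modulus least-squares problem (the unit-modulus projection mirrors the analog beamformer constraint); the problem instance and step sizes are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def unfolded_gp(A, y, steps, x0=None):
    """Unfolded gradient projection for min ||A x - y||^2 s.t. |x_i| = 1.

    `steps` holds one (learned) step size per unfolded layer, replacing
    the single fixed step size of conventional GP.
    """
    n = A.shape[1]
    x = np.ones(n, dtype=complex) if x0 is None else x0
    for mu in steps:
        grad = A.conj().T @ (A @ x - y)   # gradient of the LS objective
        x = x - mu * grad                  # gradient step with layer's mu
        x = x / np.abs(x)                  # project onto unit-modulus set
    return x

# Toy instance: identity channel, target phases to match (illustrative)
A = np.eye(2, dtype=complex)
y = np.exp(1j * np.array([0.5, -0.7]))
x = unfolded_gp(A, y, steps=[0.5] * 10)
```

In training, the step sizes in `steps` would be optimized end-to-end against the achievable-rate objective; here they are simply fixed at 0.5 to show the layer structure.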
Funding: Supported by grants from the CAMS Innovation Fund for Medical Sciences (Grant No. CAMS 2021-I2M-1-004) and the Bill & Melinda Gates Foundation (Grant No. INV-031449).
Abstract: Cervical cancer is a severe threat to women's health. The majority of cervical cancer cases occur in developing countries. The WHO has proposed screening 70% of women with high-performance tests between 35 and 45 years of age by 2030 to accelerate the elimination of cervical cancer. Due to inadequate health infrastructure and organized screening strategies, most low- and middle-income countries are still far from achieving this goal. As part of the efforts to increase the performance of cervical cancer screening, it is necessary to investigate the most accurate, efficient, and effective methods and strategies. Artificial intelligence (AI) is rapidly expanding its application in cancer screening and diagnosis, and deep learning algorithms have offered human-like interpretation capabilities on various medical images. AI will soon have a more significant role in improving the implementation of cervical cancer screening, management, and follow-up. This review aims to report the state of AI with respect to cervical cancer screening. We discuss the primary AI applications and the development of AI technology for image recognition applied to the detection of abnormal cytology and cervical neoplastic diseases, as well as the challenges that we anticipate in the future.
Funding: Supported by the National Natural Science Foundation of China (No. 62271199), the Natural Science Foundation of Hunan Province, China (No. 2020JJ5170), and the Scientific Research Fund of Hunan Provincial Education Department (No. 18C0299).
Abstract: With the continuous development and utilization of marine resources, underwater target detection has gradually become a popular research topic in the field of underwater robot operations and target detection. However, it is difficult for detection algorithms to combine environmental semantic information with the semantic information of targets at different scales due to the complex underwater environment. In this paper, a cascade model based on the UGC-YOLO network structure with high detection accuracy is proposed. The YOLOv3 convolutional neural network is employed as the baseline structure. By fusing global semantic information between two residual stages in the parallel structure of the feature extraction network, the perception of underwater targets is improved and the detection rate of hard-to-detect underwater objects is raised. Furthermore, deformable convolution is applied to capture long-range semantic dependencies, and PPM pooling is introduced in the highest network layer for aggregating semantic information. Finally, a multi-scale weighted fusion approach is presented for learning semantic information at different scales. Experiments are conducted on an underwater test dataset, and the results demonstrate that our proposed algorithm can detect aquatic targets in complex degraded underwater images. Compared with the baseline network algorithm, the Common Objects in Context (COCO) evaluation metric has been improved by 4.34%.
Funding: This work was supported by a grant from the R&D program of the Korea Evaluation Institute of Industrial Technology (20015047).
Abstract: With the recent increase in the utilization of logistics and courier services, it is time for research on logistics systems fused with the fourth industrial sector. Object recognition algorithm studies have been actively conducted in convergence with the emerging artificial intelligence field, but algorithms suitable for automatic unloading devices, which need to identify a number of unstructured cargoes, still require further development. In this study, the object recognition algorithm of an automatic loading device for cargo was selected as the subject, and a cargo object recognition algorithm applicable to automatic loading devices is proposed to improve amorphous cargo identification performance. The fuzzy convergence algorithm applies Fuzzy C-Means to an existing architecture that fuses YOLO (You Only Look Once) and Mask R-CNN (Regions with Convolutional Neural Networks). Experiments conducted using the fuzzy convergence algorithm showed an average of 33 FPS (frames per second) and a recognition rate of 95%. In addition, there were significant improvements in the range of actual box recognition. The results of this study can contribute to improving the performance of identifying amorphous cargoes in automatic loading devices.
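The Fuzzy C-Means component contributes soft cluster memberships rather than hard labels, which is what makes the fused pipeline tolerant of amorphous cargo boundaries. A numpy sketch of the standard FCM membership update (the fuzzifier m and the toy points are illustrative, not the paper's settings):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-Means membership update.

    Returns u[i, k]: degree to which point i belongs to cluster k,
    computed from inverse distances raised to 2/(m-1); rows sum to 1.
    """
    # Distances to each center; epsilon guards the point-on-center case
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1))
    return inv / inv.sum(axis=1, keepdims=True)

# Toy feature points and two cluster centers (illustrative)
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.1, 0.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
U = fcm_memberships(X, centers)
```

In the fused detector, such memberships could weight candidate box regions from the YOLO and Mask R-CNN branches instead of committing to a single hard assignment.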
Abstract: With the rapid development of sports, the number of sports images has increased dramatically. Intelligent and automatic processing and analysis of moving images are significant: they not only enable users to quickly search and access moving images, but also help staff store and manage moving image data, contributing to the intellectual development of the sports industry. In this paper, a method of table tennis ball identification and positioning based on a convolutional neural network is proposed, which solves the problem that identification and positioning methods based on color and contour features are not adaptable to various environments. At the same time, the learning methods and techniques of table tennis detection, positioning, and trajectory prediction are studied. A deep learning framework for recognition learning of rotating, flying table tennis balls is put forward. The mechanisms and methods of positioning, trajectory prediction, and intelligent automatic processing of moving images are studied, and self-built datasets are used for training and verification.
Funding: Supported by the Key Research and Development Program of Shaanxi (2022GY-089) and the Natural Science Basic Research Program of Shaanxi (2022JQ-593).
Abstract: The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to achieve autonomous decisions in continuous state and action spaces. In this paper, cooperative defense with DDPG via swarms of unmanned aerial vehicles (UAVs) is developed and validated, showing promising practical value for defense. We solve the sparse reward problem of reinforcement learning in a long-term task by building a reward function for the UAV swarm and optimizing the learning process of the artificial neural network based on the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the swarm's requirements for decentralization and autonomy, and promoting the intelligent development of UAV swarms and the decision-making process.
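One ingredient of DDPG's learning-process optimization mentioned above is the slowly-tracking target networks for both actor and critic, updated with a Polyak (soft) average. A minimal numpy sketch (the parameter lists and τ are illustrative; real implementations operate on network weight tensors):

```python
import numpy as np

def polyak_update(target_params, online_params, tau=0.005):
    # DDPG soft target update: theta_target <- tau*theta + (1-tau)*theta_target
    # A small tau makes targets drift slowly, damping oscillation in learning.
    return [tau * w + (1.0 - tau) * wt
            for wt, w in zip(target_params, online_params)]

target = [np.zeros(3)]          # target-network weights (toy)
online = [np.ones(3)]           # online-network weights (toy)
target = polyak_update(target, online, tau=0.1)
```

Repeating this update each training step keeps the bootstrap targets stable while the online actor and critic change quickly.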