Amphibious vehicles are more prone to attitude instability than ships, making it crucial to develop effective methods for monitoring instability risks. However, large inclination events, which can lead to instability, occur infrequently in both experimental and operational data. This infrequency causes such events to be overlooked by existing prediction models, which lack the precision to accurately predict inclination attitudes in amphibious vehicles. To address this gap in predicting attitudes near extreme inclination points, this study introduces a novel loss function, termed generalized extreme value loss. Subsequently, a deep learning model for improved waterborne attitude prediction, termed iInformer, was developed using a Transformer-based approach. During the embedding phase, a text prototype is constructed from the vehicle's operation log data to help the model better understand the vehicle's operating environment. Data segmentation techniques are used to highlight local data variation features. Furthermore, to mitigate the poor convergence and slow training caused by the extreme value loss function, a teacher forcing mechanism is integrated into the model, enhancing its convergence. Experimental results validate the effectiveness of the proposed method, demonstrating its ability to handle data imbalance. Specifically, the model achieves over a 60% improvement in root mean square error under extreme value conditions, with significant improvements observed across additional metrics.
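The abstract does not give the exact form of the generalized extreme value loss; one plausible sketch weights each squared error by how deep the target lies in the GEV tail, so rare large inclinations dominate the gradient. The GEV parameters (mu, sigma, xi) and the 1 + CDF weighting below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def gev_weight(y, mu=0.0, sigma=1.0, xi=0.1):
    """Weight grows as the target approaches the GEV tail.
    Hypothetical weighting; the paper's exact form is not given."""
    z = 1.0 + xi * (y - mu) / sigma
    z = np.maximum(z, 1e-8)              # stay inside the GEV support
    cdf = np.exp(-z ** (-1.0 / xi))      # GEV cumulative distribution
    return 1.0 + cdf                     # rare large inclinations weigh up to 2x

def gev_loss(y_true, y_pred, **gev_params):
    w = gev_weight(y_true, **gev_params)
    return np.mean(w * (y_true - y_pred) ** 2)
```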
In the Internet era, recommendation systems play a crucial role in helping users find relevant information in large datasets. Class imbalance is known to severely degrade data quality and therefore reduce the performance of recommendation systems. Due to the imbalance, machine learning algorithms tend to classify every input into the positive (majority) class to achieve high prediction accuracy. Imbalance can be categorized by features and by classes, but most studies consider only class imbalance. In this paper, we propose a recommendation system that integrates multiple networks to adapt to a large number of imbalanced features and handles highly skewed and imbalanced datasets through a loss function. We propose a loss-aware feature attention mechanism (LAFAM) to address feature imbalance. The network incorporates an attention mechanism and uses multiple sub-networks to classify and learn features. For better results, the network learns the weights of the sub-networks and assigns higher weights to important features. To address class imbalance, we propose suppression loss, which favors negative loss by penalizing positive loss and pays more attention to sample points near the decision boundary. Experiments on two large-scale datasets verify that the performance of the proposed system is greatly improved compared to baseline methods.
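The abstract describes suppression loss only qualitatively; a minimal sketch consistent with that description scales down the positive (majority) class term and up-weights samples near the decision boundary. The pos_penalty value and the 4p(1-p) boundary factor are assumptions, not the paper's formula.

```python
import numpy as np

def suppression_loss(y, p, pos_penalty=0.25, eps=1e-7):
    """Binary cross-entropy with the positive-class term suppressed and
    samples near the decision boundary (p ~ 0.5) emphasized."""
    p = np.clip(p, eps, 1.0 - eps)
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    class_w = np.where(y == 1, pos_penalty, 1.0)   # penalize positive loss
    boundary_w = 4.0 * p * (1.0 - p)               # peaks at the boundary
    return np.mean(class_w * boundary_w * bce)
```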
This study proposes a new component of the composite loss function minimised during training of Super-Resolution (SR) algorithms: the normalised structural similarity index loss L_SSIMN, which has the potential to improve the natural appearance of reconstructed images. Deep learning-based SR algorithms reconstruct high-resolution images from low-resolution inputs, offering a practical means to enhance image quality without requiring superior imaging hardware, which is particularly important in medical applications where diagnostic accuracy is critical. Although recent SR methods employing convolutional and generative adversarial networks achieve high pixel fidelity, visual artefacts may persist, making the design of the training loss function essential for reliable and naturalistic image reconstruction. Our research shows, on two models (SR and the Invertible Rescaling Neural Network, IRN) trained on multiple benchmark datasets, that L_SSIMN contributes significantly to visual quality, preserving structural fidelity on the reference datasets. Quantitative analysis shows that including this loss component improves the final structural similarity of the reconstructed images in the validation set by a mean of 2.88% compared with leaving it out, and by 0.218% compared with a non-normalised version of the component.
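The exact normalisation used for L_SSIMN is not given in the abstract; the sketch below shows an SSIM-based loss component using a single global window rather than the usual local Gaussian windows, with an assumed mapping of SSIM from [-1, 1] to a [0, 1] loss.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM over whole images with values in [0, 1];
    practical implementations average SSIM over local windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def loss_ssim_normalised(x, y):
    """One plausible normalisation: map SSIM to a loss in [0, 1].
    The paper's exact L_SSIMN form is not given."""
    return (1.0 - ssim_global(x, y)) / 2.0
```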
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are prone to occlusion and more sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as a Dynamic Focusing Mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples and significantly improving convergence for small objects, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n: a 3.7% increase in mean Average Precision (AP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
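RSIoU's actual penalty term is not given in the abstract; the sketch below only shows the general shape of a dynamically focused IoU loss in which small ground-truth boxes contribute more. The gamma exponent, small_area threshold, and focus factor are illustrative assumptions, not the paper's method.

```python
import numpy as np

def iou_xyxy(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def focused_iou_loss(pred, target, gamma=0.5, small_area=32**2):
    """Hedged sketch: raise the (1 - IoU) term of small ground-truth
    boxes so they dominate the loss, mimicking a dynamic focusing idea."""
    iou = iou_xyxy(pred, target)
    area = (target[2] - target[0]) * (target[3] - target[1])
    focus = 1.5 if area < small_area else 1.0   # emphasize small objects
    return focus * (1.0 - iou) ** gamma
```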
Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, enhanced with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN extracts essential spatial features and aggregates them from a global perspective, improving prediction accuracy and the model's generalization ability. In the experiments, CGNN shows advantages in bias distribution, exhibiting a smaller variance; for precipitation, UNet and AE also show relatively small biases, while for temperature, AE and CNNdense perform outstandingly in winter. The time correlation coefficients improve by at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function outperforms existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, SBMSE tends to overestimate the distribution of extreme precipitation, possibly because theoretical assumptions about the posterior distribution of the data partially limit the loss function's effectiveness. In future work, we will further optimize SBMSE to improve prediction accuracy.
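The abstract does not spell out the SBMSE formula. One plausible reading, reweighting squared errors by the inverse frequency of the target's value bin so that rare extremes are not averaged away, can be sketched as follows; the bin count and normalization are assumptions.

```python
import numpy as np

def sbmse(y_true, y_pred, n_bins=16):
    """Sketch of a balanced MSE: weight each error by the inverse
    frequency of its target-value bin, then renormalize the weights
    so the overall loss scale is preserved."""
    edges = np.quantile(y_true, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(y_true, edges[1:-1]), 0, n_bins - 1)
    freq = np.bincount(idx.ravel(), minlength=n_bins) / idx.size
    w = 1.0 / (freq[idx] + 1e-6)         # rare (extreme) bins weigh more
    w /= w.mean()
    return np.mean(w * (y_true - y_pred) ** 2)
```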
In this paper we propose an absolute error loss EB estimator for the parameter of one-side truncation distribution families. Under some conditions we prove that the convergence rate of its Bayes risk is o, where 0 < λ, r ≤ 1, M_n ≤ ln ln n (for large n), and M_n → ∞ as n → ∞.
Ufmylation is a ubiquitin-like post-translational modification characterized by the covalent binding of mature UFM1 to target proteins. Although the consequences of ufmylation for target proteins are not fully understood, its importance is evident from the disorders resulting from its dysfunction. Numerous case reports have established a link between biallelic loss-of-function and/or hypomorphic variants in ufmylation-related genes and a spectrum of pediatric neurodevelopmental disorders.
The aging process is an inexorable fact of our lives and is considered a major factor in developing neurological dysfunctions associated with cognitive, emotional, and motor impairments. Aging-associated neurodegenerative diseases are characterized by the progressive loss of neuronal structure and function.
LINEX (linear and exponential) loss is a useful asymmetric loss function. The purpose of using a LINEX loss function in credibility models is to address the very high premiums produced by the symmetric quadratic loss function used in most classical credibility models. The Bayes premium and the credibility premium are derived under LINEX loss, and the consistency of the Bayes premium and the credibility premium is checked. Finally, a simulation illustrates the differences between the derived credibility estimator and the classical one.
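The LINEX loss itself has the standard form L(Δ) = b(exp(aΔ) - aΔ - 1) with Δ the estimation error, and the Bayes estimate under it has the well-known closed form -(1/a) ln E[exp(-aθ) | x] (Zellner, 1986). A minimal sketch:

```python
import numpy as np

def linex_loss(delta, a=1.0, b=1.0):
    """LINEX loss L(d) = b(exp(a*d) - a*d - 1), d = estimate - parameter.
    For a > 0 overestimation is penalized roughly exponentially and
    underestimation roughly linearly; a < 0 reverses the asymmetry."""
    return b * (np.exp(a * delta) - a * delta - 1.0)

def bayes_estimate_linex(theta_samples, a=1.0):
    """Bayes estimator under LINEX from posterior draws:
    d* = -(1/a) * log E[exp(-a * theta)]."""
    return -np.log(np.mean(np.exp(-a * theta_samples))) / a
```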
The Internet of Things (IoT) is a network that connects things into a special union. It embeds physical entities through an intelligent perception system to obtain information about a component at any time, connecting diverse objects and providing information transmission, perception, and processing. Air quality forecasting is a pressing problem that seriously affects people's quality of life. Many air quality prediction algorithms have been proposed, falling mainly into two categories: regression-based prediction and deep learning-based prediction. Regression-based prediction applies classical regression algorithms to various supervised meteorological characteristics to regress the meteorological value. Deep learning methods usually use convolutional neural networks (CNN) or recurrent neural networks (RNN) to predict the meteorological value. As an excellent feature extractor, CNN has achieved good performance in many scenarios; likewise, as an efficient network for processing sequential data, RNN has also achieved good results. However, few of these methods meet current accuracy requirements, and none of them monitors trends in air quality data. To obtain accurate results, this paper proposes a novel predicted-trend-based loss function (PTB) to replace the loss function in an RNN: the trend of change and the predicted value are jointly constrained to obtain more accurate predictions of PM2.5. In addition, the model is extended to predict all existing training data features: all of the model's next-day data are mixed labels, which effectively realizes the prediction of all features. Experiments show that the proposed loss function is effective.
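The abstract does not give PTB's formula; a plausible sketch combines a value term with a trend term that fires when the predicted and observed directions of change disagree. The alpha mixing weight and the hinge-style trend penalty are assumptions.

```python
import numpy as np

def ptb_loss(y_true, y_pred, alpha=0.5):
    """Value MSE plus a trend penalty: consecutive differences with
    opposite signs (predicted trend contradicts observed trend)
    contribute positively, same-sign differences contribute nothing."""
    value_term = np.mean((y_true - y_pred) ** 2)
    d_true, d_pred = np.diff(y_true), np.diff(y_pred)
    trend_term = np.mean(np.maximum(0.0, -d_true * d_pred))
    return value_term + alpha * trend_term
```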
Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples generated by the generator and the real data samples, with varying effectiveness at improving that ability. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of their pros and cons. First, the basic theory of GANs and their training mechanism are introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented in different GAN architectures. Finally, several suggestions on choosing suitable loss functions for image synthesis tasks are given.
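Three of the loss families such a survey typically covers have standard forms: the non-saturating cross-entropy loss (Goodfellow et al., 2014), the least-squares loss (LSGAN, Mao et al., 2017), and the Wasserstein critic loss (Arjovsky et al., 2017). A compact sketch:

```python
import numpy as np

def gan_losses(d_real, d_fake, kind="nonsaturating", eps=1e-7):
    """Discriminator/generator loss pairs for three common objectives.
    d_real/d_fake are D's outputs on real and generated batches:
    probabilities for 'nonsaturating', least-squares targets for
    'lsgan', raw scores for 'wgan' (whose critic also needs a
    Lipschitz constraint, e.g. weight clipping or a gradient penalty)."""
    if kind == "nonsaturating":
        d = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
        g = -np.mean(np.log(d_fake + eps))
    elif kind == "lsgan":
        d = 0.5 * (np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))
        g = 0.5 * np.mean((d_fake - 1.0) ** 2)
    elif kind == "wgan":
        d = np.mean(d_fake) - np.mean(d_real)
        g = -np.mean(d_fake)
    else:
        raise ValueError(kind)
    return d, g
```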
When a deep learning model is trained on network intrusion detection data, the convergence problem of the traditional loss function causes the model to overfit and reduces test-set accuracy. First, we use a network architecture combining the GELU activation function and a deep neural network; second, the cross-entropy loss function is improved to a weighted cross-entropy loss function and applied to intrusion detection to improve detection accuracy. To compare experimental results, the KDDcup99 dataset, commonly used in intrusion detection, is selected as the experimental data, with accuracy, precision, recall, and F1-score as evaluation metrics. The experimental results show that, under the deep neural network architecture, the model using the weighted cross-entropy loss function with the GELU activation function improves the evaluation metrics by about 2% compared with the ordinary cross-entropy loss function model. The experiments prove that the weighted cross-entropy loss function can enhance the model's ability to discriminate between samples.
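Both named ingredients have standard forms; a minimal sketch (the per-class weight values are dataset-dependent and not given in the abstract):

```python
import numpy as np

def gelu(x):
    """Tanh approximation of the GELU activation (Hendrycks & Gimpel, 2016)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def weighted_cross_entropy(y_onehot, p, class_weights, eps=1e-7):
    """Per-class weighted cross-entropy: rare attack classes get larger
    weights so their errors are not drowned out by normal traffic.
    y_onehot and p have shape (N, C); class_weights has shape (C,)."""
    p = np.clip(p, eps, 1.0)
    return -np.mean(np.sum(class_weights * y_onehot * np.log(p), axis=1))
```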
According to the World Health Organization, about 50 million people worldwide suffer from epilepsy, and its detection and treatment face great challenges. The electroencephalogram (EEG) is widely used in the diagnosis and treatment of epilepsy. In this paper, an adaptive feature learning model for EEG signals is proposed, which combines the Huber loss function with an adaptive weight penalty term. First, each EEG signal is decomposed by intrinsic time-scale decomposition. Second, statistical index values are calculated from the instantaneous amplitude and frequency of every component and fed into the proposed model. Finally, the discriminative features learned by the proposed model are used to detect seizures. Our main innovation is a highly flexible penalization based on the Huber loss function, which can set different weights according to the influence of different features on epilepsy detection. Moreover, the new model can be solved by the proximal alternating direction multiplier method, which effectively ensures the convergence of the algorithm. The performance of the proposed method is evaluated on three public EEG datasets, provided by Bonn University, the Children's Hospital Boston-Massachusetts Institute of Technology, and the Neurological and Sleep Center at Hauz Khas, New Delhi (New Delhi Epilepsy data). The recognition accuracy on these two datasets is 98% and 99.05%, respectively, indicating the application value of the new model.
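The Huber component has a standard closed form: quadratic inside a band of half-width delta, linear outside it, so outlier-heavy EEG features are penalized less harshly than under squared error. The adaptive per-feature weights are the paper's contribution and their update rule is not given, so only the base loss is sketched:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta) beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))
```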
We present a fitting calculation of the energy-loss function for 26 bulk materials, including 18 pure elements (Ag, Al, Au, C, Co, Cs, Cu, Er, Fe, Ge, Mg, Mo, Nb, Ni, Pd, Pt, Si, Te) and 8 compounds (AgCl, Al2O3, AlAs, CdS, SiO2, ZnS, ZnSe, ZnTe), for application to surface electron spectroscopy analysis. The experimental energy-loss function, derived from measured optical data, is fitted to a finite sum of terms based on the Drude-Lindhard dielectric model. By checking the oscillator strength-sum and perfect-screening-sum rules, we have validated the high accuracy of the fitting results. Furthermore, based on the fitted parameters, the simulated reflection electron energy-loss spectroscopy (REELS) spectrum shows good agreement with experiment. The calculated fitting parameters of the energy-loss function are stored in an open online database at http://micro.ustc.edu.cn/ELF/ELF.html.
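A Drude-Lindhard fit of this kind expresses the energy-loss function Im[-1/eps(w)] as a finite sum of Drude-type oscillators. The sketch below shows the usual form of that sum; the amplitudes, resonance energies, and widths are placeholders, not the published fitted parameters.

```python
import numpy as np

def elf_drude(omega, amps, omega_p, gammas):
    """Energy-loss function as a sum of Drude-type oscillators:
    sum_i A_i * g_i * w / ((w_pi^2 - w^2)^2 + (g_i * w)^2),
    with energies in consistent units (e.g., eV)."""
    elf = np.zeros_like(omega, dtype=float)
    for A, wp, g in zip(amps, omega_p, gammas):
        elf += A * g * omega / ((wp**2 - omega**2) ** 2 + (g * omega) ** 2)
    return elf

# Illustrative two-oscillator example (placeholder parameters):
w = np.linspace(1.0, 60.0, 500)
curve = elf_drude(w, amps=[120.0, 40.0], omega_p=[15.0, 25.0], gammas=[3.0, 8.0])
```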
This paper studies the problem of robust H∞ control of piecewise-linear chaotic systems with random data loss. The communication links between the plant and the controller are assumed to be imperfect (that is, data loss occurs intermittently, as typically happens in a network environment). The data loss is modelled as a random process obeying a Bernoulli distribution. In the face of random data loss, a piecewise controller is designed to robustly stabilize the networked system in the mean-square sense and to achieve a prescribed H∞ disturbance attenuation performance based on a piecewise-quadratic Lyapunov function. The required H∞ controllers can be designed by solving a set of linear matrix inequalities (LMIs). Chua's system is provided to illustrate the usefulness and applicability of the developed theoretical results.
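The Bernoulli data-loss model is easy to make concrete. The sketch below simulates a discrete-time loop in which each control packet independently arrives with probability 1 - p_loss and a drop applies zero input (one common convention); the controller gain K would come from the paper's LMI synthesis, which is not reproduced here.

```python
import numpy as np

def simulate_dropout_loop(A, B, K, x0, p_loss=0.2, steps=200, rng=None):
    """Closed loop x_{k+1} = A x_k + B u_k where u_k = K x_k only if the
    Bernoulli arrival indicator fires; otherwise u_k = 0. Mean-square
    stability of this loop is what the piecewise H-infinity design
    guarantees."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        arrived = rng.random() > p_loss          # packet received this step?
        u = K @ x if arrived else np.zeros(B.shape[1])
        x = A @ x + B @ u
        traj.append(x.copy())
    return np.array(traj)
```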
Deep learning techniques have significantly improved image restoration tasks in recent years. As a crucial component of deep learning, the loss function plays a key role in network optimization and performance enhancement. However, the currently prevalent loss functions assign equal weight to every pixel during loss calculation, which fails to reflect the differing roles of individual pixels and to fully exploit the image's characteristics. To address this issue, this study proposes an asymmetric loss function based on the image and data characteristics of the image recovery task. The new loss function adjusts the weight of the reconstruction loss according to the grey value of each pixel, thereby effectively optimizing network training by differentially exploiting the grey information of the original image. Specifically, we calculate a weight factor for each pixel based on its grey value and combine it with the reconstruction loss to create a new loss function. This ensures that pixels with smaller grey values receive greater attention, improving network recovery. To verify the effectiveness of the proposed asymmetric loss function, we conducted experiments on the image super-resolution task. The results show that introducing asymmetric loss weights improves all evaluation indexes without increasing training time. In the typical super-resolution network SRCNN, introducing asymmetric weights improves the peak signal-to-noise ratio (PSNR) by up to about 0.5% and the structural similarity index (SSIM) by up to about 0.3%, and reduces the root-mean-square error (RMSE) by up to about 1.7%, with essentially no increase in training time. We also further tested the proposed method on the denoising task to verify its potential applicability to image restoration tasks.
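The exact weight factor is not specified in the abstract; a minimal sketch of the stated idea, with an assumed linear weighting that emphasizes darker pixels (inputs normalized to [0, 1]):

```python
import numpy as np

def asymmetric_recon_loss(y_true, y_pred, beta=1.0):
    """Grey-value-weighted reconstruction loss: pixels with smaller grey
    values get larger weights, so dark regions drive the optimization
    more strongly than bright ones."""
    w = 1.0 + beta * (1.0 - y_true)      # darker pixel -> larger weight
    return np.mean(w * (y_true - y_pred) ** 2)
```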
Neyman-Pearson classification has been studied in several articles before, but all of them worked in classes of indicator functions with the indicator function as the loss function, which makes the calculation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function yields the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex loss function risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
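The paper works with NP-ERM over convex losses; as a simpler concrete illustration of the Neyman-Pearson constraint itself, the plug-in rule below thresholds a real-valued score so that the empirical false-positive rate on class 0 stays at most alpha. This is not the paper's estimator, just the constraint it enforces.

```python
import numpy as np

def np_threshold(scores_neg, scores_pos, alpha=0.05):
    """Pick the threshold whose empirical false-positive rate on class 0
    is about alpha; points scoring above it are classified as class 1."""
    t = np.quantile(scores_neg, 1.0 - alpha)
    fpr = np.mean(scores_neg > t)        # approximately <= alpha
    tpr = np.mean(scores_pos > t)        # detection power at that level
    return t, fpr, tpr
```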
A key problem of existing deep learning frameworks for the detection and segmentation of electrical equipment is low precision. Because deep learning-based video surveillance provides a reliable, safe, and easy-to-operate technology for unmanned inspection of electrical equipment, this paper uses the bottleneck attention module (BAM) to improve the Solov2 model and proposes a new electrical equipment segmentation model. First, the BAM attention mechanism is integrated into the feature extraction network to adaptively learn the correlation between feature channels, thereby improving the expressiveness of the feature map; second, a weighted sum of cross-entropy loss and Dice loss is designed as the mask loss to improve the segmentation accuracy and robustness of the model; finally, the non-maximum suppression (NMS) algorithm is used to better handle overlaps in instance segmentation. Experimental results show that the proposed method achieves an average segmentation accuracy (mAP) of 80.4% on datasets of three types of electrical equipment (transformers, insulators, and voltage transformers), improving detection accuracy by more than 5.7% compared with the original Solov2 model. The proposed segmentation model can provide a focused technical means for the intelligent management of power systems.
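The mask loss is stated to be a weighted sum of cross-entropy and Dice loss; a minimal sketch (the 0.5/0.5 weights are assumptions, the paper's values are not given):

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    """Soft Dice loss on a predicted mask p in [0, 1] and binary target y."""
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def mask_loss(p, y, w_ce=0.5, w_dice=0.5, eps=1e-7):
    """Weighted sum of pixel-wise binary cross-entropy and Dice loss."""
    p = np.clip(p, eps, 1.0 - eps)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w_ce * ce + w_dice * dice_loss(p, y)
```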
Much research effort has been devoted to the economic design of X̄ & S control charts, but the usual methods have problems. On the one hand, it is difficult to estimate the relationship between costs and the other model parameters, so economic design is often ineffective at producing charts that can quickly detect small shifts before substantial losses occur; on the other hand, in many cases only one type of process shift, or only one pair of process shifts, is taken into consideration, which may not correctly reflect actual process conditions. To improve the economic design of control charts, a cost-and-loss model with Taguchi's loss function for the economic design of X̄ & S control charts is developed and treated as an optimization problem with multiple statistical constraints. The optimization design is also carried out over a number of combinations of process shifts collected from field operation of conventional control charts, so more hidden information about the shift combinations is mined and employed in the optimization design. At the same time, an improved particle swarm optimization (IPSO) algorithm is developed to solve this optimization problem. IPSO is first tested on several benchmark problems from the literature and evaluated with standard performance metrics; experimental results show that the proposed algorithm has significant advantages in obtaining the optimal design parameters of the charts. The proposed method can substantially reduce the total cost (or loss) of the control charts and promises to be a useful tool for the economic design of control charts.
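Taguchi's loss function, which the cost-and-loss model builds on, is the standard quadratic quality loss: any deviation of the quality characteristic y from its target m costs k(y - m)^2, and its expectation over the process distribution folds into the chart's cost model. A short sketch (k and m are process-specific constants):

```python
def taguchi_loss(y, target, k=1.0):
    """Taguchi's quadratic quality loss L(y) = k * (y - m)^2."""
    return k * (y - target) ** 2

def expected_taguchi_loss(mu, sigma, target, k=1.0):
    """For a process with mean mu and std sigma:
    E[L] = k * (sigma^2 + (mu - m)^2)."""
    return k * (sigma**2 + (mu - target) ** 2)
```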
The effective energy loss functions for Al have been derived from the differential inverse inelastic mean free path based on the extended Landau approach. It is revealed that the effective energy loss function is very close in value to the theoretical surface energy loss function in the lower energy-loss region but gradually approaches the theoretical bulk energy loss function in the higher energy-loss region. Moreover, the intensity corresponding to surface excitation in the effective energy loss functions decreases as the primary electron energy increases. These facts show that the effective energy loss function describes not only surface excitation but also bulk excitation. Finally, REELS spectra simulated by a Monte Carlo method based on the effective energy loss functions reproduce the experimental REELS spectra with considerable success.