Amphibious vehicles are more prone to attitude instability than ships, making it crucial to develop effective methods for monitoring instability risks. However, large inclination events, which can lead to instability, occur only infrequently in both experimental and operational data. This infrequency causes such events to be overlooked by existing prediction models, which lack the precision to accurately predict inclination attitudes in amphibious vehicles. To address this gap in predicting attitudes near extreme inclination points, this study introduces a novel loss function, termed generalized extreme value loss. Subsequently, a deep learning model for improved waterborne attitude prediction, termed iInformer, was developed using a Transformer-based approach. During the embedding phase, a text prototype is constructed from the vehicle's operation log data to help the model better understand its operating environment. Data segmentation techniques are used to highlight local data variation features. Furthermore, to mitigate the poor convergence and slow training caused by the extreme value loss function, a teacher forcing mechanism is integrated into the model, enhancing its convergence capabilities. Experimental results validate the effectiveness of the proposed method, demonstrating its ability to handle data imbalance challenges. Specifically, the model achieves over a 60% improvement in root mean square error under extreme value conditions, with significant improvements observed across additional metrics.
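As a rough illustration of how an extreme-value-aware loss can be built, the sketch below up-weights samples whose targets fall in the upper tail of a generalized extreme value (GEV) distribution. The exact form of the paper's generalized extreme value loss is not given in the abstract, so the weighting scheme and the parameters mu, sigma, xi, and alpha here are illustrative assumptions.

```python
# A minimal sketch of a GEV-weighted regression loss (assumption: the paper's
# exact loss is not specified in this abstract; here rare, extreme inclination
# targets are up-weighted via the GEV upper-tail probability).
import torch

def gev_cdf(y, mu=0.0, sigma=1.0, xi=0.1):
    """CDF of the generalized extreme value distribution (xi != 0 branch)."""
    t = torch.clamp(1.0 + xi * (y - mu) / sigma, min=1e-6)
    return torch.exp(-t ** (-1.0 / xi))

def gev_loss(pred, target, mu=0.0, sigma=1.0, xi=0.1, alpha=2.0):
    tail_prob = 1.0 - gev_cdf(target, mu, sigma, xi)   # small for extreme targets
    weight = 1.0 + alpha * (1.0 - tail_prob)           # grows toward the tail
    return (weight * (pred - target) ** 2).mean()

pred, target = torch.randn(64), torch.randn(64) * 2.0
print(gev_loss(pred, target))
```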
In the Internet era, recommendation systems play a crucial role in helping users find relevant information in large datasets. Class imbalance is known to severely affect data quality and therefore reduce the performance of recommendation systems. Due to the imbalance, machine learning algorithms tend to classify every input into the positive (majority) class to achieve high prediction accuracy. Imbalance can be categorized in several ways, such as by features or by classes, but most studies consider only class imbalance. In this paper, we propose a recommendation system that can integrate multiple networks to adapt to a large number of imbalanced features and can deal with highly skewed and imbalanced datasets through a loss function. We propose a loss-aware feature attention mechanism (LAFAM) to address feature imbalance. The network incorporates an attention mechanism and uses multiple sub-networks to classify and learn features. For better results, the network can learn the weights of the sub-networks and assign higher weights to important features. To address class imbalance, we propose suppression loss, which favors negative loss by penalizing positive loss and pays more attention to sample points near the decision boundary. Experiments on two large-scale datasets verify that the performance of the proposed system is greatly improved compared to baseline methods.
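A minimal sketch of how such a suppression loss might look is given below; the abstract states only that positive-class loss is penalized and boundary points are emphasized, so the specific weighting terms (pos_penalty, boundary_gamma) are illustrative assumptions rather than the paper's formulation.

```python
# A minimal sketch of a suppression-style loss for an imbalanced binary task.
import torch
import torch.nn.functional as F

def suppression_loss(logits, labels, pos_penalty=0.5, boundary_gamma=2.0):
    p = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    # Down-weight the abundant positive (majority) class.
    class_w = torch.where(labels > 0.5, torch.full_like(p, pos_penalty),
                          torch.ones_like(p))
    # Emphasize uncertain samples near the decision boundary (p close to 0.5).
    boundary_w = 1.0 + boundary_gamma * (1.0 - (2.0 * p - 1.0).abs())
    return (class_w * boundary_w * bce).mean()

logits = torch.randn(32)
labels = (torch.rand(32) > 0.2).float()   # skewed toward the positive class
print(suppression_loss(logits, labels))
```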
Among cases of spinal cord injury are injuries involving the dorsal column in the cervical spinal cord that interrupt the major cutaneous afferents from the hand to the cuneate nucleus (Cu) in the brainstem. Deprivation of touch and proprioceptive inputs consequently impairs skilled hand use.
This study proposes a new component of the composite loss function minimised during training of Super-Resolution (SR) algorithms: the normalised structural similarity index loss L_SSIMN, which has the potential to improve the natural appearance of reconstructed images. Deep learning-based SR algorithms reconstruct high-resolution images from low-resolution inputs, offering a practical means to enhance image quality without requiring superior imaging hardware, which is particularly important in medical applications where diagnostic accuracy is critical. Although recent SR methods employing convolutional and generative adversarial networks achieve high pixel fidelity, visual artefacts may persist, making the design of the training loss function essential for reliable and naturalistic image reconstruction. Our experiments on two models, an SR network and an Invertible Rescaling Neural Network (IRN), trained on multiple benchmark datasets show that L_SSIMN contributes significantly to visual quality while preserving structural fidelity on the reference datasets. Quantitative analysis shows that including this loss component improves the final structural similarity of the reconstructed images in the validation set by a mean of 2.88% compared with leaving it out, and by 0.218% compared with the non-normalised variant.
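For intuition, here is a minimal sketch of a normalised SSIM loss term. The abstract does not give the exact normalisation of L_SSIMN, so the mapping of SSIM from [-1, 1] to a [0, 1] loss and the global (windowless) SSIM computation below are simplifying assumptions.

```python
# A minimal sketch of a normalised SSIM loss component inside a composite SR loss.
import torch

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Global SSIM over the whole image (no sliding window), for illustration.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_norm_loss(pred, target):
    return (1.0 - ssim_global(pred, target)) / 2.0   # maps SSIM in [-1, 1] to [0, 1]

def composite_loss(pred, target, w_ssim=0.1):
    # Pixel term plus the normalised SSIM component.
    return torch.nn.functional.l1_loss(pred, target) + w_ssim * ssim_norm_loss(pred, target)

pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(composite_loss(pred, target))
```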
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are prone to occlusion and highly sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence of IoU-based losses for small objects. RSIoU corrects the inherent convergence direction issues in SIoU and introduces a penalty term that serves as a dynamic focusing mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples. Ultimately, RSIoU significantly improves the convergence of the loss function for small objects, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, with a 3.7% increase in mean Average Precision (AP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
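For intuition, the sketch below shows a generic IoU loss with a dynamic focusing weight that emphasizes hard (low-IoU) boxes. RSIoU's specific correction of SIoU's convergence direction and its exact penalty term are defined in the paper, not here; this is a simplified stand-in.

```python
# A generic focal-style IoU loss, illustrating dynamic focusing on hard boxes.
import torch

def box_iou(a, b):
    """IoU for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    lt = torch.max(a[:, :2], b[:, :2])          # intersection top-left
    rb = torch.min(a[:, 2:], b[:, 2:])          # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + 1e-7)

def focal_iou_loss(pred, target, gamma=0.5):
    iou = box_iou(pred, target)
    # Low-IoU (hard, often small or occluded) boxes get a larger loss weight.
    return ((1.0 - iou) ** (1.0 + gamma)).mean()

pred = torch.tensor([[10., 10., 20., 20.], [0., 0., 5., 5.]])
target = torch.tensor([[12., 12., 22., 22.], [0., 0., 6., 6.]])
print(focal_iou_loss(pred, target))
```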
Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, which we enhance with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN is capable of extracting essential spatial features and aggregating them from a global perspective, thereby improving prediction accuracy and enhancing the model's generalization ability. Based on the experimental results, CGNN has certain advantages in terms of bias distribution, exhibiting a smaller variance. For precipitation, both UNet and AE also demonstrate relatively small biases. For temperature, AE and CNNdense perform outstandingly during the winter. The time correlation coefficients show an improvement of at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function displays an advantage over existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, the SBMSE tends to overestimate the distribution of extreme precipitation, which may be because the theoretical assumptions about the posterior distribution of the data partially limit the effectiveness of the loss function. In future work, we will further optimize the SBMSE to improve prediction accuracy.
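One plausible reading of a spatially balanced MSE is sketched below: per-pixel weights are derived from the inverse frequency of the target's value distribution, so rare extremes contribute more to the loss. The binning scheme and normalisation are assumptions, since the abstract does not give SBMSE's exact form.

```python
# A minimal sketch of a spatially balanced MSE over a gridded field.
import torch

def sbmse(pred, target, n_bins=20, eps=1e-6):
    # Histogram the target values over the spatial field.
    t_min, t_max = target.min(), target.max()
    bins = torch.histc(target, bins=n_bins, min=float(t_min), max=float(t_max))
    freq = bins / bins.sum()
    # Assign each pixel the inverse frequency of its value bin.
    idx = ((target - t_min) / (t_max - t_min + eps) * (n_bins - 1)).long()
    weight = 1.0 / (freq[idx] + eps)
    weight = weight / weight.mean()   # normalise to keep the loss scale stable
    return (weight * (pred - target) ** 2).mean()

pred = torch.rand(1, 64, 64)
target = torch.rand(1, 64, 64) ** 3   # skewed, precipitation-like field
print(sbmse(pred, target))
```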
In this paper we propose an absolute error loss EB estimator for the parameter of one-sided truncation distribution families. Under some conditions we have proved that the convergence rate of its Bayes risk is o(·), where 0 < λ, r ≤ 1, M_n ≤ ln ln n (for large n), and M_n → ∞ as n → ∞.
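For reference, the absolute error loss and the Bayes rule it induces take the standard forms below (known results, not specific to this paper):

```latex
% Absolute error loss and the Bayes risk it induces (standard definitions).
L(\theta, d) = |\theta - d|,
\qquad
R(G, d) = \mathbb{E}\,|\theta - d(X)|.
% The Bayes rule under absolute error loss is the posterior median:
d_G(x) = \operatorname{median}(\theta \mid X = x).
```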
Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring, urban planning, and disaster assessment. However, traditional methods exhibit deficiencies in detail recovery and noise suppression, particularly when processing complex landscapes (e.g., forests, farmlands), leading to artifacts and spectral distortions that limit practical utility. To address this, we propose an enhanced Super-Resolution Generative Adversarial Network (SRGAN) framework featuring three key innovations: (1) replacement of the L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing; (2) a multi-loss joint optimization strategy dynamically weighting the Charbonnier loss (β=0.5), Visual Geometry Group (VGG) perceptual loss (α=1), and adversarial loss (γ=0.1) to synergize pixel-level accuracy and perceptual quality; (3) a multi-scale residual network (MSRN) capturing cross-scale texture features (e.g., forest canopies, mountain contours). Validated on Sentinel-2 (10 m) and SPOT-6/7 (2.5 m) datasets covering 904 km² in Motuo County, Xizang, our method outperforms the SRGAN baseline (SR4RS) with Peak Signal-to-Noise Ratio (PSNR) gains of 0.29 dB and Structural Similarity Index (SSIM) improvements of 3.08% on forest imagery. Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity (LPIPS) increases. The method significantly improves noise robustness and edge retention in complex geomorphology, demonstrating 18% faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring. Future work will integrate spectral constraints and lightweight architectures.
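The Charbonnier term has a standard form, and the stated weights (α=1, β=0.5, γ=0.1) suggest a combined objective along the following lines; the vgg and disc callables below are toy stand-ins for the pretrained VGG extractor and the GAN discriminator, so this is a sketch of the weighting scheme rather than the full training setup.

```python
# Charbonnier loss (standard form) inside a weighted multi-loss objective.
import torch

def charbonnier(pred, target, eps=1e-3):
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def total_loss(pred, target, vgg_features, disc, alpha=1.0, beta=0.5, gamma=0.1):
    l_pix = charbonnier(pred, target)
    l_perc = torch.nn.functional.l1_loss(vgg_features(pred), vgg_features(target))
    l_adv = -torch.log(torch.sigmoid(disc(pred)) + 1e-8).mean()  # non-saturating
    return alpha * l_perc + beta * l_pix + gamma * l_adv

# Toy stand-ins so the sketch runs end to end.
vgg = lambda x: torch.nn.functional.avg_pool2d(x, 4)   # fake "feature extractor"
d = lambda x: x.mean(dim=(1, 2, 3))                    # fake "discriminator" logit
pred, target = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
print(total_loss(pred, target, vgg, d))
```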
Ufmylation is a ubiquitin-like post-translational modification characterized by the covalent binding of mature UFM1 to target proteins. Although the consequences of ufmylation on target proteins are not fully understood, its importance is evident from the disorders resulting from its dysfunction. Numerous case reports have established a link between biallelic loss-of-function and/or hypomorphic variants in ufmylation-related genes and a spectrum of pediatric neurodevelopmental disorders.
The aging process is an inexorable fact throughout our lives and is considered a major factor in developing neurological dysfunctions associated with cognitive, emotional, and motor impairments. Aging-associated neurodegenerative diseases are characterized by the progressive loss of neuronal structure and function.
The LINEX (linear and exponential) loss function is a useful asymmetric loss function. The purpose of using a LINEX loss function in credibility models is to avoid the excessively high premiums that result from using a symmetric quadratic loss function, as most classical credibility models do. The Bayes premium and the credibility premium are derived under LINEX loss. The consistency of the Bayes premium and the credibility premium is also checked. Finally, a simulation is presented to show the differences between the credibility estimator we derived and the classical one.
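In standard notation, the LINEX loss and the Bayes estimator it induces are (known results consistent with the description above):

```latex
% The LINEX loss in its standard form, with estimation error
% \Delta = d - \theta and shape parameter a \neq 0:
L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad b > 0.
% Under LINEX loss the Bayes estimator is (a standard result):
d^{*}(x) = -\frac{1}{a}\,\ln \mathbb{E}\!\left[e^{-a\theta} \mid X = x\right].
```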
The Internet of Things (IoT) is a network that connects things in a special union. It embeds a physical entity through an intelligent perception system to obtain information about the component at any time, and it connects various objects. IoT has the capabilities of information transmission, information perception, and information processing. Air quality forecasting has always been an urgent problem that seriously affects people's quality of life. So far, many air quality prediction algorithms have been proposed, which can be mainly classified into two categories: regression-based prediction and deep learning-based prediction. Regression-based prediction uses classical regression algorithms and various supervised meteorological features to regress the meteorological value. Deep learning methods usually use convolutional neural networks (CNN) or recurrent neural networks (RNN) to predict the meteorological value. As an excellent feature extractor, CNN has achieved good performance in many scenarios. In the same way, as an efficient network for processing ordered data, RNN has also achieved good results. However, few of the above methods can meet current accuracy requirements for prediction, and none of them monitors trends in air quality data. To obtain accurate results, this paper proposes a novel predicted-trend-based loss function (PTB), which replaces the loss function in RNN. At the same time, the trend of change and the predicted value are jointly constrained to obtain more accurate predictions of PM2.5. In addition, this paper extends the model scenario to the prediction of all existing training data features: all of the next day's data are treated as mixed labels, which effectively realizes the prediction of all features. The experiments show that the proposed loss function is effective.
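A minimal sketch of a predicted-trend-based loss is shown below; the abstract says PTB constrains both the predicted value and its trend of change, and comparing consecutive differences is one plausible reading, so the trend term and its weight are assumptions.

```python
# A minimal sketch of a predicted-trend-based loss for sequence forecasting.
import torch

def ptb_loss(pred, target, trend_weight=0.5):
    value_loss = torch.nn.functional.mse_loss(pred, target)
    # Trend: consecutive differences of the forecast should match as well.
    pred_trend = pred[:, 1:] - pred[:, :-1]
    target_trend = target[:, 1:] - target[:, :-1]
    trend_loss = torch.nn.functional.mse_loss(pred_trend, target_trend)
    return value_loss + trend_weight * trend_loss

pred = torch.randn(8, 24)     # batch of 24-step PM2.5 forecasts
target = torch.randn(8, 24)
print(ptb_loss(pred, target))
```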
Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples produced by the generator and the real data samples, and to assess how effectively each loss improves that generating ability. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of their pros and cons. First, the basic theory of GANs and their training mechanism are introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented across different GAN architectures. Finally, several suggestions for choosing suitable loss functions for image synthesis tasks are given.
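For concreteness, a few of the loss functions such a survey typically covers are sketched below in their standard forms (non-saturating generator loss, least-squares discriminator loss, and hinge discriminator loss); d_real and d_fake denote discriminator outputs on real and generated samples.

```python
# Standard GAN loss formulations commonly compared in surveys of this kind.
import torch
import torch.nn.functional as F

def nonsaturating_g_loss(d_fake):
    return F.softplus(-d_fake).mean()            # equals -log sigmoid(D(G(z)))

def lsgan_d_loss(d_real, d_fake):
    return 0.5 * ((d_real - 1) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def hinge_d_loss(d_real, d_fake):
    return F.relu(1 - d_real).mean() + F.relu(1 + d_fake).mean()

d_real, d_fake = torch.randn(16), torch.randn(16)
print(nonsaturating_g_loss(d_fake), lsgan_d_loss(d_real, d_fake),
      hinge_d_loss(d_real, d_fake))
```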
When a deep learning model is trained on network intrusion detection data, convergence problems with the traditional loss function cause the model to overfit and reduce its accuracy on the test set. First, we utilize a network architecture combining the GELU activation function with a deep neural network; second, the cross-entropy loss function is improved to a weighted cross-entropy loss function and applied to intrusion detection to improve its accuracy. To compare experimental results, the KDDCup99 dataset, which is commonly used in intrusion detection, is selected as the experimental data, and accuracy, precision, recall, and F1-score are used as evaluation metrics. The experimental results show that, under the deep neural network architecture, the model using the weighted cross-entropy loss function combined with the GELU activation function improves the evaluation metrics by about 2% compared with the ordinary cross-entropy loss model. The experiments prove that the weighted cross-entropy loss function can enhance the model's ability to discriminate samples.
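A minimal sketch of the weighted cross-entropy setup with a GELU network follows; the layer sizes and the inverse-frequency class weights are illustrative assumptions, since the abstract does not specify them (KDDCup99 records have 41 features and are commonly grouped into 5 traffic classes).

```python
# Weighted cross-entropy with a GELU-activated deep network, a minimal sketch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(41, 128), nn.GELU(),     # 41 input features as in KDDCup99
    nn.Linear(128, 64), nn.GELU(),
    nn.Linear(64, 5),                  # 5 traffic classes
)
class_weights = torch.tensor([0.5, 2.0, 2.0, 4.0, 4.0])  # rarer classes weigh more
criterion = nn.CrossEntropyLoss(weight=class_weights)

x = torch.randn(32, 41)
y = torch.randint(0, 5, (32,))
loss = criterion(model(x), y)
loss.backward()
print(loss.item())
```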
According to the World Health Organization, about 50 million people worldwide suffer from epilepsy, and its detection and treatment face great challenges. The electroencephalogram (EEG) is a significant research object widely used in the diagnosis and treatment of epilepsy. In this paper, an adaptive feature learning model for EEG signals is proposed, which combines the Huber loss function with an adaptive weight penalty term. First, each EEG signal is decomposed by intrinsic time-scale decomposition. Second, statistical index values are calculated from the instantaneous amplitude and frequency of every component and fed into the proposed model. Finally, the discriminative features learned by the proposed model are used to detect seizures. Our main innovation is a highly flexible penalization based on the Huber loss function, which can set different weights according to the influence of different features on epilepsy detection. Besides, the new model can be solved by the proximal alternating direction method of multipliers, which effectively ensures the convergence of the algorithm. The performance of the proposed method is evaluated on three public EEG datasets provided by Bonn University, the Children's Hospital Boston-Massachusetts Institute of Technology, and the Neurological and Sleep Center at Hauz Khas, New Delhi (New Delhi Epilepsy data). The recognition accuracies on two of these datasets are 98% and 99.05%, respectively, indicating the application value of the new model.
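The Huber loss at the core of the model has a standard form, shown below together with an illustrative adaptive weight penalty; the penalty is described only qualitatively in the abstract, so the inverse-weight shrinkage term here is an assumption.

```python
# Huber loss (standard form) with an illustrative adaptive feature-weight penalty.
import torch

def huber(r, delta=1.0):
    absr = r.abs()
    return torch.where(absr <= delta, 0.5 * r ** 2, delta * (absr - 0.5 * delta))

def objective(w, X, y, feat_weights, lam=0.1):
    residual = X @ w - y
    # Adaptive penalty: features judged more relevant to seizure detection
    # (larger feat_weights) are penalised less, so they survive shrinkage.
    return huber(residual).mean() + lam * (w.abs() / feat_weights).sum()

X, y = torch.randn(100, 10), torch.randn(100)
w = torch.zeros(10, requires_grad=True)
feat_weights = torch.rand(10) + 0.5
objective(w, X, y, feat_weights).backward()
print(w.grad)
```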
Title: A dual-parameter method for seismic resilience assessment of buildings. Authors: LI Shuang; HU Binbin; LIU Wen; ZHAI Changhai. Abstract: To quantify the seismic resilience of buildings, a method for evaluating functional loss from the component level to the overall building is proposed, and the dual-parameter seismic resilience assessment method based on post-earthquake loss and recovery time is improved. A three-level function tree model is established, which can account for dynamic changes in the weight coefficients of different categories of components relative to their functional losses. Bayesian networks are utilized to quantify the impact of weather conditions, construction technology levels, and worker skill levels on component repair time. A method for determining the real-time functional recovery curve of buildings based on the component repair process is proposed. Taking a three-story teaching building as an example, the seismic resilience indices under basic earthquakes and rare earthquakes are calculated. The results show that the seismic resilience of the teaching building is comprehensively judged as Grade III, and its resilience grade is more significantly affected by post-earthquake loss. The proposed method can be used to predict the seismic resilience of buildings prior to earthquakes, identify weak components within buildings, and provide guidance for measures to enhance seismic resilience.
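As a toy illustration of rolling component functional losses up a three-level function tree, consider the sketch below; the weights, categories, and renormalisation scheme are illustrative assumptions, since the abstract gives no formulas.

```python
# A toy three-level function tree: component losses roll up to system losses,
# which roll up to an overall building functional loss in [0, 1].
def aggregate(tree):
    """tree: {system: (system_weight, {component: (comp_weight, loss)})}"""
    total_w = sum(sw for sw, _ in tree.values())
    building_loss = 0.0
    for system_w, components in tree.values():
        cw_sum = sum(cw for cw, _ in components.values())
        system_loss = sum(cw / cw_sum * loss for cw, loss in components.values())
        building_loss += system_w / total_w * system_loss
    return building_loss

tree = {
    "structural": (0.5, {"columns": (0.6, 0.30), "beams": (0.4, 0.10)}),
    "nonstructural": (0.3, {"partitions": (1.0, 0.50)}),
    "contents": (0.2, {"equipment": (1.0, 0.20)}),
}
print(aggregate(tree))   # overall functional loss of the building
```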
We present a fitting calculation of the energy-loss function for 26 bulk materials, including 18 pure elements (Ag, Al, Au, C, Co, Cs, Cu, Er, Fe, Ge, Mg, Mo, Nb, Ni, Pd, Pt, Si, Te) and 8 compounds (AgCl, Al2O3, AlAs, CdS, SiO2, ZnS, ZnSe, ZnTe) for application to surface electron spectroscopy analysis. The experimental energy-loss function, which is derived from measured optical data, is fitted to a finite sum of formulas based on the Drude-Lindhard dielectric model. By checking the oscillator-strength sum and perfect-screening sum rules, we have validated the high accuracy of the fitting results. Furthermore, based on the fitted parameters, the simulated reflection electron energy-loss spectroscopy (REELS) spectrum shows good agreement with experiment. The calculated fitting parameters of the energy-loss function are stored in an open online database at http://micro.ustc.edu.cn/ELF/ELF.html.
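The Drude-Lindhard expansion underlying such fits has the standard form below, where each oscillator i has strength A_i, width γ_i, and resonance energy ω_i (the paper's exact parameterisation may differ):

```latex
% Energy-loss function as a finite sum of Drude oscillators (standard form).
\operatorname{Im}\!\left[\frac{-1}{\varepsilon(\omega)}\right]
  = \sum_{i=1}^{N}
    \frac{A_i\,\gamma_i\,\omega}
         {\left(\omega_i^{2} - \omega^{2}\right)^{2} + \gamma_i^{2}\,\omega^{2}}
```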
This paper studies the problem of robust H∞ control of piecewise-linear chaotic systems with random data loss. The communication links between the plant and the controller are assumed to be imperfect (that is, data loss occurs intermittently, as typically happens in a network environment). The data loss is modelled as a random process that obeys a Bernoulli distribution. In the face of random data loss, a piecewise controller is designed to robustly stabilize the networked system in the mean-square sense and also achieve a prescribed H∞ disturbance attenuation performance, based on a piecewise-quadratic Lyapunov function. The required H∞ controllers can be designed by solving a set of linear matrix inequalities (LMIs). Chua's system is provided to illustrate the usefulness and applicability of the developed theoretical results.
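One common way to model this kind of intermittent data loss, consistent with the description above though not necessarily the paper's exact formulation, is to gate the control input with a Bernoulli variable:

```latex
% Piecewise-linear closed loop with Bernoulli-gated control input:
x_{k+1} = A_i x_k + B_i\,\theta_k\,u_k + E_i w_k,
\qquad x_k \in \mathcal{X}_i, \quad u_k = K_i x_k,
% where \theta_k is an i.i.d. Bernoulli process modelling packet delivery:
\Pr\{\theta_k = 1\} = \bar{\theta}, \qquad \Pr\{\theta_k = 0\} = 1 - \bar{\theta}.
```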
Deep learning techniques have significantly improved image restoration tasks in recent years. As a crucial component of deep learning, the loss function plays a key role in network optimization and performance enhancement. However, the currently prevalent loss functions assign equal weight to every pixel during loss calculation, which hampers the ability to reflect the roles of different pixels and fails to fully exploit the image's characteristics. To address this issue, this study proposes an asymmetric loss function based on the image and data characteristics of the image recovery task. This novel loss function adjusts the weight of the reconstruction loss based on the grey value of each pixel, thereby effectively optimizing network training by differentially utilizing the grey information from the original image. Specifically, we calculate a weight factor for each pixel based on its grey value and combine it with the reconstruction loss to create a new loss function. This ensures that pixels with smaller grey values receive greater attention, improving network recovery. To verify the effectiveness of the proposed asymmetric loss function, we conducted experiments on the image super-resolution task. The results show that the model with asymmetric loss weights improves all metrics of the processing results without increasing training time. In the typical super-resolution network SRCNN, introducing asymmetric weights improves the peak signal-to-noise ratio (PSNR) by up to about 0.5% and the structural similarity index (SSIM) by up to about 0.3%, and reduces the root-mean-square error (RMSE) by up to about 1.7%, with essentially no increase in training time. In addition, we further tested the performance of the proposed method on the denoising task to verify its potential applicability to image restoration tasks.
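A minimal sketch of such a grey-value-weighted reconstruction loss follows; the abstract says pixels with smaller grey values receive greater attention but does not give the weight function, so the inverse-intensity factor and alpha below are assumptions.

```python
# A minimal sketch of an asymmetric, grey-value-weighted reconstruction loss.
import torch

def asymmetric_loss(pred, target, alpha=1.0):
    # Pixels with smaller grey values get larger weights.
    weight = 1.0 + alpha * (1.0 - target.clamp(0.0, 1.0))
    return (weight * (pred - target) ** 2).mean()

pred = torch.rand(1, 1, 32, 32)
target = torch.rand(1, 1, 32, 32)
print(asymmetric_loss(pred, target))
```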
Neyman-Pearson classification has been studied in several previous articles, but all of them worked in classes of indicator functions with the indicator function as the loss function, which makes the calculation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function yields the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
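In standard notation, the Neyman-Pearson classification problem reads as follows (a known formulation consistent with the description above):

```latex
% Neyman-Pearson classification: with type I error
% R_0(f) = P(f(X) = 1 \mid Y = 0), type II error
% R_1(f) = P(f(X) = 0 \mid Y = 1), and significance level \alpha,
\min_{f \in \mathcal{F}} \; R_1(f)
\quad \text{subject to} \quad R_0(f) \le \alpha.
% The convex-loss variant replaces these 0-1 risks with the corresponding
% risks of a convex surrogate loss.
```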