Journal Articles
29 articles found
1. Composition Analysis and Identification of Ancient Glass Products Based on L1 Regularization Logistic Regression
Authors: Yuqiao Zhou, Xinyang Xu, Wenjing Ma. 《Applied Mathematics》, 2024, No. 1, pp. 51-64.
For the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule, and other methods were used together to build logistic regression, cluster analysis, and hyper-parameter test models. SPSS, Python, and other tools were used to obtain the classification rules of glass products under different fluxes, the sub-classification under different chemical compositions, a hyper-parameter K value test, and a rationality analysis. The research can provide theoretical support for the protection and restoration of ancient glass relics.
Keywords: Glass Composition; L1 Regularization; Logistic Regression Model; K-Means Clustering Analysis; Elbow Rule; Parameter Verification
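The pairing of an L1 penalty with logistic regression described in this entry can be sketched in a few lines of numpy using proximal gradient descent (ISTA). The synthetic data, regularization strength, and step size below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic_regression(X, y, lam=0.1, lr=0.5, n_iter=1000):
    """Minimize mean logistic loss + lam * ||w||_1 by proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the smooth loss term
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # only two features matter
w = l1_logistic_regression(X, y, lam=0.1)
# The soft-thresholding step typically drives irrelevant coefficients to exactly zero.
```

The exact zeros produced by soft-thresholding are what make L1-regularized logistic regression double as a feature selector, which matches its use for finding classification rules over chemical compositions.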
2. L1/2 Regularization Based on Bayesian Empirical Likelihood
Authors: Yuan Wang, Wanzhou Ye. 《Advances in Pure Mathematics》, 2022, No. 5, pp. 392-404.
Bayesian empirical likelihood is a semiparametric method that combines parametric priors with nonparametric likelihoods: the parametric likelihood function in Bayes' theorem is replaced by a nonparametric empirical likelihood function, so the method can be used without assuming a distribution for the data and effectively avoids the problems caused by model misspecification. In variable selection based on Bayesian empirical likelihood, the penalty term is introduced into the model in the form of a parameter prior. This paper proposes a novel variable selection method, L1/2 regularization based on Bayesian empirical likelihood. The L1/2 penalty is introduced into the model through a scale mixture of uniforms representation of the generalized Gaussian prior, and the posterior distribution is then sampled with an MCMC method. Simulations demonstrate that the proposed method can have better predictive ability when the error violates the zero-mean normality assumption of the standard parametric model, and can perform variable selection.
Keywords: Bayesian Empirical Likelihood; Generalized Gaussian Prior; L1/2 Regularization; MCMC Method
3. APPROXIMATION ANALYSES FOR FUZZY VALUED FUNCTIONS IN L_1(μ)-NORM BY REGULAR FUZZY NEURAL NETWORKS (Cited: 4)
Author: Liu Puyin (Dept. of System Eng. and Math., National Univ. of Defence Tech., Changsha 410073). 《Journal of Electronics (China)》, 2000, No. 2, pp. 132-138.
By defining fuzzy valued simple functions and giving L1(μ) approximations of fuzzy valued integrably bounded functions by such simple functions, the paper analyses, in L1(μ)-norm, the approximation capability of four-layer feedforward regular fuzzy neural networks for fuzzy valued integrably bounded functions F : R^n → F_c^0(R). That is, if the transfer function σ : R → R is non-polynomial and integrable on each finite interval, F may be approximated in L1(μ)-norm to any degree of accuracy by the fuzzy valued functions such networks define. Finally, some real examples demonstrate the conclusions.
Keywords: Fuzzy Valued Simple Function; Regular Fuzzy Neural Network; L1(μ) Approximation; Universal Approximator
4. L(d,1)-labeling of regular tilings
Authors: 戴本球, 宋增民. 《Journal of Southeast University (English Edition)》, EI CAS, 2005, No. 1, pp. 115-118.
L(d,1)-labeling is a graph coloring problem arising from frequency assignment in radio networks, in which adjacent nodes must receive colors that are at least d apart, while nodes at distance two from each other must receive different colors. We focus on L(d,1)-labeling of regular tilings for d ≥ 3, since the cases d = 0, 1, 2 have been studied by Calamoneri and Petreschi. For all three kinds of regular tilings, we give their L(d,1)-labeling numbers for any integer d ≥ 3. Combined with the results of Calamoneri and Petreschi, the L(d,1)-labeling numbers of regular tilings are thus determined completely for every nonnegative integer d.
Keywords: Graph Theory; Radio Communication
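The two constraints that define an L(d,1)-labeling lend themselves to a mechanical check. A small Python sketch (the path graph and labels below are illustrative toy inputs, not the regular tilings studied in the paper):

```python
def is_valid_ld1(adj, labels, d):
    """Check the L(d,1) conditions on a graph given as an adjacency dict:
    adjacent vertices must get labels at least d apart, and vertices at
    distance two must get distinct labels."""
    for u in adj:
        for v in adj[u]:
            if abs(labels[u] - labels[v]) < d:
                return False
        # distance-two vertices: neighbours of neighbours, minus u and its neighbours
        two_away = {w for v in adj[u] for w in adj[v]} - {u} - set(adj[u])
        if any(labels[u] == labels[w] for w in two_away):
            return False
    return True

# A path on four vertices, labeled for d = 3.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For example, `[0, 3, 6, 9]` is a valid L(3,1)-labeling of this path, while `[0, 3, 0, 3]` fails because vertices 0 and 2 are at distance two yet share a label.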
5. Structural Damage Identification Based on Truncated Total Least Squares and L1 Regularization
Authors: 骆紫薇, 蔡楚欣, 赖小李, 刘焕林. 《振动与冲击》 (Journal of Vibration and Shock), 2025, No. 15, pp. 217-223.
Modal parameters are widely used in structural damage identification because they are easy to obtain and sensitive to structural damage. Damage identification methods based on modal parameters and finite element models can effectively locate and quantify structural damage, but under the combined influence of measurement noise and model errors, the identified results may deviate considerably from reality, making it difficult to assess the structural safety state accurately. To address this problem, a new structural damage identification method is proposed based on truncated total least squares and L1 regularization. The method first analyzes the error sources in the existing sensitivity equation; it then constructs, via truncated total least squares, a new approximate relation between the changes in the damage reduction factors and the changes in the modal parameters; finally, exploiting the sparsity of structural damage, L1 regularization is introduced to constrain the problem, mitigating its ill-posedness and improving identification accuracy. Numerical simulations and experimental results show that the proposed method can effectively identify multiple damage scenarios with few misjudgments, high identification accuracy, and strong robustness.
Keywords: Structural Damage Identification; First-Order Sensitivity Analysis; L1 Regularization; Truncated Total Least Squares
6. Parameter Optimization of Regularization Variational Merging and Its Application in GNSS/MET Water Vapor
Authors: Wang Gen, Zhou Shuxue, Ding Xia, Liu Huilan. 《Meteorological and Environmental Research》, CAS, 2019, No. 2, pp. 44-50.
The paper discusses the core parameters of 3D and 4D variational merging based on L1-norm regularization, namely the optimized characteristic correlation length of the background error covariance matrix and the regularization parameter. Classical 3D/4D variational merging assumes that errors follow a Gaussian distribution; it involves solving for the gradient of the objective functional during minimization, which requires the data to be continuous and differentiable. The classical 3D/4D variational merging method was extended by coupling an L1-norm constraint to the classical variational merging model. Experiments were carried out using a linear advection-diffusion equation as the four-dimensional prediction model, and the parameter optimization of this method is discussed. Considering the strong temporal and spatial variation of water vapor, the method is further applied to precipitable water vapor (PWV) merging using reanalysis data and GNSS retrievals. Parameters were adjusted gradually to analyze the influence of the background field on the merging result, and the experimental results show that the mathematical algorithm adopted in this paper is feasible.
Keywords: Variational Merging; L1 Norm; Parameter Optimization; Precipitable Water Vapor; Regularization Parameter
7. Estimating primaries by sparse inversion of the 3D Curvelet transform and the L1-norm constraint (Cited: 7)
Authors: 冯飞, 王德利, 朱恒, 程浩. 《Applied Geophysics》, SCIE CSCD, 2013, No. 2, pp. 201-209, 237.
In this paper, we build upon the estimating primaries by sparse inversion (EPSI) method. We use the 3D curvelet transform and recast EPSI as a sparse inversion with biconvex optimization and L1-norm regularization, using alternating optimization to directly estimate the primary reflection coefficients and the source wavelet. The 3D curvelet transform serves as a sparseness constraint when inverting the primary reflection coefficients, which avoids the prediction-subtraction process of the surface-related multiple elimination (SRME) method. The proposed method not only reduces the damage to the effective waves but also improves the elimination of multiples. As a wave equation-based method for eliminating surface multiple reflections, it effectively removes surface multiples under complex submarine conditions.
Keywords: Sparse Inversion; Primary Reflection Coefficients; 3D Curvelet Transform; L1 Regularization; Convex Optimization
8. A Super-Resolution Reconstruction Algorithm Based on L1/2 Regularization Constraints (Cited: 7)
Authors: 徐志刚, 李文文, 朱红蕾, 朱旭锋. 《华中科技大学学报(自然科学版)》, EI CAS CSCD, 2017, No. 6, pp. 38-42.
To improve reconstructed image quality and reduce processing time, a single-frame image super-resolution reconstruction algorithm based on L1/2 regularization constraints is proposed. In the dictionary-pair training stage of sparse reconstruction, a single-branch wavelet-coefficient reconstruction method is used to extract features from the low-resolution image, so as to effectively capture detail such as edges and textures. In the reconstruction stage, to address the fact that solutions of L1-regularized models are often insufficiently sparse and reconstruction quality is therefore limited, the L1/2 norm replaces the L1 norm in the super-resolution model, and a fast L1/2 regularization algorithm is used for the sparse solution. Experimental results show that, compared with existing algorithms, the proposed algorithm is superior in the subjective and objective evaluation of the reconstructed image and in running speed.
Keywords: Image Reconstruction; Super-Resolution; Sparse Representation; L1/2 Regularization Model; Single-Branch Wavelet Coefficient Reconstruction
9. Seismic Sparse Deconvolution Based on L1/2 Regularization Theory (Cited: 8)
Authors: 康治梁, 张雪冰. 《石油物探》, EI CSCD, 2019, No. 6, pp. 855-863.
Seismic deconvolution is an important seismic data processing method for compressing the seismic wavelet and improving the vertical resolution of thin layers. Under the assumption of layered strata, the reflection coefficients can be regarded as a sparse spike train, so seismic deconvolution can be formulated as a sparse recovery problem. L1 regularization is widely used to solve sparse problems, but recent work has shown that its sparse representation ability is not optimal. To address this, building on the fast-developing theory of L1/2 regularization, this paper proposes using L1/2 regularization as the sparsity constraint on the reflection coefficients for seismic deconvolution, solved with its dedicated iterative thresholding algorithm. Tests on a single-trace model confirm that the method adapts well to the regularization parameter and to noise. Results on a simple 2D model and the Marmousi2 model show that the inversion fits the reflection-coefficient amplitudes well, is more robust to noise, and better preserves weak reflection coefficients. Application to field data shows that the method can effectively remove the influence of the wavelet and resolve thin-layer and lens structures, providing a powerful tool for high-resolution seismic data processing.
Keywords: Seismic Inversion; Sparsity; L1 Regularization; L1/2 Regularization Theory; Non-Convex Regularization; High Resolution; Thin-Layer Identification
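The "dedicated iterative thresholding algorithm" mentioned above replaces L1's soft-thresholding with a componentwise half-thresholding operator. A hedged numpy sketch of that operator, following the commonly cited thresholding representation of L1/2 regularization (the constants are the standard published ones, not taken from this paper's implementation):

```python
import numpy as np

def half_threshold(t, lam):
    """Componentwise half-thresholding for the penalty lam * ||x||_{1/2}^{1/2}:
    entries at or below a threshold are set exactly to zero; larger entries
    are shrunk only slightly."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    # Magnitudes at or below this threshold map to exactly zero.
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(t) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(t[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * t[mask] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out
```

Unlike soft-thresholding, which subtracts a constant from every surviving entry, half-thresholding leaves large entries almost unchanged, which is why L1/2 penalties bias strong reflection coefficients less while still zeroing weak noise.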
10. Influence of the L1 Interrelation of Coefficients on the Regularity of Solutions of Nonlinear Degenerate Elliptic Equations (Cited: 1)
Authors: 邹维林, 任远春, 肖美萍. 《数学物理学报(A辑)》, CSCD, 2021, No. 5, pp. 1405-1414.
This paper studies a class of nonlinear degenerate elliptic equations −div(a(x, u, ∇u)) + b(x)g(u) + B(x, u, ∇u) = f(x), whose principal operator degenerates on the set {u = 0}. The existence of bounded weak solutions is proved even when f belongs only to L1, which generalizes previous results to some extent.
Keywords: Degenerate Elliptic Equations; L1 Coefficients; Bounded Weak Solutions; Regularity Effects
11. A Necessary Condition for Optimal Solutions of the L1 Regularization Problem
Author: 吴焚供. 《广东第二师范学院学报》, 2014, No. 5, pp. 36-38.
Using the separation theorem for convex sets, a necessary condition for an optimal solution of the L1 regularization problem is given.
Keywords: L1 Regularization; Optimal Solution; Necessary Condition
12. Face Recognition from Incomplete Measurements via l1-Optimization
Authors: Miguel Argaez, Reinaldo Sanchez, Carlos Ramirez. 《American Journal of Computational Mathematics》, 2012, No. 4, pp. 287-294.
In this work, we consider a homotopic principle for solving large-scale and dense l1 underdetermined problems and its applications in image processing and classification. We solve the face recognition problem where the input image contains corrupted and/or lost pixels. The approach involves two steps: first, the incomplete or corrupted image undergoes an inpainting process; second, the restored image is used to carry out the classification or recognition task. Both steps involve solving large-scale l1 minimization problems. To that end, we propose to solve a sequence of linear equality constrained multiquadric problems depending on a regularization parameter that converges to zero. The procedure generates a central path that converges to a point on the solution set of the l1 underdetermined problem. To solve each subproblem, a conjugate gradient algorithm is formulated. When noise is present in the model, inexact directions are taken so that an approximate solution is computed faster. This prevents the ill-conditioning produced when the conjugate gradient is required to iterate until a zero residual is attained.
Keywords: Sparse Representation; l1 Minimization; Face Recognition; Sparse Recovery; Interior Point Methods; Sparse Regularization
13. Properties of I(L)-Type Induced Spaces (Cited: 1)
Author: 胡兰芳. 《江苏师范大学学报(自然科学版)》, CAS, 1989, No. 2, pp. 9-16.
This paper discusses the closure and interior operations of the I(L)-type induced spaces of fuzzy topological spaces, and examines their separability, the countability axioms C_I and C_II, and separation properties.
Keywords: I(L)-Type Induced Space; Separable Space; C_I Space; C_II Space; Regular Space; T_i Space (i = 0, 1, 2, 3, 4)
14. Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks (Cited: 1)
Authors: Tao Zhang, Zhanjie Zhang, Wenjing Jia, Xiangjian He, Jie Yang. 《Computers, Materials & Continua》, SCIE EI, 2021, No. 11, pp. 2733-2747.
The generative adversarial network (GAN), first proposed in 2014, is a machine learning system that can learn to capture a given data distribution; one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image. CYCLE-GAN is a classic GAN model with a wide range of style-transfer scenarios, and given its unsupervised learning characteristics, the mapping between an input image and an output image is easy to learn. However, CYCLE-GAN has difficulty converging and generating high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator: every convolutional kernel then satisfies a Lipschitz stability constraint, its values are limited to [0, 1], and the training process of the proposed model is thereby promoted. Besides, a pretrained model (VGG16) is used to control the loss of image content at the position of the l1 regularization. To avoid overfitting, both l1 and l2 regularization terms are used in the objective loss function. In terms of Frechet Inception Distance (FID) score, the proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.
Keywords: Generative Adversarial Network; Spectral Normalization; Lipschitz Stability Constraint; VGG16; L1 Regularization Term; L2 Regularization Term; Frechet Inception Distance
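Spectral normalization, as described in this abstract, amounts to dividing each weight matrix by an estimate of its largest singular value, usually obtained by power iteration. A minimal numpy sketch (illustrative only; the paper applies this per convolutional kernel inside a GAN discriminator, and the matrix below is a random placeholder):

```python
import numpy as np

def spectral_normalize(W, n_iter=500):
    """Estimate the largest singular value of W by power iteration and
    return W divided by it, so the normalized map is approximately 1-Lipschitz."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # converged estimate of the top singular value
    return W / sigma, sigma

W = np.random.default_rng(1).standard_normal((8, 5))
W_sn, sigma = spectral_normalize(W)
```

Dividing by the top singular value caps the layer's Lipschitz constant at roughly 1, which is the stability property the abstract credits for easier discriminator training.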
15. A Sharp Nonasymptotic Bound and Phase Diagram of L1/2 Regularization (Cited: 1)
Authors: Hai ZHANG, Zong Ben XU, Yao WANG, Xiang Yu CHANG, Yong LIANG. 《Acta Mathematica Sinica, English Series》, SCIE CSCD, 2014, No. 7, pp. 1242-1258.
We derive a sharp nonasymptotic bound on the parameter estimation of L1/2 regularization. The bound shows that solutions of L1/2 regularization can achieve a loss within a logarithmic factor of an ideal mean squared error, and it therefore underlies the feasibility and effectiveness of L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the Lp (0 < p < 1) penalties, the L1/2 penalty always yields the sparsest solution among the Lp penalties with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be accepted as the best, and therefore the representative, of all the Lp (0 < p < 1) regularization schemes.
Keywords: L1/2 Regularization; Phase Diagram; Compressive Sensing
16. Sparse Solutions of Mixed Complementarity Problems (Cited: 1)
Authors: Peng Zhang, Zhensheng Yu. 《Journal of Applied Mathematics and Physics》, 2020, No. 1, pp. 10-22.
In this paper, we consider an extragradient thresholding algorithm for finding sparse solutions of mixed complementarity problems (MCPs). We establish a relaxed l1-regularized projection minimization model for the original problem and design an extragradient thresholding algorithm (ETA) to solve the regularized model. Furthermore, we prove that any cluster point of the sequence generated by ETA is a solution of the MCP. Finally, numerical experiments show that the ETA algorithm can effectively solve the l1-regularized projection minimization model and obtain a sparse solution of the mixed complementarity problem.
Keywords: Mixed Complementarity Problem; Sparse Solution; L1 Regularized Projection Minimization Model; Extragradient Thresholding Algorithm
17. A BMO ESTIMATE FOR THE MULTILINEAR SINGULAR INTEGRAL OPERATOR
Author: Qihui Zhang. 《Analysis in Theory and Applications》, 2006, No. 3, pp. 271-282.
The behavior on the space L^∞(R^n) of the multilinear singular integral operator defined by T_A f(x) = ∫_{R^n} [Ω(x−y)/|x−y|^{n+1}] (A(x) − A(y) − ∇A(y)·(x−y)) f(y) dy is considered, where Ω is homogeneous of degree zero, integrable on the unit sphere, and has vanishing moment of order one, and A has first-order derivatives in BMO(R^n). It is proved that if Ω satisfies a minimum size condition and an L1-Dini type regularity condition, then for f ∈ L^∞(R^n), T_A f is either infinite almost everywhere or finite almost everywhere, and in the latter case T_A f ∈ BMO(R^n).
Keywords: Multilinear Singular Integral Operator; L1-Dini Type Regularity Condition
18. A pruning algorithm with L1/2 regularizer for extreme learning machine (Cited: 1)
Authors: Ye-tian FAN, Wei WU, Wen-yu YANG, Qin-wei FAN, Jian WANG. 《Journal of Zhejiang University-Science C (Computers and Electronics)》, SCIE EI, 2014, No. 2, pp. 119-125.
Compared with traditional learning methods such as back propagation (BP), the extreme learning machine provides much faster learning and needs less human intervention, and has thus been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine in order to prune it. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
Keywords: Extreme Learning Machine (ELM); L1/2 Regularizer; Network Pruning
19. Truncated L1 Regularized Linear Regression: Theory and Algorithm
Authors: Mingwei Dai, Shuyang Dai, Junjun Huang, Lican Kang, Xiliang Lu. 《Communications in Computational Physics》, SCIE, 2021, No. 6, pp. 190-209.
Truncated L1 regularization, proposed by Fan in [5], is an approximation to L0 regularization in high-dimensional sparse models. In this work, we prove a non-asymptotic error bound for the global optimal solution of the truncated L1 regularized linear regression problem and study its support recovery property. Moreover, a primal dual active set algorithm (PDAS) for variable estimation and selection is proposed; coupling it with continuation via a warm-start strategy leads to a primal dual active set with continuation algorithm (PDASC). Data-driven parameter selection rules such as cross validation, BIC, or a voting method can be applied to select a proper regularization parameter. The application of the proposed method is demonstrated on simulated data and on a breast cancer gene expression data set (bcTCGA).
Keywords: High-Dimensional Linear Regression; Sparsity; Truncated L1 Regularization; Primal Dual Active Set Algorithm
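The truncated L1 penalty λ·min(|x|, τ) named in this entry differs from plain L1 in that coefficients larger than τ incur only a constant penalty and are left unshrunk, which is how it approximates L0. A small scalar sketch of its proximal operator (illustrative; the paper's PDAS/PDASC algorithm is not reproduced here, and the λ, τ values used below are arbitrary):

```python
import numpy as np

def soft(t, lam):
    """Scalar soft-thresholding, the proximal operator of lam * |x|."""
    return np.sign(t) * max(abs(t) - lam, 0.0)

def prox_truncated_l1(t, lam, tau):
    """Minimize 0.5*(x - t)**2 + lam * min(|x|, tau) over scalar x by
    comparing the minimizers of the two penalty regimes."""
    # Region |x| <= tau: the penalty is lam*|x|, so soft-threshold and clip.
    x1 = np.clip(soft(t, lam), -tau, tau)
    f1 = 0.5 * (x1 - t) ** 2 + lam * abs(x1)
    # Region |x| >= tau: the penalty is the constant lam*tau, so no shrinkage.
    x2 = np.sign(t) * max(abs(t), tau)
    f2 = 0.5 * (x2 - t) ** 2 + lam * tau
    return x1 if f1 <= f2 else x2
```

Small inputs behave exactly like L1 (thresholded toward zero), while inputs well above τ pass through unchanged, removing the estimation bias that plain L1 puts on large coefficients.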
20. Stock Price Forecasting and Rule Extraction Based on L1-Orthogonal Regularized GRU Decision Tree Interpretation Model
Authors: Wenjun Wu, Yuechen Zhao, Yue Wang, Xiuli Wang. 《国际计算机前沿大会会议论文集》 (Proceedings of the International Conference of Pioneering Computer Scientists), 2020, No. 2, pp. 309-328.
Neural networks are widely used in stock price forecasting, but their "black box" characteristics make them hard to interpret. In this paper, an L1-orthogonal regularization method is applied to a GRU model. A decision tree, GRU-DT, is constructed to represent the prediction process of the neural network, and rule-screening algorithms are proposed to find significant rules in the prediction. In the empirical study, data from 10 different industries in China's CSI 300 were selected for stock price trend prediction, and the extracted rules were compared and analyzed; technical indicators were discretized to make the rules easy to use in decision-making. Empirical results show that the AUC of the model is stable between 0.72 and 0.74, and F1 and Accuracy are stable between 0.68 and 0.70, indicating that discretized technical indicators can effectively predict the short-term trend of stock prices. The fidelity of GRU-DT to the GRU model reaches 0.99. The prediction rules of different industries show both commonality and individuality.
Keywords: Explainable Artificial Intelligence; Neural Network Interpretability; Rule Extraction; Stock Forecasting; L1-Orthogonal Regularization