Journal Articles
2 articles found
1. Tibetan Few-Shot Learning Model With Deep Contextualised Two-Level Word Embeddings
Authors: Ziyue Zhang, Yongbin Yu, Xiangxiang Wang, Xiao Feng, Yuze Li, Jiarun Shen, Dorje Tashi, Jin Zhang, Lobsang Yeshi, Lei Li, Nyima Tashi, jingye cai. CAAI Transactions on Intelligence Technology, 2025, Issue 5, pp. 1394-1410 (17 pages).
Few-shot learning is the task of identifying new text categories from a limited set of training examples. The two key challenges in few-shot learning are insufficient understanding of new samples and imperfect modelling. The uniqueness of low-resource languages lies in their limited linguistic resources, which makes it difficult for models to learn sufficiently rich feature representations from limited samples. Tibetan is a minority language, and Tibetan few-shot learning requires further exploration. With limited data resources, if the model's understanding of text is non-contextual, it cannot provide sufficiently distinctive feature representations, limiting its performance in few-shot learning. Therefore, this paper proposes a few-shot learning architecture called two-level word embeddings matching networks (TWE-MN). TWE-MN is specifically designed to enhance the model's representational capacity and optimise its generalisation in data-scarce environments. As this paper focuses on Tibetan few-shot learning tasks, a pretrained Tibetan language model, BoBERT, was constructed. BoBERT, as the pre-embedding layer of TWE-MN, combined with the BoBERT-augmented full-context embedding, can capture feature information from local to global levels. This paper evaluates the performance of TWE-MN on Tibetan few-shot learning and Tibetan text classification tasks. The experimental results show that TWE-MN outperforms vanilla matching networks (MN) on all Tibetan few-shot learning tasks, with an average accuracy improvement of 4.5%–6.5% and up to 6.8% at most. In addition, this paper also explores the potential of TWE-MN in other NLP tasks, such as text classification and machine translation.
Keywords: artificial intelligence; deep learning; natural language processing; neural network
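The abstract describes a matching-network classifier built on two-level (local word plus full-context) embeddings, with a pretrained encoder as the pre-embedding layer. As a rough illustration only, here is a minimal, hedged PyTorch sketch of that idea: `ContextEncoder` is a stand-in for BoBERT (whose architecture and weights are not given here), and `matching_predict` is an illustrative name, not the authors' API.

```python
# Hedged sketch of a matching network over two-level embeddings.
# All names here are illustrative, not the TWE-MN implementation.
import torch
import torch.nn.functional as F

class ContextEncoder(torch.nn.Module):
    """Stand-in for a BoBERT-style contextual embedder (illustrative only)."""
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        # Level 1: local (non-contextual) word embeddings.
        self.embed = torch.nn.Embedding(vocab_size, dim)
        # Level 2: full-context embeddings over the whole sequence.
        self.context = torch.nn.LSTM(dim, dim, batch_first=True,
                                     bidirectional=True)

    def forward(self, token_ids):              # token_ids: (batch, seq_len) int64
        local = self.embed(token_ids)
        ctx, _ = self.context(local)
        return ctx.mean(dim=1)                 # (batch, 2*dim) sentence vector

def matching_predict(encoder, support_ids, support_labels, query_ids, n_way):
    """Label queries by cosine attention over an N-way/K-shot support set."""
    s = F.normalize(encoder(support_ids), dim=-1)    # (N*K, d)
    q = F.normalize(encoder(query_ids), dim=-1)      # (Q, d)
    attn = F.softmax(q @ s.T, dim=-1)                # cosine-similarity attention
    one_hot = F.one_hot(support_labels, n_way).float()
    return attn @ one_hot                            # (Q, n_way) class scores
```

The cosine-attention step is the standard matching-network readout: each query's label distribution is a similarity-weighted average of the support labels, so new classes can be recognised per episode without fine-tuning the encoder.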
2. Electrical Impedance Tomography Image Reconstruction Using Iterative Lavrentiev and L-Curve-Based Regularization Algorithm
Authors: Wenqin WANG, jingye cai, Lian YANG. Journal of Electromagnetic Analysis and Applications, 2010, Issue 1, pp. 45-50 (6 pages).
Electrical impedance tomography (EIT) is a technique for determining the electrical conductivity and permittivity distribution inside a medium from measurements made on its surface. The impedance distribution reconstruction in EIT is a nonlinear inverse problem that requires the use of a regularization method. Generalized Tikhonov regularization methods are often used in solving inverse problems. However, for EIT image reconstruction, generalized Tikhonov regularization may lose boundary information due to its smoothing operation. In this paper, we propose an iterative Lavrentiev regularization and L-curve-based algorithm to reconstruct EIT images. The regularization parameter should be carefully chosen, but it is often selected heuristically in conventional regularization-based reconstruction algorithms. Therefore, an L-curve-based optimization algorithm is used to select the Lavrentiev regularization parameter. Numerical analysis and simulations are performed to illustrate EIT image reconstruction. The results show that choosing an appropriate regularization parameter plays an important role in reconstructing EIT images.
Keywords: electrical impedance tomography (EIT); reconstruction algorithm; iterative Lavrentiev regularization; regularization parameter; inverse problem
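The method the abstract describes has two ingredients: Lavrentiev regularization, which for a symmetric positive semidefinite system solves (A + αI)x = b (iterated to reduce the bias of a single solve), and an L-curve choice of α at the corner of the (log residual norm, log solution norm) curve. The NumPy sketch below illustrates both under stated assumptions; the tiny symmetric test system is purely illustrative and is not the EIT forward model used by the authors.

```python
# Hedged sketch: iterated Lavrentiev regularization with an L-curve
# choice of alpha. Illustrative only; not the paper's EIT solver.
import numpy as np

def iterated_lavrentiev(A, b, alpha, iters=5):
    """Iterated Lavrentiev: x_{k+1} = (A + alpha*I)^{-1} (b + alpha*x_k),
    assuming A is symmetric positive semidefinite."""
    M = A + alpha * np.eye(A.shape[0])
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, b + alpha * x)
    return x

def l_curve_alpha(A, b, alphas):
    """Pick alpha at the L-curve corner: maximum curvature of the
    (log residual norm, log solution norm) parametric curve."""
    rho, eta = [], []
    for a in alphas:
        x = iterated_lavrentiev(A, b, a)
        rho.append(np.log(np.linalg.norm(A @ x - b) + 1e-30))
        eta.append(np.log(np.linalg.norm(x) + 1e-30))
    rho, eta = np.array(rho), np.array(eta)
    # Curvature with respect to the index of the log-spaced alphas
    # (a common simplification for a sketch like this).
    drho, deta = np.gradient(rho), np.gradient(eta)
    d2rho, d2eta = np.gradient(drho), np.gradient(deta)
    kappa = (drho * d2eta - d2rho * deta) / (drho**2 + deta**2 + 1e-30)**1.5
    return alphas[np.argmax(kappa)]

# Illustrative use on a small ill-conditioned symmetric system:
A = np.array([[2.0, 1.0], [1.0, 0.5001]])
b = A @ np.array([1.0, -1.0]) + 1e-4          # noisy right-hand side
alphas = np.logspace(-6, 0, 40)
alpha = l_curve_alpha(A, b, alphas)
x = iterated_lavrentiev(A, b, alpha)
```

The corner of the L-curve balances the two norms: too small an α fits the measurement noise (large solution norm), too large an α over-smooths (large residual), and the maximum-curvature point marks the transition between the two regimes.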