Abstract: Complex structured documents can be intentionally represented as a tree structure decorated with attributes. Ignoring attributes (they relate to semantic aspects that can be treated separately from the purely structural aspects of interest here), in the context of cooperative editing, valid structures are characterized by a document model (an abstract grammar), and each intentional representation can be manipulated independently, and possibly asynchronously, by several co-authors through various editing tools that operate on its "partial replicas". To edit a partial replica asynchronously, the co-author concerned must have a local syntactic document model that constrains him to maintain a minimum consistency, with respect to the global model, of the local representation he manipulates. This consistency amounts to the existence of one or more (global) intentional representations, conforming to the global model, of which the current local representation is a partial replica. The purpose of this paper is to present grammatical structures: grammars that make it possible not only to specify a (global) model for documents edited cooperatively, but also to derive automatically, via a so-called projection operation, consistent (local) models for each co-author involved in the cooperative editing. We also establish some properties satisfied by these grammatical structures.
Abstract: This article proposes a document-level prompt-learning approach that uses LLMs to extract timeline-based storylines. Verification tests on the ESCv1.2 and Timeline17 datasets show that the prompt + one-shot learning method proposed in this article works well. Our findings also indicate that, although timeline-based storyline extraction shows promising prospects for practical applications of LLMs, it remains a complex natural language processing task that requires further research.
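The abstract above does not show the actual prompt used in the paper; as a purely illustrative sketch, a "prompt + one-shot" setup for timeline extraction might be assembled like this, where the worked example and its dates are invented for demonstration:

```python
# Hypothetical sketch of a "prompt + one-shot" template for timeline-based
# storyline extraction. The example document and dates are invented for
# illustration; they are not taken from the paper.

ONE_SHOT_EXAMPLE = (
    "Document: The company announced the merger on 2023-03-03. "
    "Regulators approved it on 2023-06-12.\n"
    "Timeline:\n"
    "- 2023-03-03: merger announced\n"
    "- 2023-06-12: merger approved by regulators\n"
)

def build_prompt(document: str) -> str:
    """Compose instruction + one worked example + the target document."""
    return (
        "Extract a date-ordered storyline from the document as a timeline.\n\n"
        f"Example:\n{ONE_SHOT_EXAMPLE}\n"
        f"Document: {document}\nTimeline:\n"
    )

prompt = build_prompt("The launch was delayed to April 2.")
```

The prompt string would then be sent to the LLM, whose completion is parsed as the timeline.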
Abstract: The issue of document management has long been raised, especially since the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). Over the same period, workflow management developed significantly, but with a focus on industry; document workflows, it seems to us, have not received the same attention from the scientific community. Today, however, the emergence and supremacy of the Internet in electronic exchanges are driving a massive dematerialization of documents, which calls for a conceptual reconsideration of the organizational framework for processing those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents; it has paid little attention to the overall behavior of the system that processes them. The purpose of our research is to model document processing systems. In previous work, we proposed a general model and its specialization to the case of small documents (documents processed by a single person at a time throughout their processing life cycle), which, according to our study, represent 70% of the documents processed by administrations. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case in most administrations. We observe that this model is a Markovian network of M^(L×K)/M^(L×K)/1 queues. We have analyzed the constraints of this model and deduced certain characteristics and metrics. Ultimately, the objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
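The paper models the system as a network of M^(L×K)/M^(L×K)/1 queues; the network-level analysis is the paper's contribution, but the per-station building block is the classic M/M/1 queue, whose standard metrics can be sketched as follows (the arrival and service rates below are illustrative placeholders, not values from the study):

```python
# Minimal sketch of the textbook M/M/1 station metrics underlying the
# queueing-network model described in the abstract. The rates used in the
# example are illustrative, not taken from the paper.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Classic M/M/1 formulas (stable only when lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number of documents in the system
    W = 1 / (mu - lam)              # mean time a document spends in the system
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in the queue
    Wq = rho / (mu - lam)           # mean waiting time before service
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Example: documents arrive at 3/hour, one agent processes 4/hour.
m = mm1_metrics(3.0, 4.0)
# rho = 0.75, L = 3 documents in the system, W = 1 hour per document
```

In the paper's setting, one such station would exist per (class, subclass) pair, with the L×K superscript indexing the classes and subclasses.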
Funding: Supported by the Funds of Heilongjiang Outstanding Young Teacher (1151G037).
Abstract: The major problem with most current approaches to information models is that individual words provide unreliable evidence about the content of texts. When the document is short, e.g. when only the abstract is available, the word-use variability problem has a substantial impact on Information Retrieval (IR) performance. To solve this problem, a new technique for short-document retrieval, named the Reference Document Model (RDM), is put forward in this letter. RDM obtains the statistical semantics of the query/document by pseudo feedback, for both the query and the document, from reference documents. The contributions of this model are three-fold: (1) pseudo feedback for both the query and the document; (2) building the query model and the document model from reference documents; (3) flexible indexing units, which can be any linguistic elements, such as documents, paragraphs, sentences, n-grams, terms, or characters. For short-document retrieval, RDM achieves significant improvements over the classical probabilistic models on the ad hoc retrieval task on Text REtrieval Conference (TREC) test sets. Results also show that the shorter the document, the better RDM performs.
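The core idea of pseudo feedback from reference documents can be sketched in miniature: the term distribution of a short text is smoothed by mixing it with the term statistics of reference documents, so that related terms absent from the short text still receive probability mass. The toy corpus, mixing weight, and unigram scoring below are illustrative stand-ins, not the paper's actual probabilistic model:

```python
# Hedged sketch of the pseudo-feedback idea behind RDM: a short text's
# term distribution is expanded with statistics drawn from reference
# documents. Alpha and the toy reference texts are illustrative only.
from collections import Counter

def term_model(text: str) -> Counter:
    return Counter(text.lower().split())

def expand_with_references(text: str, references: list, alpha: float = 0.5) -> dict:
    """Mix the text's own term distribution with the reference documents'."""
    own = term_model(text)
    ref = Counter()
    for r in references:
        ref.update(term_model(r))
    own_total = sum(own.values()) or 1
    ref_total = sum(ref.values()) or 1
    vocab = set(own) | set(ref)
    return {t: alpha * own[t] / own_total + (1 - alpha) * ref[t] / ref_total
            for t in vocab}

refs = ["information retrieval evaluates ranking models",
        "short documents need smoothing from reference documents"]
model = expand_with_references("short document retrieval", refs)
# terms absent from the short text (e.g. "smoothing") now get nonzero mass
```

In RDM proper, this expansion is applied to both the query and the document before matching.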
Abstract: To improve the efficiency of writing practical documents, an automatic generation framework is proposed that combines a large language model (LLM) with a vector knowledge base. Based on the target application scenario, manually written standard practical documents are used as templates to construct structured auxiliary generation files, and a vector knowledge base is built for the corresponding type of practical document. The section headings of the target document type and key information entered by the user are used to search the knowledge base and match relevant passages. Prompts are then set to guide the LLM, which, taking the recalled reference passages and the user-supplied prompt information as references and using the lowest-level headings as segmentation markers, generates the document text section by section; finally, the full text is assembled in the prescribed format and the complete target document is output. Taking emergency response plans as an example, evaluation with ChatGPT-4Turbo under a uniform standard shows that the automatically generated plans closely approach the quality of manually written ones, with a document-quality similarity of 95.87%. With limited computing resources, the proposed method can break through output-length limits and generate long practical documents that meet basic standards, suitable for human reference or direct use, greatly improving the efficiency of document writers.
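The retrieval step of such a framework can be sketched as follows: embed the section heading plus the user's key information, recall the most similar passages from the knowledge base, and assemble them into a per-section prompt. The bag-of-words "embedding" below is a toy stand-in for a real sentence-embedding model, and all passages and function names are invented for illustration:

```python
# Hedged sketch of the retrieval-augmented, section-by-section prompt
# construction described in the abstract. The embedder is a toy
# bag-of-words stand-in; a real system would use a sentence-embedding
# model and a vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall_passages(query: str, knowledge_base: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

def build_section_prompt(heading: str, user_info: str, kb: list) -> str:
    passages = recall_passages(heading + " " + user_info, kb)
    return (f"Write the section '{heading}' of an emergency response plan.\n"
            f"Key information: {user_info}\n"
            "Reference passages:\n" + "\n".join(f"- {p}" for p in passages))

kb = ["evacuation routes must be posted at every exit",
      "fire drills are held quarterly",
      "budget report template"]
prompt = build_section_prompt("Evacuation procedure", "office building, two exits", kb)
```

Generating one section per lowest-level heading, as the abstract describes, is what lets the framework sidestep single-call output-length limits.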
Abstract: With the growing number of digital works on the Internet and people's increasing copyright awareness, confirming the copyright ownership of digital works is very important, and text matching technology can solve the problem of detecting the originality of digital works. Text matching techniques use algorithms to judge whether the semantics of sentences are similar. In recent years, deep learning has developed rapidly, and methods for text matching tasks have advanced accordingly. Building on the existing kernel-based neural model for document ranking (KNRM), this paper proposes a text matching model that fuses KNRM with the light gradient boosting machine (LightGBM) algorithm. Kernel pooling is applied to the histograms derived from the interaction matrix to extract relevant local features; K kernel functions of different widths are introduced to capture matching signals at different granularities and obtain Gaussian kernel features; the LightGBM algorithm is then used as the classifier to predict the final matching result. The model is validated on multiple datasets, and experiments show that the fused KNRM-LightGBM model outperforms the original KNRM in accuracy and achieves better text matching.
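The kernel-pooling step named above follows the standard KNRM recipe: apply K Gaussian kernels of different means to each row of the query-document similarity matrix, soft-count the matches per kernel, and log-sum over query terms. A minimal sketch, with illustrative kernel means and width rather than the paper's settings:

```python
# Hedged sketch of KNRM-style Gaussian kernel pooling over a
# query-document similarity matrix. Kernel means (mus) and sigma are
# illustrative; the resulting feature vector would be fed to LightGBM
# as the classifier, as the abstract describes.
import math

def kernel_pooling(sim_matrix, mus=(-0.5, 0.0, 0.5, 1.0), sigma=0.1):
    """One soft-match feature per kernel, log-summed over query terms."""
    features = []
    for mu in mus:
        log_sum = 0.0
        for row in sim_matrix:                      # one row per query term
            soft_count = sum(math.exp(-(s - mu) ** 2 / (2 * sigma ** 2))
                             for s in row)          # soft matches near level mu
            log_sum += math.log(soft_count + 1e-10) # avoid log(0)
        features.append(log_sum)
    return features

# Example: 2 query terms x 3 document terms of cosine similarities.
sims = [[1.0, 0.2, 0.1],
        [0.9, 0.3, 0.0]]
feats = kernel_pooling(sims)   # one feature per kernel
```

The kernel centered at 1.0 responds to exact matches, while lower-centered kernels capture progressively weaker soft matches; the K-dimensional feature vector is then passed to the LightGBM classifier.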