Using two data sources, WOS (Web of Science) and Wikipedia, this paper performs word-frequency statistics and text-classification analysis on big-data-related content, identifies the consensus and differences between the two sources on big data topics, and further distills the topic categories of the big data field. Shared categories include the overall perspective, the technical level, the application level, and entities and activities; finer-grained topics include data and data sources, big data processing and analysis techniques, big data systems and applications, promotion by countries, regions, and enterprises, discussions of society and people, and changes in industries and disciplines. Finally, the paper draws on related data to discuss the research frontiers of the big data field.
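The word-frequency step described above can be sketched in a few lines. The documents and stopword list below are hypothetical stand-ins for the WOS and Wikipedia corpora; the paper's actual tokenization and stopword choices are not given in the abstract.

```python
from collections import Counter
import re

def term_frequencies(documents, stopwords=frozenset({"the", "of", "and", "in", "for"})):
    """Count how often each term appears across a list of documents."""
    counts = Counter()
    for doc in documents:
        # Lowercase and keep alphabetic runs as tokens (a simplification).
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return counts

# Toy snippets standing in for WOS abstracts / Wikipedia articles.
docs = [
    "Big data processing and analysis techniques for large data sources",
    "Big data systems and applications in industry",
]
print(term_frequencies(docs).most_common(2))  # [('data', 3), ('big', 2)]
```

Ranking the resulting counts is what surfaces the shared high-frequency topic terms that the classification step then groups into categories.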
The present paper describes the use of free online language resources for translating and expanding queries in CLIR (cross-language information retrieval). In a previous study, we proposed a method in which queries were translated by two machine translation systems on the Language Grid. The queries were then expanded using an online dictionary to translate compound words or word phrases. A concept base was used to compare back-translation words with the original query in order to delete mistranslated words. To evaluate the proposed method, we constructed a CLIR system and used the science documents of the NTCIR-1 dataset. The proposed method achieved high precision. However, proper nouns (names of people and places) appear infrequently in science documents. In information retrieval, proper nouns present unique problems. Since proper nouns are usually unknown words, they are difficult to find in monolingual dictionaries, not to mention bilingual dictionaries. Furthermore, the user's initial query is not always the best description of the desired information. To solve this problem and create a better query representation, query expansion is often proposed as a solution. Wikipedia was used to translate compound words or word phrases. It was also used to expand queries together with a concept base. The NTCIR-1 and NTCIR-6 datasets were used to evaluate the proposed method. The CLIR system implementing the proposed method achieved a high rate of precision, and the proposed system ranked higher than the NTCIR-1 and NTCIR-6 participating systems.
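The back-translation filter described above can be illustrated with a small sketch. The dictionaries and similarity table below are toy stand-ins for the two MT systems and the concept base (the abstract does not specify their interfaces), and the threshold is an assumed parameter.

```python
# Toy EN -> JA and JA -> EN dictionaries standing in for the MT systems.
FORWARD = {"retrieval": "検索", "language": "言語"}
BACKWARD = {"検索": "search", "言語": "language"}

# Toy concept-base similarity: 1.0 for identical words, a fixed score for
# listed related pairs, 0.0 otherwise.
RELATED = {("retrieval", "search"): 0.8}

def similarity(a, b):
    if a == b:
        return 1.0
    return RELATED.get((a, b), RELATED.get((b, a), 0.0))

def filter_translations(query_terms, threshold=0.5):
    """Keep a translated term only if its back translation is close enough
    to some term of the original query under the concept-base similarity."""
    kept = []
    for term in query_terms:
        translated = FORWARD.get(term)
        if translated is None:
            continue
        back = BACKWARD.get(translated, "")
        if max(similarity(back, q) for q in query_terms) >= threshold:
            kept.append(translated)
    return kept

print(filter_translations(["retrieval", "language"]))  # ['検索', '言語']
```

A translation whose back translation drifts too far from every original query term would fall below the threshold and be discarded as a likely mistranslation.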
Wikipedia's existing search module relies on keyword matching, which makes search relatively inefficient. To improve the efficiency of knowledge acquisition from Wikipedia, this paper proposes TDL (Term Distance based on Linkage), a link-analysis-based term-distance algorithm. Using a scalable computational model, TDL discovers term clusters by analyzing the internal link structure, and introduces ranking and recommendation mechanisms. Experiments on a May 2009 Wikipedia snapshot show that TDL effectively improves the accuracy of Wikipedia knowledge retrieval, and user evaluation confirms that the TDL algorithm can improve user-intent recognition by up to 7%.
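The abstract does not spell out the TDL formula, so the sketch below uses a common link-based proxy for term distance: one minus the Jaccard similarity of two articles' internal link sets. The link sets are toy stand-ins for a Wikipedia snapshot.

```python
# Toy internal-link sets standing in for a Wikipedia snapshot.
LINKS = {
    "Information retrieval": {"Search engine", "Index", "Query"},
    "Web search engine":     {"Search engine", "Index", "Crawler"},
    "Botany":                {"Plant", "Biology"},
}

def term_distance(a, b):
    """Distance in [0, 1]: 0 for identical link sets, 1 for disjoint ones."""
    la, lb = LINKS[a], LINKS[b]
    if not la and not lb:
        return 1.0
    return 1.0 - len(la & lb) / len(la | lb)

print(term_distance("Information retrieval", "Web search engine"))  # 0.5
print(term_distance("Information retrieval", "Botany"))             # 1.0
```

Clustering terms whose pairwise distances fall below a cutoff yields the term clusters that a ranking and recommendation layer can then exploit, which is the general shape of what TDL does over Wikipedia's link graph.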