Journal Articles
2,080 articles found
History and evaluation of national-scale geochemical data sets for the United States (Citations: 8)
1
Authors: David B. Smith, Steven M. Smith, John D. Horton. Geoscience Frontiers, SCIE CAS CSCD, 2013, No. 2, pp. 167-183 (17 pages)
Six national-scale, or near national-scale, geochemical data sets for soils or stream sediments exist for the United States. The earliest of these, here termed the 'Shacklette' data set, was generated by a U.S. Geological Survey (USGS) project conducted from 1961 to 1975. This project used soil collected from a depth of about 20 cm as the sampling medium at 1323 sites throughout the conterminous U.S. The National Uranium Resource Evaluation Hydrogeochemical and Stream Sediment Reconnaissance (NURE-HSSR) Program of the U.S. Department of Energy was conducted from 1975 to 1984 and collected either stream sediments, lake sediments, or soils at more than 378,000 sites in both the conterminous U.S. and Alaska. The sampled area represented about 65% of the nation. The Natural Resources Conservation Service (NRCS), from 1978 to 1982, collected samples from multiple soil horizons at sites within the major crop-growing regions of the conterminous U.S. This data set contains analyses of more than 3000 samples. The National Geochemical Survey, a USGS project conducted from 1997 to 2009, used a subset of the NURE-HSSR archival samples as its starting point and then collected primarily stream sediments, with occasional soils, in the parts of the U.S. not covered by the NURE-HSSR Program. This data set contains chemical analyses for more than 70,000 samples. The USGS, in collaboration with the Mexican Geological Survey and the Geological Survey of Canada, initiated soil sampling for the North American Soil Geochemical Landscapes Project in 2007. Sampling of three horizons or depths at more than 4800 sites in the U.S. was completed in 2010, and chemical analyses are currently ongoing. The NRCS initiated a project in the 1990s to analyze the various soil horizons from selected pedons throughout the U.S. This data set currently contains data from more than 1400 sites. This paper (1) discusses each data set in terms of its purpose, sample collection protocols, and analytical methods; and (2) evaluates each data set in terms of its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.
Keywords: geochemical mapping; national-scale geochemical data; geochemical baselines; United States
Fuzzy data envelopment analysis approach based on sample decision making units (Citations: 11)
2
Authors: Muren, Zhanxin Ma, Wei Cui. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2012, No. 3, pp. 399-407 (9 pages)
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, several deficiencies remain, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
Keywords: fuzzy mathematical programming; sample decision making unit; fuzzy data envelopment analysis; efficiency; α-cut
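In the single-input, single-output special case, the CCR efficiency that DEA computes reduces to a ratio comparison and needs no solver: each DMU's output-to-input ratio is divided by the best ratio in the sample. A minimal sketch with crisp (non-fuzzy) illustrative data, not the paper's fuzzy sample-DMU model:

```python
def dea_efficiency(inputs, outputs):
    """CCR efficiency for single-input/single-output DMUs:
    each DMU's output/input ratio divided by the best ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Four hypothetical DMUs (crisp data, no fuzziness)
inputs = [2.0, 4.0, 3.0, 5.0]
outputs = [4.0, 6.0, 6.0, 5.0]
print(dea_efficiency(inputs, outputs))  # -> [1.0, 0.75, 1.0, 0.5]
```

With multiple inputs and outputs, each score instead comes from a linear program per DMU; fuzzy variants additionally propagate α-cut intervals through that program.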
Reconfigurable-System-on-Chip Implementation of Data Processing Units for Space Applications
3
Authors: 张宇宁, 常亮, 杨根庆, 李华旺. Transactions of Tianjin University, EI CAS, 2010, No. 4, pp. 270-274 (5 pages)
Application-specific data processing units (DPUs) are commonly adopted for operational control and data processing in space missions. To overcome the limitations of traditional radiation-hardened or fully commercial design approaches, a reconfigurable-system-on-chip (RSoC) solution based on state-of-the-art FPGAs is introduced. The flexibility and reliability of this approach are outlined, and the requirements for an enhanced RSoC design with in-flight reconfigurability for space applications are presented. This design has been demonstrated as an on-board computer prototype, providing an in-flight reconfigurable DPU design approach using integrated hardwired processors.
Keywords: RSoC; in-flight reconfigurability; spaceborne data processing unit
The Design and Implementation of the Data Buffer Unit in an Artificial Intelligence Computer ITM-1
4
Authors: 张晨曦. High Technology Letters, EI CAS, 1996, No. 2, pp. 55-58 (4 pages)
This paper describes the function, structure, and working status of the data buffer unit (DBU), one of the most important functional units in ITM-1. It also discusses the DBU's support for the multiprocessor system and the Prolog language.
Keywords: DBU; Prolog; CPU; ITM-1
Improving EGT sensing data anomaly detection of aircraft auxiliary power unit (Citations: 8)
5
Authors: Liansheng LIU, Yu PENG, Lulu WANG, Yu DONG, Datong LIU, Qing GUO. Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2020, No. 2, pp. 448-455 (8 pages)
The reliability of the on-wing aircraft Auxiliary Power Unit (APU) largely determines the cost and comfort of flight. The most important function of the APU is to help start the main engines by providing compressed air. In particular, after a sudden shutdown in the air, the APU can offer additional thrust for landing. Therefore, its condition monitoring has drawn much attention from academia and industry. Among the on-wing sensing data that can reflect its condition, Exhaust Gas Temperature (EGT) is one of the most important parameters. To ensure the reliability of EGT, a data-driven anomaly detection framework for EGT sensing data is proposed based on Gaussian Process Regression and Kernel Principal Component Analysis. The cases of one-dimensional and two-dimensional input data for EGT anomaly detection are considered, respectively. Cross-validation experiments are carried out using real APU condition data provided by the China Southern Airlines Company Limited Shenyang Maintenance Base. The anomalous stuck condition of EGT sensing data is also detected. Experimental results show that the proposed EGT sensing data anomaly detection method achieves better false positive ratio, false negative ratio, and accuracy.
Keywords: anomaly detection; auxiliary power unit; condition-based maintenance; data-driven framework; exhaust gas temperature
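As a much simpler stand-in for the paper's GPR/KPCA pipeline, a trailing-window z-score check illustrates the basic idea of sensing-data anomaly detection: flag a reading that departs sharply from the recent trend. The EGT-like trace and thresholds below are invented:

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points whose residual from the trailing-window mean exceeds
    `threshold` times the window's standard deviation. A crude stand-in
    for regression-based anomaly detection, not the paper's GPR/KPCA."""
    flags = []
    for i, v in enumerate(series):
        if i < window:
            flags.append(False)  # not enough history yet
            continue
        w = series[i - window:i]
        mu = statistics.mean(w)
        sd = statistics.stdev(w) or 1e-9  # guard against a flat window
        flags.append(abs(v - mu) > threshold * sd)
    return flags

# Synthetic EGT-like trace with one injected spike at index 8
egt = [500, 501, 499, 500, 502, 501, 500, 499, 620, 500]
print([i for i, f in enumerate(detect_anomalies(egt)) if f])  # -> [8]
```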
Using Genetic Algorithm as Test Data Generator for Stored PL/SQL Program Units (Citations: 1)
6
Authors: Mohammad A. Alshraideh, Basel A. Mahafzah, Hamzeh S. Eyal Salman, Imad Salah. Journal of Software Engineering and Applications, 2013, No. 2, pp. 65-73 (9 pages)
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and to hide the complexity of a specific operation behind a name. It also acts as an interface between the SQL database and the developer. It is therefore important to test these modules. In this paper, a new genetic algorithm (GA) is used as a search technique to find the test data required, according to branch coverage criteria, to test stored PL/SQL program units. The experimental results show that full coverage was not achieved: the test target in some branches is not reached, and the coverage percentage is 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for that branch.
Keywords: genetic algorithms; SQL; stored program units; test data; structural testing; SQL exceptions
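The branch-coverage search can be illustrated with a toy genetic algorithm over integer inputs, finished by a greedy local-search step (a simple memetic touch, added here so the sketch reliably terminates at the target). The target predicate and all parameters are invented for illustration and are unrelated to the paper's PL/SQL subjects:

```python
import random

def ga_search(branch_distance, lo=0, hi=10000, pop_size=20,
              generations=100, seed=1):
    """Evolve an integer test input minimizing a branch-distance
    function (0 means the target branch is taken)."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)          # elitist selection
        if branch_distance(pop[0]) == 0:
            break
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2               # crossover: midpoint
            if rng.random() < 0.3:             # mutation: small step
                child += rng.randint(-10, 10)
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    best = min(pop, key=branch_distance)
    while branch_distance(best) > 0:           # greedy descent to the target
        step = min((best - 1, best + 1), key=branch_distance)
        if branch_distance(step) >= branch_distance(best):
            break                              # no further local improvement
        best = step
    return best

# Target branch: IF x = 4242 THEN ... ; branch distance is |x - 4242|
print(ga_search(lambda x: abs(x - 4242)))  # -> 4242
```

A branch whose predicate depends on table contents gives the search a flat fitness landscape, which is exactly the failure mode the abstract reports.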
An Integrated Method of Data Mining and Flow Unit Identification for Typical Low Permeability Reservoir Prediction
7
Authors: Peng Yu. World Journal of Engineering and Technology, 2019, No. 1, pp. 122-128 (7 pages)
With the development of oilfield exploration and production, research on continental oil and gas reservoirs has been gradually refined, and offshore exploration targets have entered the stage of small sand bodies, small fault blocks, complex structures, low permeability, and various heterogeneous geological bodies. Marine oil and gas development will thus inevitably enter a complicated-reservoir stage, and the corresponding assessment technologies, engineering measures, and exploration methods should be designed accordingly. Studying the hydraulic flow units of low-permeability reservoirs in offshore oilfields has practical significance for assessing connectivity and remaining oil distribution. An integrated method combining data mining and flow unit identification was applied to flow unit prediction in a low-permeability reservoir; the predicted results were compared with those of a mature commercial system for verification. This strategy increases accuracy by selecting the best prediction result, and the computing system can provide more accurate geological information for reservoir characterization.
Keywords: low permeability reservoir; offshore oilfield; hydraulic flow unit; flow unit identification; data mining
Clastic compaction unit classification based on clay content and integrated compaction recovery using well and seismic data (Citations: 1)
8
Authors: Zhong Hong, Ming-Jun Su, Hua-Qing Liu, Gai Gao. Petroleum Science, SCIE CAS CSCD, 2016, No. 4, pp. 685-697 (13 pages)
Compaction correction is a key part of paleo-geomorphic recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify the lithologies as sandstone and mudstone and undertake separate porosity-depth compaction modeling. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and in such schemes the precision of the compaction recovery is inadequate. To improve this precision, a depth compaction model involving both porosity and clay content is proposed. A clastic lithological compaction unit classification method, based on clay content, is designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
Keywords: compaction recovery; porosity-clay content-depth compaction model; classification of lithological compaction units; well and seismic data integrated compaction recovery technology
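The clay-content-dependent porosity-depth idea can be sketched with an Athy-type exponential law whose surface porosity and compaction coefficient both scale with clay fraction. The parameter values below are illustrative placeholders, not the paper's calibrated model:

```python
import math

def porosity(depth_m, clay_fraction):
    """Athy-type porosity-depth model phi(z) = phi0 * exp(-c * z),
    with phi0 and c varying linearly with clay content.
    All coefficients are illustrative, not from the paper."""
    phi0 = 0.40 + 0.30 * clay_fraction       # clay-rich rock starts more porous
    c = 0.00045 + 0.00025 * clay_fraction    # ...and compacts faster (1/m)
    return phi0 * math.exp(-c * depth_m)

# Same burial depth, three lithologies distinguished by clay content
for clay in (0.0, 0.5, 1.0):
    print(f"clay={clay:.1f}: porosity at 2000 m = {porosity(2000, clay):.3f}")
```

Separate curves per compaction unit, rather than one sand curve and one mud curve, is what the classification scheme buys during decompaction.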
An Empirical Study of Consumption Structure in the Nine "Pan-Pearl River Delta" Provinces: A Quantitative Analysis Based on Panel Data
9
Authors: 邓淇中, 龙宁. Journal of Hunan Institute of Engineering (Social Science Edition), 2007, No. 4, pp. 5-8 and 25 (5 pages)
Based on a preliminary study of consumption patterns in the nine "Pan-Pearl River Delta" provinces, their influencing factors, and their causes, two econometric models are constructed for further empirical analysis: the first uses unit root theory to test whether consumption expenditure in the region has a long-run equilibrium relationship; the second uses a panel data model to estimate differences in consumption structure within the region, with the aim of promoting balanced and sustainable economic development across the whole Pan-Pearl River Delta.
Keywords: panel data; unit root; fixed effects
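The unit root idea behind the first model can be shown in a few lines: regress the first difference on the lagged level and inspect the slope. A no-constant Dickey-Fuller slope on two tiny noise-free series (a toy sketch, not a full panel unit root test with critical values):

```python
def df_slope(y):
    """Slope rho of the no-constant Dickey-Fuller regression
    dy_t = rho * y_{t-1} + e_t. rho near 0 is consistent with a unit
    root (non-stationary series); clearly negative rho is consistent
    with a stationary, mean-reverting series."""
    x = y[:-1]
    dy = [b - a for a, b in zip(y[:-1], y[1:])]
    return sum(xi * di for xi, di in zip(x, dy)) / sum(xi * xi for xi in x)

# Noise-free AR(1) with coefficient 0.5: rho = 0.5 - 1 = -0.5 (stationary)
print(df_slope([1.0, 0.5, 0.25, 0.125]))   # -> -0.5
# Constant series (zero increments): rho = 0 (unit root not rejected)
print(df_slope([1.0, 1.0, 1.0, 1.0]))      # -> 0.0
```

Panel versions (e.g., Levin-Lin-Chu or IPS style tests) pool or average such regressions across the provinces.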
Research and Application of Unit Testing Techniques Based on the Feed4JUnit Framework (Citations: 1)
10
Authors: 杨鹏. Software Engineer, 2014, No. 7, pp. 25-27 (3 pages)
Software testing has always played a very important role in software quality control. JUnit is a widely used Java unit testing framework for writing and running repeatable tests of Java code. Feed4JUnit is an open-source extension of JUnit; with the annotations it provides, users can conveniently keep test data in files or other data sources. This paper analyzes how to perform unit testing with Feed4JUnit and demonstrates, through a real development example, tests that separate data from code.
Keywords: Feed4JUnit framework; unit testing; data sources
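The data-from-external-source pattern that Feed4JUnit adds to JUnit looks like this in Python's unittest (an analogue, not Feed4JUnit itself); `subTest` plays the role of the per-row test invocation, and the in-memory CSV stands in for an external file:

```python
import csv
import io
import unittest

# Test data lives outside the test code, as Feed4JUnit encourages;
# an in-memory CSV stands in for an external file or data source.
CSV_DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

def add(a, b):
    """The unit under test."""
    return a + b

class DataDrivenAddTest(unittest.TestCase):
    def test_add_from_csv(self):
        for row in csv.DictReader(io.StringIO(CSV_DATA)):
            with self.subTest(row=row):
                self.assertEqual(
                    add(int(row["a"]), int(row["b"])), int(row["expected"]))
```

Run with `python -m unittest <file>`; adding a case means adding a CSV row, not a test method.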
Divergences and Strategy Choices in Cross-Border Data Transfer Cooperation between China, the EU, and the US from a Game-Theoretic Perspective
11
Authors: 殷维, 陈星宏. Journal of Intelligence, Peking University Core Journal, 2026, No. 2, pp. 73-81 (9 pages)
This paper analyzes the commonalities and divergences among the Chinese, US, and EU cross-border data transfer models and explores China's optimal strategies in its games with the EU and the US, providing theoretical support for optimizing China's data governance paradigm, balancing data security against the benefits of data flows, and pursuing cross-border data transfer cooperation. Comparative analysis is used to examine the diverging paths of the three models, and a game-theoretic model is constructed to derive the equilibrium strategies of China's interactions with the US and the EU. The model results indicate a stable basis for China-EU cooperation in data security, while China-US data flows hinge on avoiding a "war of attrition" and managing risk. To optimize cooperation paths in cross-border data transfer and stimulate the digital economy, China should, domestically, improve its cross-border data governance system, explore resilient governance through data export regulation pilots, and modernize technical standards and regulatory tools; externally, it should pursue differentiated international cooperation: advancing China-EU mutual recognition of rules on the basis of shared security consensus, regulating China-US bilateral interactions around risk control, and expanding development-oriented digital cooperation with Global South countries.
Keywords: cross-border data flows; cross-border data transfer cooperation; data security; data governance; data legislation; game theory; United States; European Union
A Hybrid Data- and Model-Driven Fast Unit Commitment Method with Improved Credibility
12
Authors: 王文烨, 冯川, 管昱翔, 马文浩, 车亮. Automation of Electric Power Systems, Peking University Core Journal, 2026, No. 1, pp. 74-85 (12 pages)
As the generation mix and grid topology of new-type power systems grow more complex and the numbers of nodes and units increase, traditional optimization methods for solving security-constrained unit commitment (SCUC) models face the curse of dimensionality and slow solution times. Data-driven decision methods can solve SCUC models quickly, but their lack of interpretability makes the results unusable. To address this, a hybrid data- and model-driven fast unit commitment method with improved credibility is proposed. First, a deep reinforcement learning (DRL) based SCUC model quickly pre-solves the unit on/off decisions. Then, a credibility evaluation system combining behavior-level and policy-level DRL interpretability indicators identifies high-credibility on/off results, enhancing the interpretability of the decisions. Finally, a hybrid data-model SCUC formulation achieves fast solution, with low-credibility decisions optimized and adjusted. Simulations on a 748-node provincial power grid system show that the proposed method solves SCUC quickly while enhancing the interpretability of the unit on/off decisions.
Keywords: unit commitment; deep reinforcement learning; data-driven; interpretability; credibility
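Stripped of networks, security constraints, and learning, the unit commitment problem itself is easy to state: pick an on/off status per unit so that committed capacity covers demand at minimum fixed-plus-dispatch cost. A brute-force sketch for a hypothetical three-unit, single-hour case (enumeration over 2^n statuses, which is exactly why larger systems need the faster solvers the abstract discusses; all numbers are invented):

```python
from itertools import product

# Hypothetical 3-unit system: (capacity MW, fixed cost, cost per MWh)
UNITS = [(100, 500, 20.0), (80, 300, 30.0), (50, 100, 50.0)]

def commit(demand):
    """Brute-force single-hour unit commitment: try every on/off
    combination, dispatch the cheapest committed units first, and keep
    the feasible combination with the lowest total cost."""
    best = None
    for status in product((0, 1), repeat=len(UNITS)):
        on = [u for u, s in zip(UNITS, status) if s]
        if sum(cap for cap, _, _ in on) < demand:
            continue  # committed capacity cannot cover demand
        cost = sum(fixed for _, fixed, _ in on)
        remaining = demand
        for cap, _, marginal in sorted(on, key=lambda u: u[2]):
            gen = min(cap, remaining)  # merit-order dispatch
            cost += gen * marginal
            remaining -= gen
        if best is None or cost < best[0]:
            best = (cost, status)
    return best

print(commit(120))  # -> (3400.0, (1, 1, 0)): commit units 1 and 2
```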
Application of Data-Driven Testing in the NUnit Framework
13
Authors: 王敏, 陈亚光. Microcomputer & Its Applications, 2012, No. 22, pp. 10-12 and 18 (4 pages)
To work around the fact that the unit testing tool NUnit does not itself support data-driven testing, this paper proposes a way to implement it within the NUnit framework. Basic information about the test data used by a test class is configured in an INI file, while input data and expected results are stored in an Excel file. A method marked with the [TestFixtureSetUp] attribute dynamically reads the basic information from the INI file, uses it to read the test data from the Excel file, and stores the data in a user-defined array of structs for use by the test methods. This approach effectively separates test data from test scripts, reduces script maintenance effort, and improves testing efficiency.
Keywords: unit testing; NUnit framework; test scripts; test data
A Comparative Study of Chinese and British Scientific Data Repository Platforms Based on re3data (Citations: 2)
14
Authors: 袁烨, 陈媛媛. Digital Library Forum, CSSCI, 2024, No. 2, pp. 13-23 (11 pages)
Using re3data as the data source, 406 scientific data repositories in China and the United Kingdom are compared across five dimensions and eleven indicators: distribution characteristics, responsibility types, repository licensing, technical standards, and quality standards. Suggestions for the sustainable development of Chinese data repositories follow: connect widely with heterogeneous institutions at home and abroad, promote exchange and cooperation across disciplines, effectively expand the scope and types of repository licenses, optimize the application of technical standards, and increase the flexibility of metadata use.
Keywords: scientific data; data repository platforms; re3data; China; United Kingdom
Evaluation and Ranking DMUs in the Presence of Both Undesirable and Ordinal Factors in Data Envelopment Analysis (Citations: 3)
15
Authors: Zahra Aliakbarpoor, Mohammad Izadikhah. International Journal of Automation and Computing, EI, 2012, No. 6, pp. 609-615 (7 pages)
In the last decade, ranking units in data envelopment analysis (DEA) has attracted the interest of many DEA researchers, and a variety of models have been developed to rank units with multiple inputs and multiple outputs. These performance factors (inputs and outputs) are classified into two groups: desirable and undesirable. Obviously, undesirable factors in the production process should be reduced to improve performance. Also, some of these data may be known only in terms of ordinal relations. While the models developed in the past are interesting and meaningful, they did not consider undesirable and ordinal factors at the same time. This research develops an evaluation model and a ranking model that overcome some deficiencies of the earlier models. The paper incorporates undesirable and ordinal data in DEA and discusses the efficiency evaluation and ranking of decision making units (DMUs) with such data. For this purpose, the ordinal data are transformed into definite data, and each undesirable input and output is treated as a desirable output and input, respectively. Finally, an application that shows the capability of the proposed method is illustrated.
Keywords: data envelopment analysis (DEA); decision making units (DMUs); undesirable data; ordinal data; ranking
Green Architecture for Dense Home Area Networks Based on Radio-over-Fiber with Data Aggregation Approach
16
Authors: Mohd Sharil Abdullah, Mohd Adib Sarijari, Abdul Hadi Fikri Abdul Hamid, Norsheila Fisal, Anthony Lo, Rozeha A. Rashid, Sharifah Kamilah Syed Yusof. Journal of Electronic Science and Technology, CAS CSCD, 2016, No. 2, pp. 133-144 (12 pages)
High population density leads to crowded cities. The future city is envisaged to encompass a large-scale network with diverse applications and a massive number of interconnected heterogeneous wireless-enabled devices. Hence, green technology elements are crucial for designing sustainable and future-proof network architectures. They address the spectrum scarcity, high latency, interference, energy efficiency, and scalability problems that occur in dense and heterogeneous wireless networks, especially in the home area network (HAN). Radio-over-fiber (ROF) is a candidate technology for providing a global view of HAN activity, which can be leveraged to allocate orthogonal channels for wireless-enabled HAN device transmissions using a clustered-frequency-reuse approach. The proposed network architecture design focuses on enhancing network throughput and reducing average network communication latency by introducing a data aggregation unit (DAU). The performance results show that, with the DAU, average network communication latency drops significantly while network throughput is enhanced, compared with the existing ROF architecture without a DAU.
Keywords: data aggregation unit; dense home area network; green architecture; heterogeneous network; radio-over-fiber
Spatiotemporal measurement of urbanization levels based on multiscale units: A case study of the Bohai Rim Region in China (Citations: 1)
17
Authors: 赵敏, 程维明, 刘樯漪, 王楠. Journal of Geographical Sciences, SCIE CSCD, 2016, No. 5, pp. 531-548 (18 pages)
Urbanization is a complex process reflecting the growth, formation, and development of cities and their systems. Measuring regional urbanization levels over a long time series can support healthy and harmonious urban development. Based on DMSP/OLS nighttime light data, a human-computer interactive boundary correction method was used to obtain information about built-up urban areas in the Bohai Rim region from 1992 to 2012. A method was then proposed and applied to measure urbanization levels using four measurement scale units: administrative division, land-sea location, terrain feature, and geomorphological type. The conclusions are: 1) The extraction results based on DMSP/OLS nighttime light data showed substantial agreement on spatial patterns with those obtained from Landsat TM/ETM+ data. The overall accuracy was 97.70% on average, with an average Kappa of 0.79, indicating that the results extracted from DMSP/OLS nighttime light data were reliable and reflect the actual extent of built-up urban areas. 2) The Bohai Rim's urbanization level has increased significantly, with a high annual growth rate from 1998 to 2006. Areas with high urbanization levels have clearly relocated from capital cities to coastal cities. 3) The distribution of built-up urban areas showed a degree of zonal variation. The urbanization level was negatively correlated with relief amplitude and altitude; high urbanization was found on low-altitude platforms and low-altitude plains, with the gap between these two geomorphological types gradually narrowing. 4) The measurement method presented in this study is fast, convenient, and incorporates multiple perspectives. It offers directions for urban construction and provides reference values for measuring national-level urbanization.
Keywords: nighttime light data; urbanization level; multiscale units; Bohai Rim
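In its simplest form, built-up area extraction from nighttime lights reduces to thresholding the digital numbers of the raster; the paper's human-computer interactive boundary correction is not reproduced here, and the toy raster and threshold are invented:

```python
def extract_built_up(grid, threshold=50):
    """Label cells of a brightness raster as built-up (1) when the
    digital number exceeds `threshold`: the simplest thresholding
    approach for DMSP/OLS-style nighttime light images."""
    return [[1 if v > threshold else 0 for v in row] for row in grid]

# Toy 4x4 DMSP/OLS-like raster (digital numbers range 0-63)
dn = [
    [ 2,  5, 10,  4],
    [ 6, 55, 62,  8],
    [ 3, 58, 60,  7],
    [ 1,  4,  9,  2],
]
urban = extract_built_up(dn)
print(sum(map(sum, urban)))  # -> 4 cells classified as built-up
```

A fixed threshold over-detects bright non-urban sources and misses dim towns, which is why the paper corrects boundaries interactively against reference imagery.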
Research on an Intrusion Detection Model Based on CSBD-XGBoost
18
Authors: 李川, 韩斌, 王树鸿. Journal of Chengdu University of Information Technology, 2026, No. 1, pp. 47-54 (8 pages)
To address the low multi-class detection rates and high false alarm rates caused by data imbalance, feature redundancy, incomplete feature extraction, and single detection models in network intrusion detection, a multi-fusion intrusion detection model based on CSBD-XGBoost is proposed. The RUS and BorderlineSMOTE sampling algorithms are used to sample the majority and minority classes, respectively, to balance the dataset. Principal component analysis reduces the data dimensionality and eliminates feature redundancy. Spatial and temporal features are then extracted through a two-layer convolutional neural network and a self-attention plus bidirectional gated unit module, and the extracted features are passed to a deep neural network for initial classification. Finally, extreme gradient boosting refines the classification to improve performance. Experiments on the CIC-IDS2018, CICIDS2017, and NSL-KDD datasets achieve accuracies of 99.75%, 99.55%, and 98.66%; the model generalizes well, and its detection performance exceeds that of traditional machine learning and deep learning methods.
Keywords: BorderlineSMOTE; dimensionality reduction; convolutional neural network; bidirectional gated unit; extreme gradient boosting
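The RUS (random undersampling) half of the pipeline's balancing step can be sketched directly; the BorderlineSMOTE oversampling half is not reproduced here, and the flow records are placeholders:

```python
import random
from collections import Counter

def random_undersample(samples, labels, seed=42):
    """Randomly drop majority-class samples until every class has as
    many samples as the rarest class: the RUS step of the pipeline.
    (The complementary BorderlineSMOTE oversampling is not shown.)"""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n_min = min(len(v) for v in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        for s in rng.sample(group, n_min):  # keep a random subset per class
            out_s.append(s)
            out_y.append(y)
    return out_s, out_y

# 10 "normal" flows vs 2 "attack" flows: a toy imbalanced dataset
X = [[i] for i in range(12)]
y = ["normal"] * 10 + ["attack"] * 2
Xb, yb = random_undersample(X, y)
print(Counter(yb))  # both classes reduced to 2 samples each
```

In practice RUS is applied only to the majority classes and SMOTE variants grow the minority classes, so less information is discarded than in this all-the-way-down sketch.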
Lessons from the UK's Care.data Health Big Data Programme (Citations: 8)
19
Authors: 姚国章. Journal of Nanjing University of Posts and Telecommunications (Social Science Edition), 2017, No. 3, pp. 38-50 (13 pages)
The United Kingdom has a world-leading healthcare system and has accumulated rich and valuable data resources, creating favorable conditions for applying big data in healthcare; the Care.data programme was an important exploration of this. During implementation, however, the lack of necessary planning and piloting, the failure to properly manage the relationship between general practitioners and the programme, the absence of public understanding and support, improper commercial exploitation, and an incomplete technical solution ultimately led to the programme's failure, offering a cautionary lesson for the development and use of health big data in China. As China vigorously promotes big data applications in healthcare, it should draw lessons from the UK's experience to avoid unnecessary detours.
Keywords: big data; Care.data; healthcare; privacy protection; United Kingdom
Application of HtmlUnit in an Online Recruitment System
20
Authors: 陈免慧, 沈炜. Computer Knowledge and Technology, 2015, No. 7X, pp. 61-63 (3 pages)
This paper shows that the HtmlUnit framework can easily parse JavaScript pages, obtain data from dynamic Ajax pages, simulate browser interaction, and crawl data automatically, and that combined with XPath it can match, parse, and store job postings and candidates from various channels. Experiments demonstrate the application of HtmlUnit in a recruitment system.
Keywords: data collection; HtmlUnit; Ajax; recruitment system
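A rough Python analogue of the scraping step, using the standard-library HTMLParser to pull job titles out of markup. Unlike HtmlUnit it executes no JavaScript and fetches no Ajax data, and the page snippet and `job-title` class name are invented for illustration:

```python
from html.parser import HTMLParser

class JobTitleParser(HTMLParser):
    """Collect the text of every element carrying class="job-title"."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if ("class", "job-title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        self.in_title = False  # a close tag always ends the title element

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())

page = """
<ul>
  <li><span class="job-title">Java Developer</span><span>Shanghai</span></li>
  <li><span class="job-title">Test Engineer</span><span>Hangzhou</span></li>
</ul>
"""
p = JobTitleParser()
p.feed(page)
print(p.titles)  # -> ['Java Developer', 'Test Engineer']
```

HtmlUnit's XPath queries play the role of the class-attribute check here, and its JavaScript engine is what makes Ajax-rendered listings reachable at all.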