Journal Articles
3,316 articles found
1. Enhancing the data processing speed of a deep-learning-based three-dimensional single molecule localization algorithm (FD-DeepLoc) with a combination of feature compression and pipeline programming
Authors: Shuhao Guo, Jiaxun Lin, Yingjun Zhang, Zhen-Li Huang. Journal of Innovative Optical Health Sciences, 2025, Issue 2, pp. 150-160 (11 pages)
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated. Deep learning is a potential tool to solve this problem. As the state-of-the-art deep-learning-based 3D super-resolution localization algorithm, the recently reported FD-DeepLoc algorithm still falls short of the goal of online image processing, even though it has greatly improved data processing throughput. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of FD-DeepLoc to meet the online image processing requirements of 3D SMLM. The new algorithm uses feature compression to reduce the number of model parameters and combines it with pipeline programming to accelerate the inference process of the deep learning model. Results on simulated data show that Lite-FD-DeepLoc processes images about twice as fast as FD-DeepLoc with a slight decrease in localization accuracy, enabling real-time processing of 256×256 pixel images. Results on biological experimental data imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle-point engineering, and the global resolution of the reconstructed image is equivalent to or even better than that of FD-DeepLoc.
Keywords: real-time data processing; feature compression; pipeline programming
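The combination described above — shrink the per-frame features, then overlap preparation with inference — can be sketched generically. The sketch below is a minimal stand-in, not the paper's implementation: `compress_features` and `infer` are hypothetical placeholders for the real compression step and network forward pass, and a bounded queue provides the pipeline overlap.

```python
import queue
import threading

def compress_features(frame):
    # Hypothetical feature compression: keep every other value
    # to shrink the tensor fed to the localization stage.
    return frame[::2]

def infer(features):
    # Hypothetical stand-in for the network's forward pass.
    return sum(features)

def pipelined_localize(frames, capacity=4):
    """Overlap feature compression with inference via a bounded queue."""
    q = queue.Queue(maxsize=capacity)
    results = []

    def producer():
        for frame in frames:
            q.put(compress_features(frame))
        q.put(None)  # sentinel: no more frames

    t = threading.Thread(target=producer)
    t.start()
    while True:
        features = q.get()
        if features is None:
            break
        results.append(infer(features))
    t.join()
    return results

frames = [list(range(i, i + 8)) for i in range(5)]
print(pipelined_localize(frames))
```

With real stage costs, the producer's compression time is hidden behind the consumer's inference time, which is the source of the speedup the abstract reports.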
2. Enhancing Data Analysis and Automation: Integrating Python with Microsoft Excel for Non-Programmers
Authors: Osama Magdy Ali Mohamed Breik, Tarek Aly, Atef Tayh Nour El-Din Raslan, Mervat Gheith. Journal of Software Engineering and Applications, 2024, Issue 6, pp. 530-540 (11 pages)
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling advanced data analysis and automation without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates its ease of implementation, performance, and compatibility with different Excel versions.
Keywords: Python; End-User Approach; Microsoft Excel; data analysis; integration; spreadsheet programming; data visualization
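The core idea — letting a spreadsheet user invoke Python logic through familiar cell-formula syntax — can be sketched without any Excel dependency. Everything here is illustrative, not the paper's actual API: a toy formula grammar is parsed and dispatched to registered Python functions over a dictionary of cells.

```python
# A toy dispatcher in the spirit of the integration: spreadsheet-style
# formulas are routed to registered Python functions, so an add-in could
# let non-programmers call Python without writing code themselves.
import re
import statistics

REGISTRY = {
    "PYMEAN": statistics.mean,
    "PYSTDEV": statistics.pstdev,
}

def evaluate(formula, cells):
    """Evaluate '=PYMEAN(A1:A4)'-style formulas against a cell dict."""
    m = re.fullmatch(r"=(\w+)\((\w\d+):(\w\d+)\)", formula)
    if not m:
        raise ValueError("unsupported formula: " + formula)
    name, start, end = m.groups()
    col = start[0]
    lo, hi = int(start[1:]), int(end[1:])
    values = [cells[f"{col}{r}"] for r in range(lo, hi + 1)]
    return REGISTRY[name](values)

cells = {"A1": 2.0, "A2": 4.0, "A3": 6.0, "A4": 8.0}
print(evaluate("=PYMEAN(A1:A4)", cells))
```

A real integration would read the cell range from the workbook (e.g. via an add-in bridge) rather than a dictionary, but the dispatch pattern is the same.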
3. Scientific data products and the data pre-processing subsystem of the Chang'e-3 mission (Cited by 1)
Authors: Xu Tan, Jian-Jun Liu, Chun-Lai Li, Jian-Qing Feng, Xin Ren, Fen-Fei Wang, Wei Yan, Wei Zuo, Xiao-Qian Wang, Zhou-Bin Zhang. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2014, Issue 12, pp. 1682-1694 (13 pages)
The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. Eight instruments form the scientific payloads, with the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) carry out integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payloads in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. Its task is the pre-processing of raw data from the eight CE-3 instruments, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of probe azimuth and zenith angles and solar azimuth and zenith angles, and quality checks. These processes produce Level 0, Level 1, and Level 2 data. The computing platform of the subsystem is a high-performance computing cluster, comprising a real-time subsystem for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity, and the data products used for scientific studies.
Keywords: Moon: data products; methods: data pre-processing; space vehicles: instruments
4. Intelligent Data Pre-processing Model in Integrated Ocean Observing Network System
Authors: 韩华, 丁永生, 刘凤鸣. Journal of Donghua University (English Edition) (EI, CAS), 2009, Issue 5, pp. 499-502 (4 pages)
There are a number of dirty data in observation data sets derived from an integrated ocean observing network system. Thus, the data must be carefully and reasonably processed before they are used for forecasting or analysis. This paper proposes a data pre-processing model based on intelligent algorithms. Firstly, we introduce the integrated network platform of ocean observation. Next, the pre-processing model of data is presented and an intelligent data cleaning model is proposed. Based on fuzzy clustering, the Kohonen clustering network is improved to fulfill the parallel calculation of fuzzy c-means clustering. The proposed dynamic algorithm can automatically find the new clustering center with the updated sample data. The rapid and dynamic performance of the model makes it suitable for real-time calculation, and its efficiency and accuracy are proved by test results on observation data analysis.
Keywords: integrated ocean observing network; intelligent data pre-processing; data cleaning; fuzzy soft clustering
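The fuzzy c-means core that the paper parallelizes through an improved Kohonen network can be sketched in its plain, sequential form. This is a minimal 1-D illustration, not the paper's parallel or dynamically updated version; the initialization and data are assumptions for the sketch.

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=30):
    """Plain sequential fuzzy c-means on 1-D data (the clustering core
    that the paper accelerates via a parallel Kohonen network)."""
    pts = sorted(points)
    # Spread initial centers across the data range (simple deterministic init).
    centers = [pts[(i * (len(pts) - 1)) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in points:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                   for i in range(c)]
            u.append(row)
        # Center update: membership-weighted mean of the points.
        centers = [sum((u[k][i] ** m) * points[k] for k in range(len(points)))
                   / sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(fuzzy_c_means(data))  # two centers, one per cluster
```

In the cleaning setting, points with low maximum membership in every cluster would be flagged as dirty-data candidates.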
5. A Data-Semantic-Conflict-Based Multi-Truth Discovery Algorithm for a Programming Site (Cited by 2)
Authors: Haitao Xu, Haiwang Zhang, Qianqian Li, Tao Qin, Zhen Zhang. Computers, Materials & Continua (SCIE, EI), 2021, Issue 8, pp. 2681-2691 (11 pages)
With the extensive application of software collaborative development technology, the processing of code data generated in programming scenes has become a research hotspot. In the collaborative programming process, different users can submit code in a distributed way. Consistency of code grammar can be achieved through syntax constraints. However, when different users work on the same code in semantic development programming practices, the development factors of different users inevitably lead to data semantic conflicts. In this paper, the characteristics of code segment data in a programming scene are considered. A code sequence can be obtained by disassembling the code segment using lexical analysis. Combined with traditional solutions to the data conflict problem, the code sequence is taken as the declared-value object in data conflict resolution. Through similarity analysis of code sequence objects, the concept of the deviation degree between the declared-value object and the truth-value object is proposed, and a multi-truth discovery algorithm, the multiple truth discovery algorithm based on deviation (MTDD), is developed. Baseline methods such as Conflict Resolution on Heterogeneous Data, Voting-K, and MTRuths_Greedy are compared to verify the performance and precision of the proposed MTDD algorithm.
Keywords: data semantic conflict; multi-truth discovery; programming site
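The pipeline the abstract outlines — lexically tokenize each submitted code segment, measure similarity between the resulting sequences, and keep the claims whose deviation is low — can be sketched as follows. This is an illustrative analog, not MTDD itself: the Jaccard-based deviation and the threshold are assumptions standing in for the paper's deviation degree.

```python
import re

def tokens(code):
    """Lexical pass: reduce a code snippet to its set of tokens."""
    return set(re.findall(r"[A-Za-z_]\w*|[^\s\w]", code))

def deviation(a, b):
    """Deviation between two claimed snippets = 1 - Jaccard similarity."""
    ta, tb = tokens(a), tokens(b)
    return 1.0 - len(ta & tb) / len(ta | tb)

def discover_truths(claims, max_dev=0.4):
    """Keep every claim whose average deviation from the others is low;
    several claims may survive, hence *multi*-truth discovery."""
    truths = []
    for i, c in enumerate(claims):
        others = [o for j, o in enumerate(claims) if j != i]
        avg = sum(deviation(c, o) for o in others) / len(others)
        if avg <= max_dev:
            truths.append(c)
    return truths

claims = ["x = a + b", "x = a + b", "x= a+b", "y = a * q"]
print(discover_truths(claims))
```

Note that whitespace differences vanish after tokenization, so syntactically equivalent submissions support one another even when formatted differently.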
6. Modeling viscosity of methane, nitrogen, and hydrocarbon gas mixtures at ultra-high pressures and temperatures using group method of data handling and gene expression programming techniques (Cited by 1)
Authors: Farzaneh Rezaei, Saeed Jafari, Abdolhossein Hemmati-Sarapardeh, Amir H. Mohammadi. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 431-445 (15 pages)
Accurate gas viscosity determination is an important issue in the oil and gas industries. Experimental approaches for gas viscosity measurement are time-consuming, expensive, and hardly possible at high pressures and high temperatures (HPHT). In this study, a number of correlations were developed to estimate gas viscosity using group method of data handling (GMDH)-type neural networks and gene expression programming (GEP) techniques, with a large data set containing more than 3000 experimental data points for methane, nitrogen, and hydrocarbon gas mixtures. Unlike many viscosity correlations, the proposed ones can compute gas viscosity at pressures between 34 and 172 MPa and temperatures between 310 and 1300 K. A comparison was also performed between the results of these models and those of ten well-known models from the literature. Average absolute relative errors of the GMDH models were 4.23%, 0.64%, and 0.61% for hydrocarbon gas mixtures, methane, and nitrogen, respectively. In addition, graphical analyses indicate that GMDH predicts gas viscosity with higher accuracy than GEP at HPHT conditions. Using the leverage technique, valid, suspected, and outlier data points were identified. Finally, the trends of the gas viscosity models at different conditions were evaluated.
Keywords: gas viscosity; high pressure high temperature; group method of data handling; gene expression programming
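The headline metric above, average absolute relative error, is simple to state precisely. The sketch below defines it as a percentage; the viscosity values are illustrative numbers, not data from the paper.

```python
def aare(measured, predicted):
    """Average absolute relative error in percent:
    100/N * sum |(measured - predicted) / measured|."""
    assert len(measured) == len(predicted)
    n = len(measured)
    return 100.0 * sum(abs((m - p) / m) for m, p in zip(measured, predicted)) / n

# Illustrative viscosities (mPa·s), not the paper's data set:
mu_exp = [0.020, 0.035, 0.050]
mu_model = [0.021, 0.034, 0.049]
print(round(aare(mu_exp, mu_model), 2))
```

Because the error is relative, the metric weights low-viscosity points (dilute gases) as heavily as high-viscosity ones, which matters over a data set spanning 310-1300 K.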
7. Using Genetic Algorithm as Test Data Generator for Stored PL/SQL Program Units (Cited by 1)
Authors: Mohammad A. Alshraideh, Basel A. Mahafzah, Hamzeh S. Eyal Salman, Imad Salah. Journal of Software Engineering and Applications, 2013, Issue 2, pp. 65-73 (9 pages)
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and to hide the complexity of a specific operation behind a name. It also acts as an interface between the SQL database and the developer. It is therefore important to test these modules, which consist of procedures and functions. In this paper, a genetic algorithm (GA) is used as a search technique to find the test data required by branch coverage criteria for stored PL/SQL program units. The experimental results show that full coverage was not achieved: the test target in some branches was not reached and the coverage percentage was 98%. A problem arises when a target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for that branch.
Keywords: genetic algorithms; SQL; stored program units; test data; structural testing; SQL exceptions
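The search idea — evolve candidate inputs, scoring each by how far it is from taking the target branch — can be sketched for a single branch. This is a generic GA sketch under assumed parameters, not the paper's PL/SQL harness; the branch condition `x == 42` and the fitness function are hypothetical.

```python
import random

def branch_distance(x):
    """How far input x is from taking a hypothetical target branch
    `IF x = 42 THEN ...` in a stored procedure; 0 means it is hit."""
    return abs(x - 42)

def ga_search(pop_size=20, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            return pop[0]                    # covering test datum found
        parents = pop[: pop_size // 2]       # selection: best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2             # crossover: midpoint
            if rng.random() < 0.3:
                child += rng.randint(-3, 3)  # mutation: small jitter
            children.append(child)
        pop = parents + children
    return pop[0]

print(ga_search())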
8. Research on Big Data and Artificial Intelligence Aided Decision-Making Mechanism with the Applications on Video Website Homemade Program Innovation (Cited by 1)
Author: Ting Li. International Journal of Technology Management, 2016, Issue 3, pp. 21-23 (3 pages)
In this paper, we conduct research on big data and artificial intelligence aided decision-making mechanisms, with applications to the innovation of video websites' homemade programs. Homemade video programs give new-media platforms new possibilities for content production and give traditional media a breakthrough point in the Internet age. For a website, homemade video programs reduce the demand for copyright purchases, lower costs, avoid homogeneous competition, enrich advertising marketing, improve the profit model, organically combine content production and operations, and support strategic transformation. Building on these advantages, a website's homemade video programs can form a brand with strong influence. Our later research provides a literature survey of the related issues.
Keywords: big data; artificial intelligence; decision-making; video website; program innovation
9. Random Forests Algorithm Based Duplicate Detection in On-Site Programming Big Data Environment (Cited by 1)
Authors: Qianqian Li, Meng Li, Lei Guo, Zhen Zhang. Journal of Information Hiding and Privacy Protection, 2020, Issue 4, pp. 199-205 (7 pages)
On-site programming big data refers to the massive data generated in the process of software development, characterized by real-time arrival, complexity, and high processing difficulty. Data cleaning is therefore essential for on-site programming big data. Duplicate data detection is an important step in data cleaning, which can save storage resources and enhance data consistency. To address the shortcomings of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic, adaptive window size is proposed. The efficiency of the algorithm is improved by refining key selection, reducing the dimensionality of the data set, and using an adaptive variable-size sliding window. Experimental results show that the improved SNM algorithm exhibits better performance and achieves higher accuracy.
Keywords: on-site programming big data; duplicate record detection; random forests; adaptive sliding window
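The baseline the paper improves on, the classic Sorted Neighborhood Method, is easy to sketch: sort records by a key, slide a window, and compare only records that fall inside the same window. The sketch below uses a fixed window and a plain token-overlap similarity as stand-ins; the paper's contribution replaces these with an adaptive window and a random-forest model.

```python
def sorted_neighborhood(records, key, window=3):
    """Classic SNM: sort by a key, then compare each record only with
    its (window - 1) successors in sorted order."""
    def similar(a, b):
        # Stand-in similarity: token-set Jaccard overlap above 0.5.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) > 0.5

    ordered = sorted(records, key=key)
    duplicates = []
    for i, rec in enumerate(ordered):
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):
                duplicates.append((rec, ordered[j]))
    return duplicates

records = [
    "alice smith portland",
    "bob jones seattle",
    "alice smith portland or",
    "carol wu denver",
]
print(sorted_neighborhood(records, key=lambda r: r[:5]))
```

The fixed window is the method's weak point: true duplicates sorted more than `window` positions apart are missed, which is exactly what an adaptive window size mitigates.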
10. Program of International Conference on Data-driven Discovery: When Data Science Meets Information Science (June 19-22, 2016, Beijing, China)
Journal of Data and Information Science, 2016, Issue 2, pp. 92-94 (3 pages)
Keywords: When Data Science Meets Information Science; Program of International Conference on Data-driven Discovery; June 19-22; Beijing; China
11. Optimizing Memory Access Efficiency in CUDA Kernel via Data Layout Technique
Authors: Neda Seifi, Abdullah Al-Mamun. Journal of Computer and Communications, 2024, Issue 5, pp. 124-139 (16 pages)
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel data layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT), this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
Keywords: data layout optimization; CUDA performance optimization; GPU memory optimization; dynamic programming; matrix multiplication; memory access pattern optimization in CUDA
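A common instance of the data-layout idea is the array-of-structures to structure-of-arrays transform: when parallel threads each read the same field of consecutive elements, SoA places those reads contiguously in memory, which is what lets a GPU coalesce them. The sketch below shows only the layout transform itself, in Python for illustration; it is not the paper's CUDA code, and the particle fields are assumed names.

```python
# Array-of-structures vs. structure-of-arrays. In AoS, element i's fields
# are adjacent; in SoA, field x of elements 0..n-1 is adjacent, so a warp
# reading everyone's x touches one contiguous run of memory.
def aos_to_soa(particles):
    """[(x, y, z), ...]  ->  {'x': [...], 'y': [...], 'z': [...]}"""
    xs, ys, zs = zip(*particles)
    return {"x": list(xs), "y": list(ys), "z": list(zs)}

aos = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
soa = aos_to_soa(aos)
print(soa["x"])  # all x-components now sit next to each other
```

In CUDA the same transform turns a strided access `p[i].x` into a unit-stride access `x[i]`, which is the locality gain the abstract describes.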
12. Contribution of the MERISE-Type Conceptual Data Model to the Construction of Monitoring and Evaluation Indicators of the Effectiveness of Training in Relation to the Needs of the Labor Market in the Republic of Congo
Authors: Roch Corneille Ngoubou, Basile Guy Richard Bossoto, Régis Babindamana. Open Journal of Applied Sciences, 2024, Issue 8, pp. 2187-2200 (14 pages)
This study proposes the use of the MERISE conceptual data model to create indicators for monitoring and evaluating the effectiveness of vocational training in the Republic of Congo. The importance of MERISE for structuring and analyzing data is underlined, as it enables the measurement of the adequacy between training and the needs of the labor market. The innovation of the study lies in the adaptation of the MERISE model to the local context, the development of innovative indicators, and the integration of a participatory approach including all relevant stakeholders. Contextual adaptation and local innovation: the study suggests adapting MERISE to the specific context of the Republic of Congo, considering the local particularities of the labor market. Development of innovative indicators and new measurement tools: it proposes creating indicators to assess skills matching and employer satisfaction, which are crucial for evaluating the effectiveness of vocational training. Participatory approach and inclusion of stakeholders: the study emphasizes actively involving training centers, employers, and recruitment agencies in the evaluation process; this participatory approach ensures that the perspectives of all stakeholders are considered, leading to more relevant and practical outcomes.
Using the MERISE model allows for:
• Rigorous data structuring, organization, and standardization: clearly defining entities and relationships facilitates data organization and standardization, crucial for effective data analysis.
• Facilitation of monitoring and analysis through relevant indicators: developing both quantitative and qualitative indicators helps measure the effectiveness of training in relation to the labor market, allowing for a comprehensive evaluation.
• Improved communication and a common language: by providing a common language for different stakeholders, MERISE enhances communication and collaboration, ensuring that all parties have a shared understanding.
The study's approach and contribution to existing research lie in:
• A structured theoretical and practical framework and a holistic approach: the study offers a structured framework for data collection and analysis, covering both quantitative and qualitative aspects, thus providing a comprehensive view of the training system.
• A reproducible methodology and international comparison: the proposed methodology can be replicated in other contexts, facilitating international comparison and the adoption of best practices.
• Extension of knowledge and a new perspective: by integrating a participatory approach and developing indicators adapted to local needs, the study extends existing research and offers new perspectives on vocational training evaluation.
Keywords: MERISE conceptual data model (MCD); monitoring indicators; evaluation of training effectiveness; training-employment adequacy; labor market; information systems analysis; adjustment of training programs; employability; professional skills
13. Teaching Reform of Big Data Analytics Courses for Economics and Management in the Digital Era: A Case Study of "Python-Based Big Data Analysis for Economics and Finance" (Cited by 2)
Authors: 吕一清, 吴云峰. 大数据 (Big Data), 2025, Issue 1, pp. 45-55 (11 pages)
Big data analytics courses for economics and management need teaching reforms suited to the digital era, in particular the integration of emerging technologies such as Python programming and big data analysis into the curriculum. A teaching study of the course "Python-Based Big Data Analysis for Economics and Finance" finds that the existing course is overly theoretical, relies on a single assessment method, and neglects the cultivation of students' comprehensive application abilities. A series of reforms was therefore carried out, including updating and optimizing the course content and innovating the teaching methods. Questionnaire surveys and assessment comparisons before and after the reform show that these measures effectively improved students' programming ability and their ability to solve practical problems. The reform scheme offers a reference for the reform of related courses in the digital era.
Keywords: Python programming; teaching reform; big data; economics and management
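The kind of small, hands-on Python exercise such a reformed course favors over pure theory might look like the following: computing simple returns and a moving average from a price series. The data and function names are illustrative, not material from the course.

```python
# Compute period-over-period simple returns from a price series.
def simple_returns(prices):
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

# n-period trailing moving average of the same series.
def moving_average(prices, n):
    return [sum(prices[i - n + 1 : i + 1]) / n
            for i in range(n - 1, len(prices))]

prices = [10.0, 10.5, 10.2, 10.8]
print([round(r, 4) for r in simple_returns(prices)])
print(moving_average(prices, 2))
```

Exercises like this connect a line of code directly to a concept students already know from finance, which is the practical orientation the reform emphasizes.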
14. Research on Intelligent Integrated Design Methods for the Whole Building Life Cycle (Cited by 1)
Authors: 郑方, 王振宇, 孟宇凡, 李静雅, 黄也桐. 世界建筑 (World Architecture), 2025, Issue 3, pp. 98-103 (6 pages)
To address the challenges that traditional architectural design methods face in complex public buildings, such as the scientific soundness of design decisions, this study develops an intelligent integrated design method oriented to the whole building life cycle and carries out a preliminary exploration and demonstration application along three dimensions: data standards, generative algorithms, and integrated platforms. The research compiles integrated design data standards spanning the whole building life cycle, develops generative design algorithms for sites, spaces, and facades, and builds a web-based SaaS integration platform together with an augmented reality data platform based on spatial computing. The aim is to promote data continuity and dynamic optimization across the building life cycle, improve the efficiency of integrated design collaboration, and support the intelligent transformation of architectural programming and architectural design.
Keywords: architectural programming; whole building life cycle; integrated design method; data continuity; generative design
15. A Performance Data Collection Method for Computing Software on Heterogeneous Systems
Authors: 顾蓓蓓, 邱霁岩, 王宁, 陈健, 迟学斌. 计算机研究与发展 (Journal of Computer Research and Development, PKU Core), 2025, Issue 9, pp. 2382-2395 (14 pages)
Supercomputing has evolved rapidly from traditional CPU clusters to heterogeneous platforms, and this shift in hardware poses major challenges for the tuning and performance evaluation of computing software. Mainstream international performance analysis tools for parallel programs generally have poor compatibility with the processors of Chinese domestic heterogeneous supercomputing systems, often require code instrumentation and recompilation, and offer limited accuracy for single-node performance data collection. To remedy these shortcomings, this paper proposes a floating-point performance data collection method for computing software on heterogeneous systems. A collection prototype was developed and verified on a validation platform of a domestic supercomputing system. The method effectively collects performance metrics on both single nodes and multiple nodes, is non-intrusive to the original program, requires neither modification of the monitored program's code nor instrumentation, and is highly general. Comparative experiments with three programs (rocHPL, Cannon, and mixbench), together with a monitoring study of performance data collection on a residual network (ResNet) program for artificial intelligence (AI) computing, show that the proposed method is accurate, meets the experimental expectations, provides a useful reference for program tuning, and is effective.
Keywords: heterogeneous system; performance metrics; floating-point data; collection program; performance evaluation
16. A Fast Neural-Branching Optimization Method for Coupled Power and Computing Networks Considering Computing-Power Demand Response
Authors: 张磊, 李然, 唐伦, 陈思捷, 赵世振, 苏福. 上海交通大学学报 (Journal of Shanghai Jiao Tong University, PKU Core), 2025, Issue 11, pp. 1592-1602, I0001-I0003 (14 pages)
The rapid development of data centers allows them to participate in power system dispatch as demand response resources: scheduling data centers' computing workloads across regions can save energy, reduce emissions, and cut costs. However, incorporating the demand response of data center computing resources into power system dispatch suffers from insufficient computation speed. This paper therefore proposes a fast neural-branching optimization method for coupled power and computing networks that accounts for computing-power demand response. First, a bi-level optimization model of the power-computing network considering computing-resource demand response is established. A graph convolutional neural network is then combined with the branch-and-bound method and applied to the bi-level model. Trained on historical data, the method quickly determines the ordering of branching variables and minimizes the number of iterations, significantly accelerating the solution of unit commitment with data center demand response. Its performance is verified in a simulation scenario of China's "Eastern Data, Western Computing" project: solution time is reduced by 39.1% on average compared with the pseudo-cost branching algorithm, by 38.1% compared with the commercial solver CPLEX, and by 13.5% compared with Extratrees, a machine-learning-based acceleration algorithm. In addition, when the method is used for intra-day dispatch, the coordinated dispatch frequency increases from once per hour to four times per hour, and the maximum 24-hour increase in accommodated wind energy reaches 17.42% of total wind generation.
Keywords: mixed-integer programming; unit commitment; demand response; data center; neural branching
17. Application and Practice of Data-Differentiated Computational Assignments in Engineering Course Training
Authors: 冷学礼, 宋占龙, 魏民, 袁学良. 电脑与信息技术 (Computer and Information Technology), 2025, Issue 5, pp. 141-144 (4 pages)
Computational assignments are fundamental for engineering students to master professional theory and design calculation skills, and using VBA programming to assign computational homework with differentiated data helps engineering programs cultivate rigorous and confident graduates. Modularized VBA code separates the document's runtime code from the teacher's problem-setting and solution code, so the teacher only needs to focus on implementing the core content, such as the data constraints of the problem statement and the solution procedure. This practice has a low technical barrier, a quick learning curve, and simple deployment, making it easy to popularize rapidly across engineering programs.
Keywords: computational assignments; data differentiation; Excel; VBA programming
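The data-differentiation idea is language-independent: derive a deterministic seed from each student's ID, so every student gets different but reproducible numbers for the same problem statement. Below is a sketch of that idea in Python rather than VBA, with hypothetical parameter names and ranges.

```python
# Each student ID deterministically seeds a random generator, so the
# generated assignment data differ between students but are reproducible
# for grading. Parameter names and ranges are illustrative only.
import hashlib
import random

def assignment_data(student_id):
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {
        "flow_rate": round(rng.uniform(1.0, 5.0), 2),  # kg/s
        "inlet_temp": rng.randint(20, 40),             # degrees C
    }

print(assignment_data("2023-001"))
print(assignment_data("2023-001") == assignment_data("2023-001"))
```

The same pattern maps directly onto VBA: hash the ID into a seed for `Rnd`, then generate the problem's numeric constraints from it.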
18. Design of a Meteorological Data Decoding Experiment for BeiDou High-Precision Positioning
Authors: 章迪, 郭际明, 周吕, 杨飞. 中国现代教育装备 (China Modern Educational Equipment), 2025, Issue 19, pp. 7-9, 13 (4 pages)
A problem-oriented, demand-driven experiment on decoding ERA5 meteorological data is designed. The experiment makes explicit the demand of BeiDou high-precision positioning for high-precision meteorological data, stimulating students' interest in solving the problem. To overcome the technical difficulty that Fortran cannot directly call ERA5's GRIB decoding tools, the experiment has students compile C/C++ code into a dynamic link library that a Fortran program then calls to perform the decoding, effectively cultivating students' practical skills and innovative thinking.
Keywords: decoding; meteorological data; mixed-language programming; experiment design
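The experiment's core pattern — compile C/C++ into a shared library and call it from another language through a declared foreign interface — can be shown compactly with Python's `ctypes` standing in for Fortran, and the standard C math library standing in for the GRIB decoder. This is an analogy for the mixed-language mechanism only, not the experiment's actual code.

```python
# Load a C shared library and declare the signature of one function,
# mirroring what the Fortran side does with an interface block for the
# C/C++ decoding DLL. libm's cos() is the stand-in exported function.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.cos.restype = ctypes.c_double      # declare return type
libm.cos.argtypes = [ctypes.c_double]   # declare argument types

print(libm.cos(0.0))
```

The essential steps are the same in Fortran: load (link against) the shared library, declare the foreign routine's argument and return types, then call it as if it were native.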
19. The Impact of Agricultural and Rural Big Data Policy on the Development of New-Quality Agricultural Productive Forces: An Empirical Analysis Based on Provincial Panel Data (Cited by 1)
Authors: 熊春林, 兰宁, 李漱. 重庆邮电大学学报(社会科学版) (Journal of Chongqing University of Posts and Telecommunications, Social Science Edition), 2025, Issue 4, pp. 148-161 (14 pages)
Agricultural and rural big data is a high-quality production factor for forming new-quality agricultural productive forces. Taking the Agricultural and Rural Big Data Pilot Program as a quasi-natural experiment and using panel data for 30 Chinese provinces (autonomous regions and municipalities) from 2005 to 2022, this paper applies propensity score matching with difference-in-differences (PSM-DID) to empirically test the effect of the policy on the development of new-quality agricultural productive forces and its transmission paths. The study finds that the policy significantly promotes the development of new-quality agricultural productive forces, acting through mediating variables such as agricultural socialized services, innovation and application of agricultural technology, and large-scale agricultural operation, with a degree of heterogeneity across regions, agricultural resource endowments, and project types. Across the three major geographic regions, the policy significantly promotes development in the eastern and western regions but not in the central region. In terms of resource endowments, the policy has a significant positive effect in major grain-consuming regions and grain production-consumption balance regions, but not in major grain-producing regions. In terms of project heterogeneity, three types of projects, namely agricultural data sharing, single-commodity big data construction, and agricultural and rural big data applications, significantly promote development, while market-oriented construction and operation mechanism projects do not. Policy top-level design and policy mechanisms should therefore be further improved, with measures adapted to local conditions and differentiated by category, to give full play to the policy's role in promoting new-quality agricultural productive forces and to provide strong momentum for high-quality agricultural development and the building of a strong agricultural country.
Keywords: Agricultural and Rural Big Data Pilot Program; new-quality agricultural productive forces; difference-in-differences method; agricultural socialized services
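The difference-in-differences logic behind the estimate is simplest in its 2x2 form: the policy effect is the treated group's before-after change minus the control group's before-after change, the latter serving as the counterfactual trend. The numbers below are illustrative, not the paper's data.

```python
# 2x2 difference-in-differences: (treated change) - (control change).
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical index of new-quality agricultural productivity:
effect = did(treat_pre=1.00, treat_post=1.30, ctrl_pre=1.00, ctrl_post=1.10)
print(effect)
```

The paper's PSM step matters before this subtraction: matching pilot provinces to observably similar non-pilot provinces makes the control group's trend a more credible counterfactual.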
20. Design of a CBAM-CNN-Based Detection and Localization Method for Load Redistribution Attacks in Cyber-Physical Power Systems
Authors: 陆玲霞, 马朝祥, 闫旻睿, 于淼. 实验技术与管理 (Experimental Technology and Management, PKU Core), 2025, Issue 6, pp. 78-89 (12 pages)
Load redistribution attacks are a special type of false data injection attack. For cyber-physical power systems, model-based methods struggle to detect and localize multiple types of load redistribution attacks, and data-driven detection and localization methods for such multi-type attacks have received little study. This paper therefore designs a detection and localization method based on a convolutional neural network with a convolutional block attention module (CBAM), built on a bi-level programming model. First, the information system of the cyber-physical power system is modeled, and three types of cyber-side load redistribution attack behaviors are identified. A bi-level programming model capturing the game between the attacker and the dispatch center operator is then established, and load redistribution attack data sets are generated for different attack scenarios. To detect and localize the different attack types, the problem is cast as multi-label classification: the convolutional structure of the network mines and learns neighborhood information in data with sparse labels, and the CBAM strengthens the network's attention to key information from both the channel and spatial perspectives, reducing the relatively high missed-detection rate and improving detection and localization performance. Simulation experiments on a 38-bus cyber-physical power system verify the effectiveness of the proposed method: compared with baseline methods, it achieves lower false-alarm and missed-detection rates for all three attack types and better overall detection and localization performance.
Keywords: cyber-physical power system; load redistribution attack; bi-level programming model; data-driven; convolutional block attention module; convolutional neural network