Funding: Financially supported by the Program for New Century Excellent Talents in University (No. NCET-09-0396), the National Science & Technology Key Projects of Numerical Control (No. 2012ZX04012-011), and the Fundamental Research Funds for the Central Universities (No. 2014-IV-016).
Abstract: A casting process CAD system is put forward to design and draw casting processes. Most current 2D casting process CAD systems are developed on top of a particular version of AutoCAD. However, the application of these systems in foundry enterprises is restricted by several deficiencies: they are overly dependent on AutoCAD, and part files in PDF format cannot be opened directly. To overcome these deficiencies, an innovative 2D casting process CAD system based on PDF and image format files is proposed for the first time, breaking through the traditional research and application notion of 2D casting process CAD based on AutoCAD. Several key technologies of the system, such as coordinate transformation, CAD interactive drawing, file storage, display of PDF and image format files, and image recognition, are described in detail. A practical system named HZCAD2D(PDF) was developed, which can design and draw the casting process directly on a part drawing in PDF format, without spending time redrawing the part in AutoCAD. Finally, taking two actual castings as examples, casting processes were drawn with the system, demonstrating that it can significantly shorten the casting process design cycle.
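The coordinate transformation mentioned above must reconcile PDF user space (points of 1/72 inch, origin at the bottom-left) with raster image coordinates (pixels, origin at the top-left). The abstract does not give the system's actual formulas; the following is a minimal sketch of this standard mapping, assuming a uniform DPI and an axis-aligned page.

```python
# Hypothetical sketch: converting between PDF page coordinates (points,
# origin bottom-left) and image pixel coordinates (origin top-left).
# `dpi` and `page_height_pt` are illustrative parameters, not from the paper.

def pdf_to_pixel(x_pt, y_pt, page_height_pt, dpi=150):
    """Map a PDF point (1/72 inch units) to a raster pixel coordinate."""
    scale = dpi / 72.0                     # points -> pixels
    px = x_pt * scale
    py = (page_height_pt - y_pt) * scale   # flip the y axis
    return px, py

def pixel_to_pdf(px, py, page_height_pt, dpi=150):
    """Inverse mapping: raster pixel back to PDF user space."""
    scale = dpi / 72.0
    return px / scale, page_height_pt - py / scale
```

A drawing layer built on such a pair of functions can place gating-system sketches on the rendered PDF while storing their geometry in page coordinates, so the drawing survives re-rendering at a different zoom level.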
Funding: Supported by the Research Fund of the National Key Laboratory of Computer Architecture under Grant No. CARCH201501 and the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing under Grant No. 2016A09.
Abstract: In the era of Big Data, a typical architecture for distributed real-time stream processing is the combination of Flume, Kafka, and Storm. As a distributed messaging system, Kafka offers horizontal scalability and high throughput, and is mainly deployed to address the speed mismatch between message producers and consumers. When using Kafka, data sent by producers must be received quickly and delivered to consumers quickly; the performance of Kafka is therefore critical to the performance of the whole stream processing system. In this paper, we propose an improved design for real-time stream processing systems, focusing on Kafka's data loading process. We use kafkacat to transfer data from the source directly into a Kafka topic, which reduces network transmission, and we utilize a memory file system to accelerate data loading, addressing the bottleneck and performance problems caused by disk I/O. Extensive experiments evaluating the performance show the superiority of the improved design.
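The direct-loading idea above amounts to reading the source once and forwarding records to the topic in batches, skipping the intermediate Flume hop. The abstract gives no code, so this is an illustrative stand-alone sketch: `send` is a placeholder for a real producer client (e.g., the send call of a Kafka producer library), not an API from the paper.

```python
# Illustrative sketch (not the authors' implementation): stream lines from a
# source into a topic in batches. `send` stands in for a real producer client.
import io

def load_to_topic(source, send, batch_size=3):
    """Read lines from `source` and forward them in batches via `send`."""
    batch = []
    count = 0
    for line in source:
        batch.append(line.rstrip("\n"))
        if len(batch) >= batch_size:
            send(batch)          # one network round-trip per batch
            count += len(batch)
            batch = []
    if batch:                    # flush the final partial batch
        send(batch)
        count += len(batch)
    return count

# Simulated run: collect batches into a list instead of a real topic.
sent = []
src = io.StringIO("a\nb\nc\nd\n")
n = load_to_topic(src, sent.append, batch_size=3)
```

Batching is what makes the direct transfer cheaper than per-record forwarding; the memory-file-system optimization in the paper attacks the complementary disk-I/O cost on the broker side.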
Abstract: Case-file backlogs were identified as one of the causal factors affecting the competitiveness of a forensic science laboratory (FSL). Backlogs represent case-files that remain unprocessed or unreported within a selected time interval (year, week, or month), leading to increased customer complaints, rework, cost of analysis, degradation of biological samples, etc. Case-file backlogging was quantified over three consecutive years (2014 to 2016) using the following parameters: case-files received and case-files processed, the difference of which gives case-files backlogged. A time interval had to be defined for a case-file to be regarded as backlogged (here, one week), the results of which can be translated into backlogged case-files per month or year. A data collection tool was established and used at three work stations (forensic chemistry, biology/DNA, and toxicology laboratories). The tool records the starting and ending dates of each time interval, into which the numbers of case-files received and processed were entered, followed by computing the backlogs. It was observed that case-files reported increased between 2014 and 2016, leading to a decrease in backlogged case-files. The annual percentage of case-files backlogged was highest for forensic toxicology. The highest number of case-files backlogged was observed for forensic chemistry, followed by forensic biology/DNA. The number of case-files backlogged per analyst per year was highest in 2014 and dropped continuously towards 2016, being comparatively higher in forensic biology/DNA and chemistry. Probability density functions (PDFs) and cumulative distribution functions (CDFs) of the backlog data indicated that a large number of backlogs created in previous weeks were eliminated. It was concluded that the effect of case-file backlogging on FSL competitiveness can be minimized by continued management effort in backlog elimination.
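The bookkeeping described above reduces to a simple recurrence: per interval, backlog = received − processed, accumulated over time. The numbers below are made up for illustration, and the assumption that a negative week first clears earlier backlog (rather than going below zero) is my reading of the elimination effect the abstract reports, not a formula from the paper.

```python
# Illustrative backlog bookkeeping; the weekly counts are invented.
received  = [10, 12,  9, 15]   # case-files received per week
processed = [ 8, 12, 11, 14]   # case-files processed per week

# Per-interval backlog: received minus processed.
weekly_backlog = [r - p for r, p in zip(received, processed)]

# Accumulated backlog; a week that processes more than it receives
# eliminates backlog carried over from previous weeks (floored at zero).
cumulative = []
total = 0
for b in weekly_backlog:
    total = max(0, total + b)
    cumulative.append(total)
```

The PDFs and CDFs mentioned in the abstract would then be fitted over many such weekly backlog observations per work station.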
Abstract: To address the file transfer time and cost problems in large engineering projects, a project file-management optimization method based on file workflows is proposed. First, an engineering-project file-management environment and a file workflow model with logical ordering are constructed, and file transfer and caching are analyzed. On this basis, the file-management optimization problem is modeled as a Markov process, and the state space, action space, and reward function are designed to jointly optimize the task completion time and caching cost of the file workflow. Second, a dueling double deep Q network (D3QN) is adopted to reduce training time and improve training efficiency. Simulation results verify the effectiveness of the proposed scheme for file transfer under different parameter configurations and show that it retains good optimization capability as the task volume grows.
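The two ingredients of D3QN can be stated compactly. The dueling head aggregates a state value and per-action advantages as Q(s,a) = V(s) + A(s,a) − mean_a A(s,a), and the double-DQN target selects the next action with the online network but evaluates it with the target network. The abstract gives no network details, so the sketch below shows only these two standard computations on plain lists, not the paper's architecture.

```python
# Standard D3QN building blocks, sketched without a neural-network library.

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: act with the online net, evaluate with the target net."""
    if done:
        return reward
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]
```

In the file-workflow setting, an action would correspond to a transfer/cache decision and the reward would trade off completion time against caching cost, per the joint objective described above.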
Funding: Supported by the "Suqian Talent" Xiongying Program for Educational Innovation Talents Project (Grant No. SQXY202431) and the National Key Research and Development Program of China (Grant No. 2017YFD0700100).
Abstract: Currently, knowledge-based systems are generally updated manually, resulting in long cycles, low efficiency, and difficulty in ensuring comprehensive and systematic supplementary content. To address this issue, an automatic updating method based on webpage information and user files is proposed for a combine harvester knowledge-based system. The knowledge storage structure of the electronic warehouse is analyzed, and the knowledge and data types are determined. A crawler designed to locate combine harvester-related information in target webpages realizes the acquisition and sorting of webpage knowledge and data. The types of files uploaded by users are set according to the knowledge base and data types; the content of user files is extracted and filtered, allowing the knowledge and data of the system to be user-defined. The organized knowledge and data are stored or updated in the knowledge base, achieving automatic updating of the knowledge-based system from webpage information and user files. Test results show that, with user-file updates, the knowledge and data of the system can be customized to user needs. After automatic updating from webpage information is triggered, the knowledge and data from the three webpages initially crawled and read are automatically updated to the knowledge base in 5.265 s. Automatic updating of knowledge and data can shorten the update cycle, maintain the effectiveness and practicability of the knowledge-based system, ensure the scientific and advanced nature of the intelligent design process, and provide technical models and methods for knowledge collection in similar knowledge-based systems.
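The update flow described above, filter crawled items for domain-related entries and then store or overwrite them in the knowledge base, can be sketched as follows. The keywords, the record shape, and the dict-backed store are all illustrative assumptions; the paper's actual schema is not given in the abstract.

```python
# Hypothetical sketch of the filter-then-upsert update flow. KEYWORDS and the
# item structure are invented for illustration, not taken from the paper.

KEYWORDS = ("combine harvester", "header", "threshing")

def filter_related(items):
    """Keep only items whose text mentions a combine harvester-related term."""
    return [i for i in items if any(k in i["text"].lower() for k in KEYWORDS)]

def upsert(kb, items):
    """Insert new entries or overwrite stale ones; return how many changed."""
    updated = 0
    for item in items:
        if kb.get(item["term"]) != item["text"]:
            kb[item["term"]] = item["text"]
            updated += 1
    return updated
```

Because `upsert` skips entries that are already current, re-running the crawler on unchanged webpages leaves the knowledge base untouched, which keeps repeated automatic updates cheap.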