The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. The eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) carry out integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payload in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. The task of the DPS is the pre-processing of raw data from the eight CE-3 instruments, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth and zenith angles and the solar azimuth and zenith angles, and quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem comprises a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and the data products that are used for scientific studies.
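Among the quantities listed, the solar zenith and azimuth angles are standard spherical-geometry computations. As a rough sketch of what such a calculation looks like (textbook Earth-style formulas with placeholder inputs, not the actual DPS code):

```python
import math

def solar_angles(lat_deg, declination_deg, hour_angle_deg):
    """Solar zenith and azimuth from site latitude, solar declination, and
    local hour angle (degrees). Textbook spherical-astronomy formulas;
    illustrative only -- the real DPS pipeline is not public."""
    lat = math.radians(lat_deg)
    dec = math.radians(declination_deg)
    ha = math.radians(hour_angle_deg)

    # cos(zenith) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(hour angle)
    cos_zen = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    zenith = math.acos(max(-1.0, min(1.0, cos_zen)))

    # azimuth measured from north, positive toward east
    az = math.atan2(-math.sin(ha) * math.cos(dec),
                    math.cos(lat) * math.sin(dec)
                    - math.sin(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(zenith), math.degrees(az) % 360.0

print(solar_angles(44.12, 10.0, 15.0))  # latitude ~44.12 deg N, made-up Sun geometry
```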
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated. Deep learning is a potential tool to solve this problem. FD-DeepLoc, the recently reported state-of-the-art deep-learning-based 3D super-resolution localization algorithm, still falls short of the goal of online image processing, even though it has greatly improved data processing throughput. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of the FD-DeepLoc algorithm to meet the online image processing requirements of 3D SMLM. The new algorithm uses a feature compression method to reduce the parameters of the model and combines it with pipeline programming to accelerate the inference process of the deep learning model. Results on simulated data show that the image processing speed of Lite-FD-DeepLoc is about twice that of FD-DeepLoc with a slight decrease in localization accuracy, enabling real-time processing of 256×256 pixel images. Results on biological experimental data imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle point engineering, and the global resolution of the reconstructed image is equivalent to or even better than that of the FD-DeepLoc algorithm. Funding: the Start-up Fund from Hainan University (No. KYQD(ZR)-20077).
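The pipelining idea, overlapping image pre-processing with model inference so the accelerator never waits for the next batch, can be sketched with a producer-consumer queue. This shows only the programming pattern; the real Lite-FD-DeepLoc model and its feature-compression layers are not reproduced, and both stage functions below are placeholders:

```python
import queue
import threading

def preprocess(frame):
    return frame  # placeholder: cropping/normalization would go here

def infer(batch):
    return [f"locs({b})" for b in batch]  # placeholder for the CNN forward pass

def pipeline(frames, batch_size=4):
    q = queue.Queue(maxsize=8)
    results = []

    def producer():
        batch = []
        for f in frames:
            batch.append(preprocess(f))
            if len(batch) == batch_size:
                q.put(batch)
                batch = []
        if batch:
            q.put(batch)
        q.put(None)  # sentinel: no more batches

    t = threading.Thread(target=producer)
    t.start()
    # inference on batch k overlaps with pre-processing of batch k+1
    while (batch := q.get()) is not None:
        results.extend(infer(batch))
    t.join()
    return results

print(pipeline(range(10)))
```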
Observation data sets derived from an integrated ocean observing network system contain a considerable amount of dirty data. Thus, the data must be carefully and reasonably processed before they are used for forecasting or analysis. This paper proposes a data pre-processing model based on intelligent algorithms. Firstly, we introduce the integrated network platform of ocean observation. Next, the pre-processing model of the data is presented, and an intelligent data cleaning model is proposed. Based on fuzzy clustering, the Kohonen clustering network is improved to fulfill the parallel calculation of fuzzy c-means clustering. The proposed dynamic algorithm can automatically find the new clustering center as sample data are updated. The rapid and dynamic performance of the model makes it suitable for real-time calculation, and the efficiency and accuracy of the model are proved by test results on observation data. Funding: Key Science and Technology Project of the Shanghai Committee of Science and Technology, China (No. 06dz1200921); Major Basic Research Project of the Shanghai Committee of Science and Technology (No. 08JC1400100); Shanghai Talent Developing Foundation, China (No. 001); Specialized Foundation for Excellent Talent of Shanghai, China.
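The fuzzy c-means core that the improved Kohonen network parallelizes can be written in a few lines. A minimal sketch of the standard (serial) algorithm with made-up 2-D sample data; the paper's dynamic center-updating variant is not reproduced:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # u_ik = 1 / sum_j (d_ik / d_jk)^p
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```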
With the extensive application of software collaborative development technology, the processing of code data generated in programming scenes has become a research hotspot. In the collaborative programming process, different users can submit code in a distributed way. Consistency of code grammar can be achieved by syntax constraints. However, when different users work on the same code in semantic development programming practices, the development factors of different users inevitably lead to data semantic conflicts. In this paper, the characteristics of code segment data in a programming scene are considered. A code sequence can be obtained by disassembling the code segment using lexical analysis technology. Combined with a traditional solution to the data conflict problem, the code sequence can be taken as the declared value object in the data conflict resolution problem. Through similarity analysis of code sequence objects, the concept of the deviation degree between the declared value object and the truth value object is proposed, and a multi-truth discovery algorithm, the multiple truth discovery algorithm based on deviation (MTDD), is developed. Baseline methods, such as Conflict Resolution on Heterogeneous Data, Voting-K, and MTRuths_Greedy, are compared to verify the performance and precision of the proposed MTDD algorithm. Funding: the National Key R&D Program of China (No. 2018YFB1003905), the National Natural Science Foundation of China (No. 61971032), and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
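The deviation degree rests on a similarity measure between tokenized code sequences. A toy sketch of that idea, using a crude lexer and difflib similarity; the paper's exact deviation definition and the MTDD scoring are not reproduced:

```python
import difflib
import re

def tokenize(code):
    # crude lexical pass: identifiers, numbers, and single operators become tokens
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\sA-Za-z_\d]", code)

def deviation(declared, truth):
    """Deviation of a declared code value from a candidate truth:
    1 - similarity of their token sequences."""
    sim = difflib.SequenceMatcher(None, tokenize(declared), tokenize(truth)).ratio()
    return 1.0 - sim

claims = ["total = a + b", "total = a + b;", "total = a - b"]
candidate_truth = "total = a + b"
for c in claims:
    print(c, "->", round(deviation(c, candidate_truth), 3))
```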
Accurate gas viscosity determination is an important issue in the oil and gas industries. Experimental approaches for gas viscosity measurement are time-consuming, expensive, and hardly possible at high pressures and high temperatures (HPHT). In this study, a number of correlations were developed to estimate gas viscosity using group method of data handling (GMDH) type neural networks and gene expression programming (GEP) techniques, with a large data set containing more than 3000 experimental data points for methane, nitrogen, and hydrocarbon gas mixtures. It is worth mentioning that, unlike many viscosity correlations, the ones proposed in this study can compute gas viscosity at pressures between 34 and 172 MPa and temperatures between 310 and 1300 K. A comparison was also performed between the results of these models and the results of ten well-known models reported in the literature. The average absolute relative errors of the GMDH models were 4.23%, 0.64%, and 0.61% for hydrocarbon gas mixtures, methane, and nitrogen, respectively. In addition, graphical analyses indicate that GMDH can predict gas viscosity with higher accuracy than GEP at HPHT conditions. Using the leverage technique, valid, suspected, and outlier data points were identified. Finally, the trends of the gas viscosity models at different conditions were evaluated.
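Two of the post-analysis steps mentioned, the average absolute relative error metric and the leverage (hat-matrix) screening of valid/suspected/outlier points, are simple to state in code. A sketch with made-up inputs; the warning limit follows the common Williams-plot convention, which may differ in detail from the paper:

```python
import numpy as np

def aare(pred, exp):
    """Average absolute relative error (%), as reported for the GMDH models."""
    pred, exp = np.asarray(pred, float), np.asarray(exp, float)
    return 100.0 * np.mean(np.abs((pred - exp) / exp))

def leverages(X):
    """Diagonal of the hat matrix H = X (X'X)^{-1} X' (intercept added)."""
    X = np.column_stack([np.ones(len(X)), X])
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)

X = np.random.rand(20, 2)                 # e.g. pressure, temperature (made up)
h = leverages(X)
limit = 3 * (X.shape[1] + 1) / len(X)     # common warning leverage 3(p+1)/n
print("suspected points:", np.where(h > limit)[0])
print("AARE:", aare([1.02, 0.98], [1.0, 1.0]), "%")
```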
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and to hide the complexity of a specific operation behind a name. It also acts as an interface between the SQL database and the developer. It is therefore important to test these modules, which consist of procedures and functions. In this paper, a genetic algorithm (GA) is used as a search technique to find the test data required, according to branch coverage criteria, to test stored PL/SQL program units. The experimental results show that full coverage was not achieved: the test target in some branches is not reached, and the coverage percentage is 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for that branch.
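The GA's search can be illustrated on a single synthetic branch, where fitness is the standard branch-distance measure (how far an input is from making the predicate true). A toy sketch, not the paper's PL/SQL test harness; the target predicate and GA parameters are invented:

```python
import random

def branch_distance(x, y):
    # target branch: IF x = 2 * y THEN ...  -> distance |x - 2y|, 0 means covered
    return abs(x - 2 * y)

def ga(pop_size=40, gens=200, lo=-100, hi=100):
    pop = [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                      # branch covered
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (x1, y1), (x2, y2) = random.sample(parents, 2)
            child = (x1, y2)                   # one-point crossover
            if random.random() < 0.2:          # small integer mutation
                child = (child[0] + random.randint(-3, 3), child[1])
            children.append(child)
        pop = parents + children
    return None                                # target branch not reached

print(ga())
```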
In this paper, we conduct research on big data and an artificial-intelligence-aided decision-making mechanism, with applications to program innovation for video websites' homemade programming. Homemade video programs open new possibilities for content production on new-media platforms and give traditional media a breakthrough point in the Internet age. Producing its own video programs helps a website reduce the need to purchase copyrights, lower costs, avoid homogeneous competition, enrich advertising and marketing, improve its profit model, organically combine content production with operations, and complete its strategic transformation. Building on these advantages, a website's homemade video programs can develop into a brand with considerable influence. Our later research provides a literature survey of the related issues.
On-site programming big data refers to the massive data generated in the process of software development, with the characteristics of real-time generation, complexity, and high processing difficulty. Therefore, data cleaning is essential for on-site programming big data. Duplicate data detection is an important step in data cleaning, which can save storage resources and enhance data consistency. Because of the insufficiency of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic and adaptive window size is proposed. The efficiency of the algorithm is elevated by improving the key-selection method, reducing the dimensionality of the data set, and using an adaptive variable-size sliding window. Experimental results show that the improved SNM algorithm exhibits better performance and achieves higher accuracy. Funding: the National Key R&D Program of China (No. 2018YFB1003905), the National Natural Science Foundation of China (No. 61971032), and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
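For contrast with the adaptive-window variant proposed here, the baseline SNM fits in a dozen lines: sort on a key, then compare only records that fall inside a sliding window. A minimal sketch with made-up records; the random-forest key selection and dynamic window sizing are omitted:

```python
import difflib

def snm(records, key=lambda r: r.lower(), window=3, threshold=0.85):
    """Sorted Neighborhood Method with a fixed window size."""
    rows = sorted(records, key=key)
    duplicates = []
    for i, r in enumerate(rows):
        # only compare against the next (window - 1) records after sorting
        for j in range(i + 1, min(i + window, len(rows))):
            sim = difflib.SequenceMatcher(None, r, rows[j]).ratio()
            if sim >= threshold:
                duplicates.append((r, rows[j], round(sim, 2)))
    return duplicates

recs = ["John Smith, NYC", "Jon Smith, NYC", "Alice Wu, LA", "Alice Wu, L.A."]
print(snm(recs))
```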
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependence on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling them to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates its ease of implementation, performance, and compatibility across Excel versions.
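A typical pandas/openpyxl round trip illustrates the interoperability layer the paper builds on (a generic sketch, not the paper's integration solution; file, sheet, and column names are invented):

```python
import pandas as pd

# build a small workbook first so the example is self-contained
pd.DataFrame({"units": [3, 5], "unit_price": [9.5, 4.0]}).to_excel(
    "sales.xlsx", sheet_name="Q1", index=False)

df = pd.read_excel("sales.xlsx", sheet_name="Q1")   # read a sheet into Python
df["total"] = df["units"] * df["unit_price"]        # analysis beyond Excel formulas

# Writing back: cell values survive, but note the caveat above --
# round-tripping may not keep Excel formatting or formulas intact.
with pd.ExcelWriter("sales_out.xlsx", engine="openpyxl") as xl:
    df.to_excel(xl, sheet_name="Q1_processed", index=False)
print(df)
```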
When the inter-satellite linkage antenna of a user satellite adopts an elevation-over-azimuth architecture, a zenith pass problem always occurs while the antenna is tracking the tracking and data relay satellite (TDRS). This paper deals with the problem by first introducing the movement laws of the inter-satellite linkage to predict the movement of the user satellite antenna, and then analyzing in detail the potential and the actual moments of zenith pass. A number of specific orbit altitudes for the user satellite that remove the blindness zone are obtained. Finally, on the basis of the results predicted from the movement laws of the inter-satellite linkage, zenith pass tracking strategies for the user satellite antenna are designed under program guidance using a trajectory preprocessor. Simulations have confirmed the reasonableness and feasibility of the strategies in dealing with the zenith pass problem.
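The blind zone arises because, on an elevation-over-azimuth mount, the azimuth rate needed to keep pointing grows roughly as 1/cos(elevation) and exceeds any drive limit near 90°. A toy illustration of that scaling with made-up numbers, not the paper's orbital model:

```python
import math

def required_azimuth_rate(cross_track_deg_s, elevation_deg):
    """Approximate azimuth rate for a target crossing the sky at a fixed
    angular rate: rate ~ cross-track rate / cos(elevation)."""
    return cross_track_deg_s / math.cos(math.radians(elevation_deg))

for el in (30, 60, 85, 89, 89.9):
    print(f"elevation {el:5.1f} deg -> azimuth rate "
          f"{required_azimuth_rate(0.05, el):9.2f} deg/s")
```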
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, numerous deficiencies remain to be addressed, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches. Funding: the National Natural Science Foundation of China (70961005), the 211 Project for the Postgraduate Student Program of Inner Mongolia University, and the Natural Science Foundation of Inner Mongolia (2010Zd34, 2011MS1002).
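The crisp CCR model that fuzzy DEA generalizes is a small linear program per DMU. A sketch with invented input/output data, solved with SciPy; the fuzzy α-cut and sample-DMU machinery of FSDEA is not shown:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])  # inputs,  one row per DMU
Y = np.array([[1.0], [2.0], [1.5]])                  # outputs, one row per DMU

def ccr_efficiency(k):
    """Input-oriented CCR multiplier model for DMU k:
    max u'y_k  s.t.  v'x_k = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0."""
    n_in, n_out = X.shape[1], Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])       # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])                         # u'y_j - v'x_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = [np.concatenate([np.zeros(n_out), X[k]])]  # v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```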
A specialized Hungarian algorithm is developed for the maximum likelihood data association problem, with two implementation versions to handle the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem, and its duality and optimality conditions are given. The Hungarian algorithm is presented with its computational steps, data structure, and computational complexity. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm and that both are superior to the classic Munkres algorithm. Funding: the National Natural Science Foundation of China (60272024).
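The bipartite matching at the core of the formulation can be demonstrated with SciPy's stock Hungarian solver (a toy square cost matrix; the paper's HF/HT variants and the dummy assignments that model false alarms and missed detections are simplified away):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = negative log-likelihood of pairing track i with measurement j
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.5, 5.0],
                 [3.0, 2.0, 2.5]])

rows, cols = linear_sum_assignment(cost)       # minimum-cost perfect matching
for i, j in zip(rows, cols):
    print(f"track {i} -> measurement {j} (cost {cost[i, j]})")
print("total cost:", cost[rows, cols].sum())
```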
Aiming at the problem of coal gangue identification in the fully mechanized mining face and in coal washing, this article proposes a convolutional neural network (CNN) coal and rock identification method based on hyperspectral data. First, coal and rock spectrum data were collected with a near-infrared spectrometer, and then four methods were used to filter the 120 sets of collected data: first-order differential (FD), second-order differential (SD), standard normal variate transformation (SNV), and multi-style smoothing. The coal and rock reflectance spectrum data were pre-processed to enhance the intensity of the spectral reflectance and absorption characteristics, and to effectively remove the spectral curve noise generated by instrument performance and environmental factors. A CNN model was constructed, and its advantages and disadvantages were judged based on the accuracy of three parameter combinations (the learning rate, the number of feature extraction layers, and the dropout rate) to generate the best CNN classifier for rock recognition from the hyperspectral data. The experiments show that the recognition accuracy of the one-dimensional CNN model proposed in this paper reaches 94.6%, verifying the advantages and effectiveness of the proposed method. Funding: the Theory and Method of Excavation-Support-Anchor Parallel Control for Intelligent Excavation Complex System (2021101030125), Green, Intelligent, and Safe Mining of Coal Resources (52121003), and the Mining Robotics Engineering Discipline Innovation and Intelligence Base (B21014).
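Two of the four pre-treatments, first-order differential and SNV, are one-liners over a reflectance vector. A sketch with a made-up toy spectrum; the CNN itself and the smoothing/second-derivative variants are omitted:

```python
import numpy as np

def first_order_differential(spectrum):
    # FD approximated by successive differences between adjacent bands
    return np.diff(spectrum)

def snv(spectrum):
    # SNV: center each spectrum on its mean and scale by its std. dev.
    s = np.asarray(spectrum, float)
    return (s - s.mean()) / s.std()

spectrum = np.array([0.42, 0.45, 0.44, 0.50, 0.53, 0.52])
print("FD :", first_order_differential(spectrum))
print("SNV:", np.round(snv(spectrum), 3))
```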
Aiming at a three-passive-sensor location system, a generalized 3-dimensional (3-D) assignment model is constructed based on property information, and a multi-target programming model is proposed based on direction-finding and property-fusion information. The multi-target programming model is transformed into a single-target programming problem to solve, and its data association result is compared with the results obtained using one kind of information only. Simulation experiments show the effectiveness of the multi-target programming algorithm, with higher data association accuracy and less calculation. Funding: the National Natural Science Foundation of China (60172033) and the Excellent Ph.D. Paper Author Foundation of China (200036, 200237).