Funding: Supported by the key R&D program of China Energy Investment Corporation (GJNY-18-27) and the National Natural Science Foundation of China (Nos. 61675110 and 51906124).
Abstract: Laser-induced breakdown spectroscopy (LIBS) is a promising technology for online coal property analysis, but quantitative measurement of calorific value with LIBS suffers from relatively low accuracy caused by the matrix effect. To address this problem, this study combined a support vector machine (SVM) with partial least squares (PLS) to increase the measurement accuracy of calorific value. The combined model uses SVM to classify coal samples into two groups according to their volatile matter contents, reducing the matrix effect, and then applies PLS to establish a calibration model for each sample group. Applied to the calorific values of 53 coal samples, the proposed model greatly increased measurement accuracy: compared with the traditional PLS method, the coefficient of determination (R²) improved from 0.93 to 0.97, the root-mean-square error of prediction fell from 1.68 MJ kg⁻¹ to 1.08 MJ kg⁻¹, and the average relative error decreased from 6.7% to 3.93%, an overall improvement.
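A minimal sketch (scikit-learn, with placeholder spectra and reference values; not the authors' code) of the two-stage scheme described above: classify samples into volatile-matter groups with an SVM, then fit a separate PLS calibration for each group.

# Sketch of the SVM + PLS combination described above (assumed data shapes):
# `spectra` holds LIBS intensities, `volatile_group` the high/low volatile-matter
# label, `calorific` the reference calorific values.
import numpy as np
from sklearn.svm import SVC
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((53, 2000))          # 53 samples x 2000 spectral channels (placeholder)
volatile_group = rng.integers(0, 2, 53)   # 0 = low, 1 = high volatile matter (placeholder)
calorific = rng.uniform(15, 30, 53)       # reference calorific values, MJ/kg (placeholder)

classifier = SVC(kernel="rbf").fit(spectra, volatile_group)

# One PLS calibration model per volatile-matter group.
pls_models = {}
for g in (0, 1):
    idx = volatile_group == g
    pls_models[g] = PLSRegression(n_components=10).fit(spectra[idx], calorific[idx])

def predict_calorific(x):
    """Route a spectrum to its group, then apply that group's PLS model."""
    g = int(classifier.predict(x.reshape(1, -1))[0])
    return float(pls_models[g].predict(x.reshape(1, -1))[0, 0])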
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12072105, 11932006, and 52308498) and the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20220976).
Abstract: Dynamic wake field information is vital for the optimized design and control of wind farms. Combined with sparse measurement data from light detection and ranging (LiDAR), physics-informed neural network (PINN) frameworks have recently been employed to forecast freestream wind and wake fields, but they face challenges of low prediction accuracy and long training times. This paper therefore constructs a PINN framework for dynamic wake field prediction that integrates two accuracy-improvement strategies and a step-by-step strategy that shortens training time. The results show that the different improvement routes significantly raise the overall performance of the PINN. The accuracy and efficiency of the PINN with the spatiotemporal improvement strategies were validated against LiDAR-measured data from a wind farm in Shandong Province, China. This work sheds light on load reduction, efficiency improvement, and intelligent operation and maintenance of wind farms.
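A minimal sketch of the generic PINN training loss that such frameworks rely on (PyTorch, with a placeholder transport equation standing in for the actual wake physics and an assumed small network; not the paper's implementation): the network maps space-time coordinates to velocity and is trained on a weighted sum of a data-misfit term at sparse LiDAR points and a physics-residual term at collocation points.

# Generic PINN loss sketch (assumed network and placeholder physics):
# u_net maps (t, x, y) -> a velocity component; the loss mixes LiDAR data misfit
# with a simple advection-type PDE residual evaluated by automatic differentiation.
import torch

u_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pinn_loss(txy_data, u_lidar, txy_colloc, c=8.0, lam=1.0):
    # Data term: match the sparse LiDAR measurements.
    data_loss = torch.mean((u_net(txy_data) - u_lidar) ** 2)

    # Physics term: residual of u_t + c * u_x = 0 (placeholder transport equation).
    txy = txy_colloc.clone().requires_grad_(True)
    u = u_net(txy)
    grads = torch.autograd.grad(u.sum(), txy, create_graph=True)[0]
    residual = grads[:, 0] + c * grads[:, 1]
    physics_loss = torch.mean(residual ** 2)

    return data_loss + lam * physics_loss

# Typical training step with Adam (placeholder tensors).
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
txy_data, u_lidar = torch.rand(128, 3), torch.rand(128, 1)
txy_colloc = torch.rand(1024, 3)
opt.zero_grad(); loss = pinn_loss(txy_data, u_lidar, txy_colloc); loss.backward(); opt.step()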
Funding: Supported in part by the Research Start-Up Funds of South-Central Minzu University under Grants YZZ23002, YZY23001, and YZZ18006; in part by the Hubei Provincial Natural Science Foundation of China under Grants 2024AFB842 and 2023AFB202; in part by the Knowledge Innovation Program of Wuhan Basic Research under Grant 2023010201010151; in part by the Spring Sunshine Program of the Ministry of Education of the People's Republic of China under Grant HZKY20220331; in part by the Funds for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grants XT224003 and PTZ24001; and in part by the Career Development Fund (CDF) of the Agency for Science, Technology and Research (A*STAR) under Grant C233312007.
Abstract: The knapsack problem is a classical combinatorial optimization problem widely encountered in areas such as logistics, resource allocation, and portfolio optimization. Traditional methods, including dynamic programming (DP) and greedy algorithms, are effective on small instances but often struggle with scalability and efficiency as the problem size grows. DP, for instance, runs in pseudo-polynomial time that grows exponentially with the encoded size of the capacity and can become computationally prohibitive for large instances, while greedy algorithms are faster but may not yield optimal results, especially when the problem involves complex constraints or large numbers of items. This paper introduces a novel reinforcement learning (RL) approach to the knapsack problem that enhances the state representation within the learning environment: item weights and volumes are expressed as ratios relative to the knapsack's capacity, and item values are normalized to their share of the total value across all items. This state modification yields a 5% improvement in accuracy over state-of-the-art RL-based algorithms while significantly reducing execution time; the RL-based method runs over 9000 times faster than DP, making it highly scalable for larger instances. We further improve the RL model by incorporating Noisy layers into the neural network architecture, which enhances the agent's exploration and adds a further 0.2%-0.5% accuracy. The results demonstrate that the approach not only outperforms existing RL techniques, such as the Transformer model, in accuracy, but also provides a substantial improvement over DP in computational efficiency. This combination of enhanced accuracy and speed is promising for large-scale optimization problems in real-world applications where both precision and time are critical.
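A minimal sketch of the normalized state encoding described above (the field names and per-item layout are assumptions): weights and volumes become ratios of the knapsack capacities, and values become fractions of the total value over all items.

# Sketch of the normalized state representation (assumed layout):
import numpy as np

def encode_state(weights, volumes, values, cap_weight, cap_volume):
    weights = np.asarray(weights, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    values = np.asarray(values, dtype=float)
    state = np.stack([
        weights / cap_weight,        # weight as a ratio of the weight capacity
        volumes / cap_volume,        # volume as a ratio of the volume capacity
        values / values.sum(),       # value as a share of the total value
    ], axis=1)
    return state                     # shape (n_items, 3), fed to the RL agent

# Example: three items, capacities of 10 kg and 5 L.
print(encode_state([2, 5, 3], [1, 2, 4], [30, 50, 20], 10.0, 5.0))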
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11574291, 61108009, and 61222504.
Abstract: As a widely used reconstruction algorithm in quantum state tomography, maximum likelihood estimation tends to return a rank-deficient matrix, which decreases estimation accuracy for certain quantum states. Hedged maximum likelihood estimation (HMLE) [Phys. Rev. Lett. 105 (2010) 200504] was proposed to avoid this problem. Here we study this proposal in more detail in the two-qubit case and further improve its performance: we ameliorate the HMLE method by updating the hedging function based on the purity of the estimated state. The performance of both HMLE and the ameliorated HMLE is demonstrated by numerical simulation and by experimental implementation on Werner states of polarization-entangled photons.
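For reference, the hedged estimator of the cited proposal maximizes the likelihood multiplied by a hedging function of the form (det ρ)^β, and the amelioration described above lets the hedging strength depend on the purity of the estimated state. A sketch of the objective in LaTeX (the purity-dependent form of β is an assumed illustration, not the paper's exact prescription):

\hat{\rho}_{\mathrm{HMLE}} = \arg\max_{\rho \succeq 0,\ \operatorname{Tr}\rho = 1}
  \Big[ \log \mathcal{L}(\rho) + \beta \log \det \rho \Big],
\qquad \beta = \beta\!\left(\operatorname{Tr}\hat{\rho}^{2}\right),

where \mathcal{L}(\rho) is the measurement likelihood and a larger β pushes the estimate further away from rank-deficient (boundary) states.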
Abstract: An imaging accuracy improving method is established in which a distance coefficient, containing location information between the sparse array configuration and the location of the defect, is proposed to select higher signal-to-noise-ratio data from all experimental data; the selected data are then used for elliptical imaging. The relationships among imaging accuracy, the distance coefficient, and the residual direct wave are investigated, and the residual direct wave is introduced to make engineering application more convenient. The effectiveness of the proposed method is evaluated experimentally with a rectangular sparse transducer array, and the results reveal that selecting experimental data with smaller distance coefficients effectively improves imaging accuracy. Moreover, the direct wave difference increases as the distance coefficient decreases, which implies that imaging accuracy can also be effectively improved by using the experimental data with the larger direct wave difference.
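A minimal sketch of the data-selection step described above, treating the distance coefficient simply as a per-signal value supplied by the caller (its exact definition and the retained fraction are the paper's choices, not fixed here):

# Keep only the transducer-pair signals with the smallest distance coefficients,
# then pass them to the elliptical imaging step.
import numpy as np

def select_by_distance_coefficient(signals, distance_coeffs, keep_fraction=0.3):
    """signals: (n_pairs, n_samples) array; distance_coeffs: (n_pairs,) array."""
    order = np.argsort(distance_coeffs)              # smallest coefficient first
    n_keep = max(1, int(keep_fraction * len(order)))
    selected = order[:n_keep]
    return signals[selected], selected               # data used for elliptical imaging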
Funding: Supported in part by the Fundamental Research Funds for the Central Universities of China (2015JBM034).
Abstract: Key-recovery technology is often used by an adversary to attempt to recover the cryptographic key of an encryption scheme. The most obvious key-recovery attack is the exhaustive key-search attack, but modern ciphers often have a key space of size 2^128 or greater, making such attacks infeasible with current technology. Cache-based side channel attacks are another way to obtain the cryptographic key of an encryption scheme, but they are subject to random noise; to reduce random errors, it is advisable to repeat the key recovery process many times. This paper focuses on improving key recovery accuracy by processing the key sequences obtained from repeated cache-based side channel attacks. To recover the real key, private key bits from the side channel attack are collected first, and the key sequences are then aligned using sequence alignment algorithms based on dynamic programming. The proposed key recovery method is universal and is not limited to any particular cryptographic algorithm. Experiments show that the method achieves good performance and high availability when the error rate of the collected key bits is within a reasonable range.
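A minimal sketch of dynamic-programming sequence alignment (Needleman-Wunsch) applied to two noisy observations of the same key fragment; the match, mismatch, and gap scores below are placeholders rather than the paper's scheme:

# Global alignment of two recovered key-bit sequences by dynamic programming.
def align(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]   # alignment score; a traceback would yield the aligned sequences

# Example: two noisy observations of the same key fragment.
print(align("1011001", "101001"))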
Abstract: The recent surge in demand for timely and accurate health information has highlighted the need for more advanced data analysis tools. To reduce the incidence of preventable medical errors, sophisticated IT-driven classification and prediction algorithms are essential, yet extracting meaningful insights from complex biomedical data remains a significant challenge in healthcare transformation. Modern biomedical and health research generates diverse data types, including electronic health records (EHRs), medical imaging, sensor data, and telemedicine inputs, which are often complex, heterogeneous, poorly annotated, and largely unstructured. Traditional statistical learning and data mining methods require extensive preprocessing before predictive or clustering models can be developed, a process that becomes even more challenging with intricate datasets and limited domain-specific knowledge. Recent advances in deep learning offer promising end-to-end models capable of handling such complexity, but these models do not consistently achieve the high levels of accuracy required by healthcare professionals. In this study, we introduce a novel deep learning algorithm combined with generative AI designed to significantly improve classification accuracy in clinical applications. The algorithm is tailored for seamless integration into hospital workflows and electronic health record systems, an area that is the central focus of our ongoing research. The proposed method combines real-world clinical data with synthetic data generated by Principal Model Generative AI, an approach that increased classification accuracy in our experiments from 76% to 95%-98%.
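A minimal sketch of the real-plus-synthetic training idea (the generator interface and the neural classifier below are assumptions; the paper's Principal Model Generative AI and deep learning algorithm are not reproduced here): augment the clinical training set with generated records, then fit a classifier on the combined data.

# Combine real clinical records with synthetic records from an assumed generator.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_with_synthetic(X_real, y_real, generate_synthetic, n_synth=500):
    """Augment the real training set with generated samples, then fit a classifier."""
    X_synth, y_synth = generate_synthetic(n_synth)    # assumed generator interface
    X = np.vstack([X_real, X_synth])
    y = np.concatenate([y_real, y_synth])
    return MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, y)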
Funding: Supported by the Program of International S&T Cooperation under Grant No. 2016YFE0100200.
Abstract: We demonstrate a new synchronization method for the White Rabbit system. Signals are transmitted in both directions over a single-mode fiber at the same light wavelength. Without the complex calibration of the fiber asymmetry parameter, the new method reduces the effect of chromatic dispersion and improves synchronization accuracy. The experiment achieves timing synchronization accuracy below 200 ps over a 50 km link constructed from different companies' fiber spools. The proposed method would make White Rabbit technology immune to the chromatic dispersion of fiber links and can be applied to long-distance synchronization.
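For context, White Rabbit synchronization builds on the standard two-way timestamp exchange of PTP; using the same wavelength in both directions makes the forward and return link delays (nearly) symmetric, so the clock offset follows directly from the four timestamps. A generic relation in LaTeX, not the paper's derivation:

\delta = (t_4 - t_1) - (t_3 - t_2), \qquad
\theta = \frac{(t_2 - t_1) - (t_4 - t_3)}{2},

where t_1 and t_4 are the master's send and receive timestamps, t_2 and t_3 are the slave's receive and send timestamps, \delta is the round-trip link delay, and \theta is the master-slave offset under the symmetric-delay assumption.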
Funding: Supported by the Research Grants Council of Hong Kong (415313, 14205514) and a Direct Grant of the Chinese University of Hong Kong.
Abstract: We introduce a novel method to accurately extract optical parameters in terahertz reflection imaging. Our method builds on standard self-referencing approaches that use the signal reflected from the bottom of the imaging window material, and further compensates for time-dependent system fluctuations and position-dependent variation in the window thickness. The proposed method not only improves accuracy but also simplifies the imaging procedure and reduces measurement time.
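For context, self-referenced reflection extraction of this kind typically rests on the normal-incidence Fresnel coefficient at the window-sample interface: given the measured complex reflection ratio \tilde{r}(\omega) (the sample echo referenced to a reflection of known amplitude), the sample's complex refractive index follows from the generic relation below (not the paper's full procedure):

\tilde{r}(\omega) = \frac{n_w - \tilde{n}_s(\omega)}{n_w + \tilde{n}_s(\omega)}
\quad\Longrightarrow\quad
\tilde{n}_s(\omega) = n_w\,\frac{1 - \tilde{r}(\omega)}{1 + \tilde{r}(\omega)},

where n_w is the refractive index of the imaging window.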