Funding: Supported by the National Natural Science Foundation of China (61074153, 61104131) and the Fundamental Research Funds for Central Universities of China (ZY1111, JD1104).
Abstract: Chemical processes are complex, and traditional neural network models usually cannot model them with satisfactory accuracy. A selective neural network ensemble is an effective way to enhance the generalization accuracy of networks, but it faces several problems, e.g., the lack of a unified definition of diversity among component neural networks and the difficulty of improving accuracy through selection when the diversity of the available networks is small. In this study, the output errors of the networks are vectorized, the diversity of the networks is defined based on these error vectors, and the ensemble size is analyzed. An error-vectorization-based selective neural network ensemble (EVSNE) is then proposed, in which the error vector of each network can offset those of the other networks because the component networks are trained in order; the component networks therefore have large diversity. Experiments and comparisons on standard data sets and an actual chemical process data set for the production of high-density polyethylene demonstrate that EVSNE has better generalization ability.
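As a rough illustration of the error-vector idea, the sketch below (Python/NumPy, with toy data and a greedy selection rule that are our own assumptions, not the EVSNE training procedure) summarizes each trained network by its validation error vector and repeatedly adds the network whose error vector best cancels the current ensemble error.

```python
import numpy as np

def select_offsetting_networks(preds, y, k):
    """Greedily pick k networks whose validation error vectors offset each other.

    preds : (n_networks, n_val) array of validation predictions
    y     : (n_val,) validation targets
    """
    errors = preds - y                                           # error vector e_i = f_i(X_val) - y_val
    chosen = [int(np.argmin(np.linalg.norm(errors, axis=1)))]    # start from the most accurate network
    while len(chosen) < k:
        rest = [i for i in range(len(errors)) if i not in chosen]
        # pick the candidate that minimizes the norm of the averaged ensemble error
        scores = [np.linalg.norm(errors[chosen + [i]].mean(axis=0)) for i in rest]
        chosen.append(rest[int(np.argmin(scores))])
    return chosen

# toy usage with 10 candidate component networks
rng = np.random.default_rng(0)
y = rng.normal(size=50)
preds = y + rng.normal(scale=0.3, size=(10, 50))
print(select_offsetting_networks(preds, y, k=3))
```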
Abstract: The MacCormack explicit scheme and the Baldwin-Lomax algebraic turbulence model are employed to solve the axisymmetric compressible Navier-Stokes equations for the numerical simulation of supersonic flow interacting with transverse injection at the base of a cone. A temperature switch function must be added to the artificial viscosity model suggested by Jameson et al. to enhance the scheme's ability to eliminate oscillations in some injection cases. Typical code optimization techniques for vectorization, along with some useful concepts and terminology concerning multiprocessing on the YH-2 parallel supercomputer, are given and explained with examples. After reconstruction and optimization, the code achieves a speedup of 5.973 on the YH-1 pipeline computer, and speedups of 1.886 on 2 processors and 3.545 on 4 processors on the YH-2 parallel supercomputer using the domain decomposition method.
Funding: This project was supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 (RG20/20), the National Natural Science Foundation of China (61872347), and the Special Plan for the Development of Distinguished Young Scientists of ISCAS (Y8RC535018).
Abstract: The explosive growth of social media means portrait editing and retouching are in high demand. While portraits are commonly captured and stored as raster images, editing raster images is non-trivial and requires the user to be highly skilled. Aiming at developing intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that can automatically convert raster images into a 3-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs) which characterize salient geometric features and low-frequency colors, providing a means for semantic color transfer and facial expression editing. The middle level encodes specular highlights and shadows as large, editable Poisson regions (PRs) and allows the user to directly adjust illumination by tuning the strength and changing the shapes of PRs. The top level contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We train a deep generative model that can produce high-frequency residuals automatically. Thanks to the inherent meaning in vector primitives, editing portraits becomes easy and intuitive. In particular, our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures color and feature differences between two images) to consider illumination. The new metric, illumination-sensitive FLIP, can effectively capture salient changes in color transfer results, and is more consistent with human perception than FLIP and other quality measures for portrait images. We evaluate our method on the FFHQR dataset and show it to be effective for common portrait editing tasks, such as retouching, light editing, color transfer, and expression editing.
Abstract: Vector graphics plays an important role in computer animation and imaging technologies. However, present techniques and tools cannot fully replace traditional pencil and paper. Additionally, a vector representation of an image is not always available, and there is not yet a good solution for vectorizing a picture drawn on paper. This work attempts to solve the problem of vectorizing grayscale line drawings. The proposed solution uses Disk B-Spline curves to represent the strokes of an image in vector form. The algorithm builds a vector representation from a grayscale raster image, which can be a scanned picture, for instance. The proposed method uses a Gaussian sliding window to calculate the skeleton and perceptive width of a stroke. As a result of vectorization, the given image is represented by a set of Disk B-Spline curves.
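A minimal preprocessing sketch of this kind of pipeline is given below (Python, using scikit-image and SciPy). It substitutes Gaussian smoothing, morphological skeletonization, and a distance transform for the paper's Gaussian-sliding-window skeleton and width estimation, and it stops short of fitting Disk B-Spline curves to the resulting centerline/radius samples.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def stroke_skeleton_and_width(gray, threshold=0.5, sigma=1.0):
    """Estimate stroke centerlines and per-pixel half-widths from a grayscale line drawing.

    gray : 2-D float array in [0, 1], dark strokes on a light background (e.g., a scan).
    Returns a boolean skeleton mask and the stroke radius at each skeleton pixel.
    """
    smooth = ndimage.gaussian_filter(gray, sigma=sigma)   # suppress scanning noise
    ink = smooth < threshold                              # binary stroke mask
    skeleton = skeletonize(ink)                           # one-pixel-wide centerlines
    radius = ndimage.distance_transform_edt(ink)          # distance to the stroke boundary
    return skeleton, np.where(skeleton, radius, 0.0)

# toy usage: a synthetic horizontal stroke four pixels thick
img = np.ones((32, 64))
img[14:18, 8:56] = 0.0
skel, rad = stroke_skeleton_and_width(img)
print(int(skel.sum()), float(rad.max()))
```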
Funding: Supported by grants PID2020-120308RB-I00 and PID2023-147802OB-I00 funded by MICIU/AEI/10.13039/501100011033 and FEDER, UE; by Aligning Science Across Parkinson's (ref. ASAP-020505) through the Michael J. Fox Foundation for Parkinson's Research; by CiberNed Intramural Collaborative Projects (ref. PI2020/09); and by the Spanish Fundación Mutua Madrileña de Investigación Médica (to JLL).
Abstract: The development of clinical candidates that modify the natural progression of sporadic Parkinson's disease and related synucleinopathies is a praiseworthy endeavor, but extremely challenging. Therapeutic candidates that were successful in preclinical Parkinson's disease animal models have repeatedly failed when tested in clinical trials. While these failures have many possible explanations, it is perhaps time to recognize that the problem lies with the animal models rather than the putative candidates. In other words, the lack of adequate animal models of Parkinson's disease currently represents the main barrier to preclinical identification of potential disease-modifying therapies likely to succeed in clinical trials. However, this barrier may be overcome by the recent introduction of novel generations of viral vectors coding for different forms of alpha-synuclein species and related genes. Although still facing several limitations, these models have managed to mimic the known neuropathological hallmarks of Parkinson's disease with unprecedented accuracy, delineating a more optimistic scenario for the near future.
Abstract: Dengue fever is an acute infectious disease caused by the dengue virus and transmitted by mosquito vectors[1]. Its clinical manifestations include high fever, headache, muscle and joint pain, and rash. It holds a significant position in global public health. In recent years, its incidence has continued to rise worldwide[2], making it one of the major diseases threatening human health. The disease course of dengue fever is divided into three typical phases: the acute febrile phase, the critical phase, and the recovery phase. While most patients experience mild symptoms, some may progress to severe dengue and potentially fatal outcomes if not promptly and effectively treated during the critical phase.
Funding: The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
Abstract: Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our case). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves the sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves the sensitivity by 1.24%–57.4% and the specificity by 1.04%–163% and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
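The imbalance handling and the reported metrics can be pictured with the short sketch below (Python/scikit-learn). The synthetic features stand in for the CNN-extracted ones, and class weighting is used merely as a stand-in for the paper's customized kernel.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
# synthetic stand-in for CNN-extracted voice features: 80 healthy (majority) and 20 PD samples
X = np.vstack([rng.normal(0.0, 1.0, (80, 16)), rng.normal(0.8, 1.0, (20, 16))])
y = np.array([0] * 80 + [1] * 20)                     # 0 = healthy, 1 = PD

# class_weight='balanced' counters majority-class bias (stand-in for the customized kernel)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```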
Abstract: The capabilities of GIS for constructing a digital elevation model (DEM) of a mountainous area and visualizing a spatial image of the terrain are presented in this paper. Graphic and digital data and topographic maps, which are the main sources for GIS, are described. The methods of vectorization of isolines and the requirements for the technical means of processing graphic materials are presented in detail. The advantages and disadvantages of the DEM of a mountainous region are shown. Segmentation methods using an interpolation polynomial are described in detail. A DEM of the mountainous area through which the border between the republics runs was constructed in 2D and 3D formats using GIS Panorama. Reducing the chord length when segmenting isolines on topographic maps leads to more accurate DEM construction. A vertical profile of a mountainous area with a visibility zone between two points was constructed. It is expected that the improved latitude, longitude, and altitude parameters of the topographic map will be used to form a regional geodetic network and to support geospatial analysis of mountain ranges. It is proposed to use not only satellite data but also classical geodetic networks and maps. It is recommended to use satellite and aerial photography to refine the topographic and geodetic support of the studied area.
Abstract: Architectural plan generation via pix2pix-series algorithms faces dual challenges: the absence of domain-specific evaluation metrics and a lack of systematic insights into the joint impact of training configurations. To address the limitations of adapting pix2pix-based models to architectural design, we designed a training regimen involving 12 experiments with varying training set sizes, dataset characteristics, and algorithms. These experiments utilized our self-built, high-quality, large-volume synthetic dataset of architectural-like plans. By saving intermediate models, we obtained 240 generative models for evaluation on a fixed test set. To quantify model performance, we developed a dual-aspect evaluation method that assesses predictions through pixel similarity (principle adherence) and segmentation line continuity (vectorization quality). Analysis revealed algorithm choice and training set size as the primary factors, with larger sets enhancing the benefits of high-resolution and enhanced-annotation datasets. The optimal model achieved high-quality predictions, demonstrating strict adherence to predefined principles (0.81 similarity) and effective vectorization (0.86 segmentation line continuity). Testing on 7695 samples of varying complexity confirmed the model's robustness, strong generative capability, and controlled innovation within defined principles, validated through 3D model conversion. This work provides a domain-adapted framework for training and evaluating pix2pix-based architectural generators, bridging generative research and practical applications.
Funding: Funded by the Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture under Grant GJZJ20220802.
Abstract: Accurately estimating the State of Health (SOH) and Remaining Useful Life (RUL) of lithium-ion batteries (LIBs) is crucial for the continuous and stable operation of battery management systems. However, due to the complex internal chemical systems of LIBs and the nonlinear degradation of their performance, direct measurement of SOH and RUL is challenging. To address these issues, the Twin Support Vector Machine (TWSVM) method is proposed to predict SOH and RUL. Initially, the constant-current charging time of the lithium battery is extracted as a health indicator (HI) and decomposed using Variational Modal Decomposition (VMD), and feature correlations are computed using random forest (RF) feature importance to maximize the extraction of critical factors influencing battery performance degradation. Furthermore, to enhance the global search capability of the Convolution Optimization Algorithm (COA), improvements are made using Good Point Set theory and the Differential Evolution method. The Improved Convolution Optimization Algorithm (ICOA) is employed to optimize the TWSVM parameters for constructing the SOH and RUL prediction models. Finally, the proposed models are validated using the NASA and CALCE lithium-ion battery datasets. Experimental results demonstrate that the proposed models achieve an RMSE not exceeding 0.007 and a MAPE not exceeding 0.0082 for SOH and RUL prediction, with a relative error in RUL prediction within the range of [-1.8%, 2%]. Compared to other models, the proposed model not only exhibits superior fitting capability but also demonstrates robust performance.
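A simplified version of the feature-selection and regression stages might look like the following (Python/scikit-learn). The synthetic features, a plain SVR in place of TWSVM, and a grid search in place of ICOA are all assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
# synthetic stand-in for health-indicator features (e.g., VMD components of charging time) per cycle
X = rng.normal(size=(200, 6))
y = 1.0 - 0.002 * np.arange(200) + 0.05 * X[:, 0] + rng.normal(scale=0.01, size=200)  # SOH-like target

# rank candidate features by random-forest importance and keep the strongest ones
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[::-1][:3]

# a plain SVR tuned by grid search stands in for the TWSVM optimized by ICOA
svr = GridSearchCV(SVR(kernel="rbf"), {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=5)
svr.fit(X[:, keep], y)
print("kept features:", keep, "best CV R^2:", round(svr.best_score_, 3))
```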
Funding: Supported by the National Natural Science Foundation of China (12374358, 91950207) and the Guangdong Basic and Applied Basic Research Foundation (2024A1515010420).
Abstract: Glucose molecules are of great significance, being among the most important molecules in the metabolic chain. However, due to their small Raman scattering cross-section and weak or non-existent adsorption on bare metals, accurately obtaining their "fingerprint information" remains a huge obstacle. Herein, we developed a tip-enhanced Raman scattering (TERS) technique to address this challenge. Adopting an optical fiber radial vector mode to internally illuminate the plasmonic fiber tip effectively suppresses the background noise while generating a strong electric-field-enhanced tip hotspot. Furthermore, the tip hotspot approaching the glucose molecules was manipulated via shear-force feedback to provide more freedom for selecting substrates. Consequently, our TERS technique achieves visualization of all Raman modes of glucose molecules within a spectral window of 400-3200 cm^(-1), which is not achievable through far-field/surface-enhanced Raman or existing TERS techniques. Our TERS technique offers a powerful tool for accurately identifying the Raman scattering of molecules, paving the way for biomolecular analysis.
Funding: Funded by the Natural Science Foundation of China (Grant Nos. 42377164 and 41972280) and the Badong National Observation and Research Station of Geohazards (Grant No. BNORSG-202305).
Abstract: Landslide susceptibility prediction (LSP) is significantly affected by the uncertainty issue of landslide-related conditioning factor selection. However, most of the literature only performs comparative studies on a certain conditioning factor selection method rather than systematically studying this uncertainty issue. To this end, this study aims to systematically explore the influence rules of various commonly used conditioning factor selection methods on LSP and, on this basis, to propose a principle with universal application for the optimal selection of conditioning factors. An'yuan County in southern China is taken as an example, considering 431 landslides and 29 types of conditioning factors. Five commonly used factor selection methods, namely correlation analysis (CA), linear regression (LR), principal component analysis (PCA), rough set (RS), and artificial neural network (ANN), are applied to select the optimal factor combinations from the original 29 conditioning factors. The factor selection results are then used as inputs to four types of common machine learning models to construct 20 types of combined models, such as CA-multilayer perceptron and CA-random forest. Additionally, multifactor-based multilayer perceptron and random forest models, which select conditioning factors based on the proposed principle of "accurate data, rich types, clear significance, feasible operation and avoiding duplication", are constructed for comparison. Finally, the LSP uncertainties are evaluated by accuracy, susceptibility index distribution, etc. Results show that: (1) multifactor-based models generally have higher LSP performance and lower uncertainties than factor-selection-based models; (2) the influence of the machine learning model on LSP accuracy is greater than that of the factor selection method. In conclusion, the above commonly used conditioning factor selection methods are not ideal for improving LSP performance and may complicate the LSP process. In contrast, a satisfactory combination of conditioning factors can be constructed according to the proposed principle.
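One of the commonly used combinations, correlation-analysis factor selection feeding a random forest, can be sketched as follows (Python/scikit-learn). The synthetic conditioning factors and the 0.1 correlation threshold are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# synthetic conditioning factors and landslide labels (1 = landslide, 0 = non-landslide)
factors = pd.DataFrame(rng.normal(size=(500, 8)), columns=[f"factor_{i}" for i in range(8)])
label = (factors["factor_0"] + 0.5 * factors["factor_1"] + rng.normal(size=500) > 0).astype(int)

# correlation analysis (CA): keep factors noticeably correlated with landslide occurrence
corr = factors.corrwith(label).abs()
selected = corr[corr > 0.1].index.tolist()

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print(selected, round(cross_val_score(rf, factors[selected], label, cv=5).mean(), 3))
```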
Funding: Performed at the large-scale research facility "Beam-M" of Bauman Moscow State Technical University under the government task of the Ministry of Science and Higher Education of the Russian Federation (No. FSFN-2024-0007).
Abstract: Thrust-vectoring capability has become a critical feature for propulsion systems as space missions move from static to dynamic. Thrust vectoring is a well-developed area of rocket engine science. For electric propulsion, however, it is an evolving field that has taken a new leap forward in recent years. A review and analysis of thrust-vectoring schemes for electric propulsion systems have been conducted. The scope of this review includes thrust-vectoring schemes that can be implemented for electrostatic, electromagnetic, and beam-driven thrusters. A classification of electric propulsion schemes that provide thrust-vectoring capability is developed. More attention is given to schemes implemented in laboratory prototypes and flight models. The final part is devoted to a discussion of the suitability of different electric propulsion systems with thrust-vectoring capability for modern space mission operations. The thrust-vectoring capability of electric propulsion is necessary for inner and outer space satellites, which are at a disadvantage with conventional unidirectional propulsion systems due to their limited maneuverability.
Funding: Primarily supported by the National Key R&D Program of China [grant number 2021YFC3000904], the Jiangsu Provincial Key Technology R&D Program [grant number BE2022851], and the National Natural Science Foundation of China [grant number 42405035].
Abstract: Vector winds play a crucial role in weather and climate, as well as in the effective utilization of wind energy resources. However, limited research has been conducted on treating the wind field as a vector field in the evaluation of numerical weather prediction models. In this study, the authors treat vector winds as a whole by employing a vector field evaluation method, and evaluate the mesoscale model of the China Meteorological Administration (CMA-MESO) and the ECMWF forecast, with reference to the ERA5 reanalysis, in terms of multiple aspects of vector winds over eastern China in 2022. The results show that the ECMWF forecast is superior to CMA-MESO in predicting the spatial distribution and intensity of 10-m vector winds. Both models overestimate the wind speed in East China, and CMA-MESO overestimates the wind speed to a greater extent. The forecasting skill of the vector wind field in both models decreases with increasing lead time. The forecasting skill of CMA-MESO fluctuates more and decreases faster than that of the ECMWF forecast. There is a significant negative correlation between the models' vector wind forecasting skill and terrain height. This study provides a scientific evaluation of the local application of vector wind forecasts from the CMA-MESO model and the ECMWF forecast.
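The flavor of a vector-based evaluation can be conveyed by the minimal NumPy sketch below, which computes the root-mean-square vector wind difference and the mean speed bias on a common grid; it is not the specific vector field evaluation method used in the study, and the synthetic fields are placeholders.

```python
import numpy as np

def vector_wind_scores(u_f, v_f, u_r, v_r):
    """Compare forecast winds (u_f, v_f) with reference winds (u_r, v_r) on the same grid."""
    du, dv = u_f - u_r, v_f - v_r
    rmsvd = np.sqrt(np.mean(du**2 + dv**2))                          # RMS vector difference
    speed_bias = np.mean(np.hypot(u_f, v_f) - np.hypot(u_r, v_r))    # > 0 means overestimated speed
    return rmsvd, speed_bias

rng = np.random.default_rng(4)
u_r, v_r = rng.normal(3, 2, (50, 60)), rng.normal(1, 2, (50, 60))    # reference (e.g., reanalysis)
u_f = u_r + 0.4 + rng.normal(0, 0.5, (50, 60))                       # forecast with a speed bias
v_f = v_r + rng.normal(0, 0.5, (50, 60))
print(vector_wind_scores(u_f, v_f, u_r, v_r))
```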
Abstract: Diagnosing cardiac diseases relies heavily on electrocardiogram (ECG) analysis, but detecting myocardial infarction-related arrhythmias remains challenging due to irregular heartbeats and signal variations. Despite advancements in machine learning, achieving both high accuracy and low computational cost for arrhythmia classification remains a critical issue. Computer-aided diagnosis systems can play a key role in early detection, reducing mortality rates associated with cardiac disorders. This study proposes a fully automated approach for ECG arrhythmia classification using deep learning and machine learning techniques to improve diagnostic accuracy while minimizing processing time. The methodology consists of three stages: 1) preprocessing, where ECG signals undergo noise reduction and feature extraction; 2) feature identification, where deep convolutional neural network (CNN) blocks, combined with data augmentation and transfer learning, extract key parameters; 3) classification, where a hybrid CNN-SVM model is employed for arrhythmia recognition. CNN-extracted features were fed into a binary support vector machine (SVM) classifier, and model performance was assessed using five-fold cross-validation. Experimental findings demonstrated that the CNN2 model achieved 85.52% accuracy, while the hybrid CNN2-SVM approach significantly improved accuracy to 97.33%, outperforming conventional methods. This model enhances classification efficiency while reducing computational complexity. The proposed approach bridges the gap between accuracy and processing speed in ECG arrhythmia classification, offering a promising solution for real-time clinical applications. Its superior performance compared to nonlinear classifiers highlights its potential for improving automated cardiac diagnosis.
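The classification stage can be summarized by the short sketch below (Python/scikit-learn), where synthetic vectors stand in for the CNN-extracted beat features and an RBF SVM is scored with five-fold cross-validation as in the paper; the data and kernel settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# synthetic stand-in for CNN-extracted features of individual heartbeats
X = np.vstack([rng.normal(0.0, 1.0, (300, 64)), rng.normal(0.6, 1.0, (300, 64))])
y = np.array([0] * 300 + [1] * 300)                  # 0 = normal beat, 1 = arrhythmic beat

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(svm, X, y, cv=5)            # five-fold cross-validation
print("fold accuracies:", scores.round(3), "mean:", round(scores.mean(), 3))
```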
Funding: The National Key Research and Development Program of China (2021YFC2900300), the Natural Science Foundation of Guangdong Province (2024A1515030216), the MOST Special Fund from the State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences (GPMR202437), the Guangdong Province Introduced Innovative R&D Team (2021ZT09H399), and the Third Xinjiang Scientific Expedition Program (2022xjkk1301).
Abstract: The application of machine learning (ML) for pyrite discrimination establishes a robust foundation for constructing the ore-forming history of multi-stage deposits; however, published models face challenges related to limited, imbalanced datasets and oversampling. In this study, the dataset was expanded to approximately 500 samples for each type, including 508 sedimentary, 573 orogenic gold, 548 sedimentary exhalative (SEDEX), and 364 volcanogenic massive sulfide (VMS) pyrites, utilizing random forest (RF) and support vector machine (SVM) methodologies to enhance the reliability of the classifier models. The RF classifier achieved an overall accuracy of 99.8%, and the SVM classifier attained an overall accuracy of 100%. The models were evaluated by a five-fold cross-validation approach, with 93.8% accuracy for the RF and 94.9% for the SVM classifier. These results demonstrate the strong feasibility of pyrite classification, supported by a relatively large, balanced dataset and high accuracy rates. The classifier was employed to reveal the genesis of the controversial Keketale Pb-Zn deposit in NW China, whose classification among SEDEX, VMS, or a SEDEX-VMS transition has been inconclusive. Petrographic investigations indicated that the deposit comprises early fine-grained layered pyrite (Py1) and late recrystallized pyrite (Py2). Majority voting classified Py1 as the VMS type, with the accuracies of RF and SVM being 72.2% and 75%, respectively, and confirmed Py2 as an orogenic type with 74.3% and 77.1% accuracy, respectively. The new findings indicate that the Keketale deposit originated from a submarine VMS mineralization system, followed by late orogenic-type overprinting of metamorphism and deformation, which is consistent with the geological and geochemical observations. This study further emphasizes the advantages of ML methods in accurately and directly discriminating deposit types and reconstructing the formation history of multi-stage deposits.
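A stripped-down version of the classification and majority-voting step is sketched below (Python/scikit-learn). The synthetic trace-element table, the class encoding, and the single-classifier vote are illustrative assumptions rather than the published models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# synthetic stand-in for trace-element compositions of pyrite spot analyses
X = rng.normal(size=(400, 12))
y = rng.integers(0, 4, size=400)   # 0 = sedimentary, 1 = orogenic, 2 = SEDEX, 3 = VMS (assumed coding)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
svm = SVC(kernel="rbf")
print("RF  5-fold accuracy:", round(cross_val_score(rf, X, y, cv=5).mean(), 3))
print("SVM 5-fold accuracy:", round(cross_val_score(svm, X, y, cv=5).mean(), 3))

# classify an unknown pyrite generation by majority vote over all of its spot analyses
rf.fit(X, y)
py1_spots = rng.normal(size=(36, 12))                  # e.g., Py1 analyses from the studied deposit
print("majority class for Py1:", np.bincount(rf.predict(py1_spots)).argmax())
```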
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42177164), the Distinguished Youth Science Foundation of Hunan Province of China (Grant No. 2022JJ10073), and the Outstanding Youth Project of Hunan Provincial Department of Education, China (Grant No. 23B0008).
Abstract: In underground mining, especially in entry-type excavations, the instability of surrounding rock structures can lead to incalculable losses. As a crucial tool for stability analysis in entry-type excavations, the critical span graph must be updated to meet more stringent engineering requirements. Given this, this study introduces the support vector machine (SVM), along with multiple ensemble (bagging, adaptive boosting, and stacking) and optimization (Harris hawks optimization (HHO) and cuckoo search (CS)) techniques, to overcome the limitations of traditional methods. The analysis indicates that the hybrid model combining SVM, bagging, and CS strategies has good prediction performance, with a test accuracy of 0.86. Furthermore, the partition scheme of the critical span graph is adjusted based on the CS-BSVM model and 399 cases. Compared with previous empirical or semi-empirical methods, the new model overcomes the interference of subjective factors and possesses higher interpretability. Since relying solely on one technique cannot ensure prediction credibility, this study further introduces genetic programming (GP) and kriging interpolation techniques. The explicit expressions derived through GP can offer the stability probability value, and the kriging technique can provide interpolated definitions for two new subclasses. Finally, a prediction platform is developed based on the above three approaches, which can rapidly provide engineering feedback.
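The core of the best-performing combination, an SVM wrapped in bagging, can be expressed in a few lines (Python/scikit-learn). The two-feature synthetic cases and the fixed C and gamma values, which a cuckoo-search optimizer would normally tune, are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# synthetic excavation cases: columns = [span (m), rock mass rating]; 1 = stable, 0 = unstable
X = np.column_stack([rng.uniform(1, 40, 400), rng.uniform(20, 90, 400)])
y = (X[:, 1] - 1.5 * X[:, 0] + rng.normal(0, 8, 400) > 0).astype(int)

# bagged SVM (scikit-learn >= 1.2 uses the 'estimator' keyword); C and gamma are placeholders
base = SVC(kernel="rbf", C=10, gamma="scale")
model = BaggingClassifier(estimator=base, n_estimators=20, random_state=0)
print("5-fold accuracy:", round(cross_val_score(model, X, y, cv=5).mean(), 3))
```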
Funding: Supported by the National Key R&D Program of China (Grant 2022YFF0503700) and the National Natural Science Foundation of China (42474200 and 42174186).
Abstract: The Macao Science Satellite-1 (MSS-1) is the first scientific exploration satellite designed to measure the Earth's low-latitude magnetic field at high resolution and with high precision by collecting data in a near-equatorial orbit. Magnetic field data from MSS-1's onboard Vector Fluxgate Magnetometer (VFM), collected at a sample rate of 50 Hz, allow us to detect and investigate sources of magnetic data contamination, from DC to the relevant Nyquist frequency. Here we report two types of artificial disturbances in the VFM data. One is V-shaped events concentrated at night, with frequencies sweeping from the Nyquist frequency down to zero and back up. The other is 5-Hz events (ones that exhibit a distinct 5 Hz spectral peak); these events are always accompanied by intervals of spiky signals and are clearly related to the attitude control of the satellite. Our analyses show that VFM noise levels in the daytime are systematically lower than at night. The daily average noise levels exhibit a period of about 52 days. The V-shaped events are strongly correlated with higher VFM noise levels.
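The spectral signature of such 5-Hz events can be checked with a few lines of SciPy, sketched below on synthetic 50-Hz fluxgate data; the amplitude, duration, and noise level used here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 50.0                                   # VFM sample rate in Hz (Nyquist frequency: 25 Hz)
t = np.arange(0, 600, 1 / fs)               # ten minutes of one magnetic component
rng = np.random.default_rng(8)
b = rng.normal(0, 0.1, t.size) + 0.05 * np.sin(2 * np.pi * 5.0 * t)   # synthetic 5-Hz disturbance

f, psd = welch(b, fs=fs, nperseg=4096)      # Welch power spectral density estimate
mask = f > 1.0                              # ignore the low-frequency geomagnetic signal
print("dominant disturbance frequency:", round(float(f[mask][np.argmax(psd[mask])]), 2), "Hz")
```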
Funding: Financially supported by the National Natural Science Foundation of China (No. 52174297).
Abstract: The endpoint carbon content in the converter is critical for the quality of steel products, and accurately predicting this parameter is an effective way to reduce alloy consumption and improve smelting efficiency. However, most scholars currently focus on modifying methods to enhance model accuracy while overlooking the extent to which the input parameters influence accuracy. To address this issue, in this study a prediction model for the endpoint carbon content in the converter was developed using factor analysis (FA) and a support vector machine (SVM) optimized by improved particle swarm optimization (IPSO). Analysis of the factors influencing the endpoint carbon content during the converter smelting process led to the identification of 21 input parameters. Subsequently, FA was used to reduce the dimensionality of the data and applied to the prediction model. The results demonstrate that the performance of the FA-IPSO-SVM model surpasses several existing methods, such as twin support vector regression and the support vector machine. The model achieves hit rates of 89.59%, 96.21%, and 98.74% within error ranges of ±0.01%, ±0.015%, and ±0.02%, respectively. Finally, based on the prediction results obtained by sequentially removing input parameters, the parameters were classified into high-influence (5%-7%), medium-influence (2%-5%), and low-influence (0-2%) categories according to their varying degrees of impact on prediction accuracy. This classification provides a reference for selecting input parameters in future prediction models for endpoint carbon content.
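The hit-rate criterion used above is simple to reproduce; the sketch below (Python/NumPy, on synthetic predictions rather than the converter data) counts the fraction of heats whose predicted endpoint carbon falls within a given tolerance of the measurement.

```python
import numpy as np

def hit_rate(pred, true, tol):
    """Fraction of heats with |predicted - measured| endpoint carbon <= tol (all in mass%)."""
    return float(np.mean(np.abs(pred - true) <= tol))

rng = np.random.default_rng(9)
true = rng.uniform(0.03, 0.09, 1000)            # measured endpoint carbon, mass% (synthetic stand-in)
pred = true + rng.normal(0.0, 0.008, 1000)      # predictions from any endpoint-carbon model
for tol in (0.01, 0.015, 0.02):
    print(f"hit rate within ±{tol}%: {hit_rate(pred, true, tol):.2%}")
```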