The model of heat source (MHS), which reflects the thermal interaction between the laser and the material during processing, determines the accuracy of simulation results. Although various modifications of heat sources concerning the absorption of laser energy by materials have been proposed to obtain satisfactory simulations, the distribution of laser power density (DLPD) in the MHS is still modeled theoretically. In actual laser processing, however, the DLPD differs markedly from the ideal models, so it is indispensable to build the MHS from the actual DLPD to improve simulation accuracy. Moreover, an automatic modeling method helps simplify the tedious pre-processing of simulations. This paper presents a modeling method, and a corresponding algorithm, for building a heat source from the measured DLPD. The algorithm automatically processes the original data to obtain modeling parameters and yields a step MHS when combined with absorption models. Simulations and experiments on heat transfer in laser-irradiated steel plates validate the method and the step MHS. Moreover, investigations of laser-induced thermal-crack propagation in glass highlight the significance of modeling the heat source from the actual DLPD and demonstrate the broad applicability of this method in laser-processing simulation.
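The step-MHS construction described above can be sketched as a small routine that collapses a measured radial power-density profile into annular steps. The near-Gaussian profile, the function name, and the number of steps below are illustrative assumptions, not the paper's measured DLPD or algorithm.

```python
import numpy as np

# Hypothetical measured radial power-density profile (W/mm^2); a synthetic
# near-Gaussian stand-in for the paper's measured DLPD
r = np.linspace(0.0, 2.0, 200)           # radius, mm
intensity = 100.0 * np.exp(-2.0 * r**2)  # measured-style intensity values

def step_source(r, intensity, n_steps):
    """Collapse a radial profile into n_steps annular rings, each carrying
    the mean power density of its ring (a step heat-source model)."""
    edges = np.linspace(r[0], r[-1], n_steps + 1)
    levels = np.empty(n_steps)
    for k in range(n_steps):
        mask = (r >= edges[k]) & (r <= edges[k + 1])
        levels[k] = intensity[mask].mean()
    return edges, levels

edges, levels = step_source(r, intensity, 5)
print(levels[0] > levels[-1])  # True: inner rings carry the higher power density
```

In a simulation, each ring's level would then be applied as a uniform surface flux over its annulus, scaled by the chosen absorption model.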
The relationships between the selective laser melting (SLM) processing parameters (laser power, scanning speed, and hatch space), the relative density, the microstructure, and the resulting mechanical properties of Ti-6Al-2Zr-1Mo-1V alloy were investigated in this work. The results show that laser power plays the dominant role in determining the relative density, compared with scanning speed and hatch space. The optimal SLM process window for fabricating samples with relative density >99% lies in the energy density range of 34.72 J·mm^(-3) to 52.08 J·mm^(-3), where the laser power is between 125 W and 175 W. The micro-hardness shows an upward trend as the energy density increases. The optimum SLM processing parameters for Ti-6Al-2Zr-1Mo-1V alloy are: laser power of 150 W, scanning speed of 1,600 mm·s^(-1), hatch space of 0.08 mm, and layer thickness of 0.03 mm. The highest ultimate tensile strength, yield strength, and ductility, achieved under the optimum processing parameters, are 1,205 MPa, 1,099 MPa, and 8%, respectively. The results of this study can be used to guide SLM production of Ti-6Al-2Zr-1Mo-1V alloy parts.
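The reported optimum can be cross-checked against the stated process window using the standard volumetric energy density definition E = P/(v·h·t); the definition itself is assumed here, since the abstract does not spell it out.

```python
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Optimum parameters reported above: 150 W, 1600 mm/s, 0.08 mm, 0.03 mm
e = energy_density(150, 1600, 0.08, 0.03)
print(round(e, 2))  # 39.06 J/mm^3, inside the 34.72-52.08 J/mm^3 window
```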
Schlieren interferograms used to be analyzed only qualitatively. In this paper, by exploiting the computational power and large memory of modern computers, an image processing method is investigated for digitizing an axisymmetric schlieren interferogram and determining the density field. The method comprises 2-D low-pass filtering, thinning of the interferometric fringes, extraction of physical information, and numerical integration of the density field. The image processing results show that the accuracy of the quantitative analysis of the schlieren interferogram can be improved and that considerable time is saved in dealing with optical experimental results. The algorithm used here is therefore useful and efficient.
Data-driven tools such as principal component analysis (PCA) and independent component analysis (ICA) have been applied to various benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA may remain statistically dependent, whereas ICA has no orthogonality constraint and its latent variables are independent. Process monitoring with PCA often assumes that the process data or principal components follow a Gaussian distribution. However, this constraint is not satisfied by many practical processes. To extend the use of PCA, a nonparametric method is added to overcome this difficulty, and kernel density estimation (KDE) is a good choice. Although ICA is based on non-Gaussian distribution information, KDE can still help in the close monitoring of the data. The methods PCA, ICA, PCA with KDE (KPCA), and ICA with KDE (KICA) are demonstrated and compared by applying them to a practical industrial Spheripol-process polypropylene catalyzer reactor rather than a laboratory emulator.
China's continental deposition basins are characterized by complex geological structures and varied reservoir lithologies, so high-precision exploration methods are needed. High density spatial sampling is a new technology for increasing the accuracy of seismic exploration. We briefly discuss point source and receiver technology, analyze the high density spatial sampling in situ method, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high density spatial sampling technology from the point of view of wave field continuity. We emphasize the analysis of the characteristics of high density spatial sampling, including the advantages of high density first breaks for investigating near-surface structure and improving static correction precision, and the use of dense receiver spacing at short offsets to increase the effective coverage at shallow depth and the accuracy of reflection imaging. Coherent noise is not aliased, so noise analysis precision and suppression improve as a result. High density spatial sampling enhances wave field continuity and the accuracy of various mathematical transforms, which benefits wave field separation. Finally, we point out that the difficult part of high density spatial sampling technology is the data processing, and that more research is needed on methods for analyzing and processing huge volumes of seismic data.
Thick electrodes can increase the incorporation of active electrode materials by diminishing the proportion of inactive constituents, improving the overall energy density of batteries. However, thick electrodes fabricated by the conventional slurry casting approach frequently exhibit an exacerbated accumulation of carbon additives and binders on their surfaces, invariably leading to compromised electrochemical properties. In this study, we introduce a designed conductive agent/binder composite synthesized from carbon nanotubes and polytetrafluoroethylene. This composite enables the production of dry-processed ultra-thick electrodes endowed with a three-dimensional, uniformly distributed percolative architecture, ensuring superior electronic conductivity and remarkable mechanical resilience. Using this approach, ultra-thick LiCoO_(2) (LCO) electrodes demonstrated superior cycling performance and rate capability, registering an impressive loading of up to 101.4 mg/cm^(2) and a 242% increase in battery energy density. In a complementary analysis, time-of-flight secondary ion mass spectrometry was used to clarify the distribution of the cathode electrolyte interphase (CEI) in cycled LCO electrodes. The results provide unprecedented evidence of the intricate correlation between CEI generation and carbon distribution, highlighting the intrinsic advantage of the proposed dry-process approach in fine-tuning the CEI and delivering excellent cycling performance in batteries equipped with ultra-thick electrodes.
The design, analysis, and parallel implementation of the particle filter (PF) were investigated. First, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. Running the IIDF yields an iterated PF (IPF). Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights; these are the two key differences from the SR. Finally, detailed procedures for implementing the IPF with PR on a graphics processing unit are presented. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
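The PR scheme's deterministic replication rule can be sketched as below. The abstract specifies the floor(N·w_i) replication count and the elimination of the smallest-weight particles; how the leftover slots are filled is not stated, so assigning them to the largest-weight particles is an assumption of this sketch.

```python
import numpy as np

def parallel_resample(weights):
    """Sketch of the parallel resampling (PR) idea: particle i is replicated
    floor(N * w_i) times, and the leftover slots are filled by further
    duplicating the largest-weight particles, so the smallest-weight
    particles are eliminated without a sequential cumulative-sum scan."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    counts = np.floor(n * w).astype(int)   # deterministic replication counts
    deficit = n - counts.sum()             # slots still to assign
    order = np.argsort(-w)                 # highest weights first
    counts[order[:deficit]] += 1
    return np.repeat(np.arange(n), counts)

idx = parallel_resample([0.1, 0.4, 0.3, 0.2])
print(idx.tolist())  # [1, 1, 2, 2]: low-weight particles 0 and 3 are dropped
```

Because each particle's replication count depends only on its own weight, the counts can be computed independently per thread, which is what makes the scheme GPU-friendly.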
HIsarna is a promising ironmaking technology for reducing CO2 emissions. Information on phase transformation is essential for reaction analysis of the cyclone reactor of the HIsarna process. In addition, data on the density and volume of the ore particles are necessary for estimating the residence time of the particles in the cyclone reactor. The phase transformation of iron ore particles was studied experimentally in a drop-tube furnace under simulated cyclone conditions and compared with thermodynamic calculations. During the pre-reduction process inside the reactor, the mineralogy of the iron ore particles transforms sequentially from hematite to sub-oxides. The density changes of the particles during melting and reduction can be predicted from the phase composition and temperature. Accordingly, density models from the literature were evaluated against reported experimental data for slag, and a more reliable density model was developed to calculate the density of the formed slag, which contains mainly FeO–Fe2O3. The density and volume of the partially reduced ore particles or melt droplets were estimated with this model. The results show that the density of the ore particles decreases by at most 15.1% along the progressive reduction process. Furthermore, the model results indicate that heating, melting, and reduction of the ore can lead to 6.63%–9.37% swelling of the particles, contributed mostly by thermal expansion. This would produce a corresponding variation in the velocity of the ore particles or melt droplets during their flight inside the reactor.
The dynamics of zero-range processes on complex networks is expected to be influenced by the topological structure of the underlying networks, and a real-space complete condensation phase transition may occur in the stationary state. We study finite-density effects on the condensation transition in both stationary and dynamical zero-range processes on scale-free networks. By means of the grand canonical ensemble method, we analytically predict the scaling laws of the average occupation number with respect to the finite density for the steady state. We further explore the relaxation dynamics of the condensation phase transition: by applying hierarchical evolution and a scaling ansatz, a scaling law for the relaxation dynamics is predicted. Monte Carlo simulations are performed, and the predicted density scaling laws are nicely validated.
This paper shows that for a supercritical contact process in one dimension, if the initial distribution satisfies reasonable hypotheses, then the first hitting time of a certain set with anomalously small density is asymptotically exponentially distributed. We also show a similar result for the contact process on the finite set [0, N].
This paper focuses on the identification problem of a neuro-fuzzy model (NFM) applied in batch processes. A hybrid learning algorithm is introduced to identify the proposed NFM, based on the idea of an auxiliary error model and an identification principle built on the probability density function (PDF). The main contribution is that the NFM parameter updating is transformed into shape control of the PDF of the modeling error. More specifically, a virtual adaptive control system is constructed with the aid of the auxiliary error model, and the PDF shape control idea is then used to tune the NFM parameters so that the PDF of the modeling error is controlled to follow a target PDF with a Gaussian or uniform distribution. Examples are used to validate the applicability of the proposed method, and comparisons are made with minimum-mean-square-error-based approaches.
Modeling of energy consumption (EC) and effluent quality (EQ) are essential problems that must be solved for multiobjective optimal control of the wastewater treatment process (WWTP). To address this issue, a density peaks-based adaptive fuzzy neural network (DP-AFNN) is proposed in this study. To obtain suitable fuzzy rules, a DP-based clustering method is applied to fit the cluster centers to the process nonlinearity. The parameters of the extracted fuzzy rules are fine-tuned with an improved Levenberg-Marquardt algorithm during training. Furthermore, a convergence analysis is performed to guarantee the successful application of the DP-AFNN. Finally, the proposed DP-AFNN is used to develop models of EC and EQ in the WWTP. The experimental results show that the proposed DP-AFNN achieves fast convergence and high prediction accuracy in comparison with several existing methods.
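The density-peaks (DP) clustering step that selects cluster centers rests on two quantities from the standard DP algorithm: a local density rho_i (neighbours within a cutoff d_c) and a separation delta_i (distance to the nearest higher-density point); points with large rho and large delta are taken as centers. The toy points and cutoff below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def density_peaks(points, d_c):
    """Compute the density-peaks quantities rho (neighbours within d_c)
    and delta (distance to the nearest point of higher density)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1              # exclude the point itself
    delta = np.empty(len(pts))
    for i in range(len(pts)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta

# Two tight toy clusters; their densest points should emerge as centers
pts = [[0, 0], [0.1, 0], [-0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]]
rho, delta = density_peaks(pts, d_c=0.12)
centres = sorted(np.argsort(-(rho * delta))[:2].tolist())
print(centres)  # [0, 4]: one center per cluster, as expected
```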
Atrazine causes concern because it resists biodegradation and can accumulate in aquatic organisms, polluting lakes. This study measured the concentration of atrazine in ice and in the water under the ice in a simulated icing experiment, and calculated the distribution coefficient K to characterize its migration ability during freezing. Furthermore, density functional theory (DFT) calculations were employed to elucidate the migration behavior of atrazine during the icing process. According to the results, more energy is released to the environment when atrazine stays in the water phase (-15.077 kcal/mol) than in the ice phase (-14.388 kcal/mol), so migration of atrazine from ice to water is favored. This explains why, during freezing, the concentration of atrazine in the ice was lower than that in the water. Thermodynamic calculations indicated that when the temperature decreases from 268 to 248 K, the internal-energy contribution of the complex of atrazine with an ice molecule (water cluster) decreases at the same vibrational frequency, raising the free-energy difference of the complex from -167.946 to -165.390 kcal/mol. This demonstrates the diminished migratory capacity of atrazine. The study reveals the environmental behavior of atrazine during lake freezing, which benefits the management of atrazine and other pollutants during freezing and environmental protection.
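The energetic argument above reduces to a small difference between the two reported interaction energies, which can be checked directly:

```python
# Interaction energies reported above (kcal/mol)
e_water = -15.077  # atrazine staying in the water phase
e_ice = -14.388    # atrazine staying in the ice phase

# Negative difference: the water phase is favoured by ~0.69 kcal/mol,
# consistent with atrazine migrating from ice into water on freezing
delta = e_water - e_ice
print(round(delta, 3))  # -0.689
```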
Two new approaches for accurately predicting the densities of the glycol solutions commonly used in the gas-processing industry are presented in this article. The first approach develops a simple-to-use polynomial correlation that predicts the density of glycol solutions as a function of temperature and the weight percent of glycol in water; the results show very good agreement with the reported experimental data. The second approach is based on the artificial neural network (ANN) methodology, and the results demonstrate its ability to predict reasonably accurate glycol densities under operating conditions. Comparison of the two approaches indicates that the simple-to-use correlation is superior owing to its simplicity and clear numerical background; its coefficients can be retuned if new and more accurate data become available in the future. The average deviation of the proposed polynomial correlation from the reported data is 0.64 kg/m^3, whereas that of the ANN methodology is 1.1 kg/m^3.
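The first approach, a polynomial correlation in temperature and glycol weight fraction, can be sketched with a least-squares fit. The density values, temperatures, and the bilinear basis below are illustrative assumptions; the article's actual functional form and coefficients are not given in the abstract.

```python
import numpy as np

# Hypothetical glycol-solution densities (kg/m^3) versus temperature T (K)
# and glycol weight fraction w -- stand-in values, not the article's data
T = np.array([283.0, 293.0, 303.0, 313.0, 283.0, 293.0, 303.0, 313.0])
w = np.array([0.2, 0.2, 0.2, 0.2, 0.6, 0.6, 0.6, 0.6])
rho = np.array([1024.0, 1020.5, 1016.3, 1011.4,
                1073.2, 1068.1, 1062.4, 1056.0])

# Simple-to-use correlation in the same spirit: rho = a0 + a1*T + a2*w + a3*T*w
X = np.column_stack([np.ones_like(T), T, w, T * w])
coef, *_ = np.linalg.lstsq(X, rho, rcond=None)

aad = np.mean(np.abs(X @ coef - rho))  # average absolute deviation, kg/m^3
print(aad < 1.0)  # True: the correlation reproduces the data closely
```

Retuning the correlation when better data appear, as the article suggests, amounts to re-running the same least-squares step on the new measurements.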
Combining magnesium alloys with the low-pressure expendable pattern casting (LP-EPC) process promises a bright future for the application of magnesium alloys. This research focuses on the effect of process parameters on the internal casting quality of magnesium alloy parts. AZ91D magnesium alloy castings were produced with different combinations of the LP-EPC process parameters. Specifically, pouring temperature, vacuum, filling velocity, and their coupled action were varied to observe their effect on casting porosity and density distribution. The results indicate that the pouring temperature in the LP-EPC process is lower than that in gravity casting. The selected process parameters, such as vacuum, filling velocity, and their coupled modes, must ensure that the flow front of the melt exhibits a smooth, convex profile. The optimal process parameters for the castings are a pouring temperature of 983-1023 K, vacuum of 0.02-0.03 MPa, filling velocity of 60-95 mm/s, and simultaneous filling with suction.
The Datun mining area is the test area of this paper, and the purpose is to obtain its mining land cover classification map for 2003. The data source is Landsat Enhanced Thematic Mapper Plus (ETM+) remote sensing data. After obtaining the normalized difference vegetation index (NDVI) of the study area, the land cover classification information is extracted using the density segmentation method. Because this result cannot distinguish construction land from wetland, the humidity information of the multi-spectral data is obtained through the tasseled cap transformation; in the density segmentation image of the humidity information, the construction and wetland types can be clearly distinguished. Finally, the two classification maps are combined, and visual interpretation using the 15-m resolution Landsat ETM+ fusion image yields the final classification results. After classification accuracy assessment, the overall accuracy calculated from the classification confusion matrix is 86%. This result can be applied in actual projects.
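The NDVI step uses the standard band-ratio definition NDVI = (NIR - Red)/(NIR + Red), and density segmentation then amounts to thresholding the NDVI values into classes. The reflectance patches and class thresholds below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical 3x3 patches of red and near-infrared reflectance
red = np.array([[0.08, 0.10, 0.30],
                [0.09, 0.11, 0.28],
                [0.25, 0.27, 0.05]])
nir = np.array([[0.45, 0.50, 0.32],
                [0.48, 0.52, 0.30],
                [0.28, 0.30, 0.02]])

ndvi = (nir - red) / (nir + red)  # standard NDVI definition

# Density segmentation as thresholding: <0 water, 0-0.2 bare/built, >=0.2 vegetation
classes = np.digitize(ndvi, bins=[0.0, 0.2])
print(classes.tolist())  # [[2, 2, 1], [2, 2, 1], [1, 1, 0]]
```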
A new identification method for the neuro-fuzzy Hammerstein model based on the probability density function (PDF) is presented, differing from traditional identification methods in which the mean squared error (MSE) is employed as the index function. First, a neuro-fuzzy-based Hammerstein model is constructed to describe the nonlinearity of the Hammerstein process without any prior process knowledge. Second, a special kind of test signal is used to separate the constituent blocks of the Hammerstein model. More specifically, the concept of the PDF is introduced to solve the identification problem of the neuro-fuzzy Hammerstein model. The antecedent parameters are estimated by a clustering algorithm, while the consequent parameters are identified by designing a virtual PDF control system in which the PDF of the modeling error is estimated and controlled to converge to the target. The proposed method not only guarantees the accuracy of the model but also shapes the spatial distribution of the PDF of the model error, improving the generalization ability of the model. Simulation results show the effectiveness of the proposed method.
Complex industrial processes often need multiple operation modes to meet changing production conditions, and within the same mode there are discrete samples belonging to that mode, so it is important to account for samples that are sparse within a mode. To solve this issue, a new approach called density-based support vector data description (DBSVDD) is proposed. In this article, an algorithm combining the Gaussian mixture model (GMM) with the DBSVDD technique is proposed for process monitoring. The GMM method is used to obtain the center of each mode and determine the number of modes. Considering the complexity of the data distribution and the discrete samples in the monitored process, the DBSVDD is used for process monitoring. Finally, the validity and effectiveness of the DBSVDD method are illustrated through the Tennessee Eastman (TE) process.
Smearing methods have been used to compute temperature-dependent phonon dispersions and to predict critical temperatures of charge density waves, but they usually yield much higher values because of their ambiguous mechanism for modeling temperature effects. Here, a three-temperature model was developed to describe the energy transfer process between electrons, soft-mode phonons, and non-soft-mode phonons. In particular, the soft-mode phonons induced by mode-selective smearing were assigned their own temperature to analyze their contribution to the relaxation between electrons and phonons, and a relative standard was established to screen soft-mode phonons quantitatively for different materials. In addition, three smearing methods (Fermi–Dirac, Gaussian, and Methfessel–Paxton) and eight materials (monolayer or bulk TX_(2), T = Ti, Nb, Ta and X = Se, S) were tested. Critical temperatures corrected by the three-temperature model agree well with experimental results. This work provides new insights into correctly predicting the critical temperatures of charge density waves, addressing the relaxation of electrons and phonons with smearing methods, and determining phase transitions by phonon softening.
The aims of this study are to develop the color density concept and to propose color density-based color difference formulas. The color density is defined using metric coefficients based on discrimination ellipses and the locations of the colors in the color space. The ellipse sets are the MacAdam ellipses in the CIE 1931 xy-chromaticity diagram and the chromaticity-discrimination ellipses in the CIELAB space; the latter set was originally used to develop the CIEDE2000 color difference formula. The color difference can be calculated from the color densities of the two colors under consideration. As a result, the color density represents the perceived color difference more accurately, and it could be used to characterize a color by a quantitative attribute that matches the perceived color difference from this color more closely. The color density concept thus provides a simple correction term for estimating color differences. In the experiments, the line element formula and the CIEDE2000 color difference formula performed better than the color density-based difference measures. The reason lies in the current modeling of the color density concept: discrimination ellipses are typically described by three-dimensional data consisting of the major axis, the minor axis, and the inclination angle, whereas the proposed color density is only a one-dimensional corrector for color differences and thus cannot capture all the details of the ellipse information. Still, the color density gives clearly more accurate estimates of perceived color differences than Euclidean distances computed directly from the coordinates of the color space.
Funding (heat-source modeling study): Project (2021YFF0500200) supported by the National Key R&D Program of China; Project (52105437) supported by the National Natural Science Foundation of China; Project (202006120184) supported by the Heilongjiang Provincial Postdoctoral Science Foundation, China; Project (LBH-Z20054) supported by the China Scholarship Council.
Funding (SLM of Ti-6Al-2Zr-1Mo-1V study): supported by the Liaoning Doctoral Research Start-up Fund project (Grant No. 2023-BS-215).
Funding (process-monitoring study): supported by the National Natural Science Foundation of China (No. 60574047) and the Doctorate Foundation of the State Education Ministry of China (No. 20050335018).
文摘Abstract Data-driven tools, such as principal component analysis (PCA) and independent component analysis (ICA) have been applied to different benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA are still dependent while ICA has no orthogonality constraint and its latentvariables are independent. Process monitoring with PCA often supposes that process data or principal components is Gaussian distribution. However, this kind of constraint cannot be satisfied by several practical processes. To ex-tend the use of PCA, a nonparametric method is added to PCA to overcome the difficulty, and kernel density estimation (KDE) is rather a good choice. Though ICA is based on non-Gaussian distribution intormation, .KDE can help in the close monitoring of the data. Methods, such as PCA, ICA, PCA.with .KDE(KPCA), and ICA with KDE,(KICA), are demonstrated and. compared by applying them to a practical industnal Spheripol craft polypropylene catalyzer reactor instead of a laboratory emulator.
Abstract: China's continental deposition basins are characterized by complex geological structures and various reservoir lithologies, so high-precision exploration methods are needed. High-density spatial sampling is a new technology for increasing the accuracy of seismic exploration. We briefly discuss point-source and point-receiver technology, analyze the high-density spatial sampling in-situ method, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high-density spatial sampling technology from the point of view of wave-field continuity. We emphasize the characteristics of high-density spatial sampling, including the advantages of high-density first breaks for investigating near-surface structure and improving static-correction precision, and the use of dense receiver spacing at short offsets to increase the effective coverage at shallow depth and the accuracy of reflection imaging. Coherent noise is not aliased, so the precision of noise analysis and suppression increases. High-density spatial sampling enhances wave-field continuity and the accuracy of various mathematical transforms, which benefits wave-field separation. Finally, we point out that the difficult part of high-density spatial sampling technology is the data processing. More research needs to be done on methods for analyzing and processing huge amounts of seismic data.
Funding: Supported by the National Key Research and Development Program of China (2019YFA0705102) and the National Natural Science Foundation of China (22179144, 22005332).
Abstract: Thick electrodes can increase the incorporation of active electrode materials by diminishing the proportion of inactive constituents, improving the overall energy density of batteries. However, thick electrodes fabricated using the conventional slurry-casting approach frequently exhibit an exacerbated accumulation of carbon additives and binders on their surfaces, invariably leading to compromised electrochemical properties. In this study, we introduce a designed conductive-agent/binder composite synthesized from carbon nanotubes and polytetrafluoroethylene. This composite facilitates the production of dry-process-prepared ultra-thick electrodes endowed with a three-dimensional, uniformly distributed percolative architecture, ensuring superior electronic conductivity and remarkable mechanical resilience. Using this approach, ultra-thick LiCoO_(2) (LCO) electrodes demonstrated superior cycling performance and rate capabilities, registering an impressive loading of up to 101.4 mg/cm^(2) and signifying a 242% increase in battery energy density. In another analytical endeavor, time-of-flight secondary-ion mass spectrometry was used to clarify the distribution of the cathode electrolyte interphase (CEI) in cycled LCO electrodes. The results provide unprecedented evidence of the intricate correlation between CEI generation and carbon distribution, highlighting the intrinsic advantages of the proposed dry-process approach in fine-tuning the CEI, with excellent cycling performance in batteries equipped with ultra-thick electrodes.
Funding: Project (61372136) supported by the National Natural Science Foundation of China.
Abstract: The design, analysis, and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle-degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through repeated use of the least-squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. Running the IIDF yields an iterated PF (IPF). Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights; these are the two key differences from the SR. The detailed implementation procedures of the PR-based IPF on a graphics processing unit are presented at last. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive-radar target tracking.
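The PR rule described above can be sketched as follows. The floor-replication count and the elimination of the smallest-weight particles follow the abstract; the exact rule for filling the leftover slots (extra copies of the largest-weight particles here) is an assumption for illustration:

```python
import numpy as np

def pr_resample(particles, weights):
    """Sketch of the parallel-resampling (PR) idea: each particle is
    copied floor(N * w_i) times, so the smallest-weight particles get
    zero copies and are eliminated. The remaining slots are filled with
    one extra copy of each of the largest-weight particles (assumed
    fill rule, not stated in the abstract)."""
    n = len(particles)
    counts = np.floor(n * weights).astype(int)
    deficit = n - counts.sum()
    # one extra copy for each of the `deficit` largest weights
    for i in np.argsort(weights)[::-1][:deficit]:
        counts[i] += 1
    return np.repeat(particles, counts)
```

Because each particle's copy count depends only on its own weight (no cumulative-sum scan over all weights, as in SR), the counts can be computed independently per thread, which is what makes the scheme GPU-friendly.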
Abstract: HIsarna is a promising ironmaking technology for reducing CO2 emissions. Information on phase transformation is essential for reaction analysis of the cyclone reactor of the HIsarna process. In addition, data on the density and volume of the ore particles are necessary for estimating the residence time of the particles in the cyclone reactor. The phase transformation of iron ore particles was studied experimentally in a drop-tube furnace under simulated cyclone conditions and compared with thermodynamic calculations. During the pre-reduction process inside the reactor, the mineralogy of the iron ore particles transforms sequentially from hematite to sub-oxides. The density changes of the particles during melting and reduction can be predicted from the phase composition and temperature. Therefore, density models in the literature were evaluated against reported experimental data for slag. As a result, a more reliable density model was developed to calculate the density of the formed slag containing mainly FeO–Fe2O3. The density and volume of the partially reduced ore particles or melt droplets were estimated based on this model. The results show that the density of the ore particles decreases by at most 15.1% along the progressive reduction process. Furthermore, the model results also indicate that heating, melting, and reduction of the ore could lead to 6.63%–9.37% swelling of the particles, which is mostly contributed by thermal expansion. This would result in a corresponding variation in the velocity of the ore particles or melt droplets during their flight inside the reactor.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11505115).
Abstract: The dynamics of zero-range processes on complex networks is expected to be influenced by the topological structure of the underlying networks. A real-space complete condensation phase transition may occur in the stationary state. We study the finite-density effects of the condensation transition in both stationary and dynamical zero-range processes on scale-free networks. By means of the grand canonical ensemble method, we analytically predict the scaling laws of the average occupation number with respect to the finite density for the steady state. We further explore the relaxation dynamics of the condensation phase transition. By applying hierarchical evolution and a scaling ansatz, a scaling law for the relaxation dynamics is predicted. Monte Carlo simulations are performed, and the predicted density scaling laws are nicely validated.
Funding: Supported in part by the National Natural Science Foundation of China.
Abstract: This paper shows that for a supercritical contact process in one dimension, if the initial distribution satisfies reasonable hypotheses, then the first hitting time of a certain set with anomalously small density is asymptotically exponentially distributed. We also show a similar result for the contact process on the finite set [0, N].
Funding: Supported by the National Natural Science Foundation of China (61374044), the Shanghai Science and Technology Commission (12510709400), the Shanghai Municipal Education Commission (14ZZ088), and the Shanghai Talent Development Plan.
Abstract: This paper focuses on the identification problem of a neuro-fuzzy model (NFM) applied to batch processes. A hybrid learning algorithm is introduced to identify the proposed NFM using the idea of an auxiliary error model and an identification principle based on the probability density function (PDF). The main contribution is that the NFM parameter-updating approach is transformed into shape control of the PDF of the modeling error. More specifically, a virtual adaptive control system is constructed with the aid of the auxiliary error model, and the PDF shape-control idea is then used to tune the NFM parameters so that the PDF of the modeling error is controlled to follow a target PDF with a Gaussian or uniform distribution. Examples are used to validate the applicability of the proposed method, and comparisons are made with minimum-mean-square-error-based approaches.
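The PDF-shape criterion can be illustrated by estimating the modeling-error PDF with a kernel density estimate and measuring its distance from a Gaussian target. This is a generic sketch of the idea, not the paper's algorithm; the bandwidth, grid, and squared-integral distance are assumed choices:

```python
import numpy as np

def kde_pdf(errors, grid, bandwidth):
    """Gaussian kernel density estimate of the modeling-error PDF."""
    diffs = (grid[:, None] - errors[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (len(errors) * bandwidth)

def pdf_shape_loss(errors, target_mu=0.0, target_sigma=1.0, bandwidth=0.3):
    """Integrated squared distance between the estimated error PDF and a
    Gaussian target PDF; driving this toward zero is the spirit of the
    PDF-shape control criterion (illustrative distance, not the paper's)."""
    grid = np.linspace(target_mu - 4.0, target_mu + 4.0, 200)
    est = kde_pdf(errors, grid, bandwidth)
    target = np.exp(-0.5 * ((grid - target_mu) / target_sigma) ** 2) \
             / (target_sigma * np.sqrt(2.0 * np.pi))
    return ((est - target) ** 2).sum() * (grid[1] - grid[0])
```

Tuning the model parameters to minimize such a loss shapes the whole error distribution, rather than only its second moment as an MSE criterion would.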
基金supported by the National Science Foundation for Distinguished Young Scholars of China(61225016)the State Key Program of National Natural Science of China(61533002)
Abstract: Modeling of energy consumption (EC) and effluent quality (EQ) is essential for multiobjective optimal control of the wastewater treatment process (WWTP). To address this issue, a density-peaks-based adaptive fuzzy neural network (DP-AFNN) is proposed in this study. To obtain suitable fuzzy rules, a DP-based clustering method is applied to fit the cluster centers to the process nonlinearity. The parameters of the extracted fuzzy rules are fine-tuned based on the improved Levenberg-Marquardt algorithm during the training process. Furthermore, a convergence analysis is performed to guarantee the successful application of the DP-AFNN. Finally, the proposed DP-AFNN is utilized to develop models of EC and EQ in the WWTP. The experimental results show that the proposed DP-AFNN achieves fast convergence and high prediction accuracy in comparison with some existing methods.
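The DP-based clustering step rests on the two standard density-peaks quantities. A minimal numpy sketch of the generic density-peaks computation (not the paper's adaptive variant; the cutoff distance d_c is an assumed parameter):

```python
import numpy as np

def density_peaks(points, d_c):
    """Compute the two density-peaks quantities for each point:
    rho   -- number of neighbours within cutoff distance d_c
    delta -- distance to the nearest point of higher density.
    Cluster centres are the points where both rho and delta are large."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta
```

Selecting the points with jointly large rho and delta yields the cluster centres, which in the DP-AFNN setting would seed the fuzzy-rule centres.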
Funding: This work was supported by the Key Research and Development Program of Shandong Province (No. 2019GHY112033) and the National Natural Science Foundation of China (No. 51609207).
Abstract: Atrazine causes concern due to its resistance to biodegradation and its accumulation in aquatic organisms, causing pollution in lakes. This study measured the concentration of atrazine in ice and in the water under the ice through a simulated icing experiment and calculated the distribution coefficient K to characterize its migration ability during the freezing process. Furthermore, density functional theory (DFT) calculations were employed to explain the migration behavior of atrazine during the icing process. According to the results, more energy is released into the environment when atrazine stays in the water phase (-15.077 kcal/mol) than when it stays in the ice phase (-14.388 kcal/mol); the migration of atrazine from ice to water is therefore favorable. This explains why, during the freezing process, the concentration of atrazine in the ice was lower than that in the water. Thermodynamic calculations indicated that when the temperature decreases from 268 to 248 K, the internal-energy contribution of the complex of atrazine and an ice molecule (water cluster) decreases at the same vibrational frequency, resulting in an increase in the free-energy difference of the complex from -167.946 to -165.390 kcal/mol. This demonstrates the diminished migratory capacity of atrazine. This study reveals the environmental behavior of atrazine during lake freezing, which is beneficial for the management of atrazine and other pollutants during freezing and for environmental protection.
Abstract: Two new approaches for the accurate prediction of the densities of glycol solutions commonly used in the gas-processing industry are presented in this article. The first approach is based on developing a simple-to-use polynomial correlation for predicting the density of glycol solutions as a function of temperature and the weight percent of glycol in water; the obtained results show very good agreement with the reported experimental data. The second approach is based on the artificial neural network (ANN) methodology, and the results demonstrate the ability of this method to predict reasonably accurate glycol densities under operating conditions. Comparison of the two approaches indicates that the simple-to-use correlation is superior owing to its simplicity and clear numerical background; the relevant coefficients can be retuned if new and more accurate data become available in the future. The average deviation of the proposed polynomial correlation from the reported data is 0.64 kg/m^3, whereas that of the ANN methodology is 1.1 kg/m^3.
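The first approach amounts to a least-squares polynomial surface in temperature and glycol weight percent. The abstract does not give the polynomial form or its coefficients, so the bilinear basis below is purely illustrative of how such a correlation is fitted and retuned:

```python
import numpy as np

def fit_density_surface(T, w, rho):
    """Least-squares fit of rho(T, w) = a0 + a1*T + a2*w + a3*T*w.
    Illustrative basis only; the paper's actual polynomial form and
    coefficients are not given in the abstract."""
    A = np.column_stack([np.ones_like(T), T, w, T * w])
    coeffs, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return coeffs

def predict_density(coeffs, T, w):
    """Evaluate the fitted correlation at temperature T and weight percent w."""
    return coeffs[0] + coeffs[1] * T + coeffs[2] * w + coeffs[3] * T * w
```

Retuning with new data is then just a matter of re-running the fit, which is the "clear numerical background" advantage the abstract highlights over the ANN approach.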
Funding: Project (50275058) supported by the National Natural Science Foundation of China.
Abstract: The combination of magnesium alloys with the low-pressure expendable pattern casting (LP-EPC) process promises a bright future for the application of magnesium alloys. This research focuses on the effect of process parameters on the internal quality of magnesium alloy castings. AZ91D magnesium alloy castings were produced with different combinations of LP-EPC process parameters. Specifically, pouring temperature, vacuum, filling velocity, and the coupling of these factors were varied to observe their effect on casting porosity and density distribution. The results indicate that the pouring temperature in the LP-EPC process is lower than that in gravity casting. The selected process parameters, such as vacuum, filling velocity, and their coupled modes, must ensure that the flow front of the melt exhibits a smooth and convex profile. The optimal process parameters for the castings are a pouring temperature of 983-1,023 K, a vacuum of 0.02-0.03 MPa, a filling velocity of 60-95 mm/s, and simultaneous filling with suction.
Funding: The Research of Ecological and Hydrogeological Geological Environment Information of Remote Sensing in Datun Mine; Shandong University of Science and Technology Graduate Innovation Fund (No. YCA120312).
Abstract: The Datun mining area is the test area in this paper, and the purpose is to obtain its mining land-cover classification map for 2003. The data source used in this paper is Landsat Enhanced Thematic Mapper Plus (ETM+) remote sensing data. After obtaining the normalized difference vegetation index (NDVI) of the study area, the land-cover classification information is extracted using the density segmentation method. Because the results cannot distinguish construction land from wetland, this paper obtains the humidity information of the multi-spectral data through the tasseled cap transformation. In the density segmentation image of the humidity information, the construction and wetland types can be clearly distinguished. Finally, the two classification maps are combined, and visual interpretation using the 15-m-resolution Landsat ETM+ fusion image helps to obtain the final classification results. After classification accuracy assessment, the overall accuracy calculated from the classification confusion matrix is 86%. This result can be applied in actual projects.
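The NDVI and density-segmentation steps can be sketched as follows; the class break points passed to the slicing function are hypothetical, since the abstract does not report the thresholds used:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from the NIR and red bands.
    A small epsilon guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + 1e-12)

def density_slice(index, thresholds):
    """Density segmentation: map continuous index values to class ids
    0..len(thresholds) using fixed break points (hypothetical values)."""
    return np.digitize(index, thresholds)
```

The same `density_slice` step applies unchanged to the tasseled-cap humidity band, which is how the paper separates construction land from wetland.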
Funding: National Natural Science Foundation of China (No. 61374044), Shanghai Municipal Science and Technology Commission, China (No. 15510722100), Shanghai Municipal Education Commission, China (No. 14ZZ088), Shanghai Talent Development Plan, China, and Shanghai Baoshan Science and Technology Commission, China (No. bkw2013120).
Abstract: A new identification method for a neuro-fuzzy Hammerstein model based on the probability density function (PDF) is presented, which differs from the traditional identification methods that employ the mean squared error (MSE) as the index function. Firstly, a neuro-fuzzy-based Hammerstein model is constructed to describe the nonlinearity of the Hammerstein process without any prior process knowledge. Secondly, a special test signal is used to separate the constituent parts of the Hammerstein model. More specifically, the concept of the PDF is introduced to solve the identification problem of the neuro-fuzzy Hammerstein model. The antecedent parameters are estimated by a clustering algorithm, while the consequent parameters of the model are identified by designing a virtual PDF control system in which the PDF of the modeling error is estimated and controlled to converge to the target. The proposed method not only guarantees the accuracy of the model but also shapes the spatial distribution of the PDF of the model error to improve the generalization ability of the model. Simulation results show the effectiveness of the proposed method.
Funding: National Natural Science Foundation of China (No. 61374140) and the Youth Foundation of the National Natural Science Foundation of China (No. 61403072).
Abstract: Complex industrial processes often need multiple operation modes to meet changing production conditions. Within the same mode there are discrete samples belonging to that mode, so it is important to consider the samples that are sparse within the mode. To solve this issue, a new approach called density-based support vector data description (DBSVDD) is proposed. In this article, an algorithm combining the Gaussian mixture model (GMM) with the DBSVDD technique is proposed for process monitoring. The GMM method is used to obtain the center of each mode and to determine the number of modes. Considering the complexity of the data distribution and the discrete samples in the monitoring process, the DBSVDD is utilized for process monitoring. Finally, the validity and effectiveness of the DBSVDD method are illustrated through the Tennessee Eastman (TE) process.
Funding: Supported by the Guangdong Special Support Program (Grant No. 2021TQ06C953) and the Shenzhen Science and Technology Programs (Grant Nos. JCYJ20241202123506009 and GXWD20220811164433002).
Abstract: Smearing methods have been used to compute temperature-dependent phonon dispersions and to predict the critical temperatures of charge density waves, but they usually yield much higher results because of their ambiguous mechanism for modeling temperature effects. Here, a three-temperature model was developed to describe the energy-transfer process between electrons, soft-mode phonons, and non-soft-mode phonons. In particular, the soft-mode phonons induced by mode-selective smearing were assigned a temperature to analyze their contribution to the relaxation between electrons and phonons. A relative standard was established to screen soft-mode phonons quantitatively for different materials. In addition, three smearing methods (Fermi–Dirac, Gaussian, and Methfessel–Paxton) and eight materials (monolayer or bulk TX_(2), T = Ti, Nb, Ta and X = Se, S) were tested. The critical temperatures corrected by the three-temperature model are in good agreement with experimental results. This work provides new insights into correctly predicting the critical temperatures of charge density waves, addressing the relaxation process of electrons and phonons with smearing methods, and determining phase transitions by phonon softening.
Abstract: The aims of this study are to develop the color density concept and to propose color-density-based color difference formulas. The color density is defined using metric coefficients that are based on the discrimination ellipses and the locations of the colors in the color space. The ellipse sets are the MacAdam ellipses in the CIE 1931 xy-chromaticity diagram and the chromaticity-discrimination ellipses in the CIELAB space; the latter set was originally used to develop the CIEDE2000 color difference formula. The color difference can be calculated from the color density for the two colors under consideration. As a result, the color density represents the perceived color difference more accurately, and it could be used to characterize a color by a quantitative attribute matching the perceived color difference from this color more closely. The color density concept thus provides simply a correction term for the estimation of color differences. In the experiments, the line-element formula and the CIEDE2000 color difference formula performed better than the color-density-based difference measures. The reason lies in the current modeling of the color density concept: the discrimination ellipses are typically described with three-dimensional data consisting of two axes, the major and the minor, and an inclination angle, whereas the proposed color density is only a one-dimensional corrector for color differences and thus cannot capture all the details of the ellipse information. Still, the color density gives clearly more correct estimates of perceived color differences than Euclidean distances computed directly from the coordinates of the color space.
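The baseline the color density improves upon is a plain Euclidean distance in the color space, e.g. the CIE76 ΔE*ab in CIELAB. A minimal sketch, where the correction function is purely hypothetical (the paper's metric-coefficient formula is not given in the abstract):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

def corrected_difference(lab1, lab2, density1, density2):
    """Hypothetical one-dimensional correction in the spirit of the
    paper: scale the Euclidean distance by the mean color density of
    the two colors. The averaging rule is an assumption, not the
    paper's actual formula."""
    return 0.5 * (density1 + density2) * delta_e_ab(lab1, lab2)
```

With both densities equal to 1 the correction reduces to the plain CIE76 distance, which is the sense in which the color density acts only as a scalar correction term.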