Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose the Wasserstein Generative Adversarial Network with Variational Density Estimation (WGAN-VDE), a computationally efficient, density-aware adversarial resampling framework that enhances minority class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework generates well-separated and diverse synthetic samples, improving class separability and reducing redundancy. Experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results on F1-score, accuracy, G-mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
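The density-aware selection idea above can be illustrated with a minimal sketch (this is not the paper's WGAN-VDE; the Gaussian KDE scorer, the bandwidth, and all function names here are assumptions for illustration): candidate synthetic minority samples are scored by a kernel density estimate against the majority class, and only candidates falling in low-density (underrepresented) regions are kept.

```python
import numpy as np

def kde_density(points, queries, bandwidth=0.5):
    """Gaussian kernel density estimate of each query w.r.t. `points`."""
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

def select_low_density(candidates, majority, k, bandwidth=0.5):
    """Keep the k candidates lying in the sparsest majority-class regions."""
    dens = kde_density(majority, candidates, bandwidth)
    return candidates[np.argsort(dens)[:k]]

rng = np.random.default_rng(0)
majority = rng.normal(0.0, 1.0, size=(200, 2))   # dense cluster near the origin
candidates = np.array([[0.0, 0.0], [4.0, 4.0], [0.1, -0.1], [5.0, -5.0]])
kept = select_low_density(candidates, majority, k=2)  # the two far-out points
```

Candidates inside the dense majority cluster are discarded, which is the behavior that suppresses class overlap in high-density regions.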
Tin (Sn)-lead (Pb) mixed halide perovskites have attracted widespread interest due to their wider response wavelength and lower toxicity than lead halide perovskites. Among the preparation methods, the two-step method more easily controls the crystallization rate and is suitable for preparing large-area perovskite devices. However, the residual low-conductivity iodide layer in the two-step method can affect carrier transport and device stability, and the different crystallization rates of Sn- and Pb-based perovskites may result in poor film quality. Therefore, Sn-Pb mixed perovskites are mainly prepared by a one-step method. Herein, a MAPb_(0.5)Sn_(0.5)I_(3)-based self-powered photodetector without a hole transport layer is fabricated by a two-step method. By adjusting the concentration of the ascorbic acid (AA) additive, the final perovskite film exhibited a pure phase without residues, and the optimal device exhibited a high responsivity (0.276 A W^(-1)), large specific detectivity (2.38×10^(12) Jones), and enhanced stability. This enhancement is mainly attributed to the inhibition of Sn^(2+) oxidation, the control of crystal growth, and the sufficient reaction between organic ammonium salts and bottom halides due to the AA-induced pore structure.
High-performance graphite materials have important roles in aerospace and nuclear reactor technologies because of their outstanding chemical stability and high-temperature performance. Their traditional production method relies on repeated impregnation-carbonization and graphitization, and is plagued by lengthy preparation cycles and high energy consumption. Phase transition-assisted self-pressurized self-sintering technology can rapidly produce high-strength graphite materials, but the fracture strain of the resulting graphite is poor. To solve this problem, this study used a two-step sintering method to uniformly introduce micro-nano pores into natural graphite-based bulk graphite, improving the fracture strain of the samples without reducing their density or mechanical properties. Using natural graphite powder, micron-diamond, and nano-diamond as raw materials, and by precisely controlling the staged pressure release process, the degree of diamond phase transition expansion was effectively regulated. The strain-to-failure of the graphite samples reached 1.2%, a 35% increase compared to samples produced by full-pressure sintering. Meanwhile, their flexural strength exceeded 110 MPa, and their density was over 1.9 g/cm^(3). The process therefore produced both high strength and high fracture strain. The interface evolution and toughening mechanism during the two-step sintering process were investigated. It is believed that the micro-nano pores formed have two roles: as stress concentrators they induce yielding by shear, and as multi-crack propagation paths they significantly lengthen the crack propagation path. The two-step sintering phase transition strategy introduces pores and provides a new approach for increasing the fracture strain of brittle materials.
A custom micro-arc oxidation (MAO) apparatus is employed to produce coatings under an optimized constant voltage-current two-step power supply mode. Various analytical techniques, including scanning electron microscopy, confocal laser microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, transmission electron microscopy, and electrochemical analysis, are employed to characterize the MAO coatings at different stages of preparation. The MAO coating contains MgO, hydroxyapatite, Ca_(3)(PO_(4))_(2), and Mg_(2)SiO_(4) phases. The microstructure of the coating is characterized by "multiple breakdowns, pores within pores", and "repaired blind pores". The porosity and uniformity of the MAO coating first decline in the constant voltage mode, then increase while the discharge phenomenon takes place, and finally decrease in the repair stage. These analyses reveal a four-stage growth pattern for MAO coatings: an anodic oxidation stage, a micro-arc oxidation stage, a breakdown stage, and a repairing stage. During the anodic oxidation and MAO stages, inward growth prevails, while the breakdown stage sees outward and accelerated growth. Simultaneous inward and outward growth in the repair stage results in a denser, more uniform coating with increased thickness and improved corrosion resistance.
Magnesium alloy thin-walled cylindrical components, with the advantages of high specific stiffness and strength, present broad prospects for the lightweighting of aerospace components. However, the poor formability resulting from the hexagonal close-packed crystal structure of magnesium alloys poses a great challenge for fabricating thin-walled cylindrical components, especially for extreme structures with a thickness-changing web and a high thin wall. In this research, a ZK61 magnesium alloy thin-walled cylindrical component was successfully fabricated by two-step forging, i.e., pre-forging and final-forging, which are mainly used for web and thin-wall formation, respectively. The microstructure and mechanical properties at the core, middle, and margin of the web and the thin wall of the pre-forged and final-forged components are studied in detail. Due to the large effective strain and metal flow along the radial direction (RD), the grains of the web are all elongated along the RD for the pre-forged component, with an increasingly elongated trend found from the core to the margin of the web. A relatively low degree of recrystallization occurs during pre-forging, and the web at different positions exhibits prismatic and pyramidal textures throughout. During final-forging, the microstructures of the web and the thin wall become almost equiaxed due to the remarkable occurrence of dynamic recrystallization. Similarly, except for a weak basal texture in the thin wall, only prismatic and pyramidal textures are found in the final-forged component. Compared with the initial billet, obviously improved mechanical isotropy is achieved during pre-forging, which is well maintained during final-forging.
Efficient and accurate simulation of unsteady flow presents a significant challenge that needs to be overcome in computational fluid dynamics. The temporal discretization method plays a crucial role in the simulation of unsteady flows. To enhance computational efficiency, we propose Implicit-Explicit Two-Step Runge-Kutta (IMEX-TSRK) time-stepping discretization methods for unsteady flows, and develop a novel adaptive algorithm that correctly partitions spatial regions to apply implicit or explicit methods. The novel adaptive IMEX-TSRK schemes effectively handle the numerical stiffness of small grid sizes and improve computational efficiency. Compared to implicit and explicit Runge-Kutta (RK) schemes, the IMEX-TSRK methods achieve the same order of accuracy with fewer first-derivative calculations. Numerical test cases demonstrate that the IMEX-TSRK methods maintain numerical stability while enhancing computational efficiency. Specifically, in high Reynolds number flows, the computational efficiency of the IMEX-TSRK methods surpasses that of explicit RK schemes by more than one order of magnitude, and that of implicit RK schemes several times over.
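The implicit-explicit splitting idea can be sketched on a stiff scalar model problem (a first-order IMEX Euler split, not the paper's IMEX-TSRK schemes; the test equation and step size are assumptions): the stiff relaxation term is treated implicitly, the non-stiff forcing explicitly, so the step size is not limited by the stiff time scale.

```python
import math

def imex_euler(lam, h, t_end):
    """First-order IMEX Euler for y' = lam*(y - cos t) - sin t, y(0) = 1
    (exact solution y = cos t). The stiff term lam*(y - cos t) is implicit,
    the forcing -sin t is explicit."""
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        t_next = t + h
        # Solve (1 - h*lam) * y_next = y - h*lam*cos(t_next) - h*sin(t)
        y = (y - h * lam * math.cos(t_next) - h * math.sin(t)) / (1.0 - h * lam)
        t = t_next
    return y

# lam = -1e4 at h = 0.01 is far beyond the explicit Euler stability limit
# (|1 + h*lam| >> 1), yet the IMEX step stays stable and tracks y = cos t.
y1 = imex_euler(lam=-1e4, h=0.01, t_end=1.0)
```

The implicit solve here is a scalar division; in a flow solver it becomes a linear or nonlinear system, which is exactly why restricting the implicit treatment to the stiff (fine-grid) regions pays off.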
In existing studies, most slope stability analyses concentrate on conditions with constant temperature, assuming the slope is intact, and employ the Mohr-Coulomb (M-C) failure criterion for saturated soil to characterize the strength of the backfill. However, the actual working temperature of slopes varies, and natural phenomena such as rainfall and groundwater infiltration commonly result in unsaturated soil conditions, with cracks typically present in cohesive slopes. This study introduces a novel approach for assessing the stability of unsaturated soil stepped slopes under varying temperatures, incorporating the effects of open and vertical cracks. Utilizing the kinematic approach and the gravity increase method, we developed a three-dimensional (3D) rotational wedge failure mechanism to simulate slope collapse, enhancing the traditional two-dimensional analyses. We integrated temperature-dependent functions and nonlinear shear strength equations to evaluate the impact of temperature on four typical unsaturated soil types. A particle swarm optimization algorithm was employed to calculate the safety factor, and our method's accuracy was ensured by comparison with existing studies. The results indicate that considering 3D effects yields a higher safety factor, while cracks reduce slope stability. Each unsaturated soil exhibits a distinctive temperature response curve, highlighting the importance of understanding soil types in the design phase.
Ti-based bulk metallic glasses (BMGs) have attracted increasing attention due to their high specific strength. However, a fundamental conflict exists between the specific strength and glass-forming ability (GFA) of Ti-based BMGs, significantly restricting their commercial applications. In this study, this challenge was addressed by introducing a two-step alloying strategy to mitigate the remarkable density increase associated with the heavy alloying elements required for enhancing the GFA. Consequently, through two-step alloying with Al and Fe in sequence, simultaneous enhancements in specific strength and GFA were achieved based on a Ti-Zr-Be ternary metallic glass, resulting in the development of a series of centimeter-sized metallic glasses exhibiting ultrahigh specific strength. Notably, the newly developed (Ti_(45)Zr_(20)Be_(31)Al_(4))_(94)Fe_(6) alloy established a new record for the specific strength of Ti-based BMGs. Along with a critical diameter (D_(c)) of 10 mm, it offers the optimal scheme for balancing the specific strength and GFA of Ti-based BMGs. The present results further brighten the application prospects of Ti-based BMGs as lightweight materials.
Molybdenum disulfide (MoS_(2)) is an emerging two-dimensional (2D) semiconductor and has great potential for high-end applications beyond traditional silicon-based electronics. Compared to monolayers, multilayer MoS_(2) has improved electron mobility and current density, and therefore provides a more promising platform for thin-film transistors, flexible electronic devices, etc. However, the synthesis of large-area, high-quality multilayer MoS_(2) films with a controlled layer number remains a challenge. Here, we develop a two-step oxygen-assisted chemical vapor deposition (OA-CVD) methodology for the synthesis of 4-inch MoS_(2) films from monolayer to trilayer on sapphire substrates. The influence of critical growth parameters on the growth of multilayer MoS_(2), such as the evaporation temperature of MoO_(3) and the flow rate of O_(2), is systematically explored. Flexible field-effect transistor (FET) devices fabricated from bilayer/trilayer MoS_(2) show substantial improvements in mobility compared with flexible FETs based on monolayer films.
In this paper, we explore a novel ensemble method for spectral clustering. In contrast to traditional clustering ensemble methods that combine all the obtained clustering results, we propose an adaptive spectral clustering ensemble method to achieve a better clustering solution. This method can adaptively assess the number of component members, a capability many other algorithms lack. The component clusterings of the ensemble system are generated by spectral clustering (SC), which bears some good characteristics for engendering diverse committees. The selection process works by evaluating the generated component spectral clusterings through a resampling technique and a population-based incremental learning algorithm (PBIL). Experimental results on UCI datasets demonstrate that the proposed algorithm can achieve better results than traditional clustering ensemble methods, especially when the number of component clusterings is large.
The merging of a panchromatic (PAN) image with a multispectral satellite image (MSI) to increase the spatial resolution of the MSI, while simultaneously preserving its spectral information, is classically referred to as PAN-sharpening. We employed a recent dataset derived from the very high resolution WorldView-2 satellite (PAN and MSI) for two test sites (one over an urban area and the other over Antarctica) to comprehensively evaluate the performance of six existing PAN-sharpening algorithms. The algorithms under consideration were Gram-Schmidt (GS), Ehlers fusion (EF), modified hue-intensity-saturation (Mod-HIS), high-pass filtering (HPF), the Brovey transform (BT), and wavelet-based principal component analysis (W-PC). Quality assessment of the sharpened images was carried out using 20 quality indices. We also analyzed the performance of nearest neighbour (NN), bilinear interpolation (BI), and cubic convolution (CC) resampling methods to test their practicability in the PAN-sharpening process. Our results indicate that the comprehensive performance of the PAN-sharpening methods decreased in the following order: GS > W-PC > EF > HPF > Mod-HIS > BT, while the resampling methods followed the order: NN > BI > CC.
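Of the six methods compared, the Brovey transform is simple enough to sketch directly (a minimal per-pixel formulation; the array layout and the epsilon guard are assumptions, and real use requires co-registered, radiometrically matched inputs): each multispectral band is scaled by the ratio of the PAN value to the summed-band intensity.

```python
import numpy as np

def brovey_sharpen(ms, pan, eps=1e-12):
    """Brovey transform: sharpened band_i = band_i * PAN / sum_of_bands.
    `ms` has shape (bands, H, W); `pan` has shape (H, W)."""
    intensity = ms.sum(axis=0) + eps          # per-pixel intensity, /0-guarded
    return ms * (pan / intensity)[None, :, :]

# Sanity check: with three identical flat bands and PAN equal to their sum,
# the transform returns the bands unchanged.
ms = np.ones((3, 2, 2))
pan = 3.0 * np.ones((2, 2))
out = brovey_sharpen(ms, pan)
```

The per-pixel ratio is what injects PAN spatial detail, and also what distorts spectra when PAN and the band sum differ radiometrically, consistent with BT ranking last above.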
In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M-data-load time period. We present a load-process architecture (LPA) and a run-time architecture (RA) (based on a serial polyphase structure) which have different scheduling. In the LPA, N subfilters are loaded, and then M subfilters are processed at a clock rate that is a multiple of the input data rate. This is necessary to meet the output time constraint of the down-sampled data. In the RA, M subfilter processes are efficiently scheduled within the N-data-load time while simultaneously loading N subfilters. This requires reduced clock rates compared with the LPA, and potentially less power is consumed. A polyphase filter bank that uses different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios is used as a case study, and an analysis of area, time, and power for their FPGA architectures is given. For resource-optimized SDR front-ends, the RA is superior for reducing operating clock rates and dynamic power consumption. The RA is also superior for reducing area resources, except when indices are pre-stored in LUTs.
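The polyphase identity these architectures exploit can be checked numerically in a few lines (a plain software sketch, not the FPGA architecture; the zero-padding of the branch inputs is an implementation assumption): filtering then decimating by M equals summing M short subfilter convolutions, each running at the low output rate.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Filter x with FIR h and keep every M-th sample, computed branch-wise:
    y[n] = sum_k conv(h_k, x_k)[n], where h_k = h[k::M] is the k-th subfilter
    and x_k[i] = x[i*M - k] is the k-th delayed, M-downsampled input branch."""
    n_out = (len(x) + len(h) - 1 + M - 1) // M   # ceil((len(x)+len(h)-1)/M)
    y = np.zeros(n_out)
    for k in range(M):
        hk = h[k::M]                                   # k-th polyphase subfilter
        if k == 0:
            xk = x[0::M]                               # x[iM]
        else:
            xk = np.concatenate(([0.0], x[M - k::M]))  # x[iM - k], i >= 1
        yk = np.convolve(hk, xk)[:n_out]
        y[:len(yk)] += yk
    return y

rng = np.random.default_rng(42)
x = rng.normal(size=20)
h = rng.normal(size=7)
y = polyphase_decimate(x, h, 3)   # matches np.convolve(x, h)[::3]
```

Because each branch runs at 1/M of the input rate, the same identity is what lets hardware trade clock rate against parallel subfilter resources.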
In order to address the issues of computational accuracy and efficiency in traditional resampling algorithms for rolling element bearing fault diagnosis, an equal division impulse-based (EDI-based) resampling algorithm is proposed. First, the time marks of every rising edge of the rotating speed pulse and the corresponding amplitudes of the faulty bearing vibration signal are determined. Then, each interval between adjacent rotating speed pulses is divided equally, and the time marks within every pair of adjacent rotating speed pulses and the corresponding amplitudes of the vibration signal are obtained by an interpolation algorithm. Finally, all the time marks and the corresponding amplitudes of the vibration signal are arranged, and the time marks are transformed into the angle domain to obtain the resampled signal. Speed-up and speed-down faulty bearing signals are employed to verify the validity of the proposed method, and experimental results show that the proposed method is effective for diagnosing faulty bearings. Furthermore, traditional order tracking techniques are applied to the experimental bearing signals, and the results show that the proposed method produces more accurate outcomes in less computation time.
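The angle-domain resampling step can be sketched with two linear interpolations (a simplified stand-in for the paper's equal-division scheme; the one-pulse-per-revolution convention and the helper names are assumptions): the pulse times define the shaft angle as a function of time, and the vibration signal is then read off at uniform angle increments.

```python
import numpy as np

def angular_resample(t, x, pulse_times, samples_per_rev=32):
    """Resample x(t) at uniform shaft-angle steps.
    Each entry of pulse_times marks the start of one revolution (2*pi rad)."""
    pulse_angles = 2 * np.pi * np.arange(len(pulse_times))
    target_angles = np.arange(0.0, pulse_angles[-1], 2 * np.pi / samples_per_rev)
    # Invert angle(t): the time at which each target angle is reached
    t_at_angle = np.interp(target_angles, pulse_angles, pulse_times)
    return target_angles, np.interp(t_at_angle, t, x)

# Constant shaft speed of 1 rev/s: a once-per-revolution cosine in time
# should become a pure cosine of shaft angle after resampling.
t = np.linspace(0.0, 3.0, 3001)
x = np.cos(2 * np.pi * t)
angles, xa = angular_resample(t, x, pulse_times=np.array([0.0, 1.0, 2.0, 3.0]))
```

Under varying speed, the same mapping stretches and compresses the time axis so that order components become fixed-frequency lines in the angle domain.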
Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights by considering the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which can solve the problem of particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions have demonstrated the effectiveness of the proposed algorithm.
In order to deal with the particle degeneracy and impoverishment problems that exist in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, the resampling is translated into an evolutionary process much like biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined for the sake of raising the diversity among the particles. Finally, particles of low quality are eliminated. Through the evolution, all the particles retained are regarded as the optimal ones, and these particles are utilized to update the current state. By using the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter, and the unscented Kalman particle filter.
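For reference, the SIR baseline that MSIR is compared against can be sketched in a few lines (systematic resampling, a common SIR implementation; this is the baseline, not the MSIR evolution step itself): one uniform draw places stratified positions on [0, 1), and each position picks an ancestor through the cumulative weight vector.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform draw generates n stratified
    positions; each position selects the ancestor whose cumulative weight
    interval contains it."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(1)
# Uniform weights reproduce every particle exactly once...
idx_uniform = systematic_resample([0.25, 0.25, 0.25, 0.25], rng)
# ...while a degenerate weight vector clones the single surviving particle,
# which is exactly the diversity loss that MSIR-style schemes address.
idx_degenerate = systematic_resample([0.0, 1.0, 0.0], rng)
```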
The nonuniform distribution of the interference spectrum in wavenumber (k) space is a key issue limiting the imaging quality of Fourier-domain optical coherence tomography (FD-OCT). At present, the reconstruction quality at different depths among the variety of k-space processing methods is still uncertain. Using simulated and experimental interference spectra at different depths, the effects of six common processing methods, including uniform resampling (linear interpolation (LI), cubic spline interpolation (CSI), time-domain interpolation (TDI), and K-B window convolution) and nonuniform-sampling direct reconstruction (Lomb periodogram (LP) and nonuniform discrete Fourier transform (NDFT)), on the reconstruction quality of FD-OCT were quantitatively analyzed and compared in this work. The results obtained using simulated and experimental data were coincident. From the experimental results, the averaged peak intensity, axial resolution, and signal-to-noise ratio (SNR) of the NDFT at depths from 0.5 to 3.0 mm were improved by about 1.9 dB, 1.4 times, and 11.8 dB, respectively, compared to the averaged indices of all the uniform resampling methods at all depths. Similarly, the improvements in the above three indices for the LP were 2.0 dB, 1.4 times, and 11.7 dB, respectively. The analysis method and the results obtained in this work are helpful for selecting an appropriate processing method in k-space, so as to improve the imaging quality of FD-OCT.
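The NDFT reconstruction compared above can be sketched as a direct matrix evaluation (a naive O(N·M) version for illustration; the wavenumber range, units, and names are assumptions): the spectrum is transformed on its native nonuniform k-grid with no resampling, and the A-line magnitude peaks at the scatterer depth.

```python
import numpy as np

def ndft_reconstruct(k, spectrum, depths):
    """Direct nonuniform DFT of a k-space spectrum: A(z) = |sum_j S(k_j) e^{-i k_j z}|."""
    return np.abs(np.exp(-1j * np.outer(depths, k)) @ spectrum)

rng = np.random.default_rng(7)
k = np.sort(rng.uniform(0.0, 100.0, size=2000))   # nonuniformly sampled wavenumbers
z0 = 1.5                                          # simulated scatterer depth
spectrum = np.cos(k * z0)                         # ideal interference fringe
depths = np.linspace(0.0, 3.0, 301)
aline = ndft_reconstruct(k, spectrum, depths)     # magnitude peaks near z0
```

Interpolation-based uniform resampling approximates the same result after regridding k, which is where the depth-dependent degradation quantified above comes from.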
An efficient resampling reliability approach was developed to consider the effect of statistical uncertainties in input properties, arising due to insufficient data, when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to make it capable of analyzing complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a resampling statistical tool, i.e. the jackknife, with advanced reliability tools like Latin hypercube sampling (LHS), Sobol's global sensitivity analysis, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated on four cases of different types. Results were compared with a recently developed bootstrap-based resampling reliability approach. The results show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties of input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
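The jackknife tool the approach is built on can be illustrated in isolation (a textbook delete-one jackknife for a generic statistic; not the coupled LHS/Sobol/MLS-RSM workflow): each observation is left out in turn, and the spread of the leave-one-out estimates gives the standard error of the statistic.

```python
import math

def jackknife_se(data, stat):
    """Delete-one jackknife standard error of statistic `stat` on `data`."""
    n = len(data)
    loo = [stat(data[:i] + data[i + 1:]) for i in range(n)]  # leave-one-out values
    mean_loo = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((v - mean_loo) ** 2 for v in loo))

# For the sample mean, the jackknife SE reproduces the classic s / sqrt(n).
se = jackknife_se([1.0, 2.0, 3.0, 4.0, 5.0], lambda d: sum(d) / len(d))
```

In the reliability setting, `stat` would be a distribution parameter estimated from limited site data, and the jackknife spread propagates into a confidence interval on the reliability index.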
Neural network methods have been widely used in many fields of scientific research with the rapid increase of computing power. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations using neural networks. In this paper, a resampling technique based on an expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points in accordance with the performance of the neural network. In this new approach, both training points with slight changes in residual values and training points with large residuals are taken into account. In order to make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method successfully addresses the issue that the neural network becomes ill-conditioned or even crashes due to the extensive alteration of the training point distribution. The effectiveness of the improved physics-informed neural networks with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
The estimation of image resampling factors is an important problem in image forensics. Among all resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted a lot of research interest. However, because of inherent ambiguity, spectrum-based methods fail to discriminate between upscale and downscale operations without any prior information. In general, the application of resampling leaves detectable traces in both the spatial domain and the frequency domain of a resampled image. Firstly, the resampling process introduces correlations between neighboring pixels; consequently, a set of periodic pixels that are correlated to their neighbors can be found in a resampled image. Secondly, the resampled image has distinct and strong peaks in its spectrum, while the spectrum of the original image has no clear peaks. Hence, in this paper, we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream, whose purpose is to extract resampling-trace features directly from the rescaled images. The other is a frequency stream that discovers the differences in spectrum between rescaled and original images. The features from the two streams are then fused to construct a feature representation of the resampling traces left in the spatial and frequency domains, which is later fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms some CNN-based methods.
The design, analysis, and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, where a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through the repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, conducing to greatly improved sampling quality. By running the IIDF, an iterated PF (IPF) can be obtained. Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights, which are the two key differences from the SR. The detailed implementation procedures of the IPF based on the PR on a graphics processing unit are presented at last. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
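The PR replication rule described above (copy particle i ⌊N·w_i⌋ times, fill the remaining slots from the largest weights so the smallest weights vanish) can be sketched serially (a sequential sketch of the replication rule only; the GPU scheduling is not shown, and the tie-breaking for the leftover slots is an assumption):

```python
import numpy as np

def pr_resample(weights):
    """Parallel-resampling replication rule, serial sketch: particle i is
    copied floor(n * w_i) times; the leftover slots go to the heaviest
    particles, so the smallest-weight particles are eliminated."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    counts = np.floor(n * w).astype(int)        # deterministic replication counts
    shortfall = n - counts.sum()                # slots not filled by the floors
    for i in np.argsort(w)[::-1][:shortfall]:
        counts[i] += 1                          # top up from the heaviest particles
    return np.repeat(np.arange(n), counts)

idx = pr_resample([0.5, 0.3, 0.1, 0.1])   # ancestor indices, length 4
```

Unlike SR, the counts here depend only on each particle's own weight plus a single global shortfall, which is what makes the rule easy to evaluate in parallel.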
Funding: supported by the Ongoing Research Funding Program (ORF-2025-488), King Saud University, Riyadh, Saudi Arabia.
Funding: Supported by the National Natural Science Foundation of China (Nos. 52025028, 52332008, 52372214, 52202273, and U22A20137) and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.
Abstract: Tin (Sn)-lead (Pb) mixed halide perovskites have attracted widespread interest due to their wider response wavelength and lower toxicity than lead halide perovskites. Among the preparation methods, the two-step method more easily controls the crystallization rate and is suitable for preparing large-area perovskite devices. However, the residual low-conductivity iodide layer in the two-step method can affect carrier transport and device stability, and the different crystallization rates of Sn- and Pb-based perovskites may result in poor film quality. Therefore, Sn-Pb mixed perovskites are mainly prepared by a one-step method. Herein, a MAPb_(0.5)Sn_(0.5)I_(3)-based self-powered photodetector without a hole transport layer is fabricated by a two-step method. By adjusting the concentration of the ascorbic acid (AA) additive, the final perovskite film exhibited a pure phase without residues, and the optimal device exhibited a high responsivity (0.276 A W^(-1)), large specific detectivity (2.38×10^(12) Jones), and enhanced stability. This enhancement is mainly attributed to the inhibition of Sn^(2+) oxidation, the control of crystal growth, and the sufficient reaction between organic ammonium salts and bottom halides due to the AA-induced pore structure.
Funding: Supported by the Natural Science Foundation of Shanghai (24ZR1400800), the Natural Science Foundation of China (U23A20685, 52073058, 91963204), the National Key R&D Program of China (2021YFB3701400), and the Shanghai Sailing Program (23YF1400200).
Abstract: High-performance graphite materials play important roles in aerospace and nuclear reactor technologies because of their outstanding chemical stability and high-temperature performance. Their traditional production method relies on repeated impregnation-carbonization and graphitization, and is plagued by lengthy preparation cycles and high energy consumption. Phase transition-assisted self-pressurized self-sintering technology can rapidly produce high-strength graphite materials, but the fracture strain of the resulting graphite is poor. To solve this problem, this study used a two-step sintering method to uniformly introduce micro-nano pores into natural graphite-based bulk graphite, improving the fracture strain of the samples without reducing their density or mechanical properties. Using natural graphite powder, micron-diamond, and nano-diamond as raw materials, and by precisely controlling the staged pressure release process, the degree of diamond phase-transition expansion was effectively regulated. The strain-to-failure of the graphite samples reached 1.2%, a 35% increase compared to samples produced by full-pressure sintering. Meanwhile, their flexural strength exceeded 110 MPa and their density was over 1.9 g/cm^(3); the process therefore produced both high strength and high fracture strain. The interface evolution and toughening mechanism during the two-step sintering process were investigated. It is believed that the micro-nano pores formed have two roles: as stress concentrators they induce yielding by shear, and by providing multiple crack propagation paths they significantly lengthen the overall crack path. The two-step sintering phase-transition strategy introduces pores and provides a new approach for increasing the fracture strain of brittle materials.
Abstract: A custom micro-arc oxidation (MAO) apparatus is employed to produce coatings under an optimized constant voltage-current two-step power supply mode. Various analytical techniques, including scanning electron microscopy, confocal laser microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, transmission electron microscopy, and electrochemical analysis, are employed to characterize MAO coatings at different stages of preparation. The MAO coating contains MgO, hydroxyapatite, Ca_(3)(PO_(4))_(2), and Mg_(2)SiO_(4) phases. Its microstructure is characterized by "multiple breakdowns, pores within pores", and "repaired blind pores". The porosity and uniformity of the MAO coating first decline in the constant-voltage mode, then increase as the discharge phenomenon takes place, and finally decrease in the repair stage. These analyses reveal a four-stage growth pattern for MAO coatings: an anodic oxidation stage, a micro-arc oxidation stage, a breakdown stage, and a repairing stage. During the anodic oxidation and MAO stages, inward growth prevails, while the breakdown stage sees outward and accelerated growth. Simultaneous inward and outward growth in the repair stage results in a denser, more uniform coating with increased thickness and improved corrosion resistance.
Funding: Supported by the National Natural Science Foundation of China (Nos. 52405408, U21A20131, U2037204, and 52422510), the Natural Science Foundation of Hubei Province (No. 2023AFB116), and the State Key Laboratory of Materials Processing and Die & Mould Technology, Huazhong University of Science and Technology (No. P2022-005).
Abstract: Magnesium alloy thin-walled cylindrical components, with the advantages of high specific stiffness and strength, present broad prospects for the lightweighting of aerospace components. However, the poor formability resulting from the hexagonal close-packed crystal structure of magnesium alloys poses a great challenge for fabricating thin-walled cylindrical components, especially for extreme structures with a thickness-changing web and a high thin wall. In this research, a ZK61 magnesium alloy thin-walled cylindrical component was successfully fabricated by two-step forging, in which pre-forging and final-forging mainly form the web and the thin wall, respectively. The microstructure and mechanical properties at the core, middle, and margin of the web and the thin wall of the pre-forged and final-forged components are studied in detail. Due to the large effective strain and metal flow along the radial direction (RD), the grains of the web are all elongated along the RD for the pre-forged component, with an increasingly elongated trend from the core to the margin of the web. A relatively low recrystallized fraction occurs during pre-forging, and the web at different positions exhibits prismatic and pyramidal textures. During final-forging, the microstructures of the web and the thin wall become almost equiaxed due to the remarkable occurrence of dynamic recrystallization. Similarly, except for a weak basal texture in the thin wall, only prismatic and pyramidal textures are found in the final-forged component. Compared with the initial billet, obviously improved mechanical isotropy is achieved during pre-forging, and it is well maintained during final-forging.
Funding: Supported by the National Natural Science Foundation of China (No. 92252201), the Fundamental Research Funds for the Central Universities, and the Academic Excellence Foundation of Beihang University (BUAA) for PhD Students.
Abstract: Efficient and accurate simulation of unsteady flow presents a significant challenge in computational fluid dynamics. The temporal discretization method plays a crucial role in the simulation of unsteady flows. To enhance computational efficiency, we propose Implicit-Explicit Two-Step Runge-Kutta (IMEX-TSRK) time-stepping discretization methods for unsteady flows, and develop a novel adaptive algorithm that correctly partitions spatial regions to apply implicit or explicit methods. The adaptive IMEX-TSRK schemes effectively handle the numerical stiffness arising from small grid sizes and improve computational efficiency. Compared to implicit and explicit Runge-Kutta (RK) schemes, the IMEX-TSRK methods achieve the same order of accuracy with fewer first-derivative calculations. Numerical tests demonstrate that the IMEX-TSRK methods maintain numerical stability while enhancing computational efficiency. Specifically, in high-Reynolds-number flows, the computational efficiency of the IMEX-TSRK methods surpasses that of explicit RK schemes by more than one order of magnitude, and that of implicit RK schemes several times over.
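The implicit-explicit splitting idea behind such schemes can be shown with a toy first-order IMEX Euler step (far simpler than the paper's two-step Runge-Kutta methods; the model problem and step size are illustrative assumptions): the stiff linear term is advanced implicitly while the remaining terms stay explicit, allowing a step size far beyond the explicit stability limit.

```python
import numpy as np

# model problem: u' = lam*(u - cos t) - sin t, exact solution u(t) = cos t
lam, h, t_end = -1000.0, 0.01, 1.0   # explicit Euler would need h < 2/|lam| = 0.002
u, t = 1.0, 0.0
while t < t_end - 1e-12:
    # IMEX Euler: lam*u treated implicitly, -lam*cos(t) - sin(t) explicitly
    u = (u - h * lam * np.cos(t) - h * np.sin(t)) / (1.0 - h * lam)
    t += h
print(abs(u - np.cos(t_end)))        # small error despite h >> explicit limit
```

The same splitting logic applied cell-by-cell (implicit only where the grid is fine) is what the adaptive partitioning in the paper automates.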
Funding: Project (51378510) supported by the National Natural Science Foundation of China.
Abstract: In existing studies, most slope stability analyses concentrate on constant-temperature conditions, assume the slope is intact, and employ the Mohr-Coulomb (M-C) failure criterion for saturated soil to characterize the strength of the backfill. However, the actual working temperature of slopes varies, and natural phenomena such as rainfall and groundwater infiltration commonly result in unsaturated soil conditions, with cracks typically present in cohesive slopes. This study introduces a novel approach for assessing the stability of unsaturated soil stepped slopes under varying temperatures, incorporating the effects of open and vertical cracks. Utilizing the kinematic approach and the gravity increase method, we developed a three-dimensional (3D) rotational wedge failure mechanism to simulate slope collapse, enhancing traditional two-dimensional analyses. We integrated temperature-dependent functions and nonlinear shear strength equations to evaluate the impact of temperature on four typical unsaturated soil types. A particle swarm optimization algorithm was employed to calculate the safety factor, and the method's accuracy was verified by comparison with existing studies. The results indicate that considering 3D effects yields a higher safety factor, while cracks reduce slope stability. Each unsaturated soil exhibits a distinctive temperature response curve, highlighting the importance of understanding soil types in the design phase.
Funding: Supported by the National Natural Science Foundation of China (Nos. 52271148 and 51871129).
Abstract: Ti-based bulk metallic glasses (BMGs) have attracted increasing attention due to their high specific strength. However, a fundamental conflict exists between the specific strength and the glass-forming ability (GFA) of Ti-based BMGs, significantly restricting their commercial applications. In this study, this challenge was addressed by introducing a two-step alloying strategy to mitigate the remarkable density increase associated with the heavy alloying elements required for enhancing the GFA. Through two-step alloying with Al and Fe in sequence, simultaneous enhancements in specific strength and GFA were achieved based on a Ti-Zr-Be ternary metallic glass, resulting in the development of a series of centimeter-sized metallic glasses exhibiting ultrahigh specific strength. Notably, the newly developed (Ti_(45)Zr_(20)Be_(31)Al_(4))_(94)Fe_(6) alloy set a new record for the specific strength of Ti-based BMGs. Along with a critical diameter (D_(c)) of 10 mm, it offers an optimal scheme for balancing the specific strength and GFA of Ti-based BMGs. These results further brighten the application prospects of Ti-based BMGs as lightweight materials.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2021YFA1202900), the National Natural Science Foundation of China (Grant Nos. 12422402, 61888102, 12274447, and 62204166), the Chinese Academy of Sciences Strategic Priority Research Program (Grant No. XDB067020302), and the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2021B0301030002).
Abstract: Molybdenum disulfide (MoS_(2)) is an emerging two-dimensional (2D) semiconductor and has great potential for high-end applications beyond traditional silicon-based electronics. Compared to monolayers, multilayer MoS_(2) has improved electron mobility and current density, and therefore provides a more promising platform for thin-film transistors, flexible electronic devices, etc. However, the synthesis of large-area, high-quality multilayer MoS_(2) films with controlled layer number remains a challenge. Here, we develop a two-step oxygen-assisted chemical vapor deposition (OA-CVD) methodology for the synthesis of 4-inch MoS_(2) films, from monolayer to trilayer, on sapphire substrates. The influence of critical growth parameters on the growth of multilayer MoS_(2) is systematically explored, such as the evaporation temperature of MoO_(3) and the flow rate of O_(2). Flexible field-effect transistor (FET) devices fabricated from bilayer/trilayer MoS_(2) show substantial improvements in mobility compared with flexible FETs based on monolayer films.
Funding: Supported by the National Natural Science Foundation of China (60661003) and the Research Project of the Department of Education of Jiangxi Province (GJJ10566).
Abstract: In this paper, we explore a novel ensemble method for spectral clustering. In contrast to traditional clustering ensemble methods that combine all the obtained clustering results, we propose an adaptive spectral clustering ensemble method to achieve a better clustering solution. This method can adaptively assess the number of component members, a capability not offered by many other algorithms. The component clusterings of the ensemble system are generated by spectral clustering (SC), which has characteristics well suited to generating diverse committees. The selection process works by evaluating the generated component spectral clusterings through a resampling technique and the population-based incremental learning (PBIL) algorithm. Experimental results on UCI datasets demonstrate that the proposed algorithm can achieve better results than traditional clustering ensemble methods, especially when the number of component clusterings is large.
Abstract: The merging of a panchromatic (PAN) image with a multispectral satellite image (MSI) to increase the spatial resolution of the MSI, while simultaneously preserving its spectral information, is classically referred to as PAN-sharpening. We employed a recent dataset derived from the very high resolution WorldView-2 satellite (PAN and MSI) for two test sites (one over an urban area and the other over Antarctica) to comprehensively evaluate the performance of six existing PAN-sharpening algorithms. The algorithms under consideration were Gram-Schmidt (GS), Ehlers fusion (EF), modified hue-intensity-saturation (Mod-HIS), high-pass filtering (HPF), the Brovey transform (BT), and wavelet-based principal component analysis (W-PC). Quality assessment of the sharpened images was carried out using 20 quality indices. We also analyzed the performance of the nearest neighbour (NN), bilinear interpolation (BI), and cubic convolution (CC) resampling methods to test their practicability in the PAN-sharpening process. Our results indicate that the comprehensive performance of the PAN-sharpening methods decreased in the following order: GS > W-PC > EF > HPF > Mod-HIS > BT, while the resampling methods followed the order: NN > BI > CC.
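Of the six methods, the Brovey transform is the simplest to state; a minimal sketch (band count and array contents are illustrative assumptions, with the MSI already resampled to the PAN grid) scales each multispectral band by the ratio of the PAN image to the band mean, injecting PAN spatial detail while preserving the inter-band ratios.

```python
import numpy as np

def brovey(ms, pan, eps=1e-6):
    """Brovey transform PAN-sharpening.

    ms:  (bands, H, W) multispectral image resampled to the PAN grid
    pan: (H, W) panchromatic image
    """
    intensity = ms.mean(axis=0)            # per-pixel intensity proxy
    return ms * (pan / (intensity + eps))  # same gain applied to every band

rng = np.random.default_rng(1)
ms = rng.random((3, 8, 8))
pan = rng.random((8, 8))
sharp = brovey(ms, pan)
print(sharp.shape)  # (3, 8, 8)
```

Because every band at a pixel is multiplied by the same factor, band ratios (and hence hue) are preserved exactly, which is also why BT tends to distort absolute radiometry, consistent with its last-place ranking above.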
Abstract: In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) frontends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M-data-load time period. We present a load-process architecture (LPA) and a runtime architecture (RA), based on a serial polyphase structure, which have different scheduling. In LPA, N subfilters are loaded, and then M subfilters are processed at a clock rate that is a multiple of the input data rate; this is necessary to meet the output time constraint of the down-sampled data. In RA, M subfilter processes are efficiently scheduled within the N-data-load time while N subfilters are simultaneously loaded. This requires reduced clock rates compared with LPA, and potentially less power is consumed. A polyphase filter bank that uses different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios is used as a case study, and an analysis of area, time, and power for their FPGA architectures is given. For resource-optimized SDR frontends, RA is superior for reducing operating clock rates and dynamic power consumption. RA is also superior for reducing area resources, except when indices are prestored in LUTs.
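The polyphase identity these channelizers rely on can be checked in a few lines of NumPy (a software sketch only; the paper's contribution is the FPGA scheduling, not the filter math): filtering at the full rate and then decimating by M gives the same output as summing M subfilter branches that each operate on one input phase at the low rate.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Compute (x * h) downsampled by M using M polyphase branches."""
    L = -(-len(h) // M) * M                 # round filter length up to a multiple of M
    hp = np.pad(h, (0, L - len(h)))
    n_out = (len(x) + len(h) - 1 + M - 1) // M
    y = np.zeros(n_out)
    for k in range(M):
        hk = hp[k::M]                       # subfilter k: taps h[k], h[k+M], ...
        if k == 0:
            xk = x[0::M]                    # input phase 0: x[0], x[M], ...
        else:
            # phase k: x[-k], x[M-k], x[2M-k], ... with the leading sample zero
            xk = np.concatenate(([0.0], x[M - k::M]))
        bk = np.convolve(xk, hk)[:n_out]    # branch runs at the low (output) rate
        y[:len(bk)] += bk
    return y

rng = np.random.default_rng(6)
x, h, M = rng.normal(size=64), rng.normal(size=8), 4
direct = np.convolve(x, h)[::M]             # reference: full-rate filter, then decimate
out = polyphase_decimate(x, h, M)
print(np.allclose(out[:len(direct)], direct))  # True
```

Each branch touches only every M-th input sample, which is exactly the property that lets the LPA and RA schedules trade clock rate against parallel hardware.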
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 2016JBM051).
Abstract: In order to address the computational accuracy and efficiency issues of traditional resampling algorithms in rolling element bearing fault diagnosis, an equal division impulse-based (EDI-based) resampling algorithm is proposed. First, the time marks of every rising edge of the rotating speed pulse and the corresponding amplitudes of the faulty bearing vibration signal are determined. Then, the interval between every pair of adjacent rotating speed pulses is divided equally, and the time marks within each interval and the corresponding amplitudes of the vibration signal are obtained by an interpolation algorithm. Finally, all the time marks and the corresponding amplitudes of the vibration signal are arranged, and the time marks are transformed into the angle domain to obtain the resampled signal. Speed-up and speed-down faulty bearing signals are employed to verify the validity of the proposed method, and experimental results show that the proposed method is effective for diagnosing faulty bearings. Furthermore, traditional order tracking techniques are applied to the experimental bearing signals, and the results show that the proposed method produces more accurate outcomes in less computation time.
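The equal-division step can be sketched as basic computed order tracking (a simplification of the EDI algorithm: linear interpolation only, once-per-revolution pulses assumed, synthetic data): each pulse interval is divided into equal angle increments and the vibration signal is interpolated at those instants, so a speed-varying tone becomes a constant order in the angle domain.

```python
import numpy as np

def angle_resample(t, x, pulse_times, points_per_rev=64):
    """Resample the time signal x(t) at equal shaft-angle increments,
    one interpolation grid per revolution between adjacent pulses."""
    grids = [np.linspace(t0, t1, points_per_rev, endpoint=False)
             for t0, t1 in zip(pulse_times[:-1], pulse_times[1:])]
    return np.interp(np.concatenate(grids), t, x)

# simulated run-up: shaft angle grows as 5*t**2 revolutions,
# vibration is a pure order-2 tone: x = sin(2*pi * 2 * revs)
t = np.linspace(0.0, 1.0, 4000)
x = np.sin(2 * np.pi * 2 * (5 * t ** 2))
pulses = np.sqrt(np.arange(6) / 5.0)        # times at which revs = 0, 1, ..., 5
y = angle_resample(t, x, pulses)            # 5 revolutions * 64 points = 320 samples
spec = np.abs(np.fft.rfft(y))
print(np.argmax(spec[1:]) + 1)              # 10 = order 2 over 5 revolutions
```

In the time-domain FFT the chirp smears across bins; after angle resampling it collapses into a single order line, which is what makes bearing fault orders trackable under varying speed.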
Funding: Supported by the National Natural Science Foundation of China (61701029).
Abstract: Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights, exploiting the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which can solve the particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions demonstrate the effectiveness of the proposed algorithm.
Funding: Supported by the National Natural Science Foundation of China (61372136).
Abstract: In order to deal with the particle degeneracy and impoverishment problems in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, resampling is treated as an evolutionary process, analogous to biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined to raise the diversity among the particles. Finally, particles of low quality are eliminated. Through the evolution, all the retained particles are regarded as optimal ones, and these particles are utilized to update the current state. By using the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter, and the unscented Kalman particle filter.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61575205 and 62175022), the Sichuan Natural Science Foundation (2022NSFSC0803), and the Sichuan Science and Technology Program (2021JDRC0035).
Abstract: The nonuniform distribution of the interference spectrum in wavenumber (k) space is a key issue limiting the imaging quality of Fourier-domain optical coherence tomography (FD-OCT). At present, the reconstruction quality achieved at different depths by the various k-space processing methods remains uncertain. Using simulated and experimental interference spectra at different depths, the effects of six common processing methods, comprising uniform resampling (linear interpolation (LI), cubic spline interpolation (CSI), time-domain interpolation (TDI), and K-B window convolution) and nonuniform-sampling direct reconstruction (Lomb periodogram (LP) and nonuniform discrete Fourier transform (NDFT)), on the reconstruction quality of FD-OCT were quantitatively analyzed and compared in this work. The results obtained using simulated and experimental data were consistent. From the experimental results, the averaged peak intensity, axial resolution, and signal-to-noise ratio (SNR) of the NDFT at depths from 0.5 to 3.0 mm were improved by about 1.9 dB, 1.4 times, and 11.8 dB, respectively, compared to the averaged indices of all the uniform resampling methods at all depths. Similarly, the improvements in the above three indices for the LP were 2.0 dB, 1.4 times, and 11.7 dB, respectively. The analysis method and results of this work are helpful for selecting an appropriate k-space processing method, so as to improve the imaging quality of FD-OCT.
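The direct NDFT reconstruction can be sketched as follows (a toy single-reflector spectrum, not OCT data; the wavenumber range and depth grid are arbitrary assumptions): the depth profile is evaluated by summing the nonuniformly sampled spectrum against complex exponentials at each depth, with no resampling onto a uniform k-grid.

```python
import numpy as np

def ndft(k, s, z):
    """Direct nonuniform DFT: A(z) = sum_j s_j * exp(-1j * k_j * z)."""
    return np.exp(-1j * np.outer(z, k)) @ s

rng = np.random.default_rng(2)
k = np.sort(rng.uniform(0.0, 2.0 * np.pi, 512))  # nonuniform wavenumber samples
z0 = 40.0                                        # reflector depth (arbitrary units)
s = np.cos(k * z0)                               # ideal interference spectrum
z = np.arange(0.0, 80.0, 0.25)                   # depth axis to reconstruct
profile = np.abs(ndft(k, s, z))
print(z[np.argmax(profile)])                     # peak recovered near z0 = 40.0
```

The cost is O(len(k) * len(z)) per A-line rather than O(N log N), which is the usual price of skipping interpolation; the paper's comparison quantifies what that price buys in peak intensity, resolution, and SNR.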
Abstract: An efficient resampling reliability approach was developed to account for the statistical uncertainties in input properties, arising from insufficient data, when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to handle complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a statistical resampling tool, the jackknife, with advanced reliability tools such as Latin hypercube sampling (LHS), Sobol's global sensitivity analysis, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated on four cases of different types, and the results were compared with a recently developed bootstrap-based resampling reliability approach. The results show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties in input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
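The jackknife core of such an approach is compact: the statistic is recomputed with each observation left out in turn, and the spread of these replicates estimates the statistic's uncertainty. A minimal sketch (generic, not the paper's coupled LHS/MLS-RSM pipeline) for the sample mean, where the jackknife standard error provably equals the classical s/sqrt(n):

```python
import numpy as np

def jackknife(data, stat):
    """Leave-one-out resampling estimate of a statistic and its standard error."""
    n = len(data)
    reps = np.array([stat(np.delete(data, i)) for i in range(n)])  # n replicates
    center = reps.mean()
    se = np.sqrt((n - 1) / n * ((reps - center) ** 2).sum())       # jackknife SE
    return center, se

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=100)   # e.g. 100 measured friction angles
center, se = jackknife(x, np.mean)
# for the mean, the jackknife SE reproduces the classical s / sqrt(n)
print(np.isclose(se, x.std(ddof=1) / np.sqrt(len(x))))  # True
```

In the reliability setting, `stat` would instead map a leave-one-out dataset to a reliability index, so the replicates trace out the index's confidence interval rather than a point estimate.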
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2020YFC1807905), the National Natural Science Foundation of China (Grant Nos. 52079090 and U20A20316), and the Basic Research Program of Qinghai Province (Grant No. 2022-ZJ-704).
Abstract: Neural network methods have been widely used in many fields of scientific research with the rapid increase of computing power. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations with neural networks. In this paper, a resampling technique based on the expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points in accordance with the performance of the neural network. In this new approach, both training sites with slight changes in residual values and training points with large residuals are taken into account. To make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method successfully addresses the issue that the neural network becomes ill-conditioned or even crashes due to the extensive alteration of the training point distribution. The effectiveness of the improved physics-informed neural networks with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
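A generic residual-driven resampling step (not the ESP strategy itself, whose selection rule also weighs points with small residual changes and enforces continuity) can be sketched as: score candidate collocation points by residual magnitude and draw the new training set with probability proportional to that score, so points concentrate where the PDE is solved worst.

```python
import numpy as np

def residual_resample(candidates, residual_fn, n_train, rng):
    """Draw training points with probability proportional to |residual|."""
    r = np.abs(residual_fn(candidates))
    p = r / r.sum()
    idx = rng.choice(len(candidates), size=n_train, replace=False, p=p)
    return candidates[idx]

rng = np.random.default_rng(4)
cands = np.linspace(0.0, 1.0, 1000)          # candidate collocation points
# toy residual profile: large near a sharp solution feature at x = 0.7
res = lambda x: 0.05 + np.exp(-((x - 0.7) / 0.05) ** 2)
train = residual_resample(cands, res, 200, rng)
print(len(train))  # 200
```

In a real PINN loop, `residual_fn` would evaluate the network's PDE residual at each candidate; the uniformity concern the paper raises is visible here too, since pure residual-proportional sampling strips points away from well-fitted regions.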
Funding: Supported by the National Natural Science Foundation of China (No. 62072480), the Key Areas R&D Program of Guangdong (No. 2019B010136002), and the Key Scientific Research Program of Guangzhou (No. 201804020068).
Abstract: The estimation of image resampling factors is an important problem in image forensics. Among all resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted considerable research interest. However, because of an inherent ambiguity, spectrum-based methods fail to discriminate upscale from downscale operations without prior information. In general, resampling leaves detectable traces in both the spatial and frequency domains of a resampled image. First, the resampling process introduces correlations between neighboring pixels, so a set of periodic pixels correlated to their neighbors can be found in a resampled image. Second, a resampled image has distinct, strong peaks in its spectrum, while the spectrum of an original image has no clear peaks. Hence, in this paper, we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream, whose purpose is to extract resampling-trace features directly from the rescaled images. The other is a frequency stream that discovers the differences between the spectra of rescaled and original images. The features from the two streams are then fused to construct a feature representation covering the resampling traces left in the spatial and frequency domains, which is fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms some CNN-based methods.
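The spectral trace the frequency stream learns from can be reproduced with a toy detector (illustrative only: a 1D signal, linear interpolation, integer factor 2): interpolated samples are exact averages of their neighbors, so the second difference vanishes periodically, and the spectrum of its magnitude shows a strong peak tied to the resampling factor.

```python
import numpy as np

def interpolation_trace_spectrum(x, factor):
    """Upscale x by `factor` with linear interpolation, then return the
    magnitude spectrum of the absolute second difference, whose peaks
    reveal the periodic interpolation correlations."""
    n = len(x)
    grid = np.arange(0.0, n - 1 + 1e-9, 1.0 / factor)
    y = np.interp(grid, np.arange(n), x)
    d = np.abs(np.diff(y, 2))               # zero at every interpolated sample
    d = d[: 2 * (len(d) // 2)]              # even length so period 2 hits a bin
    return np.abs(np.fft.rfft(d - d.mean()))

rng = np.random.default_rng(5)
spec = interpolation_trace_spectrum(rng.normal(size=256), 2)
peak = np.argmax(spec[1:]) + 1
print(peak == len(spec) - 1)  # strongest peak at half the new sampling rate
```

An unresampled signal yields no such peak, which is the cue the spectrum-based methods exploit; the up/down ambiguity arises because different factors can place peaks at coinciding (aliased) frequencies.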
Funding: Project (61372136) supported by the National Natural Science Foundation of China.
Abstract: The design, analysis, and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. By running the IIDF, an iterated PF (IPF) is obtained. Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integral part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights; these are the two key differences from the SR. The detailed implementation procedures of the PR-based IPF on a graphics processing unit are presented last. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
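The replication rule that distinguishes PR from SR can be shown in a few lines (a simplified sketch, not the paper's GPU implementation: here the deficit left by flooring is filled with extra copies of the largest-weight particles, which implicitly eliminates the smallest-weight ones).

```python
import numpy as np

def integer_resample(particles, weights):
    """Replicate particle i floor(N * w_i) times, then fill the remaining
    slots with extra copies of the largest-weight particles."""
    n = len(particles)
    counts = np.floor(n * weights).astype(int)   # deterministic replication
    deficit = n - counts.sum()                   # slots lost to flooring
    order = np.argsort(weights)[::-1]            # largest weights first
    counts[order[:deficit]] += 1                 # smallest weights get nothing
    return np.repeat(particles, counts)

w = np.array([0.4, 0.3, 0.2, 0.06, 0.04])
p = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
print(integer_resample(p, w))  # [10. 10. 10. 20. 30.]
```

Unlike SR, no cumulative-sum scan over the weights is needed: each particle's copy count depends only on its own weight, which is what makes the scheme embarrassingly parallel on a GPU.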