The rapid advancement of artificial intelligence (AI) has significantly increased the computational load on data centers. AI-related computational activities consume considerable electricity and result in substantial carbon emissions. To mitigate these emissions, future data centers should be strategically planned and operated to fully utilize renewable energy resources while meeting growing computational demands. This paper investigates how much carbon emission reduction can be achieved by using carbon-oriented demand response to guide the optimal planning and operation of data centers. A carbon-oriented data center planning model is proposed that considers the carbon-oriented demand response of the AI load. In the planning model, future operation simulations comprehensively coordinate the temporal-spatial flexibility of computational loads and the quality of service (QoS). An empirical study based on the proposed models is conducted on real-world data from China. The results show that newly constructed data centers should be built in Gansu Province, Ningxia Hui Autonomous Region, Sichuan Province, Inner Mongolia Autonomous Region, and Qinghai Province, accounting for 57% of the total national increase in server capacity. In addition, 33% of the computational load from Eastern China should be transferred to the West, which could reduce overall load carbon emissions by 26%.
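The spatial side of such a demand response, shifting deferrable compute toward regions with cleaner electricity subject to server capacity, can be illustrated with a minimal greedy sketch. The region names, capacities, and carbon intensities below are illustrative assumptions, not values from the study.

```python
# Greedy carbon-aware placement: assign divisible compute load to the
# regions with the lowest carbon intensity first, up to each region's
# server capacity. All numbers below are illustrative, not study data.

def place_load(total_load, capacity, intensity):
    """Return per-region allocation and total emissions.

    capacity and intensity are dicts keyed by region name;
    intensity is carbon emitted per unit of served load.
    """
    alloc = {r: 0 for r in capacity}
    remaining = total_load
    for region in sorted(capacity, key=intensity.get):  # cleanest first
        take = min(remaining, capacity[region])
        alloc[region] = take
        remaining -= take
        if remaining <= 0:
            break
    emissions = sum(alloc[r] * intensity[r] for r in alloc)
    return alloc, emissions

cap = {"east": 100, "west_a": 60, "west_b": 50}    # hypothetical capacities
co2 = {"east": 0.8, "west_a": 0.3, "west_b": 0.4}  # hypothetical intensities

alloc, em = place_load(120, cap, co2)
# Cleanest regions fill first: west_a 60, west_b 50, east 10.
```

Compared with filling the eastern region first (emissions 100 × 0.8 + 20 × 0.3 = 86), the carbon-ordered placement emits 46 in this toy setting, mirroring the paper's east-to-west transfer logic in miniature.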
In the past decade, financial institutions have invested significant effort in the development of accurate analytical credit scoring models. The evidence suggests that even small improvements in the accuracy of existing credit-scoring models may optimize profits while effectively managing risk exposure. Despite continuing efforts, the majority of existing credit scoring models still include some judgment-based assumptions that are sometimes supported by the significant findings of previous studies but are not validated using the institution's internal data. We argue that current studies related to the development of credit scoring models have largely ignored recent developments in statistical methods for sufficient dimension reduction. To contribute to the field of financial innovation, this study proposes a Dimension Reduction Assisted Credit Scoring (DRA-CS) method via distance covariance-based sufficient dimension reduction (DCOV-SDR) within a Majorization-Minimization (MM) algorithm. First, in the presence of a large number of variables, the DRA-CS method achieves greater dimension reduction and better prediction accuracy than other methods used for dimension reduction. Second, when the DRA-CS method is employed with logistic regression, it outperforms existing methods based on different variable selection techniques. This study argues that the DRA-CS method should be used by financial institutions as a financial innovation tool to analyze high-dimensional customer datasets and improve the accuracy of existing credit scoring methods.
BFOSC and YFOSC are the most frequently used instruments on the Xinglong 2.16 m telescope and the Lijiang 2.4 m telescope, respectively. We developed a software package named "BYSpec" (BFOSC and YFOSC Spectra Reduction Package) dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments. The package supports bias and flat-fielding correction, order location, background subtraction, automatic wavelength calibration, and absolute flux calibration. The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra. A comparison with the 1D spectra reduced with IRAF verifies the reliability of the results. This open-source software is publicly available to the community.
Experts and officials shared their insights on poverty reduction cooperation and sustainable development during the 2025 International Seminar on Global Poverty Reduction Partnerships.
Heteroatom-doped carbon is considered a promising alternative to commercial Pt/C as an efficient catalyst for the oxygen reduction reaction (ORR). This study presents the synthesis of iron-loaded, sulfur and nitrogen co-doped carbon (Fe/SNC) via in situ incorporation of 2-aminothiazole molecules into zeolitic imidazolate framework-8 (ZIF-8) through coordination between metal ions and organic ligands. Sulfur and nitrogen doping in carbon supports effectively modulates the electronic structure of the catalyst, increases the Brunauer-Emmett-Teller surface area, and exposes more Fe-N_(x) active centers. Fe-loaded, S and N co-doped carbon with an Fe/S molar ratio of 1:10 (Fe/SNC-10) exhibits a half-wave potential of 0.902 V vs. RHE. After 5000 cycles of cyclic voltammetry, its half-wave potential decreases by only 20 mV vs. RHE, indicating excellent stability. Due to sulfur's lower electronegativity, the electronic structure of the Fe-N_(x) active center is modulated. Additionally, the larger atomic radius of sulfur introduces defects into the carbon support. As a result, Fe/SNC-10 demonstrates superior ORR activity and stability in alkaline solution compared with Fe-loaded, N-doped carbon (Fe/NC). Furthermore, the zinc-air battery assembled with the Fe/SNC-10 catalyst shows enhanced performance relative to those assembled with Fe/NC and Pt/C catalysts. This work offers a novel design strategy for advanced energy storage and conversion applications.
The development of Pt-free catalysts for the oxygen reduction reaction (ORR) is a key challenge in meeting the cost requirements of proton exchange membrane fuel cells (PEMFCs) in commercial applications. In this work, a series of RuCo/C catalysts were synthesized by the NaBH_(4) reduction method under the premise that the total metal mass percentage was 20%. X-ray diffraction (XRD) patterns and scanning electron microscopy (SEM) confirmed the formation of single-phase nanoparticles with an average size of 33 nm. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV) tests indicated that the RuCo(2:1)/C catalyst had the optimal ORR properties. Additionally, the RuCo(2:1)/C catalyst remarkably sustained 98.1% of its activity even after 3000 cycles, surpassing the performance of Pt/C (84.8%). Analysis of the elemental state of the catalyst surface after cycling using X-ray photoelectron spectroscopy (XPS) revealed that the Ru^(0) percentage of RuCo(2:1)/C decreased by 2.2% (from 66.3% to 64.1%), while the Pt^(0) percentage of Pt/C decreased by 7.1% (from 53.3% to 46.2%). These results suggest that the synergy between Ru and Co holds the potential to pave the way for future low-cost and highly stable ORR catalysts, offering significant promise in the context of PEMFCs.
Using the photoelectrocatalytic CO_(2) reduction reaction (CO_(2)RR) to produce valuable fuels is a fascinating way to alleviate environmental issues and energy crises. Bismuth-based (Bi-based) catalysts have attracted widespread attention for CO_(2)RR due to their high catalytic activity, selectivity, excellent stability, and low cost. However, they still need to be further improved to meet the needs of industrial applications. This review comprehensively summarizes recent advances in regulation strategies for Bi-based catalysts, divided into six categories: (1) defect engineering, (2) atomic doping engineering, (3) organic framework engineering, (4) inorganic heterojunction engineering, (5) crystal face engineering, and (6) alloying and polarization engineering. The corresponding catalytic mechanisms of each regulation strategy are also discussed in detail, aiming to enable researchers to understand the structure-property relationship of the improved Bi-based catalysts fundamentally. Finally, the challenges and future opportunities of Bi-based catalysts in the photoelectrocatalytic CO_(2)RR application field are featured from the perspectives of (1) the combination or synergy of multiple regulation strategies, (2) revealing formation mechanisms and realizing controllable synthesis, and (3) in situ multiscale investigation of activation pathways to uncover the catalytic mechanisms. On the one hand, through comparative analysis and mechanistic explanation of the six major regulation strategies, a multidimensional knowledge framework of the structure-activity relationship of Bi-based catalysts can be constructed for researchers, which not only deepens the atomic-level understanding of catalytic active sites, charge transport paths, and the adsorption behavior of intermediate products, but also provides theoretical guiding principles for the controllable design of new catalysts. On the other hand, the promising collaborative regulation strategies, controllable synthetic paths, and in situ multiscale characterization techniques presented in this work provide a paradigm reference for shortening the research and development cycle of high-performance catalysts, facilitating the transition of photoelectrocatalytic CO_(2)RR technology from laboratory routes to industrial application.
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.
This paper presents a reasonable gridding-parameters extraction method for setting the optimal interpolation nodes in the gridding of scattered observed data. The method can extract optimized gridding parameters based on the distribution of features in the raw data. Modeling analysis proves that distortion caused by gridding can be greatly reduced when using such parameters. We also present some improved technical measures that use human-machine interaction and multi-thread parallel technology to solve inadequacies in traditional gridding software. On the basis of these methods, we have developed software that can be used to grid scattered data using a graphic interface. Finally, a comparison of different gridding parameters on field magnetic data from Jilin Province, North China demonstrates the superiority of the proposed method in eliminating distortions and enhancing gridding efficiency.
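As background for the gridding step itself, a minimal sketch of interpolating scattered observations onto regular grid nodes is shown below, using simple inverse-distance weighting. This is a generic baseline for illustration, not the parameter-extraction method proposed in the paper.

```python
import numpy as np

def idw_grid(xs, ys, vals, grid_x, grid_y, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples
    (xs, ys, vals) onto the regular grid nodes given by the 1-D
    coordinate arrays grid_x and grid_y."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.empty_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xs - gx[i, j], ys - gy[i, j])
            if d.min() < eps:            # node coincides with a sample
                out[i, j] = vals[d.argmin()]
            else:
                w = 1.0 / d ** power     # nearer samples weigh more
                out[i, j] = np.sum(w * vals) / np.sum(w)
    return out
```

A node halfway between two samples receives the average of their values; a node on top of a sample reproduces that sample exactly.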
In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage in reaching the global optimum. However, for complex problems in which a large number of design variables are needed, the computational cost becomes prohibitive, and thus new global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods, forming a new global optimization system that aims to improve the efficiency of conventional global optimization. The new system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed system. In the two cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking 1/3 of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization methods.
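The POD step can be sketched with a thin SVD of a snapshot matrix: the leading left singular vectors form the reduced basis, and their coefficients become the new, lower-dimensional design variables. A minimal sketch of this idea, not the paper's full optimization system:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Proper Orthogonal Decomposition via thin SVD.

    snapshots: (n_features, n_samples) matrix whose columns are design
    (or flow-field) snapshots. Returns the snapshot mean, the leading
    orthonormal POD modes, and each snapshot's reduced coordinates."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                 # center the snapshots
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :n_modes]               # orthonormal spatial modes
    coeffs = modes.T @ X                 # low-dimensional design variables
    return mean, modes, coeffs

# A snapshot is recovered from its reduced coordinates:
#     x_k ≈ mean[:, 0] + modes @ coeffs[:, k]
```

If the snapshot set has intrinsic rank no greater than `n_modes`, the reconstruction is exact; otherwise the truncation keeps the directions of largest variance, which is what allows the design space to shrink (e.g., from 42 to 20 variables) without losing generality.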
A new method of nonlinear analysis is established by combining phase space reconstruction and data reduction with sub-frequency-band wavelets. This method is applied to two types of chaotic dynamic systems (Lorenz and Rössler) to examine its anti-noise ability for complex systems. Results show that the nonlinear dynamic system analysis method resists noise and reveals the internal dynamics of a weak signal under noise pollution. On this basis, the vertical upward gas-liquid two-phase flow in a 2 mm × 0.81 mm small rectangular channel is investigated. The frequency and energy distributions of the main oscillation mode are revealed by analyzing the time-frequency spectra of the pressure signals of different flow patterns. The positive power spectral density of singular-value frequency entropy and the damping ratio are extracted to characterize the evolution of flow patterns and achieve accurate recognition of different vertical upward gas-liquid flow patterns (bubbly flow: 100%, slug flow: 92%, churn flow: 96%, annular flow: 100%). The proposed analysis method will enrich the dynamics theory of multiphase flow in small channels.
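The phase space reconstruction step is commonly implemented by time-delay embedding of the measured scalar signal; a minimal sketch, with the embedding dimension and delay as assumed parameters:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase space reconstruction by time-delay embedding.

    Returns an (N, dim) matrix whose rows are the delay vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau         # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Applied to a pressure time series, each row of the result is one point of the reconstructed trajectory; choosing `dim` and `tau` (e.g., via false nearest neighbors and mutual information) is a separate step not shown here.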
Imbalanced data classification is one of the major problems in machine learning. An imbalanced dataset typically has significant differences in the number of data samples between its classes. In most cases, the performance of a machine learning algorithm such as the Support Vector Machine (SVM) is affected when dealing with an imbalanced dataset: the classification accuracy is mostly skewed toward the majority class, and poor results are exhibited in the prediction of minority-class samples. In this paper, a hybrid approach combining a data pre-processing technique and the SVM algorithm based on improved Simulated Annealing (SA) is proposed. Firstly, a data pre-processing technique is proposed that primarily addresses the resampling strategy for handling imbalanced datasets. In this technique, data are first synthetically generated to equalize the number of samples between classes, followed by a reduction step to remove redundant and duplicated data. Next, the balanced dataset is used to train the SVM. Since this algorithm requires an iterative process to search for the best penalty parameter during training, an improved SA algorithm is proposed for this task. In this improvement, a new acceptance criterion for solutions in the SA algorithm is introduced to enhance the accuracy of the optimization process. Experimental work based on ten publicly available imbalanced datasets has demonstrated higher accuracy in classification tasks using the proposed approach in comparison with the conventional implementation of SVM. An average accuracy of 89.65% for binary classification demonstrates the good performance of the proposed approach.
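The pre-processing stage described above, synthetic generation to equalize class sizes followed by removal of duplicates, can be sketched in a simplified SMOTE-like form. This is an illustrative stand-in under stated assumptions, not the authors' exact procedure:

```python
import numpy as np

def smote_like_balance(X, y, rng=None):
    """Equalize class sizes by interpolating between random same-class
    pairs (a SMOTE-like step), then remove exact duplicate rows.
    Classes with fewer than two samples cannot be interpolated and
    are left as-is."""
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    new_X, new_y = [X], [y]
    for c, cnt in zip(classes, counts):
        if cnt < 2:
            continue
        Xc = X[y == c]
        for _ in range(target - cnt):
            i, j = rng.choice(cnt, size=2, replace=False)
            lam = rng.uniform(0.1, 0.9)  # strictly between the pair
            new_X.append((Xc[i] + lam * (Xc[j] - Xc[i]))[None, :])
            new_y.append([c])
    Xb = np.vstack(new_X)
    yb = np.concatenate(new_y)
    # Reduction step: drop exact duplicate rows.
    Xb, idx = np.unique(Xb, axis=0, return_index=True)
    return Xb, yb[idx]
```

The balanced output can then be handed to any SVM trainer; the penalty-parameter search by improved SA is a separate stage not modeled here.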
In order to increase fault diagnosis efficiency and enable fault data mining, the decision table containing numerical attributes must be discretized for further calculations. The discernibility matrix-based reduction method depends on whether the numerical attributes can be properly discretized or not. So a discretization algorithm based on particle swarm optimization (PSO) is proposed, and hybrid weights are adopted in the process of particle evolution. Comparative calculations for certain equipment are completed to demonstrate the effectiveness of the proposed algorithm. The results indicate that the proposed algorithm has better performance than other popular algorithms such as the class-attribute interdependence maximization (CAIM) discretization method and the entropy-based discretization method.
Dimensionality reduction and data visualization are useful and important processes in pattern recognition, and many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates a principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
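Of the methods reviewed, classical (metric) MDS has a particularly compact closed form: double-center the squared distance matrix and take the top eigenvectors of the resulting Gram matrix. A minimal sketch:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) multidimensional scaling.

    D: (n, n) matrix of pairwise Euclidean distances.
    Returns an (n, k) embedding whose pairwise distances
    approximate D (exactly, if D is Euclidean of rank <= k)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # largest eigenvalues first
    L = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * L                # scale eigenvectors
```

For a distance matrix computed from points that truly lie in a k-dimensional Euclidean space, this recovers the configuration up to rotation and translation, which is why MDS serves as the metric baseline against which SOM/ViSOM scalings are compared.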
The vector transformation and pole reduction from the total-field anomaly are significant for interpretation. We examined these industry-standard processing procedures in the Fourier domain. We propose a novel iteration algorithm for regional magnetic anomaly transformations to derive the vertical-component data from the total-field measurements with the variation in the core-field direction over the region. Additionally, we use the same algorithm to convert the calculated vertical-component data into the corresponding data at the pole and realize the processing of differential reduction to the pole (DRTP). Unlike Arkani-Hamed's DRTP method, the two types of iterative algorithms have the same forms, and DRTP is realized by implementing this algorithm twice. The synthetic model's calculation results show that the method has high accuracy, and the field data processing confirms its practicality.
A noise-reduction method with sliding windows in the frequency-space (f-x) domain, called the local f-x Cadzow noise-reduction method, is presented in this paper. This method is based on the assumption that the signal in each window is linearly predictable in the spatial direction while the random noise is not. For each Toeplitz matrix constructed from a constant-frequency slice, a singular value decomposition (SVD) is applied to separate signal from noise. To avoid edge artifacts caused by zero percent overlap between windows and to remove more noise, an appropriate overlap is adopted. Besides flat and dipping events, this method can enhance curved and conflicting events. However, it is not suitable for seismic data that contain big spikes or null traces. It is also compared with the SVD, f-x deconvolution, and Cadzow methods without windows. The comparison results show that the local Cadzow method performs well in removing random noise and preserving signal. In addition, a real data example proves that it is a potential noise-reduction technique for seismic data obtained in areas of complex formations.
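The core SVD rank-reduction idea behind Cadzow filtering can be sketched on a single 1-D signal. The paper applies it per constant-frequency slice within sliding windows; this simplified, windowless, single-pass version only illustrates the principle:

```python
import numpy as np

def cadzow_denoise(x, rank, ncols=None):
    """One Cadzow pass: embed a 1-D signal in a Hankel matrix, truncate
    its SVD to the given rank (the signal subspace), and average the
    anti-diagonals back into a signal."""
    n = len(x)
    ncols = ncols or n // 2
    nrows = n - ncols + 1
    H = np.array([x[i:i + ncols] for i in range(nrows)])  # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # rank-reduced
    out = np.zeros(n)                 # average anti-diagonals back to 1-D
    cnt = np.zeros(n)
    for i in range(nrows):
        out[i:i + ncols] += Hr[i]
        cnt[i:i + ncols] += 1
    return out / cnt
```

A single sinusoid yields a rank-2 Hankel matrix, so truncating to rank 2 keeps the signal subspace and discards most of the noise energy; the full Cadzow method repeats the embed/truncate/average cycle until convergence.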
Multi-level searching is called drill-down search. At present, no drill-down search feature is available in existing search engines such as Google, Yahoo, Bing, and Baidu. Drill-down search is very useful for the end user to find the exact search results among huge paginated search results. A higher level of drill-down search with a category-based search feature yields the most accurate search results, but it increases the number and size of the files in the file system. The purpose of this manuscript is to implement a big data storage reduction binary file system model for a category-based drill-down search engine that offers fast multi-level filtering capability. The basic methodology of the proposed model stores the search engine data in the binary file system model. To verify the effectiveness of the proposed file system model, 5 million unique keyword records are stored in a binary file, and the efficiency of the proposed file system is analysed. Experimental results based on real data show the speed and superiority of our storage model. Experiments demonstrated that our file system's expansion ratio is constant, that it reduces disk storage space by up to 30% compared with a conventional database/file system, and that it also increases search performance for any level of search. The paper starts with a short introduction to drill-down search, followed by a detailed discussion of the important technologies used to implement the big data storage reduction system.
To improve the efficiency of attribute reduction, we present an attribute reduction algorithm based on background knowledge and information entropy that makes use of background knowledge from the research field. Under the condition of known background knowledge, the algorithm can not only greatly improve the efficiency of attribute reduction, but also avoid the bias of information entropy toward attributes with many values. The experimental results verify that the algorithm is effective. Finally, the algorithm produces better results when applied to the classification of star spectra data.
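The information-entropy ingredient of such reduction algorithms ranks attributes by information gain; a minimal sketch of that measure (the background-knowledge component of the proposed algorithm is not modeled here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(attr_values, labels):
    """Information gain of an attribute: H(D) - H(D | attribute).
    Attributes with higher gain are the first kept during reduction."""
    n = len(labels)
    groups = {}
    for v, y in zip(attr_values, labels):
        groups.setdefault(v, []).append(y)   # partition labels by value
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - cond
```

A perfectly predictive attribute attains the full label entropy as its gain, while a constant attribute attains zero; the bias mentioned above arises because many-valued attributes tend to inflate gain, which the proposed algorithm counteracts with background knowledge.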
Funding: supported by the Scientific & Technical Project of the State Grid (5700--202490228A--1--1-ZN).
Funding: supported by the National Natural Science Foundation of China under grant No. U2031144; partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences; the National Key R&D Program of China (No. 2021YFA1600404); the National Natural Science Foundation of China (12173082); the Yunnan Fundamental Research Projects (grant 202201AT070069); the Top-notch Young Talents Program of Yunnan Province; the Light of West China Program of the Chinese Academy of Sciences; and the International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001).
文摘BFOSC and YFOSC are the most frequently used instruments in the Xinglong 2.16 m telescope and Lijiang 2.4 m telescope,respectively.We developed a software package named“BYSpec”(BFOSC and YFOSC Spectra Reduction Package)dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments.The package supports bias and flat-fielding correction,order location,background subtraction,automatic wavelength calibration,and absolute flux calibration.The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra.A comparison with the 1D spectra reduced with IRAF verifies the reliability of the results.This open-source software is publicly available to the community.
Abstract: Experts and officials shared their insights on poverty reduction cooperation and sustainable development during the 2025 International Seminar on Global Poverty Reduction Partnerships.
Funding: Financial support from the National Natural Science Foundation of China (No. 52472271) and the National Key Research and Development Program of China (No. 2023YFE0115800).
Abstract: Heteroatom-doped carbon is considered a promising alternative to commercial Pt/C as an efficient catalyst for the oxygen reduction reaction (ORR). This study presents the synthesis of iron-loaded, sulfur and nitrogen co-doped carbon (Fe/SNC) via in situ incorporation of 2-aminothiazole molecules into zeolitic imidazolate framework-8 (ZIF-8) through coordination between metal ions and organic ligands. Sulfur and nitrogen doping in carbon supports effectively modulates the electronic structure of the catalyst, increases the Brunauer-Emmett-Teller surface area, and exposes more Fe-N_x active centers. Fe-loaded, S and N co-doped carbon with an Fe/S molar ratio of 1:10 (Fe/SNC-10) exhibits a half-wave potential of 0.902 V vs. RHE. After 5000 cycles of cyclic voltammetry, its half-wave potential decreases by only 20 mV vs. RHE, indicating excellent stability. Owing to sulfur's lower electronegativity, the electronic structure of the Fe-N_x active center is modulated; additionally, the larger atomic radius of sulfur introduces defects into the carbon support. As a result, Fe/SNC-10 demonstrates superior ORR activity and stability in alkaline solution compared with Fe-loaded, N-doped carbon (Fe/NC). Furthermore, the zinc-air battery assembled with the Fe/SNC-10 catalyst shows enhanced performance relative to those assembled with Fe/NC and Pt/C catalysts. This work offers a novel design strategy for advanced energy storage and conversion applications.
Funding: Funded by the 111 Project (No. B17034); the Open Project of the Hubei Key Laboratory of Power System Design and Test for Electrical Vehicle (No. ZDSYS202212); the Innovative Research Team Development Program of the Ministry of Education of China (No. IRT_17R83); and the Science and Technology Project of China Southern Power Grid Co., Ltd. (No. GDKJXM20222546).
Abstract: The development of Pt-free catalysts for the oxygen reduction reaction (ORR) is a key issue for meeting the cost challenges of proton exchange membrane fuel cells (PEMFCs) in commercial applications. In this work, a series of RuCo/C catalysts were synthesized by the NaBH4 reduction method under the premise that the total metal mass percentage was 20%. X-ray diffraction (XRD) patterns and scanning electron microscopy (SEM) confirmed the formation of single-phase nanoparticles with an average size of 33 nm. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV) tests indicated that the RuCo(2:1)/C catalyst had the optimal ORR properties. Additionally, the RuCo(2:1)/C catalyst remarkably sustained 98.1% of its activity even after 3000 cycles, surpassing the performance of Pt/C (84.8%). Analysis of the elemental states of the catalyst surface after cycling using X-ray photoelectron spectroscopy (XPS) revealed that the Ru^0 percentage of RuCo(2:1)/C decreased by 2.2% (from 66.3% to 64.1%), while the Pt^0 percentage of Pt/C decreased by 7.1% (from 53.3% to 46.2%). It is suggested that the synergy between Ru and Co holds the potential to pave the way for future low-cost and highly stable ORR catalysts, offering significant promise in the context of PEMFCs.
Funding: Support from the National Natural Science Foundation of China (Grant Nos. 12305372 and 22376217); the National Key Research & Development Program of China (Grant Nos. 2022YFA1603802 and 2022YFB3504100); the projects of the Key Laboratory of Advanced Energy Materials Chemistry, Ministry of Education (Nankai University); and the Key Laboratory of Jiangxi Province for Persistent Pollutants Prevention, Control and Resource Reuse (2023SSY02061) is gratefully acknowledged.
Abstract: Using the photoelectrocatalytic CO2 reduction reaction (CO2RR) to produce valuable fuels is a fascinating way to alleviate environmental issues and energy crises. Bismuth-based (Bi-based) catalysts have attracted widespread attention for the CO2RR due to their high catalytic activity, selectivity, excellent stability, and low cost. However, they still need to be further improved to meet the needs of industrial applications. This review comprehensively summarizes recent advances in regulation strategies for Bi-based catalysts, divided into six categories: (1) defect engineering, (2) atomic doping engineering, (3) organic framework engineering, (4) inorganic heterojunction engineering, (5) crystal face engineering, and (6) alloying and polarization engineering. The corresponding catalytic mechanisms of each regulation strategy are also discussed in detail, aiming to enable researchers to understand the structure-property relationships of the improved Bi-based catalysts fundamentally. Finally, the challenges and future opportunities of Bi-based catalysts in photoelectrocatalytic CO2RR applications are featured from the perspectives of (1) the combination or synergy of multiple regulatory strategies, (2) revealing formation mechanisms and realizing controllable synthesis, and (3) in situ multiscale investigation of activation pathways to uncover the catalytic mechanisms. On the one hand, through the comparative analysis and mechanism explanation of the six major regulatory strategies, a multidimensional knowledge framework of the structure-activity relationships of Bi-based catalysts can be constructed for researchers, which not only deepens the atomic-level understanding of catalytic active sites, charge transport paths, and the adsorption behavior of intermediate products, but also provides theoretical guiding principles for the controllable design of new catalysts. On the other hand, the promising collaborative regulation strategies, controllable synthetic paths, and in situ multiscale characterization techniques presented in this work provide a paradigm reference for shortening the research and development cycle of high-performance catalysts, facilitating the transition of photoelectrocatalytic CO2RR technology from laboratory routes to industrial application.
Abstract: With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. These huge and diversified datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions, including combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for comprehending complicated disorders.
Abstract: High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Funding: Supported by the Xuhui District Health Commission, No. SHXH202214.
Abstract: Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to tailor therapies effectively.
Funding: Partly supported by the Public Geological Survey Project (No. 201011039); the National High Technology Research and Development Project of China (No. 2007AA06Z134); and the 111 Project under the Ministry of Education and the State Administration of Foreign Experts Affairs, China (No. B07011).
Abstract: This paper presents a reasonable gridding-parameters extraction method for setting the optimal interpolation nodes in the gridding of scattered observed data. The method extracts optimized gridding parameters based on the distribution of features in the raw data. Modeling analysis proves that distortion caused by gridding can be greatly reduced when using such parameters. We also present improved technical measures that use human-machine interaction and multi-thread parallel technology to solve inadequacies in traditional gridding software. On the basis of these methods, we have developed software that grids scattered data through a graphical interface. Finally, a comparison of different gridding parameters on field magnetic data from Jilin Province, Northeast China demonstrates the superiority of the proposed method in eliminating distortions and enhancing gridding efficiency.
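One simple way to derive a gridding parameter from the distribution of the raw data, in the spirit described above, is to base the node spacing on the median nearest-neighbour distance of the scattered observation points. This is an illustrative heuristic, not the paper's exact extraction method:

```python
import numpy as np

def suggest_spacing(x, y):
    """Suggest a grid spacing from the median nearest-neighbour distance
    of scattered observation points (illustrative heuristic)."""
    pts = np.column_stack([x, y])
    # Full pairwise squared-distance matrix (fine for modest point counts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)           # ignore self-distances
    return float(np.median(np.sqrt(d2.min(axis=1))))
```

A spacing near the typical sample separation avoids both oversampling (wasted nodes, amplified noise) and undersampling (lost anomaly detail), which is the distortion trade-off the paper addresses.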
Funding: Supported by the National Natural Science Foundation of China (No. 11502211).
Abstract: In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their ability to reach the global optimum. However, for complex problems that require a large number of design variables, the computational cost becomes prohibitive, and new global optimization strategies are therefore required. To address this need, a data dimensionality reduction method is combined with global optimization methods to form a new global optimization system, aiming to improve the efficiency of conventional global optimization. The new system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed system. In the two cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking 1/3 of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization.
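The POD step can be sketched as a plain SVD of a snapshot matrix of design vectors; the leading left singular vectors form the reduced basis, and the coefficients in that basis become the new, lower-dimensional design variables. A minimal illustration (function names are ours, not the paper's):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD basis of a snapshot matrix whose columns are design vectors."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes]            # keep the leading energy modes

def encode(x, mean, basis):
    """Full design vector -> reduced coefficients (the new design variables)."""
    return basis.T @ (x - mean.ravel())

def decode(a, mean, basis):
    """Reduced coefficients -> full design vector handed to the flow solver."""
    return mean.ravel() + basis @ a
```

The global optimizer then searches over the `n_modes` coefficients instead of the full variable set, which is how the paper halves the dimension (20 to 10, 42 to 20) without losing the generality of the original design space.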
Funding: Supported by the National Natural Science Foundation of China (51406031).
Abstract: A new method of nonlinear analysis is established by combining phase space reconstruction and data-reduction sub-frequency-band wavelets. This method is applied to two types of chaotic dynamic systems (Lorenz and Rössler) to examine its anti-noise ability for complex systems. Results show that the nonlinear dynamic system analysis method resists noise and reveals the internal dynamics of a weak signal under noise pollution. On this basis, the vertical upward gas–liquid two-phase flow in a 2 mm × 0.81 mm small rectangular channel is investigated. The frequency and energy distributions of the main oscillation mode are revealed by analyzing the time–frequency spectra of the pressure signals of different flow patterns. The positive power spectral density of the singular-value frequency entropy and the damping ratio are extracted to characterize the evolution of flow patterns and achieve accurate recognition of the vertical upward gas–liquid flow patterns (bubbly flow: 100%, slug flow: 92%, churn flow: 96%, annular flow: 100%). The proposed analysis method will enrich the dynamics theory of multiphase flow in small channels.
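The phase space reconstruction step is typically a Takens time-delay embedding of the scalar pressure signal. A minimal sketch (the embedding dimension and delay are assumed inputs chosen by the usual false-nearest-neighbour/mutual-information criteria; this is not the authors' code):

```python
import numpy as np

def delay_embed(signal, dim, tau):
    """Takens time-delay embedding.

    Returns an (n, dim) array whose rows are reconstructed phase-space
    points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for this (dim, tau)")
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])
```

Invariants of the attractor (e.g., the singular-value spectrum used in the paper's frequency entropy) are then computed on these reconstructed points rather than on the raw time series.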
Abstract: Imbalanced data classification is one of the major problems in machine learning. An imbalanced dataset typically has large differences in the number of samples between its classes. In most cases, the performance of a machine learning algorithm such as the Support Vector Machine (SVM) suffers when dealing with an imbalanced dataset: classification accuracy is skewed toward the majority class, and prediction of minority-class samples is poor. In this paper, a hybrid approach combining a data pre-processing technique and the SVM algorithm based on an improved Simulated Annealing (SA) was proposed. First, a data pre-processing technique addressing the resampling strategy for handling imbalanced datasets was proposed: data are synthetically generated to equalize the number of samples between classes, followed by a reduction step that removes redundant and duplicated data. Next, the balanced dataset is used to train the SVM. Since this algorithm requires an iterative search for the best penalty parameter during training, an improved SA algorithm was proposed for this task, introducing a new acceptance criterion for candidate solutions to enhance the accuracy of the optimization process. Experiments on ten publicly available imbalanced datasets demonstrated higher accuracy in classification tasks using the proposed approach than with a conventional SVM implementation. An average accuracy of 89.65% on binary classification demonstrates the good performance of the proposed approach.
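The SA search for the SVM penalty parameter C can be sketched as a random walk in log10(C) under a cooling schedule. The paper's new acceptance criterion is not reproduced here, so the standard Metropolis rule is used as a stand-in, and `score` stands for any cross-validation accuracy to be maximized (all names and defaults are illustrative):

```python
import numpy as np

def anneal_penalty(score, c0=1.0, t0=1.0, alpha=0.95, steps=200, seed=0):
    """Simulated annealing over log10(C); `score(C)` is maximized."""
    rng = np.random.default_rng(seed)
    logc = best_logc = np.log10(c0)
    cur = best = score(10 ** logc)
    t = t0
    for _ in range(steps):
        cand = logc + rng.normal(scale=0.5)        # perturb in log space
        s = score(10 ** cand)
        # Standard Metropolis acceptance (the paper proposes a modified rule)
        if s > cur or rng.random() < np.exp((s - cur) / t):
            logc, cur = cand, s
            if s > best:
                best_logc, best = cand, s
        t *= alpha                                  # geometric cooling
    return 10 ** best_logc, best
```

In practice `score` would wrap k-fold cross-validated SVM training with penalty C; the annealer only ever sees the scalar score, so any SVM library can sit behind it.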
Funding: The National Natural Science Foundation of China (No. 51775090) and the General Program of the Civil Aviation Flight University of China (No. J2015-39).
Abstract: To increase fault diagnosis efficiency and enable fault data mining, the decision table containing numerical attributes must be discretized for further calculations. The discernibility matrix-based reduction method depends on whether the numerical attributes can be properly discretized. Therefore, a discretization algorithm based on particle swarm optimization (PSO) is proposed, with hybrid weights adopted in the particle evolution process. Comparative calculations for certain equipment demonstrate the effectiveness of the proposed algorithm. The results indicate that it performs better than popular algorithms such as the class-attribute interdependence maximization (CAIM) discretization method and the entropy-based discretization method.
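A generic PSO loop over a candidate vector (here standing in for a vector of discretization cut points) can be sketched as follows. The paper's hybrid weights are simplified to a fixed inertia weight, and `objective` stands for the discretization quality measure to be minimized (all hyperparameters are illustrative):

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; `objective(x)` is minimized."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])  # personal best scores
    g = pbest[pbest_f.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For discretization, each particle would encode sorted cut points for one numerical attribute, and the objective would trade off class-attribute consistency against the number of intervals.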
Abstract: Dimensionality reduction and data visualization are useful and important processes in pattern recognition, and many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization-induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates a principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as the ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
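The quantized, topology-preserving mapping of the SOM comes from a simple update loop: find the best-matching unit for a sample, then pull that unit and its grid neighbours toward the sample with decaying learning rate and neighbourhood width. A minimal sketch with illustrative hyperparameters (not from the paper):

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal 2-D SOM; returns the (gy, gx, n_features) codebook."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    # Grid coordinate of each node, and a randomly initialized codebook
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], dtype=float)
    w = rng.normal(size=(gy * gx, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))    # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                          # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5              # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood kernel
        w += lr * h[:, None] * (x - w)
    return w.reshape(gy, gx, -1)
```

The ViSOM discussed above modifies this update so that inter-node distances on the map reflect metric distances in data space, turning the qualitative scaling into a quantitative one.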
基金supported by the National Key R&D Program of China (No. 2017YFC0602000)the China Geological Survey Project (Nos. DD20191001 and DD20189410)。
Abstract: Vector transformation and pole reduction of the total-field anomaly are significant for interpretation. We examined these industry-standard processing procedures in the Fourier domain. We propose a novel iteration algorithm for regional magnetic anomaly transformations to derive the vertical-component data from the total-field measurements under a varying core-field direction over the region. Additionally, we use the same algorithm to convert the calculated vertical-component data into the corresponding data at the pole, realizing differential reduction to the pole (DRTP). Unlike Arkani-Hamed's DRTP method, the two types of iterative algorithms have the same form, and DRTP is realized by applying this algorithm twice. Calculation results for a synthetic model show that the method has high accuracy, and field-data processing confirms its practicality.
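Fourier-domain potential-field transformations share one pattern: FFT the grid, multiply by a wavenumber-domain operator, and inverse FFT. The sketch below uses the well-known upward-continuation operator exp(-h|k|) as a stand-in for the paper's component-transformation and DRTP operators, which have the same structure but different wavenumber responses:

```python
import numpy as np

def upward_continue(grid, dx, dy, h):
    """Apply a wavenumber-domain operator (upward continuation by height h)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
    return np.fft.ifft2(np.fft.fft2(grid) * np.exp(-h * k)).real
```

An iterative scheme such as the paper's repeatedly applies a forward operator of this kind and corrects the estimate from the residual, which is what allows the field direction to vary across the region even though each FFT step assumes a single operator.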
Funding: Support from the National Key Basic Research Development Program (Grant No. 2007CB209600) and the National Major Science and Technology Program (Grant No. 2008ZX05010-002).
Abstract: A noise-reduction method with sliding windows in the frequency-space (f-x) domain, called the local f-x Cadzow noise-reduction method, is presented in this paper. This method is based on the assumption that the signal in each window is linearly predictable in the spatial direction while the random noise is not. For each Toeplitz matrix constructed from a constant-frequency slice, a singular value decomposition (SVD) is applied to separate signal from noise. To avoid edge artifacts caused by zero-percent overlap between windows and to remove more noise, an appropriate overlap is adopted. Besides flat and dipping events, this method can enhance curved and conflicting events. However, it is not suitable for seismic data that contain big spikes or null traces. It is also compared with the SVD, f-x deconvolution, and the Cadzow method without windows. The comparison shows that the local Cadzow method performs well in removing random noise and preserving signal. In addition, a real-data example proves that it is a potential noise-reduction technique for seismic data acquired in areas of complex formations.
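The per-frequency rank reduction in Cadzow filtering can be sketched for one constant-frequency slice: embed the spatial sequence in a Hankel (equivalently Toeplitz) matrix, truncate its SVD to the rank implied by the number of linear events, and average anti-diagonals back to a sequence. This is a simplified real-valued sketch; actual frequency slices are complex, and the windowing/overlap logic of the paper is omitted:

```python
import numpy as np

def hankel_rank_reduce(row, rank):
    """Rank-reduce one constant-frequency spatial sequence (Cadzow step)."""
    n = len(row)
    m = n // 2 + 1
    # Hankel embedding: row i holds row[i : i + n - m + 1]
    H = np.array([row[i:i + n - m + 1] for i in range(m)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-k approximation
    # Recover the sequence by averaging along anti-diagonals
    out = np.zeros(n, dtype=Hk.dtype)
    cnt = np.zeros(n)
    for i in range(Hk.shape[0]):
        for j in range(Hk.shape[1]):
            out[i + j] += Hk[i, j]
            cnt[i + j] += 1
    return out / cnt
```

Linearly predictable signal concentrates in the leading singular values, so truncation removes the spatially unpredictable random noise while passing flat and dipping events.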
Abstract: Multi-level searching is called drill-down search. Currently, no drill-down search feature is available in existing search engines such as Google, Yahoo, Bing, and Baidu. Drill-down search is very useful for end users to find the exact results among huge paginated search results. A higher level of drill-down search with a category-based search feature yields the most accurate results, but it increases the number and size of the files in the file system. The purpose of this manuscript is to implement a big-data storage-reduction binary file system model for a category-based drill-down search engine that offers fast multi-level filtering capability. The proposed model stores the search engine data in a binary file system. To verify the effectiveness of the proposed file system model, 5 million unique keyword records were stored in a binary file and the system was analyzed for efficiency. Experimental results based on real data show the speed and superiority of our storage model. Experiments demonstrated that our file system's expansion ratio is constant, that it reduces disk storage space by up to 30% compared with a conventional database/file system, and that it increases search performance at any level of search. The paper starts with a short introduction to drill-down search, followed by a detailed discussion of the key technologies used to implement the big-data storage-reduction system.
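A fixed-width binary record layout is one way to obtain the constant expansion ratio and fast random access the abstract describes: record number maps to byte offset in O(1). The field widths and record layout below are illustrative assumptions, not the paper's actual format:

```python
import io
import struct

# Hypothetical fixed-width record:
# 32-byte UTF-8 keyword + uint32 category id + uint32 document count
RECORD = struct.Struct("<32sII")

def write_records(fh, records):
    """Append (keyword, category, count) triples as fixed-width records."""
    for keyword, cat, count in records:
        # ASCII-safe truncation assumed; pad keyword field with NUL bytes
        kw = keyword.encode("utf-8")[:32].ljust(32, b"\0")
        fh.write(RECORD.pack(kw, cat, count))

def read_record(fh, index):
    """O(1) random access: seek straight to record `index` and decode it."""
    fh.seek(index * RECORD.size)
    kw, cat, count = RECORD.unpack(fh.read(RECORD.size))
    return kw.rstrip(b"\0").decode("utf-8"), cat, count
```

Because every record has the same size, category-based drill-down filters can be served by scanning contiguous runs of records or jumping directly to precomputed offsets, without a database layer.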
Funding: Supported by the National Natural Science Foundation of China (No. 60573075), the National High Technology Research and Development Program of China (No. 2003AA133060), and the Natural Science Foundation of Shanxi Province (No. 200601104).
Abstract: To improve the efficiency of attribute reduction, we present an attribute reduction algorithm based on background knowledge and information entropy, making use of background knowledge from the relevant research fields. Given known background knowledge, the algorithm not only greatly improves the efficiency of attribute reduction but also avoids the bias of information entropy toward attributes with many values. Experimental results verify that the algorithm is effective. Finally, the algorithm produces better results when applied to the classification of star spectra data.
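The information-entropy side of such an algorithm rests on attribute significance measured as H(D) - H(D|a): an attribute matters to the extent that conditioning on it reduces the entropy of the decision. A minimal sketch of the two entropy quantities (not the paper's full reduction algorithm):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(attr, labels):
    """H(labels | attr): expected label entropy within each attribute value."""
    n = len(labels)
    h = 0.0
    for v in set(attr):
        idx = [i for i, a in enumerate(attr) if a == v]
        h += len(idx) / n * entropy([labels[i] for i in idx])
    return h
```

A greedy reduction then repeatedly adds the attribute with the largest significance H(D) - H(D|a); incorporating background knowledge amounts to pre-selecting or pre-ordering the candidate attributes before this loop, which is where the efficiency gain comes from.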