LEO satellite communication systems are characterized by high-speed, periodic movement. User-link handovers therefore occur frequently, which seriously impacts user terminal applications and system capacity. To address this issue, we propose a handover strategy for LEO satellite user terminals based on multi-attribute and multi-point (MAMP) cooperation. First, a satellite-user-time matrix is established using the satellite constellation coverage and handover model. Then, combining visible time and signal quality, the user access matrix and satellite load matrix are extracted to determine the weight equation of the handover strategy with channel reservation. According to the system modeling simulation, the algorithm improves the handover success rate by 2.5%, the lasted-call access success rate by 3.2%, and the load balancing degree by 20%, and improves robustness by two orders of magnitude.
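The heart of such a strategy is a weighted multi-attribute score over candidate satellites. The sketch below illustrates the idea under invented attribute ranges and weights; it is not the paper's MAMP weight equation, which additionally incorporates channel reservation and multi-point cooperation:

```python
# Hedged sketch: rank candidate satellites for handover by a weighted
# combination of remaining visible time, signal quality, and load.
# The weights, normalization ranges, and attribute names are
# illustrative assumptions, not the paper's exact formulation.

def handover_score(sat, weights=(0.4, 0.4, 0.2)):
    w_time, w_signal, w_load = weights
    # Normalize each attribute to [0, 1]; a lightly loaded satellite
    # scores higher, so load is inverted.
    t = sat["visible_time_s"] / 600.0   # assume a 600 s maximum pass
    s = sat["snr_db"] / 30.0            # assume a 0-30 dB SNR range
    l = 1.0 - sat["load"]               # load given as a fraction
    return w_time * t + w_signal * s + w_load * l

def choose_satellite(candidates):
    """Pick the visible satellite with the highest weighted score."""
    return max(candidates, key=handover_score)

candidates = [
    {"id": "SAT-1", "visible_time_s": 120, "snr_db": 25, "load": 0.9},
    {"id": "SAT-2", "visible_time_s": 480, "snr_db": 18, "load": 0.3},
    {"id": "SAT-3", "visible_time_s": 300, "snr_db": 12, "load": 0.1},
]
best = choose_satellite(candidates)
```

A strong signal alone does not win here: SAT-1 has the best SNR but a short remaining pass and heavy load, so the weighted score prefers SAT-2.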
To address the challenges of complex fluvial sandbody distribution and difficult remaining oil recovery in mature continental oilfields, this study focuses on key issues in reservoir identification, such as ambiguous narrow-channel boundaries and the subdivision of multi-stage superimposed sandbodies. Taking the Upper Cretaceous continental sandstone in the Sazhong Oilfield of the Daqing Placanticline as an example, a technical system integrating OVT high-resolution processing, multi-attribute fusion, and variable-scale inversion was developed to establish a complete workflow from seismic processing to reservoir prediction and remaining oil recovery. The following results are obtained. First, the Offset Vector Tile (OVT) seismic processing technology is extended, for the first time, from fracture imaging to sandbody prediction in order to address the weak seismic responses from the boundaries of narrow, thin sandbodies. A geology-oriented OVT partitioning method is developed that significantly improves imaging accuracy, enabling identification of channel sandbodies as narrow as 50 m. Second, an amplitude-coherence dual-attribute fusion method is proposed for predicting narrow channel boundaries between wells. Constrained by a sedimentary unit-level sequence chronostratigraphic framework, this method accurately delineates 800-2000 m long subaqueous distributary channels with bifurcation-convergence features. Third, considering the superimposition of multi-stage channels, a three-level variable-scale stratigraphic model (sandstone groups, sublayers, sedimentary units) is constructed to overcome single-scale modeling limitations, successfully characterizing key sedimentary features such as meandering-river "cut-offs" through 3D seismic inversion. Based on these advances, a direct link between seismic prediction and remaining oil recovery is established. Horizontal wells deployed using the narrow-channel predictions achieved a 97% oil-bearing sandstone encounter rate in the horizontal section and an initial daily production of 12.5 t per well. Precise identification of individual channel boundaries within 17 composite sandbodies guided recovery processes in 135 wells, yielding an average daily increase of 2.8 t per well and a cumulative increase of 13.6×10^(4) t.
Due to the numerous variables to take into account, as well as the inherent ambiguity and uncertainty, evaluating educational institutions can be difficult. The concept of a possibility Pythagorean fuzzy hypersoft set (pPyFHSS) is more flexible in this regard than other theoretical fuzzy set-like models, even though some attempts have been made in the literature to address such uncertainties. This study investigates the elementary notions of pPyFHSS, including its set-theoretic operations: union, intersection, complement, and the OR- and AND-operations. Some results related to these operations are also modified for pPyFHSS. Additionally, similarity measures between pPyFHSSs are formulated with the assistance of numerical examples and results. Lastly, an intelligent decision-assisted mechanism is developed with the proposal of a robust algorithm based on similarity measures for solving multi-attribute decision-making (MADM) problems. A case study that helps decision-makers assess the best educational institution is discussed to validate the suggested system. The algorithmic results are compared with the most pertinent model to evaluate the adaptability of pPyFHSS, as it generalizes the classical possibility fuzzy set-like theoretical models. Similarly, while considering significant evaluating factors, the flexibility of pPyFHSS is observed through structural comparison.
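As a rough illustration of similarity-based ranking over Pythagorean fuzzy data, one can compare each alternative against an ideal one. The sketch below uses a plain Pythagorean fuzzy set and a basic distance-based similarity; it is a simplified stand-in, not the paper's pPyFHSS measure with parameterized sub-attributes and possibility degrees:

```python
# Hedged sketch: a basic similarity measure between two Pythagorean
# fuzzy sets, each element given as (membership, non-membership) with
# mu^2 + nu^2 <= 1. Higher similarity to the ideal means a better
# ranked alternative. The attribute values are invented.

def pfs_similarity(A, B):
    """1 minus the mean absolute difference of squared grades."""
    n = len(A)
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        total += abs(ma**2 - mb**2) + abs(na**2 - nb**2)
    return 1.0 - total / (2 * n)

inst_1 = [(0.9, 0.3), (0.7, 0.5), (0.8, 0.4)]  # candidate institution
ideal  = [(1.0, 0.0), (1.0, 0.0), (1.0, 0.0)]  # ideal alternative
score = pfs_similarity(inst_1, ideal)
```

Ranking several institutions then amounts to sorting them by their similarity to the ideal alternative.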
Accurate medical diagnosis, which involves identifying diseases based on patient symptoms, is often hindered by uncertainties in data interpretation and retrieval. Advanced fuzzy set theories have emerged as effective tools to address these challenges. In this paper, new mathematical approaches for handling uncertainty in medical diagnosis are introduced using q-rung orthopair fuzzy sets (q-ROFS) and interval-valued q-rung orthopair fuzzy sets (IVq-ROFS). Three aggregation operators are proposed in our methodologies: the q-ROF weighted averaging (q-ROFWA), the q-ROF weighted geometric (q-ROFWG), and the q-ROF weighted neutrality averaging (q-ROFWNA), which enhance decision-making under uncertainty. These operators are paired with ranking methods such as the similarity measure, score function, and inverse score function to improve the accuracy of disease identification. Additionally, the impact of varying q-rung values is explored through a sensitivity analysis, extending the analysis beyond the typical maximum value of 3. The Basic Uncertain Information (BUI) method is employed to simulate expert opinions, and the aggregation operators are used to combine these opinions in a group decision-making context. Our results provide a comprehensive comparison of methodologies, highlighting their strengths and limitations in diagnosing diseases based on uncertain patient data.
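The q-ROFWA operator has a standard closed form: for assessments (μ_i, ν_i) with μ_i^q + ν_i^q ≤ 1 and weights w_i summing to 1, the aggregate is ((1 − Π(1 − μ_i^q)^{w_i})^{1/q}, Π ν_i^{w_i}), and alternatives can then be ranked by the score μ^q − ν^q. A minimal sketch with invented expert assessments:

```python
# Sketch of the standard q-ROF weighted averaging (q-ROFWA) operator.
# Each assessment is (mu, nu) with mu^q + nu^q <= 1; the expert
# opinions and weights below are made up for illustration.

def q_rofwa(pairs, weights, q=3):
    prod_mu = 1.0
    prod_nu = 1.0
    for (mu, nu), w in zip(pairs, weights):
        prod_mu *= (1.0 - mu**q) ** w   # product over (1 - mu_i^q)^w_i
        prod_nu *= nu**w                # product over nu_i^w_i
    return ((1.0 - prod_mu) ** (1.0 / q), prod_nu)

def score(pair, q=3):
    """Score function mu^q - nu^q used for ranking."""
    mu, nu = pair
    return mu**q - nu**q

experts = [(0.8, 0.4), (0.6, 0.5), (0.9, 0.2)]  # invented opinions
weights = [0.5, 0.3, 0.2]
agg = q_rofwa(experts, weights)
```

A useful sanity check is idempotency: aggregating identical assessments returns that assessment unchanged.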
Cultural landscape zoning research on traditional villages is the basic premise for carrying out overall protection and regional development. Through a clustering algorithm, cultural area zoning research on traditional villages can provide an objective basis for their overall protection and development. Based on field research and drawing on the theory of cultural landscape, southwest Hubei is taken as the research object, and an index system for classifying the cultural landscape types of traditional villages is constructed from the three levels of culture, geography, and village carrier. Adopting a multi-attribute weighted k-modes clustering algorithm, 92 traditional villages in southwest Hubei are divided into three major types: the western Tujia cultural characteristic area, the southern Tujia-Miao cultural penetration area, and the northern multi-ethnic cultural mixed area, and the characteristics of each area are summarized. The regional characteristics of traditional villages in southwest Hubei at the cultural landscape level are analysed from a macro point of view, providing a reference for a more objective understanding of the distribution of traditional villages in southwest Hubei and for carrying out their contiguous protection.
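A weighted k-modes step can be sketched as follows. The attributes, weights, and cluster modes are invented for illustration and do not reproduce the study's three-level index system:

```python
# Hedged sketch: attribute-weighted k-modes dissimilarity and one
# cluster-assignment step over nominal village attributes.
# Attribute order here is (culture, geography, carrier) by assumption.

def weighted_mismatch(x, mode, weights):
    """Weighted Hamming-style dissimilarity for nominal attributes."""
    return sum(w for xi, mi, w in zip(x, mode, weights) if xi != mi)

def assign(villages, modes, weights):
    """Assign each village to the nearest mode (one k-modes step)."""
    return [min(range(len(modes)),
                key=lambda k: weighted_mismatch(v, modes[k], weights))
            for v in villages]

weights = [0.5, 0.3, 0.2]  # invented attribute weights
modes = [("Tujia", "west", "stilt"), ("Miao", "south", "courtyard")]
villages = [
    ("Tujia", "west", "courtyard"),
    ("Miao", "south", "stilt"),
    ("Tujia", "south", "stilt"),
]
labels = assign(villages, modes, weights)
```

A full k-modes run would alternate this assignment step with recomputing each cluster's mode (the most frequent value per attribute) until labels stabilize.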
Graphs have been widely used in fields ranging from chemical informatics to social network analysis. Graph-related problems are becoming increasingly significant, with subgraph matching standing out as one of the most challenging tasks. The goal of subgraph matching is to find all subgraphs in the data graph that are isomorphic to the query graph. Traditional methods mostly rely on search strategies with high computational complexity and are hard to apply to large-scale real datasets. With the advent of graph neural networks (GNNs), researchers have turned to GNNs to address subgraph matching problems. However, the multi-attributed features on nodes and edges are overlooked during the learning of graphs, which causes inaccurate results in real-world scenarios. To tackle this problem, we propose a novel model called subgraph matching on multi-attributed graph network (SGMAN). SGMAN first utilizes improved line graphs to capture node and edge features. Then, SGMAN integrates a GNN and contrastive learning (CL) to derive graph representation embeddings and calculates a matching matrix to represent the matching results. We conduct experiments on public datasets, and the results affirm the superior performance of our model.
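For intuition, the exact task that such learned models approximate can be stated as a brute-force search over injective node mappings that preserve the query's edges. This is only feasible for tiny graphs, which is precisely why scalable GNN-based approaches matter:

```python
# Brute-force subgraph matching: enumerate injective mappings of query
# nodes to data-graph nodes and keep those preserving every query edge.
# Exponential in the query size, so usable only for toy graphs.
from itertools import permutations

def subgraph_matches(query_edges, query_nodes, data_edges, data_nodes):
    data_set = {frozenset(e) for e in data_edges}  # undirected edges
    matches = []
    for perm in permutations(data_nodes, len(query_nodes)):
        mapping = dict(zip(query_nodes, perm))
        if all(frozenset((mapping[u], mapping[v])) in data_set
               for u, v in query_edges):
            matches.append(mapping)
    return matches

# Query: a triangle. Data graph: K4, the complete graph on 4 nodes.
triangle = [(0, 1), (1, 2), (0, 2)]
k4_nodes = [0, 1, 2, 3]
k4_edges = [(a, b) for a in k4_nodes for b in k4_nodes if a < b]
found = subgraph_matches(triangle, [0, 1, 2], k4_edges, k4_nodes)
```

In K4 every ordered triple of distinct nodes forms a triangle, so all 4·3·2 = 24 injective mappings match.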
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
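The array-based versus bridge-table trade-off can be sketched in miniature with in-memory rows (all field names invented): in the array-based design, a fact-centric query needs no join because each fact row carries its dimension keys inline.

```python
# Hedged sketch of the two schema options for a many-to-many relation
# between earthquake facts and, say, affected regions.

# Array-based design: dimension keys stored inline with each fact.
facts_array = [
    {"quake_id": 1, "magnitude": 6.1, "region_ids": [10, 11]},
    {"quake_id": 2, "magnitude": 5.4, "region_ids": [11]},
]

# Bridge-table design: a separate link table between facts and regions.
facts_bridge = [
    {"quake_id": 1, "magnitude": 6.1},
    {"quake_id": 2, "magnitude": 5.4},
]
bridge = [(1, 10), (1, 11), (2, 11)]

def regions_of_quake_array(qid):
    # Fact-centric query: a single row lookup, no join needed.
    return next(f["region_ids"] for f in facts_array
                if f["quake_id"] == qid)

def regions_of_quake_bridge(qid):
    # Same query via the bridge: an extra scan/join over the link table.
    return [r for (q, r) in bridge if q == qid]

same = regions_of_quake_array(1) == regions_of_quake_bridge(1)
```

The reverse, dimension-centric query ("which quakes hit region 11?") favors the bridge table, since the array design must unpack every fact's array; this is the trade-off the hybrid schema reconciles.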
Learning unlabeled data is a significant challenge that requires handling complicated relationships between nominal values and attributes. Increasingly, recent research on learning value relations within and between attributes has shown significant improvement in clustering, outlier detection, and related tasks. However, typical existing work relies on learning pairwise value relations but weakens or overlooks the direct couplings between multiple attributes. This paper thus proposes two novel and flexible multi-attribute couplings-based distance (MCD) metrics, which learn the multi-attribute couplings and their strengths in nominal data based on information theory: self-information, entropy, and mutual information, for measuring both numerical and nominal distances. MCD enables the application of numerical and nominal clustering methods to nominal data and quantifies the influence of involving and filtering multi-attribute couplings on distance learning and clustering performance. Substantial experiments on 15 data sets against seven state-of-the-art distance measures, with various feature selection methods for both numerical and nominal clustering, support these conclusions.
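One building block of such information-theoretic couplings is the mutual information between two nominal attributes, estimated from co-occurrence counts. A minimal sketch on toy records (MCD itself combines several such terms across multiple attributes):

```python
# Mutual information between two nominal attributes, estimated from
# empirical co-occurrence frequencies. Toy records for illustration;
# MCD-style metrics build on terms like this one.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Two perfectly coupled attributes: knowing one determines the other.
color = ["red", "red", "blue", "blue"]
size  = ["S",   "S",   "L",    "L"]
mi = mutual_information(color, size)
```

Here the coupling is perfect, so the mutual information equals the full entropy of either attribute (1 bit); independent attributes would yield 0.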
This work contributes to the theoretical foundation for pricing in data markets and offers practical insights for managing digital data exchanges in the era of big data. We propose a structured pricing model for data exchanges transitioning from quasi-public to market-oriented operations. To address the complex dynamics among data exchanges, suppliers, and consumers, the authors develop a three-stage Stackelberg game framework. In this model, the data exchange acts as a leader setting transaction commission rates, suppliers are intermediate leaders determining unit prices, and consumers are followers making purchasing decisions. Two pricing strategies are examined: the Independent Pricing Approach (IPA) and the novel Perfectly Competitive Pricing Approach (PCPA), which accounts for competition among data providers. Using backward induction, the study derives subgame-perfect equilibria and proves the existence and uniqueness of Stackelberg equilibria under both approaches. Extensive numerical simulations of the model demonstrate that PCPA enhances data demander utility, encourages supplier competition, increases transaction volume, and improves the overall profitability and sustainability of data exchanges. Social welfare analysis further confirms PCPA's superiority in promoting efficient and fair data markets.
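Backward induction in a much-simplified three-stage toy (linear demand, a single supplier, invented parameters; not the paper's IPA/PCPA model) looks like this: solve the consumer stage into a demand function, plug it into the supplier's first-order condition to get a closed-form price response, then search the exchange's commission against that response:

```python
# Hedged toy of three-stage backward induction. Stage 3: consumers buy
# q(p) = A - B*p. Stage 2: the supplier, keeping (1-r) of the price,
# maximizes ((1-r)*p - C) * q(p), giving the closed-form best response
# below. Stage 1: the exchange picks commission r to maximize revenue.
# A, B, C are invented parameters, not the paper's model.

A, B, C = 100.0, 1.0, 10.0  # demand intercept/slope, supplier unit cost

def supplier_price(r):
    """Supplier best response from the first-order condition."""
    return A / (2 * B) + C / (2 * (1 - r))

def exchange_revenue(r):
    p = supplier_price(r)
    q = max(0.0, A - B * p)
    return r * p * q  # commission r on each unit sold at price p

# Stage 1: grid-search the exchange's commission rate.
best_r = max((i / 1000 for i in range(1, 900)), key=exchange_revenue)
best_p = supplier_price(best_r)
```

Even this toy shows the leader's tension: raising the commission pushes the supplier's price up and demand down, so revenue peaks at an interior commission rate.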
Ovarian cancer (OC) is one of the leading causes of death related to gynecological cancer, with the main difficulties being its early diagnosis and the heterogeneous nature of tumor biomarkers. Machine learning (ML) has the potential to process complex datasets and support decision-making in OC diagnosis. Nevertheless, traditional ML models tend to be biased, overfitted, noisy, and less generalizable. Moreover, their black-box nature reduces interpretability and limits their practical clinical applicability. In this study, we introduce an explainable ensemble learning (EL) model, TreeX-Stack, based on a stacking architecture that employs tree-based learners such as Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGBoost) as base learners, and Logistic Regression (LR) as the meta-learner, to enhance OC diagnosis. Local Interpretable Model-Agnostic Explanations (LIME) are used to explain individual predictions, making the model outputs more clinically interpretable and applicable. The model is trained on a dataset that includes demographic information, blood tests, general chemistry, and tumor markers. Extensive preprocessing includes handling missing data using iterative imputation with Bayesian Ridge and addressing multicollinearity by removing features with correlation coefficients above 0.7. Relevant features are then selected using the Boruta feature selection method. To obtain robust and unbiased performance estimates during hyperparameter tuning, nested cross-validation (CV) with grid search is employed, and all experiments are repeated five times to ensure statistical reliability. TreeX-Stack demonstrates excellent diagnostic performance, achieving an accuracy of 0.9027, a precision of 0.8673, a recall of 0.9391, and an F1-score of 0.9012. Feature-importance analyses using LIME and permutation importance highlight Human Epididymis Protein 4 (HE4) as the most significant biomarker for OC. The combination of high predictive performance and interpretability makes TreeX-Stack a reliable tool for clinical decision support in OC diagnosis.
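The stacking pattern itself is simple to sketch: base learners produce predictions, and a logistic meta-learner combines them. The toy below substitutes two hand-made threshold rules for the tree ensembles, so it only illustrates the architecture, not TreeX-Stack itself; all data and thresholds are invented:

```python
# Hedged sketch of stacking: two simple base classifiers feed their
# outputs into a logistic-regression meta-learner trained by plain
# per-sample gradient descent. Real stacking would use out-of-fold
# base predictions and stronger base learners (DT, RF, GB, XGBoost).
import math

def base_a(x):  # threshold rule on feature 0 (e.g., a tumor marker)
    return 1.0 if x[0] > 0.5 else 0.0

def base_b(x):  # threshold rule on feature 1
    return 1.0 if x[1] > 0.3 else 0.0

def train_meta(X, y, lr=0.5, epochs=2000):
    w = [0.0, 0.0, 0.0]  # weights for (base_a, base_b, bias)
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = [base_a(x), base_b(x), 1.0]
            s = sum(wi * zi for wi, zi in zip(w, z))
            p = 1.0 / (1.0 + math.exp(-s))
            w = [wi + lr * (t - p) * zi for wi, zi in zip(w, z)]
    return w

def predict(w, x):
    z = [base_a(x), base_b(x), 1.0]
    s = sum(wi * zi for wi, zi in zip(w, z))
    return 1 if 1.0 / (1.0 + math.exp(-s)) >= 0.5 else 0

X = [(0.9, 0.8), (0.7, 0.6), (0.1, 0.1),
     (0.2, 0.05), (0.8, 0.1), (0.1, 0.9)]
y = [1, 1, 0, 0, 1, 0]
w = train_meta(X, y)
acc = sum(predict(w, x) == t for x, t in zip(X, y)) / len(X)
```

The meta-learner's job is visible here: it learns that base_a is the reliable voter on this data and weights it accordingly, rather than averaging the base learners blindly.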
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing-season months, correlation coefficients differed significantly, in both magnitude and sign, from those based on actual data. The selection of different alternative climate datasets can therefore bias assessments of forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. The scientific and rational selection of alternative climate data is thus essential for dendroecological and climatological research.
To address the severe challenges of PM_(2.5) and ozone co-control during the "14th Five-Year Plan" period and to enhance the precision and intelligence of air environment governance, it is imperative to build an efficient comprehensive management platform for regional air quality. In this paper, the specific practice in Zibo City, Shandong Province is taken as an example to systematically analyze the top-level design, technical implementation, and innovative application of a comprehensive management platform for regional air quality integrating perception monitoring, data fusion, early-warning research and judgment, source analysis, collaborative dispatching, and evaluation assessment. Through the construction of a "sky-air-ground" integrated three-dimensional monitoring network, the platform integrates multi-source heterogeneous environmental data and employs big data, cloud computing, artificial intelligence, CALPUFF/CMAQ, and other numerical model technologies to achieve comprehensive perception, precise prediction, intelligent source tracing, and closed-loop management of air pollution. The platform innovatively establishes a full-process closed-loop management mechanism of "data, early warning, disposition, evaluation" and achieves a fundamental transformation in environmental supervision from passive response to active anticipation and from experience-based judgment to data-driven decision-making. The application results show that the platform significantly improves the scientific decision-making ability and collaborative execution efficiency of air pollution governance in Zibo City, providing a replicable and scalable comprehensive solution for similar industrial cities seeking continuous improvement of air quality.
tRNA-derived small RNAs (tsRNAs), a class of regulatory small noncoding RNAs, have been implicated in a wide variety of human diseases. Large numbers of tsRNA-disease associations have been identified in recent years by accumulating studies. However, repositories cataloging detailed information on tsRNA-disease associations are scarce. In this study, we provide the tsRNADisease database, which integrates experimentally and computationally supported tsRNA-disease associations from manual curation of the literature and other related resources. tsRNADisease contains 5571 manually curated associations between 4759 tsRNAs and 166 diseases, with experimental evidence from 346 studies. In addition, it contains 5013 predicted associations between 1297 tsRNAs and 111 diseases. tsRNADisease provides a user-friendly interface to browse, retrieve, and download data conveniently. This database can improve our understanding of tsRNA deregulation in diseases and serve as a valuable resource for investigating the mechanisms of disease-related tsRNAs. tsRNADisease is freely available at http://www.compgenelab.info/tsRNADisease.
0 INTRODUCTION
Earth science is a natural science concerned with the composition, dynamics, spatiotemporal evolution, and formation mechanisms of Earth materials (Chen and Yang, 2023). Traditional Earth science research has largely been discipline-based, relying on field investigations, data collection, experimental analyses, and data interpretation to study individual components of the Earth system.
Photoacoustic-computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging produces significant data volumes, limiting the data storage, transmission, and processing efficiency of the system. There is therefore an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic-computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method rests on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment on an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming the detail loss and artifacts introduced by raw data compression. All results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality. These advantages support the development of photoacoustic-computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
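The "wavelet hard coding" half of such a pipeline can be sketched with a one-level Haar transform plus hard thresholding of small detail coefficients; the Transformer-based soft decoder that learns to restore the discarded detail is the trained component and is not shown:

```python
# Hedged sketch: one-level Haar wavelet transform, hard thresholding
# ("hard coding") of small detail coefficients, and exact inverse.
# The signal values and threshold are invented; real systems transform
# 3D photoacoustic data and pair this with a learned soft decoder.

def haar_forward(signal):
    """Split an even-length signal into pairwise averages and details."""
    avg = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    dif = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avg, dif

def haar_inverse(avg, dif):
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def compress(signal, threshold):
    avg, dif = haar_forward(signal)
    # Hard coding: zero out detail coefficients below the threshold;
    # only the surviving coefficients need to be stored/transmitted.
    dif = [d if abs(d) > threshold else 0.0 for d in dif]
    return avg, dif

raw = [4.0, 4.2, 8.0, 7.9, 1.0, 1.1, 6.0, 2.0]
avg, dif = compress(raw, threshold=0.5)
restored = haar_inverse(avg, dif)
```

Smooth regions survive thresholding almost unchanged (their details are tiny), while the one sharp transition keeps its detail coefficient exactly, which is the property that makes wavelet hard coding attractive for compressible sensor data.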
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
With the popularization of new technologies, telephone fraud has become a primary means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data-modem scheme that transmits the caller's digital certificates through a voice channel so the recipient can verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech and reject the input waveforms. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them more easily transmitted and decoded over speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root.
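A minimal sketch of the modulation idea: one chirp per 3-bit symbol, with the bits selecting start frequency, sweep direction, and initial phase. The sample rate, frequency band, and symbol duration below are invented, and the paper's demodulator (a MobileNetV3-Small classifier) is not reproduced:

```python
# Hedged sketch: generate one linear-frequency-modulation (chirp)
# symbol per 3-bit value. The instantaneous phase of a chirp is
# phi(t) = phase0 + 2*pi*(f0*t + 0.5*k*t^2), with chirp rate k.
# All numeric parameters are illustrative assumptions.
import math

FS = 8000    # samples per second, telephone-band rate (assumed)
DUR = 0.05   # 50 ms per symbol (assumed)
N = int(FS * DUR)

def lfm_symbol(bits):
    f0 = 600 if bits & 0b100 else 1200        # bit 2: start frequency
    sweep = 400 if bits & 0b010 else -400     # bit 1: sweep direction
    phase = math.pi if bits & 0b001 else 0.0  # bit 0: initial phase
    k = sweep / DUR                           # chirp rate in Hz/s
    return [math.sin(phase + 2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / FS for i in range(N))]

symbols = {b: lfm_symbol(b) for b in range(8)}
```

Keeping the symbols inside the speech band and speech-like in shape is what lets them survive voice-channel codecs and speech-activity detection, per the design rationale above.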
Artificial Intelligence (AI) in healthcare enables predicting diabetes using data-driven methods instead of traditional screening approaches such as hemoglobin A1c (HbA1c), the oral glucose tolerance test (OGTT), and fasting plasma glucose (FPG), which are invasive and limited in scale. Machine learning (ML) and deep neural network (DNN) models use large datasets to learn complex, nonlinear feature interactions, but conventional ML algorithms are data-sensitive and often show unstable predictive accuracy. DNN models are more robust, though consistently reaching high accuracy on heterogeneous datasets remains an open challenge. For predicting diabetes, this work proposes a hybrid DNN approach that integrates a bidirectional long short-term memory (BiLSTM) network with a bidirectional gated recurrent unit (BiGRU). A robust DL model is developed by combining various datasets with weighted coefficients, dense operations in the connections of deep layers, and output aggregation using batch normalization and dropout to avoid overfitting. The goal of this hybrid model is better generalization and consistency across datasets, which facilitates effective management and early intervention. The proposed DNN model exhibits excellent predictive performance compared to state-of-the-art and baseline ML and DNN models for diabetes prediction tasks. This robust performance indicates the potential usefulness of DL-based models for disease prediction in healthcare and other areas that demand high-quality analytics.
Reducing carbon emissions is fundamental to achieving carbon neutrality. Existing studies have typically estimated emissions by predicting fossil fuel consumption across sectors under different socioeconomic scenarios; however, uncertainties in future development often lead to deviations from these assumptions. To address this limitation, this study proposes a data-driven approach for evaluating national carbon emissions using historical data. Countries with similar energy consumption patterns were selected as reference samples, and their emission pathways were analyzed to predict future emissions for countries that have not yet reached their peak. Key indicators, including peak levels, timing, plateau duration, and post-peak decline rates, were identified. The results indicate that the trends of unpeaked economies can be effectively assessed from the emission patterns of countries with comparable energy structures. Applying this framework to China suggests a carbon peak between 2027 and 2030, in the range of 14.207 to 16.234 Gt, followed by a gradual decline from 2031 to 2036. Compared with the average results of existing studies, the predicted minimum and maximum emissions show error margins of 10.1% and 1.41%, respectively. This study proposes a top-down methodology that provides a transparent, reproducible, and empirical framework for forecasting carbon emission pathways, thereby offering a scientific basis for assessing countries that have not yet reached their emissions peak.
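The reference-sample idea can be sketched as nearest-neighbor matching on energy-mix vectors followed by a similarity-weighted average of the analogs' peak levels. All mixes and peak values below are invented, not the study's data:

```python
# Hedged sketch: estimate an unpeaked country's emission peak from
# "analog" countries with similar energy structures. Mixes are toy
# (coal, oil, other) shares; peak_gt values are invented.

def distance(mix_a, mix_b):
    """Euclidean distance between two energy-mix vectors."""
    return sum((a - b) ** 2 for a, b in zip(mix_a, mix_b)) ** 0.5

def predict_peak(target_mix, references, k=2):
    # Keep the k references whose energy mix is closest to the target,
    # then average their peaks with inverse-distance weights.
    nearest = sorted(references,
                     key=lambda r: distance(target_mix, r["mix"]))[:k]
    weights = [1.0 / (distance(target_mix, r["mix"]) + 1e-9)
               for r in nearest]
    total = sum(weights)
    return sum(w * r["peak_gt"] for w, r in zip(weights, nearest)) / total

references = [
    {"name": "A", "mix": (0.60, 0.20, 0.20), "peak_gt": 5.0},
    {"name": "B", "mix": (0.55, 0.25, 0.20), "peak_gt": 6.0},
    {"name": "C", "mix": (0.20, 0.40, 0.40), "peak_gt": 2.0},
]
estimate = predict_peak((0.58, 0.22, 0.20), references)
```

The same matching step extends naturally to the study's other indicators (peak timing, plateau duration, post-peak decline rate) by averaging those attributes of the analogs instead.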
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms: Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that our model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area-under-the-curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
Funding: Supported by the Innovation Funding of ICT, CAS, under Grant No. E261020; the Jiangsu Key Research and Development Program of China (No. BE2021013-2); and the Zhejiang Key Research and Development Program (No. 2021C01040).
Abstract: LEO satellite communication systems are characterized by high-speed, periodic satellite movement, so user-link handovers occur frequently, seriously affecting user terminal applications and system capacity. To address this issue, we propose a handover strategy for LEO satellite user terminals based on multi-attribute and multi-point (MAMP) cooperation. First, a satellite-user-time matrix is established using the satellite constellation coverage and handover model. Then, combining visible time and signal quality, the user access matrix and satellite load matrix are extracted to determine the weight equation of the handover strategy with channel reservation. According to system modeling and simulation, the algorithm improves the handover success rate by 2.5% and the call access success rate by 3.2%, improves the load balancing degree by 20%, and improves robustness by two orders of magnitude.
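The multi-attribute scoring idea can be sketched as follows. This is a minimal illustration, not the paper's actual weight equation: the attribute set (visible time, signal quality, load), their normalization, and the weights are all assumptions.

```python
# Hypothetical sketch of a multi-attribute handover score: each visible
# satellite is rated on remaining visible time, signal quality, and load,
# and the terminal hands over to the highest-scoring candidate.
# Attribute names and weights are illustrative, not from the paper.

def handover_score(visible_time, signal_quality, load, w=(0.4, 0.4, 0.2)):
    """Weighted score; attributes normalized to [0, 1], lower load is better."""
    return w[0] * visible_time + w[1] * signal_quality + w[2] * (1.0 - load)

def choose_satellite(candidates):
    """candidates: dict name -> (visible_time, signal_quality, load)."""
    return max(candidates, key=lambda s: handover_score(*candidates[s]))

sats = {
    "SAT-1": (0.9, 0.6, 0.8),   # long visibility but heavily loaded
    "SAT-2": (0.5, 0.9, 0.2),   # strong signal, lightly loaded
}
print(choose_satellite(sats))  # SAT-2 under these example weights
```

Folding load into the score is one simple way to express the load-balancing goal the abstract reports; the paper's channel-reservation mechanism is not modeled here.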
Funding: Supported by the China National Science and Technology Major Project (2025ZD1407000) and the PetroChina Science and Technology Major Project (2023ZZ22).
Abstract: To address the challenges of complex fluvial sandbody distribution and difficult remaining oil recovery in mature continental oilfields, this study focuses on key issues in reservoir identification such as ambiguous narrow-channel boundaries and the subdivision of multi-stage superimposed sandbodies. Taking the Upper Cretaceous continental sandstone in the Sazhong Oilfield of the Daqing Placanticline as an example, a technical system integrating OVT high-resolution processing, multi-attribute fusion, and variable-scale inversion was developed to establish a complete workflow from seismic processing to reservoir prediction and remaining oil recovery. The following results are obtained. First, Offset Vector Tile (OVT) seismic processing technology is extended, for the first time, from fracture imaging to sandbody prediction in order to address the weak seismic responses from the boundaries of narrow, thin sandbodies. A geology-oriented OVT partitioning method is developed that significantly improves imaging accuracy, enabling identification of channel sandbodies as narrow as 50 m. Second, an amplitude-coherence dual-attribute fusion method is proposed for predicting narrow channel boundaries between wells. Constrained by a sedimentary unit-level sequence chronostratigraphic framework, this method accurately delineates 800-2000 m long subaqueous distributary channels with bifurcation-convergence features. Third, considering the superimposition of multi-stage channels, a three-level variable-scale stratigraphic model (sandstone groups, sublayers, sedimentary units) is constructed to overcome single-scale modeling limitations, successfully characterizing key sedimentary features such as meandering-river "cut-offs" through 3D seismic inversion. Based on these advances, a direct link between seismic prediction and remaining oil recovery is established. Horizontal wells deployed using narrow-channel predictions achieved a 97% oil-bearing sandstone encounter rate in the horizontal section and an initial daily production of 12.5 t per well. Precise identification of individual channel boundaries within 17 composite sandbodies guided recovery in 135 wells, yielding an average daily increase of 2.8 t per well and a cumulative increase of 13.6×10⁴ t.
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2024-9/1).
Abstract: Owing to the numerous variables to take into account, as well as inherent ambiguity and uncertainty, evaluating educational institutions can be difficult. The concept of a possibility Pythagorean fuzzy hypersoft set (pPyFHSS) is more flexible in this regard than other theoretical fuzzy-set-like models, even though some attempts have been made in the literature to address such uncertainties. This study investigates the elementary notions of pPyFHSS, including its set-theoretic operations: union, intersection, complement, and the OR- and AND-operations. Some results related to these operations are also modified for pPyFHSS. Additionally, similarity measures between pPyFHSSs are formulated with the assistance of numerical examples and results. Lastly, an intelligent decision-assisted mechanism is developed with the proposal of a robust algorithm based on similarity measures for solving multi-attribute decision-making (MADM) problems. A case study that helps decision-makers assess the best educational institution is discussed to validate the suggested system. The algorithmic results are compared with the most pertinent model to evaluate the adaptability of pPyFHSS, as it generalizes the classical possibility fuzzy-set-like theoretical models. Similarly, while considering significant evaluating factors, the flexibility of pPyFHSS is observed through structural comparison.
Abstract: Accurate medical diagnosis, which involves identifying diseases based on patient symptoms, is often hindered by uncertainties in data interpretation and retrieval. Advanced fuzzy set theories have emerged as effective tools to address these challenges. In this paper, new mathematical approaches for handling uncertainty in medical diagnosis are introduced using q-rung orthopair fuzzy sets (q-ROFS) and interval-valued q-rung orthopair fuzzy sets (IVq-ROFS). Three aggregation operators are proposed in our methodologies: the q-ROF weighted averaging (q-ROFWA), the q-ROF weighted geometric (q-ROFWG), and the q-ROF weighted neutrality averaging (q-ROFWNA), which enhance decision-making under uncertainty. These operators are paired with ranking methods such as the similarity measure, score function, and inverse score function to improve the accuracy of disease identification. Additionally, the impact of varying q-rung values is explored through a sensitivity analysis, extending the analysis beyond the typical maximum value of 3. The Basic Uncertain Information (BUI) method is employed to simulate expert opinions, and aggregation operators are used to combine these opinions in a group decision-making context. Our results provide a comprehensive comparison of methodologies, highlighting their strengths and limitations in diagnosing diseases based on uncertain patient data.
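The q-ROFWA operator named above has a standard closed form: for q-rung orthopair fuzzy numbers (μᵢ, νᵢ) with weights wᵢ, the aggregate is ((1 − ∏(1 − μᵢ^q)^{wᵢ})^{1/q}, ∏ νᵢ^{wᵢ}). A minimal sketch, with illustrative input values:

```python
# A minimal sketch of the q-rung orthopair fuzzy weighted averaging
# (q-ROFWA) operator and a standard score function mu^q - nu^q;
# the membership/non-membership pairs below are illustrative.
import math

def q_rofwa(pairs, weights, q=3):
    """pairs: [(mu, nu), ...] with mu**q + nu**q <= 1; weights sum to 1."""
    mu = (1 - math.prod((1 - m**q) ** w
                        for (m, _), w in zip(pairs, weights))) ** (1 / q)
    nu = math.prod(n ** w for (_, n), w in zip(pairs, weights))
    return mu, nu

def score(mu, nu, q=3):
    """Score function: higher means a better-ranked alternative."""
    return mu**q - nu**q

mu, nu = q_rofwa([(0.8, 0.5), (0.6, 0.7)], [0.5, 0.5])
print(round(score(mu, nu), 4))
```

The operator is idempotent: aggregating identical pairs returns that pair, which is a quick sanity check on any implementation.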
Funding: Philosophy and Social Sciences Research Project of the Hubei Provincial Department of Education (22D057).
Abstract: Cultural landscape zoning research on traditional villages is the basic premise for carrying out overall protection and regional development. Through a clustering algorithm, cultural zoning research on traditional villages can provide an objective basis for their overall protection and development. Based on field research and drawing on cultural landscape theory, southwest Hubei is taken as the research object, and an index system for classifying the cultural landscape types of traditional villages is constructed from the three levels of culture, geography, and village carrier. Adopting a multi-attribute weighted k-modes clustering algorithm, 92 traditional villages in southwest Hubei are divided into three major types: the western Tujia cultural characteristic area, the southern Tujia-Miao cultural penetration area, and the northern multi-ethnic cultural mixed area, and the characteristics of each area are summarized. The regional characteristics of traditional villages in southwest Hubei at the cultural landscape level are analysed from a macro point of view, providing a reference for a more objective understanding of the distribution of traditional villages in southwest Hubei and for carrying out their contiguous protection.
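The core of weighted k-modes is a weighted matching dissimilarity over categorical attributes. A library-free sketch of that step, with invented attribute names, weights, and village records:

```python
# Illustrative sketch of the weighted matching dissimilarity used in
# multi-attribute weighted k-modes clustering of categorical village data;
# attribute names, weights, and records are hypothetical.

def weighted_mismatch(a, b, weights):
    """Sum attribute weights wherever the two categorical records differ."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def assign_clusters(records, modes, weights):
    """Assign each record to its nearest mode (one k-modes iteration step)."""
    return [min(range(len(modes)),
                key=lambda k: weighted_mismatch(r, modes[k], weights))
            for r in records]

weights = [0.5, 0.3, 0.2]            # culture, geography, village carrier
records = [("Tujia", "west", "stilt"), ("Miao", "south", "stilt")]
modes = [("Tujia", "west", "stilt"), ("Miao", "south", "courtyard")]
print(assign_clusters(records, modes, weights))  # [0, 1]
```

A full k-modes run alternates this assignment step with recomputing each cluster's mode (per-attribute majority value) until assignments stabilize.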
Abstract: Graphs have been widely used in fields ranging from chemical informatics to social network analysis. Graph-related problems have become increasingly significant, with subgraph matching standing out as one of the most challenging tasks. The goal of subgraph matching is to find all subgraphs in the data graph that are isomorphic to the query graph. Traditional methods mostly rely on search strategies with high computational complexity and are hard to apply to large-scale real datasets. With the advent of graph neural networks (GNNs), researchers have turned to GNNs to address subgraph matching problems. However, the multi-attributed features on nodes and edges are overlooked during graph learning, which causes inaccurate results in real-world scenarios. To tackle this problem, we propose a novel model called subgraph matching on multi-attributed graph network (SGMAN). SGMAN first utilizes improved line graphs to capture node and edge features. Then, SGMAN integrates GNNs and contrastive learning (CL) to derive graph representation embeddings and calculates the matching matrix to represent the matching results. We conduct experiments on public datasets, and the results affirm the superior performance of our model.
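The line-graph construction SGMAN builds on can be sketched in a few lines. This shows the classical transform only, with an invented example graph; the paper's "improved" variant and the GNN stages are not reproduced here:

```python
# A small sketch of the line-graph idea: each edge of the original graph
# becomes a node, and two such nodes are linked when the original edges
# share an endpoint. Edge attributes thereby become node attributes that
# a GNN can process directly. The example path graph is invented.
from itertools import combinations

def line_graph(edges):
    nodes = list(edges)
    links = [(e1, e2) for e1, e2 in combinations(nodes, 2)
             if set(e1) & set(e2)]
    return nodes, links

edges = [("a", "b"), ("b", "c"), ("c", "d")]
nodes, links = line_graph(edges)
print(links)  # the two adjacent edge pairs of the path a-b-c-d
```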
Abstract: Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big-data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
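The bridge-table versus array trade-off can be illustrated with two toy in-memory representations. Table contents are invented; real warehouses would use SQL, but the access patterns are the same:

```python
# A toy contrast of the two schemas for a many-to-many fact-dimension
# link: a bridge table of (fact_id, region_id) pairs versus storing the
# region ids as an array directly on the fact row.

bridge = [(1, "R-A"), (1, "R-B"), (2, "R-B")]               # bridge-table schema
facts_array = {1: {"magnitude": 6.1, "regions": ["R-A", "R-B"]},
               2: {"magnitude": 4.8, "regions": ["R-B"]}}   # array-based schema

# Fact-centric query: which regions does fact 1 affect?
via_bridge = [r for f, r in bridge if f == 1]   # must scan the bridge
via_array = facts_array[1]["regions"]           # one row lookup
print(via_bridge == via_array)  # True

# Dimension-centric query: which facts touch region R-B?
# Here the bridge scan is natural, while the array schema must
# unpack every row, mirroring the paper's reported trade-off.
hits = sorted(f for f, row in facts_array.items() if "R-B" in row["regions"])
print(hits)  # [1, 2]
```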
Funding: Funded by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (Project No. 18YJC870006).
Abstract: Learning from unlabeled data is a significant challenge that requires handling complicated relationships between nominal values and attributes. Increasingly, recent research on learning value relations within and between attributes has shown significant improvement in clustering, outlier detection, and related tasks. However, typical existing work relies on learning pairwise value relations but weakens or overlooks the direct couplings between multiple attributes. This paper thus proposes two novel and flexible multi-attribute couplings-based distance (MCD) metrics, which learn the multi-attribute couplings and their strengths in nominal data based on information theory (self-information, entropy, and mutual information) for measuring both numerical and nominal distances. MCD enables the application of numerical and nominal clustering methods to nominal data and quantifies the influence of involving and filtering multi-attribute couplings on distance learning and clustering performance. Substantial experiments evidence the above conclusions on 15 data sets against seven state-of-the-art distance measures with various feature selection methods for both numerical and nominal clustering.
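One information-theoretic ingredient named above, mutual information between two nominal attributes, can be computed directly from co-occurrence counts. A minimal sketch with illustrative data (the MCD metrics themselves combine several such quantities and are not reproduced here):

```python
# Empirical mutual information between two nominal attributes, computed
# from co-occurrence counts (natural log). Data values are illustrative.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly coupled attributes share maximal information;
# independent-looking ones share none.
coupled = mutual_information(["a", "a", "b", "b"], ["u", "u", "v", "v"])
mixed = mutual_information(["a", "a", "b", "b"], ["u", "v", "u", "v"])
print(coupled > mixed)  # True
```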
Funding: Supported by the National Natural Science Foundation of China (grant numbers 12171158, 12371474, and 12571510) and the Fundamental Research Funds for the Central Universities (grant number 2025ECNU-WLJC006).
Abstract: This work contributes to the theoretical foundation for pricing in data markets and offers practical insights for managing digital data exchanges in the era of big data. We propose a structured pricing model for data exchanges transitioning from quasi-public to market-oriented operations. To address the complex dynamics among data exchanges, suppliers, and consumers, we develop a three-stage Stackelberg game framework. In this model, the data exchange acts as the leader setting transaction commission rates, suppliers are intermediate leaders determining unit prices, and consumers are followers making purchasing decisions. Two pricing strategies are examined: the Independent Pricing Approach (IPA) and the novel Perfectly Competitive Pricing Approach (PCPA), which accounts for competition among data providers. Using backward induction, the study derives subgame-perfect equilibria and proves the existence and uniqueness of Stackelberg equilibria under both approaches. Extensive numerical simulations demonstrate that PCPA enhances data demander utility, encourages supplier competition, increases transaction volume, and improves the overall profitability and sustainability of data exchanges. A social welfare analysis further confirms PCPA's superiority in promoting efficient and fair data markets.
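Backward induction in such a three-stage game can be illustrated numerically: solve the followers' problem first, then each leader's problem anticipating the reactions below it. This is a deliberately simple toy, not the paper's model: the linear demand curve, grid search, and all parameter values are assumptions, and no participation constraint bounds the commission rate here.

```python
# Toy backward induction for a three-stage Stackelberg game
# (exchange -> supplier -> consumer). All parameters are illustrative.

def consumer_demand(p, a=10.0, b=1.0):
    return max(a - b * p, 0.0)          # stage 3: followers' best response

def supplier_best_price(r, prices):
    # stage 2: supplier anticipates demand and keeps (1 - r) of revenue
    return max(prices, key=lambda p: (1 - r) * p * consumer_demand(p))

def exchange_best_rate(rates, prices):
    # stage 1: exchange anticipates the supplier's reaction to each rate
    def commission(r):
        p = supplier_best_price(r, prices)
        return r * p * consumer_demand(p)
    return max(rates, key=commission)

prices = [i / 10 for i in range(1, 101)]
rates = [i / 100 for i in range(1, 100)]
r_star = exchange_best_rate(rates, prices)
p_star = supplier_best_price(r_star, prices)
print(r_star, p_star)
```

Note that in this toy the supplier's optimal price (5.0) is independent of the commission rate, so the exchange would push the rate to its grid maximum; richer formulations such as the paper's PCPA discipline that choice through supplier competition and participation.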
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) under grant number IMSIU-DDRSP2601.
Abstract: Ovarian cancer (OC) is one of the leading causes of death related to gynecological cancer, owing mainly to the difficulty of early diagnosis and the heterogeneous nature of tumor biomarkers. Machine learning (ML) has the potential to process complex datasets and support decision-making in OC diagnosis. Nevertheless, traditional ML models tend to be biased, prone to overfitting, sensitive to noise, and poorly generalized. Moreover, their black-box nature reduces interpretability and limits their practical clinical applicability. In this study, we introduce an explainable ensemble learning (EL) model, TreeX-Stack, based on a stacking architecture that employs tree-based learners, namely Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGBoost), as base learners and Logistic Regression (LR) as the meta-learner to enhance OC diagnosis. Local Interpretable Model-Agnostic Explanations (LIME) are used to explain individual predictions, making the model outputs more clinically interpretable and applicable. The model is trained on a dataset that includes demographic information, blood tests, general chemistry, and tumor markers. Extensive preprocessing includes handling missing data using iterative imputation with Bayesian Ridge and addressing multicollinearity by removing features with correlation coefficients above 0.7. Relevant features are then selected using the Boruta feature selection method. To obtain robust and unbiased performance estimates during hyperparameter tuning, nested cross-validation (CV) with grid search is employed, and all experiments are repeated five times to ensure statistical reliability. TreeX-Stack demonstrates excellent diagnostic performance, achieving an accuracy of 0.9027, a precision of 0.8673, a recall of 0.9391, and an F1-score of 0.9012. Feature-importance analyses using LIME and permutation importance highlight Human Epididymis Protein 4 (HE4) as the most significant biomarker for OC. The combination of high predictive performance and interpretability makes TreeX-Stack a reliable tool for clinical decision support in OC diagnosis.
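The stacking architecture itself can be sketched without any ML library: base learners emit predictions that become meta-features for a logistic-regression meta-learner. The base learners below are trivial threshold rules standing in for the tree ensembles, and the data, feature name, and hyperparameters are all invented for illustration:

```python
# A schematic, library-free sketch of stacking: base-learner outputs
# become meta-features for a logistic-regression meta-learner trained
# by stochastic gradient ascent. Everything here is illustrative.
import math

def base_rule(threshold):
    return lambda x: 1.0 if x >= threshold else 0.0

def meta_features(xs, learners):
    return [[f(x) for f in learners] for x in xs]

def train_logistic(features, labels, lr=0.5, epochs=300):
    w = [0.0] * (len(features[0]) + 1)            # weights + bias
    for _ in range(epochs):
        for f, y in zip(features, labels):
            z = w[-1] + sum(wi * fi for wi, fi in zip(w, f))
            p = 1 / (1 + math.exp(-z))
            for i, fi in enumerate(f):
                w[i] += lr * (y - p) * fi
            w[-1] += lr * (y - p)
    return w

def predict(w, f):
    z = w[-1] + sum(wi * fi for wi, fi in zip(w, f))
    return 1 if 1 / (1 + math.exp(-z)) >= 0.5 else 0

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]               # e.g. a scaled biomarker
labels = [0, 0, 0, 1, 1, 1]
learners = [base_rule(0.5), base_rule(0.6)]       # stand-ins for DT/RF/GB
w = train_logistic(meta_features(xs, learners), labels)
print([predict(w, f) for f in meta_features(xs, learners)])  # [0, 0, 0, 1, 1, 1]
```

In a real pipeline the meta-features would come from out-of-fold predictions of the tree ensembles, as the paper's nested cross-validation setup implies, to avoid leaking training labels into the meta-learner.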
Funding: Supported by the International Partnership Program of the Chinese Academy of Sciences (170GJHZ2023074GC), the National Natural Science Foundation of China (42425706 and 42488201), the National Key Research and Development Program of China (2024YFF0807902), the Beijing Natural Science Foundation (8242041), and the China Postdoctoral Science Foundation (2025M770353).
Abstract: Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing-season months, correlation coefficients differed significantly in both magnitude and sign from those based on actual data. The selection of different alternative climate datasets can lead to biased results in assessing forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
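The consistency check at the heart of this comparison is a Pearson correlation between an alternative series and the fixed-point observations. A minimal sketch; the monthly values are invented for illustration:

```python
# Pearson correlation used to test how well an alternative climate series
# tracks fixed-point observations; the monthly values are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

actual = [2.1, 4.0, 9.5, 15.2, 20.1, 23.8]    # fixed-point observations
gridded = [2.3, 4.2, 9.1, 15.0, 20.5, 23.5]   # close alternative series
station = [1.0, 6.5, 7.0, 18.0, 17.5, 26.0]   # noisier alternative series
print(pearson(actual, gridded) > pearson(actual, station))  # True
```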
Abstract: To address the severe challenges of PM2.5 and ozone co-control during the 14th Five-Year Plan period and to enhance the precision and intelligence of air environment governance, it is imperative to build an efficient comprehensive management platform for regional air quality. In this paper, the specific practice in Zibo City, Shandong Province is taken as an example to systematically analyze the top-level design, technical implementation, and innovative application of a comprehensive regional air quality management platform integrating perception monitoring, data fusion, early-warning analysis, source analysis, collaborative dispatching, and evaluation assessment. Through the construction of a "sky-air-ground" integrated three-dimensional monitoring network, the platform integrates multi-source heterogeneous environmental data and employs big data, cloud computing, artificial intelligence, CALPUFF/CMAQ, and other numerical model technologies to achieve comprehensive perception, precise prediction, intelligent source tracing, and closed-loop management of air pollution. The platform innovatively establishes a full-process closed-loop management mechanism of "data, early warning, disposition, evaluation" and achieves a fundamental transformation of environmental supervision from passive response to active anticipation and from experience-based judgment to data-driven decision-making. Application results show that the platform significantly improves the scientific decision-making ability and collaborative execution efficiency of air pollution governance in Zibo City, providing a replicable and scalable comprehensive solution for similar industrial cities seeking continuous improvement of air quality.
Funding: Supported by the National Natural Science Foundation of China (91959106), the Foundation of the Shanghai Municipal Education Commission (24RGZNC02), the Shanghai Key Laboratory of Intelligent Information Processing, Fudan University (IIPL-2025-RD3-02), the Key University Science Research Project of Anhui Province (2023AH030108), the Climbing Peak Training Program for Innovative Technology Teams of Yijishan Hospital, Wannan Medical College (PF201904), the Peak Training Program for Scientific Research of Yijishan Hospital, Wannan Medical College (GF2019G15), and the Talent Project of the First Affiliated Hospital of Wannan Medical College (Yijishan Hospital of Wannan Medical College) (YR202422).
Abstract: tRNA-derived small RNAs (tsRNAs), a class of regulatory small noncoding RNAs, have been implicated in a wide variety of human diseases. Large numbers of tsRNA-disease associations have been identified in recent years by accumulating studies. However, repositories cataloging detailed information on tsRNA-disease associations are scarce. In this study, we provide the tsRNADisease database by integrating experimentally and computationally supported tsRNA-disease associations from manual curation of the literature and other related resources. tsRNADisease contains 5571 manually curated associations between 4759 tsRNAs and 166 diseases with experimental evidence from 346 studies. In addition, it contains 5013 predicted associations between 1297 tsRNAs and 111 diseases. tsRNADisease provides a user-friendly interface to browse, retrieve, and download data conveniently. This database can improve our understanding of tsRNA deregulation in diseases and serve as a valuable resource for investigating the mechanisms of disease-related tsRNAs. tsRNADisease is freely available at http://www.compgenelab.info/tsRNADisease.
Funding: Supported by the National Key R&D Program of China (No. 2021YFF0501301) and the National Natural Science Foundation of China (No. 42172231).
Abstract: Earth science is a natural science concerned with the composition, dynamics, spatiotemporal evolution, and formation mechanisms of Earth materials (Chen and Yang, 2023). Traditional Earth science research has largely been discipline-based, relying on field investigations, data collection, experimental analyses, and data interpretation to study individual components of the Earth system.
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFF0713600), the National Natural Science Foundation of China (Grant No. 62275062), the Project of the Shandong Innovation and Startup Community of High-end Medical Apparatus and Instruments (Grant Nos. 2023-SGTTXM-002 and 2024-SGTTXM-005), the Shandong Province Technology Innovation Guidance Plan (Central Leading Local Science and Technology Development Fund) (Grant No. YDZX2023115), the Taishan Scholar Special Funding Project of Shandong Province, and the Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai (Grant No. ZL202402).
Abstract: Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging produces significant data volumes, limiting the data storage, transmission, and processing efficiency of the system. There is therefore an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method is built on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment on an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of reconstructed images reached 38.60 and 0.9583, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality. These advantages support the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
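The "wavelet hard coding" stage can be illustrated with a one-level Haar transform plus hard thresholding of small detail coefficients. This is a minimal sketch only: the signal and threshold are invented, and the paper's learned Transformer soft decoder is not reproduced.

```python
# One-level Haar wavelet transform, hard thresholding of near-zero detail
# coefficients (the lossy "hard coding" step), and reconstruction.
# Signal values and the threshold are illustrative.

def haar_forward(x):
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def hard_threshold(coeffs, t):
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [4.0, 4.2, 8.0, 8.1, 1.0, 1.1, 5.0, 4.9]
approx, detail = haar_forward(signal)
kept = hard_threshold(detail, 0.2)          # drop near-zero details
print(sum(c != 0 for c in kept), "of", len(detail), "details kept")
recon = haar_inverse(approx, kept)
```

Here every detail coefficient falls below the threshold, so only the half-length approximation needs storing, at the cost of a small, bounded reconstruction error; multi-level transforms push the ratio much further.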
Abstract: Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-based encryption scheme for multi-chain collaboration. The scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
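The off-chain/on-chain split used here follows a common pattern that can be sketched with stdlib hashing: the ciphertext lives off-chain (IPFS in the paper, where content is also addressed by hash) while only a digest and access metadata are recorded on the ledger for auditability. The identifiers and record fields below are illustrative, and none of the CP-ABE cryptography is modeled:

```python
# Sketch of the off-chain storage / on-chain audit-record pattern.
# SHA-256 stands in for IPFS content addressing; fields are invented.
import hashlib

def store_off_chain(store, ciphertext):
    cid = hashlib.sha256(ciphertext).hexdigest()   # content address
    store[cid] = ciphertext
    return cid

off_chain, ledger = {}, []
cid = store_off_chain(off_chain, b"encrypted-record")
ledger.append({"cid": cid, "action": "grant", "attrs": ["doctor"]})
print(off_chain[cid] == b"encrypted-record", len(cid))  # True 64
```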
Abstract: With the popularization of new technologies, telephone fraud has become a main means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificates through the voice channel so that the recipient can verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech signals and reject the input waveform. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them easier to transmit and decode in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root.
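The 3-bits-per-symbol mapping can be sketched as one chirp generator whose start frequency, chirp direction ("shape"), and initial phase each carry one bit. This loosely follows the scheme described; the sample rate, frequencies, slope, and symbol duration are all assumed values, not the paper's parameters:

```python
# Illustrative mapping of 3 bits onto a linear-frequency-modulated (LFM)
# symbol: bit 0 -> start frequency, bit 1 -> up/down chirp, bit 2 -> phase.
# All numeric parameters are assumptions for the sketch.
import math

def lfm_symbol(bits, fs=8000, duration=0.05):
    f0 = 600 if bits[0] else 300          # bit 0: start frequency
    slope = 2000 if bits[1] else -2000    # bit 1: up- or down-chirp
    phase = math.pi if bits[2] else 0.0   # bit 2: initial phase
    n = int(fs * duration)
    return [math.sin(phase + 2 * math.pi * (f0 * t + 0.5 * slope * t * t))
            for t in (i / fs for i in range(n))]

sym = lfm_symbol((1, 0, 1))
print(len(sym))  # 400 samples per symbol at the assumed rate
```

The instantaneous frequency f0 + slope·t is what makes the symbol a linear chirp; the paper's MobileNetV3-Small demodulator would classify such waveforms back into bit triples.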
Funding: Supported by the School of Digital Science, Universiti Brunei Darussalam, Brunei.
Abstract: Artificial intelligence (AI) in healthcare enables predicting diabetes with data-driven methods instead of the traditional screening approaches, such as hemoglobin A1c (HbA1c), the oral glucose tolerance test (OGTT), and fasting plasma glucose (FPG) tests, which are invasive and limited in scale. Machine learning (ML) and deep neural network (DNN) models use large datasets to learn complex, nonlinear feature interactions, but conventional ML algorithms are data-sensitive and often show unstable predictive accuracy. DNN models are more robust, though consistently reaching high accuracy on heterogeneous datasets remains an open challenge. For predicting diabetes, this work proposes a hybrid DNN approach that integrates a bidirectional long short-term memory (BiLSTM) network with a bidirectional gated recurrent unit (BiGRU). A robust DL model is developed by combining various datasets with weighted coefficients, using dense operations in the connections between deep layers, and aggregating outputs with batch normalization and dropout to avoid overfitting. The goal of this hybrid model is better generalization and consistency across datasets, facilitating effective management and early intervention. The proposed DNN model exhibits excellent predictive performance compared with state-of-the-art and baseline ML and DNN models on diabetes prediction tasks. This robust performance indicates the potential usefulness of DL-based models for disease prediction in healthcare and other areas that demand high-quality analytics.
Funding: The National Natural Science Foundation of China (No. 52470211), the Special Foundation of the Jiangsu Province Science and Technology Plan (No. BZ2024017), and the RECLAIM Network Plus Project (No. EP/W034034/1).
Abstract: Reducing carbon emissions is fundamental to achieving carbon neutrality. Existing studies have typically estimated emissions by predicting fossil fuel consumption across sectors under different socioeconomic scenarios; however, uncertainties in future development often lead to deviations from these assumptions. To address this limitation, this study proposes a data-driven approach for evaluating national carbon emissions using historical data. Countries with similar energy consumption patterns were selected as reference samples, and their emission pathways were analyzed to predict future emissions for countries that have not yet reached their peak. Key indicators, including peak levels, timing, plateau duration, and post-peak decline rates, were identified. The results indicate that trends in unpeaked economies can be effectively assessed from the emission patterns of countries with comparable energy structures. Applying this framework to China suggests a carbon peak between 2027 and 2030, in the range of 14.207 to 16.234 Gt, followed by a gradual decline from 2031 to 2036. Compared with the averaged results of existing studies, the predicted minimum and maximum emissions show error margins of 10.1% and 1.41%, respectively. This study proposes a top-down methodology that provides a transparent, reproducible, and empirical framework for forecasting carbon emission pathways, thereby offering a scientific basis for assessing countries that have not yet reached their emissions peak.
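The reference-country idea can be sketched as a nearest-neighbor selection on energy-structure vectors followed by averaging the references' observed peak indicators. All shares, country labels, and peak statistics below are made up for illustration; the paper's actual indicator set is richer (peak level, plateau duration, decline rate):

```python
# Simplified sketch of the reference-country approach: pick countries
# whose energy-consumption structure (shares of coal/oil/gas/non-fossil)
# is closest to the target, then average their observed peak indicators.
# All values are invented for illustration.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_references(target, references, k=2):
    return sorted(references,
                  key=lambda name: distance(target, references[name][0]))[:k]

# name -> (energy-structure shares, years from now until emissions peak)
references = {
    "R1": ((0.55, 0.20, 0.10, 0.15), 6),
    "R2": ((0.50, 0.25, 0.10, 0.15), 5),
    "R3": ((0.10, 0.40, 0.30, 0.20), 12),
}
target = (0.56, 0.19, 0.10, 0.15)
picks = nearest_references(target, references)
est_years = sum(references[n][1] for n in picks) / len(picks)
print(picks, est_years)  # ['R1', 'R2'] 5.5
```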
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focused on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms: Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
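The three-part composite loss can be written schematically as follows. The weighting coefficients and the exact forms of the noise and variance terms are assumptions for illustration; only the overall structure (masked MSE + noise term + variance penalty) follows the abstract:

```python
# Schematic version of the composite imputation loss: masked MSE on the
# originally missing entries, a noise-robustness term, and a variance
# penalty. Coefficients and regularizer forms are assumed.

def masked_mse(pred, true, mask):
    """MSE computed only where mask == 1 (the originally missing entries)."""
    hits = [(p - t) ** 2 for p, t, m in zip(pred, true, mask) if m]
    return sum(hits) / len(hits) if hits else 0.0

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def composite_loss(pred, true, mask, noisy_pred, lam_noise=0.1, lam_var=0.01):
    recon = masked_mse(pred, true, mask)
    # noise-aware term: reconstructions from noise-corrupted input
    # should stay close to the clean reconstructions
    noise = sum((p - q) ** 2 for p, q in zip(pred, noisy_pred)) / len(pred)
    # variance penalty discourages collapsed, near-constant outputs
    var_pen = max(0.0, 1.0 - variance(pred))
    return recon + lam_noise * noise + lam_var * var_pen

loss = composite_loss(pred=[1.0, 2.1, 3.0], true=[1.0, 2.0, 3.0],
                      mask=[0, 1, 0], noisy_pred=[1.1, 2.0, 3.1])
print(round(loss, 4))
```

In the paper's autoencoder this would be minimized by backpropagation over mini-batches; the scalar version here just makes the bookkeeping of the three terms explicit.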