Accurate state of health (SOH) estimation is a cornerstone for ensuring the safety, performance, and longevity of lithium-ion batteries, especially in electric vehicle (EV) applications. While numerous studies have demonstrated the significant advantages of data-driven methods in SOH estimation, most rely on laboratory-standardized test data. This raises concerns about the generalization and robustness of the models under real-world operating conditions, where batteries undergo irregular driving patterns, incomplete charging cycles, and unpredictable environments. Notably, real-world EV data reflects the coupling between battery aging characteristics and actual operating conditions, providing an unprecedented perspective for developing SOH estimation models. This review provides a comprehensive and systematic overview of data-driven SOH estimation using real-world data, a topic that has received increasing attention but lacks a consolidated research framework. The paper begins by reviewing the established SOH estimation methodologies and points out the specific challenges arising from the transition to real-world data. It then probes practical issues across the pipeline: data pre-processing for anomalies, solutions for the lack of labels, feature extraction from complex operating data, machine learning model construction, and performance evaluation across various system deployments. Key insights are presented on how to handle noisy, unlabeled, and heterogeneous data using robust modeling strategies. Moreover, a valuable extension focusing on applying the advancements to battery reuse and recycling is discussed, with the goal of developing a whole-lifecycle health diagnosis framework. The paper concludes with promising prospects, encompassing open-source standardized dataset establishment, weakly supervised learning, physics-reinforced modeling, real-world deployment, and advanced sensing technology, emphasizing that real-world data makes the transition of data-driven methods from theoretical validation to industrial deployment promising. This paper aims to assist researchers and practitioners in navigating the complexities of real-world SOH estimation, accelerating collaborative innovation and industrial adoption in battery health management.
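The quantity being estimated throughout this review is the usual SOH ratio: presently deliverable capacity over rated capacity, with present capacity obtained by ampere-hour counting over a charge segment whose start and end SOC are known. A minimal sketch (function name, constant-current example, and the rated capacity are illustrative, not taken from the review):

```python
def soh_from_partial_charge(currents_a, dt_s, soc_start, soc_end, q_rated_ah):
    """SOH = present capacity / rated capacity.

    Present capacity is estimated by ampere-hour counting over a
    partial charge whose start/end SOC are known (e.g. from the BMS).
    """
    charged_ah = sum(i * dt_s for i in currents_a) / 3600.0  # A*s -> Ah
    q_now_ah = charged_ah / (soc_end - soc_start)            # scale to full SOC range
    return q_now_ah / q_rated_ah

# Example: 10 A constant current for 2 h raises SOC from 20% to 60%.
# charged_ah = 20 Ah, present capacity = 50 Ah, rated 62.5 Ah -> SOH = 0.8
soh = soh_from_partial_charge([10.0] * 7200, 1.0, 0.20, 0.60, 62.5)
```

Real field data complicates every term here (noisy current sensing, biased SOC readouts, incomplete charges), which is precisely the gap between laboratory and real-world estimation the review describes.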
Hepatocellular carcinoma (HCC) is a leading cause of cancer-associated mortality worldwide. HCC is an inflammation-associated immunogenic cancer that frequently arises in chronically inflamed livers. Advanced HCC is managed with systemic therapies; the tyrosine kinase inhibitor (TKI) sorafenib has been used in the 1st-line setting since 2007. Immunotherapies have emerged as promising treatments across solid tumors, including HCC, for which immune checkpoint inhibitors (ICIs) are licensed in the 1st- and 2nd-line treatment settings. The treatment field of advanced HCC is continuously evolving. Several clinical trials are investigating novel ICI candidates as well as new ICI regimens in combination with other therapeutic modalities, including systemic agents such as other ICIs, TKIs, and anti-angiogenics. Novel immunotherapies, including adoptive cell transfer, vaccine-based approaches, and virotherapy, are also being brought to the fore. Yet, despite advances, several challenges persist. Lack of real-world data on the use of immunotherapy for advanced HCC in patients outside of clinical trials constitutes a main limitation, hindering the breadth of application and generalizability of data to this larger and more diverse patient cohort. Consequently, issues encountered in real-world practice include patient ineligibility for immunotherapy because of contraindications, comorbidities, or poor performance status; lack of response, efficacy, and safety data; and cost-effectiveness. Further real-world data from high-quality, large, prospective cohort studies of immunotherapy in patients with advanced HCC are needed to aid evidence-based clinical decision-making. This review provides a critical and comprehensive overview of clinical trials and real-world data of immunotherapy for HCC, with a focus on ICIs, as well as novel immunotherapy strategies underway.
Randomized clinical trials (RCTs) have long been recognized as the gold standard for regulatory approval in drug development. However, RCTs may not be feasible in some diseases and/or under certain situations, and findings from RCTs may not generalize to real-world patients in routine clinical practice. Real-world evidence (RWE), which is generated from various real-world data (RWD), has become increasingly important for drug development and clinical decision-making in the digital era. This paper describes RWD and real-world data studies (RWDSs), followed by the characteristics of and differences between RCTs and RWDSs. Furthermore, the challenges and limitations of RWD and RWE are discussed. Finally, this paper highlights that efforts must be made during RWE generation, from data collection/database selection, study design, and statistical analysis to interpretation of the results, to minimize biases and confounding effects.
Objective To study the research status, research hotspots, and development trends in the field of real-world data (RWD) through social network analysis and knowledge graph analysis. Methods RWD literature of the past 10 years was retrieved from CNKI, and bibliometric analysis was performed using UCINET and CiteSpace. Results and Conclusion The frequency and centrality of related keywords such as real-world study, hospital information system (HIS), drug combination, data mining, and TCM are high. The clusters labeled clinical medication and RWD contain more keywords. In the past 4 years, more articles have involved the keywords data specification, data authenticity, data security, and information security. Among them, compound Kushen injection, HIS database, and RWD are the top three keywords. Using HIS to study clinical medication, clinical characteristics, diseases, and injections is a long-term research hotspot for Chinese and Western medicine. Besides, research on RWD databases has shifted from construction to standardized collection and governance, which can make RWD effective. Data authenticity, data security, and information security will become the new hotspots in RWD research.
With the rapid development of modern science and technology, traditional randomized controlled trials have become insufficient to meet current scientific research needs, particularly in the field of clinical research. The emergence of real-world data studies, which align more closely with actual clinical evidence, has garnered significant attention in recent years. The following is a brief overview of the specific utilization of real-world data in drug development, which often involves large sample sizes and analyses covering a relatively diverse population without strict inclusion and exclusion criteria. Real-world data often reflects real clinical practice: treatment options are chosen according to the actual conditions and willingness of patients rather than through random assignment. Analysis based on real-world data also focuses on endpoints highly relevant to clinical benefits and the quality of life of patients. The booming big data technology supports the utilization of real-world data to accelerate new drug development, serving as an important supplement to traditional clinical trials.
Real-world studies (RWSs) have emerged as a transformative force in oncology research, complementing traditional randomized controlled trials (RCTs) by providing comprehensive insights into cancer care within routine clinical settings. This review examines the evolving landscape of RWSs in oncology, focusing on their implementation, methodological considerations, and impact on precision medicine. We systematically analyze how RWSs leverage diverse data sources, including electronic health records (EHRs), insurance claims, and patient registries, to generate evidence that bridges the gap between controlled clinical trials and real-world clinical practice. The review underscores the key contributions of RWSs, including capturing therapeutic outcomes in traditionally underrepresented populations, expanding drug indications, and evaluating long-term safety and effectiveness in routine clinical settings. While acknowledging significant challenges, including data quality variability and privacy concerns, we discuss how emerging technologies like artificial intelligence are helping to address these limitations. The integration of RWSs with traditional clinical research is revolutionizing the paradigm of precision oncology and enabling more personalized treatment approaches based on real-world evidence.
Objective To investigate the effectiveness of Xuanshen Yishen Decoction (XYD) in the treatment of hypertension. Methods Hospital electronic medical records from 2019–2023 were used to emulate a randomized pragmatic clinical trial. Hypertensive participants were eligible if they were aged ≥40 years with baseline systolic blood pressure (BP) ≥140 mm Hg. Patients treated with XYD plus an antihypertensive regimen were assigned to the treatment group, whereas those who followed only an antihypertensive regimen were assigned to the control group. The primary outcome was the attainment rate of intensive BP control at discharge, with the secondary outcome being the 6-month all-cause readmission rate. Results The study included 3,302 patients, comprising 2,943 individuals in the control group and 359 in the treatment group. Compared with the control group, a higher proportion in the treatment group achieved the target BP for intensive BP control [8.09% vs. 17.5%; odds ratio (OR) = 2.29, 95% confidence interval (CI) = 1.68 to 3.13; P < 0.001], particularly in individuals with high homocysteine levels (OR = 3.13; 95% CI = 1.72 to 5.71; P < 0.001; P for interaction = 0.041). Furthermore, the 6-month all-cause readmission rate in the treatment group was lower than in the control group (hazard ratio = 0.58; 95% CI = 0.36 to 0.91; P = 0.019), and the robustness of the results was confirmed by sensitivity analyses. Conclusions XYD could be a complementary therapy for intensive BP control. Our study offers real-world evidence and guides the choice of complementary and alternative therapies. (Registration No. ChiCTR2400086589)
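For readers sanity-checking such figures, the unadjusted (crude) odds ratio can be recomputed directly from the two reported attainment rates; the abstract's OR of 2.29 is presumably covariate-adjusted, so the crude value differs slightly. A sketch:

```python
def crude_odds_ratio(p_treat, p_ctrl):
    """Unadjusted odds ratio from two event proportions."""
    return (p_treat / (1.0 - p_treat)) / (p_ctrl / (1.0 - p_ctrl))

# Reported attainment rates: 17.5% (treatment) vs. 8.09% (control).
# Crude OR ~= 2.41, in the same ballpark as the adjusted OR = 2.29.
or_crude = crude_odds_ratio(0.175, 0.0809)
```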
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is built to predict capacity, using feature maps from historical months to forecast the battery capacity of future months. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
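The capacity-labeling idea described above rearranges the ampere-hour integral (charged Ah = Q · ΔSOC per segment) into a linear fit across a month's charging segments. A minimal sketch of that step, as a through-origin least-squares fit (variable names and the noise-free example are illustrative, not the paper's implementation):

```python
import numpy as np

def label_capacity(delta_soc, charged_ah):
    """Fit charged_ah = Q * delta_soc across a month's charging segments.

    The through-origin least-squares slope Q serves as the monthly
    capacity label for training the downstream prediction model.
    """
    x = np.asarray(delta_soc, dtype=float)   # fractional SOC change per segment
    y = np.asarray(charged_ah, dtype=float)  # integrated current per segment (Ah)
    return float(x @ y / (x @ x))            # slope of regression through origin

# Three partial charges of a (noise-free) 120 Ah pack:
q_label = label_capacity([0.50, 0.30, 0.80], [60.0, 36.0, 96.0])
```

Pooling many segments this way averages out per-segment sensor noise, which is why a regression beats labeling each charge individually.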
Background Medical informatics has accumulated vast amounts of data for clinical diagnosis and treatment. However, limited access to follow-up data and the difficulty of integrating data across diverse platforms continue to pose significant barriers to clinical research progress. In response, our research team has embarked on the development of a specialized clinical research database for cardiology, thereby establishing a comprehensive digital platform that facilitates both clinical decision-making and research endeavors. Methods The database incorporated actual clinical data from patients who received treatment at the Cardiovascular Medicine Department of Chinese PLA General Hospital from 2012 to 2021. It included comprehensive data on patients' basic information, medical history, non-invasive imaging studies, and laboratory test results, as well as peri-procedural information related to interventional surgeries, extracted from the Hospital Information System. Additionally, an innovative artificial intelligence (AI)-powered interactive follow-up system was developed, ensuring that nearly all myocardial infarction patients received at least one post-discharge follow-up, thereby achieving comprehensive data management throughout the entire care continuum for high-risk patients. Results This database integrates extensive cross-sectional and longitudinal patient data, with a focus on higher-risk acute coronary syndrome patients. It achieves the integration of structured and unstructured clinical data, while innovatively incorporating AI and automatic speech recognition technologies to enhance data integration and workflow efficiency. It creates a comprehensive patient view, thereby improving diagnostic and follow-up quality, and provides high-quality data to support clinical research. Despite limitations in unstructured data standardization and biological sample integrity, the database's development is accompanied by ongoing optimization efforts. Conclusion The cardiovascular specialty clinical database is a comprehensive digital archive integrating clinical treatment and research, which facilitates the digital and intelligent transformation of clinical diagnosis and treatment processes. It supports clinical decision-making and offers data support and potential research directions for the specialized management of cardiovascular diseases.
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
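The bridge-table vs. array-based trade-off can be seen in a toy in-memory model (schema and field names invented for illustration): the array design answers a fact-centric question with one row lookup, while a dimension-centric question forces a scan over every fact's array — exactly the asymmetry the proposed hybrid schema is meant to balance.

```python
# Bridge-table style: many-to-many pairs live in a separate mapping table.
bridge = [(1, "R1"), (1, "R2"), (2, "R2")]  # (event_id, region_id)

def regions_of_event_bridge(event_id):
    # Fact-centric query: requires scanning/joining the bridge table.
    return sorted(r for e, r in bridge if e == event_id)

# Array-based style: each fact row carries its region keys directly.
events = {1: {"magnitude": 6.1, "regions": ["R1", "R2"]},
          2: {"magnitude": 5.4, "regions": ["R2"]}}

def regions_of_event_array(event_id):
    # Fact-centric query: a single row lookup, no join.
    return sorted(events[event_id]["regions"])

def events_in_region_array(region_id):
    # Dimension-centric query: must scan every fact's array.
    return sorted(e for e, row in events.items() if region_id in row["regions"])
```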
Background: The aim of this study was to develop and validate a fatty liver index based on laboratory data (FLI-L) and a fatty liver index based on both physical examination and laboratory data (FLI-PL), in the hope of providing a more convenient, accurate, and quantitative method for the diagnosis of fatty liver disease. Methods: The study included data for 12,391 patients obtained from the Third Medical Center of Chinese PLA General Hospital. FLI-L and FLI-PL were developed using binary logistic regression analysis. The diagnostic performance of FLI-L and FLI-PL was evaluated using the area under the receiver-operating characteristic curve (AUC-ROC) with sensitivity, specificity, and positive and negative likelihood ratios. FLI-L and FLI-PL were subsequently validated in 3,170 patients from the same hospital. Results: The AUC-ROC for FLI-L was 0.876 with a cut-off value of 55.03. Sensitivity was 81.35% and specificity was 78.28%, with an accuracy of 79.99% for discriminating between patients with and without fatty liver disease. The AUC-ROC for FLI-PL was 0.902 with a cut-off value of 20.51. Sensitivity was 85.10% and specificity was 79.64%. FLI-PL classified 91.65% of patients correctly. Conclusion: FLI-L and FLI-PL allow simple and accurate quantitative diagnosis of fatty liver disease. This study provides evidence to support the use of these indices in clinical management.
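The AUC-ROC values reported here have a simple probabilistic reading: the chance that a randomly chosen diseased patient scores higher on the index than a randomly chosen healthy one. A small rank-based (Mann–Whitney) computation of that quantity, with made-up index scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), ties counted as 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical index values for diseased vs. healthy patients:
# 8 of the 9 (diseased, healthy) pairs are ranked correctly -> AUC = 8/9.
a = auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])
```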
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing season months, correlation coefficients, both in magnitude and sign, differed significantly from those based on actual data. The selection of different alternative climate datasets can lead to biased results in assessing forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast and deep tissue penetration capability, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging results in significant data volume, limiting the data storage, transmission, and processing efficiency of the system. Therefore, there is an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method rests on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding. It combines the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers, achieving high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality. These advantages support the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
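The PSNR figure quoted above (38.60 dB) is the standard fidelity metric for compressed-then-reconstructed volumes: a log-scaled ratio of the peak signal value to the mean squared reconstruction error. A sketch of the computation (the peak value and toy arrays are illustrative, not the paper's data):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images/volumes."""
    mse = float(np.mean((np.asarray(reference) - np.asarray(test)) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
deg = ref + 0.01          # uniform error of 0.01 -> MSE = 1e-4
value = psnr(ref, deg)    # 10 * log10(1 / 1e-4) = 40 dB
```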
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows that the scheme achieves selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
With the popularization of new technologies, telephone fraud has become a main means of stealing money and personal identity information. Taking inspiration from the website authentication mechanism, we propose an end-to-end data-modem scheme that transmits the caller's digital certificates through a voice channel for the recipient to verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech signals and reject the input waveforms. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them more easily transmitted and decoded in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in terms of robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root cause.
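A 3-bit symbol space built from a linear chirp's three degrees of freedom (sweep direction as the "shape", starting frequency, and phase) can be sketched as below. The sample rate, frequencies, bandwidth, and symbol length are illustrative choices, not the paper's actual parameters:

```python
import numpy as np

FS = 8000       # sample rate in Hz (assumed)
SYM_N = 256     # samples per symbol (assumed)

def chirp_symbol(b3, bandwidth_hz=400.0):
    """Encode 3 bits as one linear-chirp symbol via direction, f0, and phase."""
    direction = +1.0 if b3[0] == 0 else -1.0     # up- vs. down-chirp ("shape")
    f0 = 600.0 if b3[1] == 0 else 1200.0         # starting frequency (Hz)
    phase = 0.0 if b3[2] == 0 else np.pi         # carrier phase
    t = np.arange(SYM_N) / FS
    k = direction * bandwidth_hz / (SYM_N / FS)  # chirp rate (Hz/s)
    return np.cos(2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2) + phase)

s000 = chirp_symbol((0, 0, 0))   # starts at cos(0)  = +1
s111 = chirp_symbol((1, 1, 1))   # starts at cos(pi) = -1
```

Two binary choices per parameter give 2 × 2 × 2 = 8 distinguishable waveforms, i.e. 3 bits per symbol, matching the rate described in the abstract.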
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where missing data often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms: Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
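The three-part loss can be sketched in plain NumPy. The exact functional forms below are illustrative guesses at the abstract's description, not the paper's implementation: the noise term is rendered as stability of the reconstruction under input corruption, and the variance penalty as the plain variance of the output.

```python
import numpy as np

def composite_loss(x, x_hat, x_hat_noisy, miss_mask,
                   lam_noise=0.1, lam_var=0.01):
    """Composite imputation loss (illustrative forms, see lead-in).

    (i)   masked MSE: reconstruction error only on the missing entries
    (ii)  noise-aware term: output should be stable when input is corrupted
    (iii) variance penalty: discourage degenerate reconstructions
    """
    masked_mse = float(np.mean((x[miss_mask] - x_hat[miss_mask]) ** 2))
    noise_reg = float(np.mean((x_hat - x_hat_noisy) ** 2))
    var_pen = float(np.var(x_hat))
    return masked_mse + lam_noise * noise_reg + lam_var * var_pen

x = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[True, False], [False, True]])
# Perfect, noise-stable reconstruction -> only the variance term remains:
# 0.01 * var([1, 2, 3, 4]) = 0.01 * 1.25 = 0.0125
loss = composite_loss(x, x, x, mask)
```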
Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed–Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code rate adjustment to preserve soundness while lowering communication. This paper formalizes opening consistency and proves security with bounded error in the random oracle model, giving polylogarithmic verifier queries and no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
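The sampling logic underlying any DAS scheme rests on a simple bound: if the erasure code forces an adversary to withhold at least a fraction f of codeword symbols to make the block unrecoverable, then k independent uniform samples detect the withholding with probability 1 − (1 − f)^k. A sketch (the rate-1/2 figure is a common illustrative choice, not ISTIRDA's actual parameter):

```python
def detection_confidence(withheld_fraction, num_samples):
    """P(at least one random sample hits a withheld symbol)."""
    return 1.0 - (1.0 - withheld_fraction) ** num_samples

# With a rate-1/2 Reed-Solomon code, more than half the symbols must be
# withheld to prevent reconstruction, so each sample detects with p >= 0.5:
conf = detection_confidence(0.5, 10)   # 1 - 2**-10 = 0.9990234375
```

This is why a light client needs only a handful of tiny random queries, rather than the full block, to reach high confidence in availability.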
With the accelerating aging process of China's population, the demand for community elderly care services has shown diversified and personalized characteristics. However, problems such as insufficient total care service resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core advantages including data collection, analysis and mining, and accurate prediction, provides a new solution for the allocation of community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimal allocation strategies such as building a big data analysis platform and accurately grasping the elderly's care needs, striving to provide operable path references for the construction of community elderly care service systems, promoting the early realization of the elderly care service goal of "adequate support and proper care for the elderly", and boosting the high-quality development of China's elderly care service industry.
Multivariate anomaly detection plays a critical role in maintaining the stable operation of information systems.However,in existing research,multivariate data are often influenced by various factors during the data co...Multivariate anomaly detection plays a critical role in maintaining the stable operation of information systems.However,in existing research,multivariate data are often influenced by various factors during the data collection process,resulting in temporal misalignment or displacement.Due to these factors,the node representations carry substantial noise,which reduces the adaptability of the multivariate coupled network structure and subsequently degrades anomaly detection performance.Accordingly,this study proposes a novel multivariate anomaly detection model grounded in graph structure learning.Firstly,a recommendation strategy is employed to identify strongly coupled variable pairs,which are then used to construct a recommendation-driven multivariate coupling network.Secondly,a multi-channel graph encoding layer is used to dynamically optimize the structural properties of the multivariate coupling network,while a multi-head attention mechanism enhances the spatial characteristics of the multivariate data.Finally,unsupervised anomaly detection is conducted using a dynamic threshold selection algorithm.Experimental results demonstrate that effectively integrating the structural and spatial features of multivariate data significantly mitigates anomalies caused by temporal dependency misalignment.展开更多
As an important resource in data link, time slots should be strategically allocated to enhance transmission efficiency and resist eavesdropping, especially considering the tremendous increase in the number of nodes and diverse communication needs. It is crucial to design control sequences with robust randomness and conflict-freeness to properly address differentiated access control in data link. In this paper, we propose a hierarchical access control scheme based on control sequences to achieve high utilization of time slots and differentiated access control. A theoretical bound of the hierarchical control sequence set is derived to characterize the constraints on the parameters of the sequence set. Moreover, two classes of optimal hierarchical control sequence sets satisfying the theoretical bound are constructed, both of which enable the scheme to achieve maximum utilization of time slots. Compared with the fixed time slot allocation scheme, our scheme reduces the symbol error rate by up to 9%, which indicates a significant improvement in anti-interference and anti-eavesdropping capabilities.
Funding: Supported by the National Natural Science Foundation of China (52375144 and 52205153), the Shanghai Pujiang Programme (23PJD019), and the Shanghai Gaofeng Project for University Academic Program Development.
Abstract: Accurate state of health (SOH) estimation is a cornerstone for ensuring the safety, performance, and longevity of lithium-ion batteries, especially in electric vehicle (EV) applications. While numerous studies have demonstrated the significant advantages of data-driven methods in SOH estimation, most rely on laboratory-standardized test data. This raises concerns about the generalization and robustness of the models under real-world operating conditions, where batteries undergo irregular driving patterns, incomplete charging cycles, and unpredictable environments. Notably, real-world EV data reflects the coupling between battery aging characteristics and actual operating conditions, providing an unprecedented perspective for developing SOH estimation models. This review provides a comprehensive and systematic overview of data-driven SOH estimation using real-world data, a topic that has received increasing attention but lacks a consolidated research framework. The paper begins by reviewing the established SOH estimation methodologies and points out the specific challenges arising from the transition to real-world data. It then probes practical issues across the pipeline: data pre-processing for anomalies, solutions for the lack of labels, feature extraction from complex operating data, machine learning model construction, and performance evaluation across various system deployments. Key insights are presented on how to handle noisy, unlabeled, and heterogeneous data using robust modeling strategies. Moreover, a valuable extension focusing on applying the advancements to battery reuse and recycling is discussed, with the goal of developing a whole-lifecycle health diagnosis framework. The paper concludes with promising prospects, encompassing open-source standardized dataset establishment, weakly supervised learning, physics-reinforced modeling, real-world deployment, and advanced sensing technology, emphasizing that real-world data makes the transition of data-driven methods from theoretical validation to industrial deployment promising. This paper aims to assist researchers and practitioners in navigating the complexities of real-world SOH estimation, accelerating collaborative innovation and industrial adoption in battery health management.
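As a point of reference for the pipeline the review surveys, the quantity being estimated is usually the capacity-based SOH, the ratio of current usable capacity to rated capacity. A minimal sketch (pack values hypothetical, not from the review):

```python
def soh_percent(current_capacity_ah: float, rated_capacity_ah: float) -> float:
    """State of health as the ratio of current usable capacity to rated
    capacity, expressed as a percentage. This is one common capacity-based
    SOH definition; data-driven estimators predict it from operating data."""
    return 100.0 * current_capacity_ah / rated_capacity_ah

# A hypothetical 60 Ah pack that now delivers 45 Ah is at 75% SOH.
print(soh_percent(45.0, 60.0))  # 75.0
```

The hard part in real-world data, as the review emphasizes, is obtaining `current_capacity_ah` at all: complete discharges rarely occur in the field, which is why label-generation strategies are treated as a pipeline stage of their own.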
Abstract: Hepatocellular carcinoma (HCC) is a leading cause of cancer-associated mortality worldwide. HCC is an inflammation-associated immunogenic cancer that frequently arises in chronically inflamed livers. Advanced HCC is managed with systemic therapies; the tyrosine kinase inhibitor (TKI) sorafenib has been used in the 1st-line setting since 2007. Immunotherapies have emerged as promising treatments across solid tumors, including HCC, for which immune checkpoint inhibitors (ICIs) are licensed in the 1st- and 2nd-line treatment settings. The treatment field of advanced HCC is continuously evolving. Several clinical trials are investigating novel ICI candidates as well as new ICI regimens in combination with other therapeutic modalities, including systemic agents such as other ICIs, TKIs, and anti-angiogenics. Novel immunotherapies, including adoptive cell transfer, vaccine-based approaches, and virotherapy, are also being brought to the fore. Yet, despite advances, several challenges persist. The lack of real-world data on the use of immunotherapy for advanced HCC in patients outside of clinical trials constitutes a main limitation hindering the breadth of application and the generalizability of data to this larger and more diverse patient cohort. Consequently, issues encountered in real-world practice include patient ineligibility for immunotherapy because of contraindications, comorbidities, or poor performance status; lack of response, efficacy, and safety data; and cost-effectiveness. Further real-world data from high-quality, large prospective cohort studies of immunotherapy in patients with advanced HCC are needed to aid evidence-based clinical decision-making. This review provides a critical and comprehensive overview of clinical trials and real-world data of immunotherapy for HCC, with a focus on ICIs, as well as novel immunotherapy strategies underway.
Abstract: Randomized clinical trials (RCTs) have long been recognized as the gold standard for regulatory approval in drug development. However, RCTs may not be feasible in some diseases and/or under certain situations, and findings from RCTs may not generalize to real-world patients in routine clinical practice. Real-world evidence (RWE), which is generated from various real-world data (RWD), has become increasingly important for drug development and clinical decision-making in the digital era. This paper describes RWD and real-world data studies (RWDSs), followed by the characteristics of and differences between RCTs and RWDSs. Furthermore, the challenges and limitations of RWD and RWE are discussed. Finally, this paper highlights that efforts must be made throughout RWE generation, from data collection/database selection and study design to statistical analysis and interpretation of the results, to minimize biases and confounding effects.
Abstract: Objective To study the research status, research hotspots, and development trends in the field of real-world data (RWD) through social network analysis and knowledge graph analysis. Methods RWD literature from the past 10 years was retrieved from CNKI, and bibliometric analysis was performed using UCINET and CiteSpace. Results and Conclusion The frequency and centrality of related keywords such as real-world study, hospital information system (HIS), drug combination, data mining, and TCM are high. The clusters labeled clinical medication and RWD contain more keywords. In the past 4 years, more articles have involved the keywords data specification, data authenticity, data security, and information security. Among them, compound Kushen injection, HIS database, and RWD are the top three keywords. Using HIS to study clinical medication, clinical characteristics, diseases, and injections is a long-term research hotspot for Chinese and Western medicine. Besides, research on RWD databases has shifted from construction to standardized collection and governance, which can make RWD effective. Data authenticity, data security, and information security will become the new hotspots in RWD research.
文摘With the rapid development of modern science and technology, traditional randomized controlled trials have become insufficient to meet current scientific research needs, particularly in the field of clinical research. The emergence of real-world data studies, which align more closely with actual clinical evidence, has garnered significant attention in recent years. The following is a brief overview of the specific utilization of real-world data in drug development, which often involves large sample sizes and analyses covering a relatively diverse population without strict inclusion and exclusion criteria. Real-world data often reflects real clinical practice: treatment options are chosen according to the actual conditions and willingness of patients rather than through random assignment. Analysis based on real-world data also focuses on endpoints highly relevant to clinical benefits and the quality of life of patients. The booming big data technology supports the utilization of real-world data to accelerate new drug development, serving as an important supplement to traditional clinical trials.
Funding: Supported by the Zhejiang Provincial Natural Science Foundation (No. ZCLY24H1601), the National Natural Science Foundation of China (No. 82403697), the Medical and Health Science and Technology Project of Zhejiang Province (No. 2025KY411), and the National Key R&D Program of China (No. 2022YFC2505100).
Abstract: Real-world studies (RWSs) have emerged as a transformative force in oncology research, complementing traditional randomized controlled trials (RCTs) by providing comprehensive insights into cancer care within routine clinical settings. This review examines the evolving landscape of RWSs in oncology, focusing on their implementation, methodological considerations, and impact on precision medicine. We systematically analyze how RWSs leverage diverse data sources, including electronic health records (EHRs), insurance claims, and patient registries, to generate evidence that bridges the gap between controlled clinical trials and real-world clinical practice. The review underscores the key contributions of RWSs, including capturing therapeutic outcomes in traditionally underrepresented populations, expanding drug indications, and evaluating long-term safety and effectiveness in routine clinical settings. While acknowledging significant challenges, including data quality variability and privacy concerns, we discuss how emerging technologies like artificial intelligence are helping to address these limitations. The integration of RWSs with traditional clinical research is revolutionizing the paradigm of precision oncology and enabling more personalized treatment approaches based on real-world evidence.
Funding: Supported by the State Administration of Traditional Chinese Medicine High-Level Chinese Medicine Key Discipline Construction Project (No. zyyzdxk-2023120) and the Joint Scientific and Technological Projects of the Department of Science and Technology, National Administration of Traditional Chinese Medicine (No. GZY-KJS-SD-2023-036).
Abstract: Objective To investigate the effectiveness of Xuanshen Yishen Decoction (XYD) in the treatment of hypertension. Methods Hospital electronic medical records from 2019–2023 were utilized to emulate a randomized pragmatic clinical trial. Hypertensive participants were eligible if they were aged ≥40 years with baseline systolic blood pressure (BP) ≥140 mm Hg. Patients treated with XYD plus an antihypertensive regimen were assigned to the treatment group, whereas those who followed only an antihypertensive regimen were assigned to the control group. The primary outcome was the attainment rate of intensive BP control at discharge, and the secondary outcome was the 6-month all-cause readmission rate. Results The study included 3,302 patients, comprising 2,943 individuals in the control group and 359 in the treatment group. Compared with the control group, a higher proportion of the treatment group achieved the target BP for intensive BP control [8.09% vs. 17.5%; odds ratio (OR) = 2.29, 95% confidence interval (CI) = 1.68 to 3.13; P < 0.001], particularly among individuals with high homocysteine levels (OR = 3.13; 95% CI = 1.72 to 5.71; P < 0.001; P for interaction = 0.041). Furthermore, the 6-month all-cause readmission rate in the treatment group was lower than in the control group (hazard ratio = 0.58; 95% CI = 0.36 to 0.91; P = 0.019), and the robustness of the results was confirmed by sensitivity analyses. Conclusions XYD could be a complementary therapy for intensive BP control. Our study offers real-world evidence and guides the choice of complementary and alternative therapies. (Registration No. ChiCTR2400086589)
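The odds ratio reported above is covariate-adjusted, so it cannot be reproduced from the abstract alone. As a hedged illustration of the underlying statistic, the sketch below computes an unadjusted OR with a Wald 95% CI from a 2x2 table; the counts are hypothetical, chosen only to roughly match the reported event rates (17.5% of 359 treated, 8.09% of 2,943 controls), and the crude OR therefore differs from the adjusted 2.29:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = events/non-events in the treatment group,
    c/d = events/non-events in the control group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 63/296 treated events/non-events, 238/2705 controls.
or_, lo, hi = odds_ratio_wald_ci(63, 296, 238, 2705)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```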
Funding: Supported in part by the Science and Technology Department of Sichuan Province (No. 2025ZNSFSC0427, No. 2024ZDZX0035), the Open Project Fund of the Vehicle Measurement, Control and Safety Key Laboratory of Sichuan Province (No. QCCK2024-004), and the Industrial and Educational Integration Project of Yibin (No. YB-XHU-20240001).
Abstract: The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is proposed to build a capacity prediction model, using feature maps from historical months to predict the battery capacity of future months, thus facilitating accurate forecasts. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
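The abstract does not spell out the transformed ampere-hour formula, but one common reading is: for each partial charge, the charged ampere-hours satisfy Ah ≈ capacity × ΔSOC, so a least-squares fit through the origin over a month of charge events yields the capacity label as the slope. A minimal sketch under that assumption (function name and data hypothetical):

```python
def capacity_from_partial_charges(delta_soc, charged_ah):
    """Estimate pack capacity (Ah) from partial-charge records.

    Each record gives the SOC swing (as a fraction) and the ampere-hours
    integrated during the charge. Since charged_ah ~ capacity * delta_soc,
    an ordinary least-squares fit through the origin returns the capacity."""
    num = sum(x * y for x, y in zip(delta_soc, charged_ah))
    den = sum(x * x for x in delta_soc)
    return num / den

# Hypothetical month of partial charges for a pack currently near 92 Ah
soc = [0.30, 0.55, 0.20, 0.45]
ah = [27.6, 50.6, 18.4, 41.4]
print(capacity_from_partial_charges(soc, ah))
```

Field records are noisy, so the slope acts as a robust monthly label even when no single charge spans the full SOC range.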
Funding: Noncommunicable Chronic Diseases-National Science and Technology Major Project (2023ZD0503906).
Abstract: Background Medical informatics has accumulated vast amounts of data for clinical diagnosis and treatment. However, limited access to follow-up data and the difficulty of integrating data across diverse platforms continue to pose significant barriers to clinical research progress. In response, our research team has developed a specialized clinical research database for cardiology, thereby establishing a comprehensive digital platform that facilitates both clinical decision-making and research endeavors. Methods The database incorporated actual clinical data from patients who received treatment at the Cardiovascular Medicine Department of the Chinese PLA General Hospital from 2012 to 2021. It included comprehensive data on patients' basic information, medical history, non-invasive imaging studies, and laboratory test results, as well as peri-procedural information related to interventional surgeries, extracted from the Hospital Information System. Additionally, an innovative artificial intelligence (AI)-powered interactive follow-up system was developed, ensuring that nearly all myocardial infarction patients received at least one post-discharge follow-up, thereby achieving comprehensive data management throughout the entire care continuum for high-risk patients. Results This database integrates extensive cross-sectional and longitudinal patient data, with a focus on higher-risk acute coronary syndrome patients. It achieves the integration of structured and unstructured clinical data, while innovatively incorporating AI and automatic speech recognition technologies to enhance data integration and workflow efficiency. It creates a comprehensive patient view, thereby improving diagnostic and follow-up quality, and provides high-quality data to support clinical research. Despite limitations in unstructured data standardization and biological sample integrity, the database's development is accompanied by ongoing optimization efforts. Conclusion The cardiovascular specialty clinical database is a comprehensive digital archive integrating clinical treatment and research, which facilitates the digital and intelligent transformation of clinical diagnosis and treatment processes. It supports clinical decision-making and offers data support and potential research directions for the specialized management of cardiovascular diseases.
Abstract: Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big-data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
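The trade-off the abstract describes can be sketched schema-agnostically: with a bridge table, a fact-centric lookup needs an extra hop through the bridge, whereas the array design embeds the dimension keys in the fact row. All names below are hypothetical, not the paper's schema:

```python
# Bridge-table style: each fact row carries a group key, and a separate
# bridge maps group keys to the related dimension keys (e.g. stations).
fact_bridge = {"quake_1": "g1", "quake_2": "g2"}
bridge = {"g1": [101, 102], "g2": [102, 103]}

# Array-based style: the dimension keys are embedded directly in the
# fact row, so a fact-centric query needs no intermediate lookup.
fact_array = {"quake_1": [101, 102], "quake_2": [102, 103]}

def stations_bridge(quake_id):
    return bridge[fact_bridge[quake_id]]  # two hops

def stations_array(quake_id):
    return fact_array[quake_id]           # one hop

print(stations_bridge("quake_1"), stations_array("quake_1"))
```

The reverse, dimension-centric question ("which quakes involve station 102?") must scan every array in the array design but is a direct bridge lookup in the other, which is exactly why the paper's hybrid schema keeps both representations.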
Funding: Supported by the National Natural Science Foundation of China (grant number 81972696).
Abstract: Background: The aim of this study was to develop and validate a fatty liver index based on laboratory data (FLI-L) and a fatty liver index based on both physical examination and laboratory data (FLI-PL), in the hope of providing a more convenient, accurate, and quantitative method for the diagnosis of fatty liver disease. Methods: The study included data for 12,391 patients obtained from the Third Medical Center of the Chinese PLA General Hospital. FLI-L and FLI-PL were developed using binary logistic regression analysis. The diagnostic performance of FLI-L and FLI-PL was evaluated using the area under the receiver-operating characteristic curve (AUC-ROC), with sensitivity, specificity, and positive and negative likelihood ratios. FLI-L and FLI-PL were subsequently validated in 3,170 patients from the same hospital. Results: The AUC-ROC for FLI-L was 0.876 with a cut-off value of 55.03. Sensitivity was 81.35% and specificity was 78.28%, with an accuracy of 79.99% for discriminating between patients with and without fatty liver disease. The AUC-ROC for FLI-PL was 0.902 with a cut-off value of 20.51. Sensitivity was 85.10% and specificity was 79.64%. FLI-PL classified 91.65% of patients correctly. Conclusion: FLI-L and FLI-PL allow simple and accurate quantitative diagnosis of fatty liver disease. This study provides evidence to support the use of these indices in clinical management.
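For readers unfamiliar with how a cut-off turns a continuous index into the sensitivity and specificity figures quoted above, a minimal sketch (toy scores, not the study's data):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a diagnostic index at a given cut-off
    (labels: 1 = disease present, 0 = disease absent; index >= cutoff is
    called positive)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Six hypothetical patients scored by an index, evaluated at cut-off 55.03
sens, spec = sens_spec([60, 70, 50, 40, 30, 58], [1, 1, 1, 0, 0, 0], 55.03)
print(sens, spec)
```

Sweeping the cut-off over all observed scores and plotting sensitivity against 1 - specificity yields the ROC curve whose area (AUC-ROC) the study reports.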
Funding: Supported by the International Partnership Program of the Chinese Academy of Sciences (170GJHZ2023074GC), the National Natural Science Foundation of China (42425706 and 42488201), the National Key Research and Development Program of China (2024YFF0807902), the Beijing Natural Science Foundation (8242041), and the China Postdoctoral Science Foundation (2025M770353).
Abstract: Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices; in some growing-season months, correlation coefficients, both in magnitude and sign, differed significantly from those based on actual data. Selecting different alternative climate datasets can lead to biased results in assessing forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
Funding: Supported by the National Key R&D Program of China [Grant No. 2023YFF0713600], the National Natural Science Foundation of China [Grant No. 62275062], the Project of the Shandong Innovation and Startup Community of High-end Medical Apparatus and Instruments [Grant Nos. 2023-SGTTXM-002 and 2024-SGTTXM-005], the Shandong Province Technology Innovation Guidance Plan (Central Leading Local Science and Technology Development Fund) [Grant No. YDZX2023115], the Taishan Scholar Special Funding Project of Shandong Province, and the Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai [Grant No. ZL202402].
Abstract: Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging results in significant data volumes, limiting the data storage, transmission, and processing efficiency of the system. Therefore, there is an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method is built on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of the reconstructed images reached 38.60 and 0.9583, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost, enhancing computational efficiency and image quality. These advantages support the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
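The abstract does not detail the wavelet hard-coding stage, but the generic mechanism is standard: transform, zero out small detail coefficients (the highly compressible part), and invert. A minimal one-dimensional Haar sketch of that idea (not the paper's actual transform or parameters):

```python
SQRT2 = 2 ** 0.5

def haar_1d(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    approx = [(a + b) / SQRT2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / SQRT2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / SQRT2, (a - d) / SQRT2]
    return out

def hard_threshold(coeffs, t):
    """Zero out small coefficients: runs of zeros compress well, and a
    learned soft decoder can later restore part of the discarded detail."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

signal = [4.0, 4.0, 2.0, 0.0]
approx, detail = haar_1d(signal)
restored = inverse_haar_1d(approx, hard_threshold(detail, 0.5))
print(restored)
```

In the paper's framework the Transformer-based decoder takes over where thresholding loses information, which is what allows the reported 1:40 ratios without the usual detail loss.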
Abstract: Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-based encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
Abstract: With the popularization of new technologies, telephone fraud has become a primary means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificates through a voice channel so the recipient can verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech signals and reject input waveforms. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them more easily transmitted and decoded in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in terms of robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root.
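The abstract names the three modulated properties (frequency, shape, phase) but not the exact 3-bit mapping, so the sketch below is a hypothetical illustration: it generates a linear-frequency-modulated (chirp) symbol and maps one bit each to sweep direction, frequency band, and initial phase. Sample rate, bands, and symbol length are all assumptions:

```python
import math

def lfm_symbol(f0, f1, phase, n=64, fs=8000.0):
    """Samples of one linear-frequency-modulated (chirp) symbol whose
    instantaneous frequency sweeps from f0 to f1 Hz over n/fs seconds."""
    duration = n / fs
    k = (f1 - f0) / duration  # chirp rate in Hz/s
    return [math.sin(phase + 2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]

def encode_symbol(bits):
    """Hypothetical 3-bit mapping: bit 0 = up/down sweep, bit 1 = band,
    bit 2 = initial phase (0 or pi)."""
    f_lo, f_hi = (600.0, 1200.0) if bits[1] == 0 else (1400.0, 2000.0)
    f0, f1 = (f_lo, f_hi) if bits[0] == 0 else (f_hi, f_lo)
    phase = 0.0 if bits[2] == 0 else math.pi
    return lfm_symbol(f0, f1, phase)

waveform = encode_symbol((1, 0, 1))
```

Keeping the sweep inside the voiceband (roughly 300-3400 Hz) is what lets such symbols survive the telephone channel that the paper targets.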
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area-under-the-curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
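The abstract lists the loss components without their formulas, so the sketch below is only one plausible reading: a masked MSE scored on the originally missing entries plus a weighted variance term on the reconstruction. The noise-aware regularizer is omitted, and the sign and weighting of the variance term are assumptions, not the paper's definition:

```python
def composite_imputation_loss(x_hat, x_true, missing_mask, lam_var=0.1):
    """Sketch of a composite imputation loss for one sample (flattened):
    a masked MSE over entries where missing_mask == 1, plus a variance
    term on the reconstruction (the paper's noise-aware term is omitted)."""
    m = sum(missing_mask)
    masked_mse = sum(w * (a - b) ** 2
                     for a, b, w in zip(x_hat, x_true, missing_mask)) / max(m, 1)
    mean = sum(x_hat) / len(x_hat)
    variance = sum((a - mean) ** 2 for a in x_hat) / len(x_hat)
    return masked_mse + lam_var * variance

# Reconstruction [1, 2, 3] against ground truth [1, 2, 5]; only the last
# two entries were originally missing and contribute to the masked MSE.
loss = composite_imputation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0], [0, 1, 1])
print(loss)
```

In training, such a loss is evaluated on artificially masked entries whose true values are known, which is what makes the masked term supervisable despite real missingness.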
Funding: Supported in part by the Research Fund of the Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education (EBME25-F-08).
Abstract: Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed–Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code rate adjustment to preserve soundness while lowering communication. This paper formalizes opening consistency and proves security with bounded error in the random oracle model, giving polylogarithmic verifier queries and no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
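The reason random codeword sampling works for light clients is that confidence grows geometrically with the number of queries: with erasure coding, an adversary must withhold a large fraction of symbols to make a block unrecoverable, and each uniform sample independently risks hitting a withheld symbol. A minimal sketch of that calculation (the numbers are illustrative, not ISTIRDA's parameters):

```python
def das_confidence(withheld_fraction: float, num_queries: int) -> float:
    """Probability that at least one of num_queries uniform, independent
    samples (with replacement) lands on a withheld codeword symbol."""
    return 1.0 - (1.0 - withheld_fraction) ** num_queries

# If half of the coded symbols must be withheld to block decoding, ten
# samples already detect the attack with probability 1 - 2**-10.
print(das_confidence(0.5, 10))  # 0.9990234375
```

This is why the scheme's polylogarithmic verifier-query count suffices: only a handful of samples per client is needed for high confidence, and the proof machinery exists to guarantee that the sampled symbols really belong to a valid codeword.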
Abstract: With the accelerating aging of China's population, demand for community elderly care services has become increasingly diversified and personalized. However, problems such as an insufficient total supply of care resources, uneven distribution, and pronounced supply-demand contradictions have seriously affected service quality. Big data technology, with core strengths in data collection, analysis and mining, and accurate prediction, offers a new solution for the allocation of community elderly care service resources. This paper systematically examines the application value of big data technology in allocating community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimization strategies such as building a big data analysis platform and accurately identifying the elderly's care needs, aiming to provide actionable guidance for building community elderly care service systems, advance the goal of "adequate support and proper care for the elderly", and promote the high-quality development of China's elderly care service industry.
Funding: supported by the Natural Science Foundation of Qinghai Province (2025-ZJ-994M); the Scientific Research Innovation Capability Support Project for Young Faculty (SRICSPYF-BS2025007); and the National Natural Science Foundation of China (62566050).
Abstract: Multivariate anomaly detection plays a critical role in maintaining the stable operation of information systems. In existing research, however, multivariate data are often affected by various factors during collection, resulting in temporal misalignment or displacement. As a consequence, node representations carry substantial noise, which reduces the adaptability of the multivariate coupled network structure and in turn degrades anomaly detection performance. Accordingly, this study proposes a novel multivariate anomaly detection model grounded in graph structure learning. First, a recommendation strategy identifies strongly coupled variable pairs, which are then used to construct a recommendation-driven multivariate coupling network. Second, a multi-channel graph encoding layer dynamically optimizes the structural properties of the coupling network, while a multi-head attention mechanism enhances the spatial characteristics of the multivariate data. Finally, unsupervised anomaly detection is performed using a dynamic threshold selection algorithm. Experimental results demonstrate that effectively integrating the structural and spatial features of multivariate data significantly mitigates the spurious anomalies caused by temporal dependency misalignment.
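The final unsupervised step relies on a dynamic threshold over anomaly scores. One common family of such rules is a sliding-window statistic of the kind sketched below; the window length and multiplier are hypothetical choices, and the paper's actual selection algorithm may differ:

```python
import numpy as np

def dynamic_threshold(scores, window=50, c=3.0):
    """Flag time steps whose anomaly score exceeds mean + c * std of the
    preceding `window` scores (a simple sliding-statistics rule).
    scores: 1-D array of per-step anomaly scores.
    Returns a boolean array of the same length."""
    flags = np.zeros(len(scores), dtype=bool)
    for t in range(window, len(scores)):
        ref = scores[t - window:t]          # recent history only
        flags[t] = scores[t] > ref.mean() + c * ref.std()
    return flags
```

Because the threshold is recomputed from recent history, it adapts to slow drifts in the score distribution while still reacting to sudden spikes, which is the property a fixed global threshold lacks.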
Funding: supported by the National Natural Science Foundation of China (No. 62171387); the Science and Technology Program of Sichuan Province (No. 2024NSFSC0468); and the China Postdoctoral Science Foundation (No. 2019M663475).
Abstract: As an important resource in data links, time slots should be strategically allocated to enhance transmission efficiency and resist eavesdropping, especially given the tremendous increase in the number of nodes and the diversity of communication needs. It is crucial to design control sequences with robust randomness and conflict-freeness to properly support differentiated access control in data links. In this paper, we propose a hierarchical access control scheme based on control sequences that achieves high time-slot utilization and differentiated access control. A theoretical bound on the hierarchical control sequence set is derived to characterize the constraints on the parameters of the sequence set. Moreover, two classes of optimal hierarchical control sequence sets meeting the theoretical bound are constructed, both of which enable the scheme to achieve maximum time-slot utilization. Compared with a fixed time-slot allocation scheme, our scheme reduces the symbol error rate by up to 9%, indicating a significant improvement in anti-interference and anti-eavesdropping capability.
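Conflict-freeness of a candidate control sequence set can be checked directly by counting slot collisions between every pair of sequences. The sequence construction itself is not reproduced in the abstract, so the sequences in the usage note are toy examples rather than the paper's optimal sets:

```python
from itertools import combinations

def max_slot_conflicts(sequences):
    """Largest number of frames in which any two control sequences select
    the same time slot (pairwise Hamming correlation at zero shift).
    A conflict-free sequence set returns 0."""
    worst = 0
    for a, b in combinations(sequences, 2):
        worst = max(worst, sum(x == y for x, y in zip(a, b)))
    return worst
```

For instance, the rows of a Latin square, e.g. [0, 1, 2], [1, 2, 0], [2, 0, 1], assign each node a distinct slot in every frame, so the check returns 0; any two sequences that reuse a slot in the same frame raise the count and signal a potential access collision.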