Many complex systems are frequently subject to the influence of uncertain disturbances, which can exert a profound effect on critical transitions (CTs), potentially resulting in catastrophic consequences. Consequently, it is of utmost importance to provide warnings for noise-induced CTs in various applications. Although capturing generic symptoms of transition behaviors from observational and simulated data poses a challenging problem, this work attempts to extract information regarding CTs from simulated data of a Gaussian white noise-induced tri-stable system. Using the extended dynamic mode decomposition (EDMD) algorithm, we initially obtain finite-dimensional approximations of both the stochastic Koopman operator and its generator. Subsequently, the drift parameters and the noise intensity of the system are identified from the simulated data. Utilizing the identified system, the parameter-dependent basin of the unsafe regime (PDBUR) is quantified, enabling data-driven early warning of Gaussian white noise-induced CTs. Finally, an error analysis is carried out to verify the effectiveness of the data-driven results. Our findings may serve as a paradigm for understanding and predicting noise-induced CTs in complex systems based on data.
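The EDMD step described above (approximating the stochastic Koopman operator and its generator from trajectory data, then reading off the drift) can be illustrated on a toy one-dimensional SDE. The bistable drift, monomial dictionary, and all numerical values below are illustrative assumptions, not the paper's tri-stable system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D SDE: dx = (x - x^3) dt + sigma dW (a bistable stand-in for the
# paper's tri-stable system; drift, sigma, and dt are assumptions).
dt, sigma, n = 1e-3, 0.5, 400_000
x = np.empty(n)
x[0] = 0.1
xi = rng.standard_normal(n - 1)
for k in range(n - 1):
    x[k + 1] = x[k] + (x[k] - x[k] ** 3) * dt + sigma * np.sqrt(dt) * xi[k]

# Dictionary of observables: monomials 1, x, x^2, x^3.
def psi(v):
    return np.vander(v, 4, increasing=True)

PX, PY = psi(x[:-1]), psi(x[1:])

# EDMD: least-squares approximation of the stochastic Koopman operator on the
# dictionary span; the generator is approximated by L = (K - I) / dt.
K = np.linalg.lstsq(PX, PY, rcond=None)[0]
L = (K - np.eye(4)) / dt

# Applied to the observable g(x) = x, the generator returns the drift, so the
# second column of L holds the drift coefficients in the monomial basis,
# ideally close to [0, 1, 0, -1].
drift_coeffs = L[:, 1]
print("identified drift coefficients:", np.round(drift_coeffs, 2))
```

With enough data the identified linear and cubic coefficients approach the true drift; in the paper the same fit also yields the noise intensity from the generator's action on quadratic observables.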
Remote sensing plays a pivotal role in forest inventory by enabling efficient large-scale monitoring while minimizing fieldwork costs. However, missing values pose a critical challenge in remote sensing applications, as ignoring or mishandling such data gaps can introduce systematic bias into the estimation of target variables for natural resource monitoring. This can lead to cascading errors that propagate through forest and ecosystem management decisions, ultimately hindering progress toward sustainable forest management, biodiversity conservation, and climate change mitigation strategies. This study aims to propose and demonstrate a procedure that employs hybrid estimators to address the limitations of missing remotely sensed data in forest inventory, using Landsat 7 ETM+ SLC-off data as an archived source for forest resource monitoring as a case in point. We compared forest inventory estimates from the hybrid estimator with those from a conventional model-based (CMB) estimator using Sentinel-2 data without missing values. Monte Carlo simulations revealed three key findings: (1) the hybrid estimator, leveraging remote sensing data with missing values, represented by Landsat 7 ETM+ SLC-off data, achieved a sampling precision of over 90%, meeting China's national standard for the National Forest Inventory (NFI); (2) the hybrid estimator demonstrated efficiency comparable to that of the CMB estimator; (3) the uncertainty associated with hybrid estimators was dominated primarily by model parameter estimation, which could be effectively mitigated by slightly increasing the training sample size or refining the model specification. Overall, in forest inventory, the hybrid estimator can surmount the limitations posed by missing values in remotely sensed auxiliary data, effectively balancing cost-effectiveness and flexibility.
Data production elements are driving profound transformations in the real economy across production objects, methods, and tools, generating significant economic effects such as industrial structure upgrading. This paper aims to reveal the impact mechanism of data elements on the “three transformations” (high-end, intelligent, and green) in the manufacturing sector, theoretically elucidating the intrinsic mechanisms by which data elements influence these transformations. The study finds that data elements significantly enhance the high-end, intelligent, and green levels of China's manufacturing industry. In terms of impact pathways, data elements primarily influence the development of high-tech industries and overall green technological innovation, thereby affecting the high-end, intelligent, and green transformation of the industry.
Semantic communication (SemCom) aims to achieve high-fidelity information delivery with low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request (HARQ)) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism so as to more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin-Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which digs out the inner topological and geometric information of images to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth, and manifest the superiority of the TDA-based error detection method in image transmission.
Cancer deaths and new cases worldwide are projected to rise by 47% by 2040, with transitioning countries experiencing an even higher increase of up to 95%. Tumor severity is profoundly influenced by the timing, accuracy, and stage of diagnosis, which directly impacts clinical decision-making. Various biological entities, including genes, proteins, mRNAs, miRNAs, and metabolites, contribute to cancer development. The emergence of multi-omics technologies has transformed cancer research by revealing molecular alterations across multiple biological layers. This integrative approach supports the notion that cancer is fundamentally driven by such alterations, enabling the discovery of molecular signatures for precision oncology. This review explores the role of AI-driven multi-omics analyses in cancer medicine, emphasizing their potential to identify novel biomarkers and therapeutic targets, enhance understanding of tumor biology, and address integration challenges in clinical workflows. Network biology analyses identified ERBB2, KRAS, and TP53 as top hub genes in lung cancer based on Maximal Clique Centrality (MCC) scores. In contrast, TP53, ERBB2, ESR1, MYC, and BRCA1 emerged as central regulators in breast cancer, linked to cell proliferation, hormonal signaling, and genomic stability. The review also discusses how specific artificial intelligence (AI) algorithms can streamline the integration of heterogeneous datasets, facilitate interpretation of the tumor microenvironment, and support data-driven clinical strategies.
BACKGROUND Hepatocellular carcinoma (HCC) remains a significant public health concern in South Korea even though incidence rates are declining. While medical travel for cancer treatment is common, its patterns and influencing factors for patients with HCC are unknown. AIM To assess medical travel patterns and determinants, and their policy implications, among patients with newly diagnosed HCC in South Korea. METHODS This retrospective cohort study used the National Health Insurance Service database to identify patients with newly diagnosed HCC from 2013 to 2021. Medical travel was defined as receiving initial treatment outside one's residential region. Patient characteristics and regional trends were analyzed, and factors influencing medical travel were identified using logistic regression analysis. RESULTS Among 64,808 patients, 52.4% received treatment in the capital. This proportion increased to 67.4% when including the surrounding metropolitan area. Medical travel was significantly more common among younger and wealthier patients. Patients with a greater comorbidity burden or liver cirrhosis were less likely to travel. While geographic distance influenced travel patterns, high-volume academic centers in the capital attracted patients nationwide regardless of proximity. CONCLUSION This nationwide study highlighted the centralization of HCC care in the capital. This observation indicates that regional cancer hubs should be strengthened and promoted for equitable healthcare access.
The widespread usage of rechargeable batteries in portable devices, electric vehicles, and energy storage systems has underscored the importance of accurately predicting their lifetimes. However, data scarcity often limits the accuracy of prediction models, a problem exacerbated by incomplete data arising from issues such as sensor failures. To address these challenges, we propose a novel approach that accommodates data insufficiency by extracting additional information from incomplete data samples, which are usually discarded in existing studies. To fully unleash the predictive power of incomplete data, we investigate the Multiple Imputation by Chained Equations (MICE) method, which diversifies the training data by exploring potential data patterns. The experimental results demonstrate that the proposed method significantly outperforms the baselines in most of the considered scenarios, reducing the prediction root mean square error (RMSE) by up to 18.9%. Furthermore, we also observe that incorporating incomplete data benefits the explainability of the prediction model by facilitating feature selection.
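The chained-equations idea behind MICE (regress each incomplete feature on the others and refill iteratively) can be sketched with a single deterministic imputation chain; full MICE would draw multiple stochastic imputations. The toy correlated features below are assumptions, not the battery dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy correlated feature matrix with entries missing at random.
n = 300
t = rng.normal(size=n)
X_true = np.column_stack([t, 2 * t, -t]) + 0.1 * rng.normal(size=(n, 3))
mask = rng.random(X_true.shape) < 0.15
X_miss = np.where(mask, np.nan, X_true)

def chained_impute(Xm, n_iter=10):
    """One chained-equations pass: start from column means, then repeatedly
    regress each incomplete column on the other columns and refill it."""
    X = Xm.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(Xm, axis=0), np.nonzero(miss)[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j],
                                   rcond=None)[0]
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

X_imp = chained_impute(X_miss)
mean_fill = np.where(mask, np.nanmean(X_miss, axis=0), X_true)
rmse_mean = np.sqrt(np.mean((mean_fill[mask] - X_true[mask]) ** 2))
rmse_mice = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
print(f"mean-fill RMSE {rmse_mean:.3f} vs chained RMSE {rmse_mice:.3f}")
```

Because the columns are correlated, the chained regressions recover missing entries far better than column-mean filling, which is why such samples need not be discarded.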
Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial poses. In such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, providing a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine various datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying research gaps in the field of face detection and recognition and outlining potential directions for future research.
Electric Vehicle Charging Systems (EVCS) are increasingly vulnerable to cybersecurity threats as they integrate deeply into smart grids and Internet of Things (IoT) environments, raising significant security challenges. Most existing research primarily emphasizes network-level anomaly detection, leaving critical vulnerabilities at the host level underexplored. This study introduces a novel forensic analysis framework leveraging host-level data, including system logs, kernel events, and Hardware Performance Counters (HPC), to detect and analyze sophisticated cyberattacks such as cryptojacking, Denial-of-Service (DoS), and reconnaissance activities targeting EVCS. Using comprehensive forensic analysis and machine learning models, the proposed framework significantly outperforms existing methods, achieving an accuracy of 98.81%. The findings offer insights into distinct behavioral signatures associated with specific cyber threats, enabling improved cybersecurity strategies and actionable recommendations for robust EVCS infrastructure protection.
Accurate capacity and State of Charge (SOC) estimation are crucial for ensuring the safety and longevity of lithium-ion batteries in electric vehicles. This study examines ten machine learning architectures, including Deep Belief Network (DBN), Bidirectional Recurrent Neural Network (BiDirRNN), Gated Recurrent Unit (GRU), and others, using the NASA B0005 dataset of 591,458 instances. Results indicate that the DBN excels in capacity estimation, achieving orders-of-magnitude lower error values and explaining over 99.97% of the predicted variable's variance. When computational efficiency is paramount, the Deep Neural Network (DNN) offers a strong alternative, delivering near-competitive accuracy with significantly reduced prediction times. The GRU achieves the best overall performance for SOC estimation, attaining an R² of 0.9999, while the BiDirRNN provides a marginally lower error at a slightly higher computational cost. In contrast, Convolutional Neural Networks (CNN) and Radial Basis Function Networks (RBFN) exhibit relatively high error rates, making them less viable for real-world battery management. Analyses of error distributions reveal that the top-performing models cluster most predictions within tight bounds, limiting the risk of overcharging or deep discharging. These findings highlight the trade-off between accuracy and computational overhead, offering valuable guidance for battery management system (BMS) designers seeking optimal performance under constrained resources. Future work may further explore advanced data augmentation and domain adaptation techniques to enhance these models' robustness in diverse operating conditions.
Metaheuristic optimization methods are iterative search processes that aim to efficiently solve complex optimization problems. They explore the solution space efficiently, often without using gradient information, and are inspired by biological and socially motivated heuristics. Metaheuristic optimization algorithms are increasingly applied to complex feature selection problems in high-dimensional medical datasets. Among these, Teaching-Learning-Based Optimization (TLBO) has proven effective for continuous design tasks by balancing exploration and exploitation phases. However, its binary version (BTLBO) suffers from limited exploitation ability, often converging prematurely or getting trapped in local optima, particularly when applied to discrete feature selection tasks. Previous studies reported that BTLBO yields lower classification accuracy and higher feature subset variance compared with other hybrid methods in benchmark tests, motivating the development of hybrid approaches. This study proposes a novel hybrid algorithm, BTLBO-Cheetah Optimizer (BTLBO-CO), which integrates the global exploration strength of BTLBO with the local exploitation efficiency of the Cheetah Optimizer (CO). The objective is to enhance the feature selection process for cancer classification tasks involving high-dimensional data. The proposed BTLBO-CO algorithm was evaluated on six benchmark cancer datasets: 11 Tumors (T), Lung Cancer (LUC), Leukemia (LEU), Small Round Blue Cell Tumor (SRBCT, SR), Diffuse Large B-cell Lymphoma (DLBCL, DL), and Prostate Tumor (PT). The results demonstrate superior classification accuracy across all six datasets, achieving 93.71%, 96.12%, 98.13%, 97.11%, 98.44%, and 98.84%, respectively. These results validate the effectiveness of the hybrid approach in addressing diverse feature selection challenges using a Support Vector Machine (SVM) classifier.
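A minimal flavor of binary TLBO for feature selection can be sketched as follows. The synthetic fitness function stands in for the SVM accuracy used in the paper, the sigmoid transfer and greedy acceptance are common BTLBO design choices, and the CO exploitation phase is omitted; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 20 candidate features, the first 5 carry signal; the
# fitness mimics "classifier accuracy minus a subset-size penalty".
d, pop_size = 20, 12
informative = np.arange(5)

def fitness(mask):
    return mask[informative].sum() / len(informative) - 0.02 * mask.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

P = rng.random((pop_size, d)) < 0.5          # population of binary masks
for _ in range(60):
    fit = np.array([fitness(m) for m in P])
    teacher = P[fit.argmax()].astype(float)  # best learner acts as teacher
    mean = P.mean(axis=0)
    for i in range(pop_size):
        tf = rng.integers(1, 3)              # teaching factor in {1, 2}
        step = rng.random(d) * (teacher - tf * mean)  # teacher-phase move
        # Sigmoid transfer maps the continuous move to bit-flip probabilities.
        cand = rng.random(d) < sigmoid(4.0 * (P[i] - 0.5) + step)
        if fitness(cand) >= fit[i]:          # greedy acceptance
            P[i] = cand

best = max(P, key=fitness)
print("selected features:", np.flatnonzero(best))
```

In the hybrid BTLBO-CO, a local-search phase in the spirit of CO would refine the best masks between such teacher/learner updates.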
This paper studies certain estimates for the lower bound of the distance between unitary orbits of normal elements. We show that the distance between unitary orbits of normal elements of simple C*-algebras of tracial rank no more than k has a lower bound. Furthermore, if k ≤ 1 and the normal elements commute, then the lower bound can be improved. Another result establishes a connection involving the spectral distance operator Dc between a C*-algebra of stable rank one and its hereditary C*-subalgebra.
In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
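The structure of such an impulsive model can be illustrated concretely. Between impulses the logistic equation has a closed-form solution, so a logistic system with a proportional harvest every T time units reduces to iterating a stroboscopic map whose positive fixed point is the periodic solution. The parameter values below are illustrative, and this simple proportional harvest is not age-structured:

```python
import numpy as np

# Logistic growth has a closed form between impulses, so the impulsively
# harvested system  x' = r x (1 - x/K),  x(nT+) = (1 - h) x(nT)  reduces to
# iterating a stroboscopic map. Parameters are illustrative.
r, K, T, h = 1.0, 10.0, 1.0, 0.3

def logistic_flow(x0, t):
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def stroboscopic(x):                     # state just after each harvest impulse
    return (1.0 - h) * logistic_flow(x, T)

# Persistence condition for a positive periodic solution: (1 - h) e^{rT} > 1.
assert (1.0 - h) * np.exp(r * T) > 1.0

x = 0.5
for _ in range(200):
    x = stroboscopic(x)
x_star = x                               # numerically converged fixed point

# Analytic fixed point of the stroboscopic map, for comparison.
E = np.exp(r * T)
x_exact = K * ((1.0 - h) * E - 1.0) / (E - 1.0)
print(f"periodic solution level: {x_star:.4f} (analytic {x_exact:.4f})")
```

When the persistence condition fails, the iteration instead converges to the trivial solution, mirroring the stability dichotomy proved in the paper.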
Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review synthesizes recent progress in data-efficient deep learning approaches for addressing such “cold-start” forecasting problems. It primarily covers three interrelated domains (solar photovoltaic (PV), wind power, and electrical load forecasting) where data scarcity and operational variability are most critical, while also including representative studies on hydropower and carbon emission prediction to provide a broader systems perspective. To this end, we examined trends from over 150 predominantly peer-reviewed studies published between 2019 and mid-2025, highlighting advances in zero-shot and few-shot meta-learning frameworks that enable rapid model adaptation with minimal labeled data. Moreover, transfer learning approaches combined with spatiotemporal graph neural networks have been employed to transfer knowledge from existing energy assets to new, data-sparse environments, effectively capturing hidden dependencies among geographic features, meteorological dynamics, and grid structures. Synthetic data generation has further proven valuable for expanding training samples and mitigating overfitting in cold-start scenarios. In addition, large language models and explainable artificial intelligence (XAI), notably conversational XAI systems, have been used to interpret and communicate complex model behaviors in accessible terms, fostering operator trust from the earliest deployment stages. By consolidating methodological advances, unresolved challenges, and open-source resources, this review provides a coherent overview of deep learning strategies that can shorten the data-sparse ramp-up period of new energy infrastructures and accelerate the transition toward resilient, low-carbon electricity grids.
Optical coherence tomography (OCT), particularly Swept-Source OCT, is widely employed in medical diagnostics and industrial inspection owing to its high-resolution imaging capabilities. However, Swept-Source OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, posing challenges to real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that integrates stripe-artifact suppression and parallel computing on a graphics processing unit (GPU). The approach employs a frequency-domain filtering algorithm with adaptive suppression parameters, dynamically adjusted through an image quality evaluation function and optimized using a convolutional neural network for complex frequency-domain feature learning. Additionally, a GPU-integrated 3D reconstruction framework is developed, enhancing data processing throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On an RTX 4090 platform, the proposed system achieved an end-to-end delay of 94.36 ms, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
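The core frequency-domain idea (stripe artifacts concentrate along one axis of the 2-D spectrum, so a narrow notch filter removes them) can be sketched on a synthetic image; the adaptive parameter tuning and CNN optimization from the paper are omitted, and all values are illustrative:

```python
import numpy as np

# Synthetic stand-in for an OCT slice: smooth structure plus additive
# column-direction stripes.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
clean = np.exp(-((yy - 64.0) ** 2 + (xx - 64.0) ** 2) / 800.0)
stripes = 0.5 * np.sin(2 * np.pi * xx * 12 / w)
noisy = clean + stripes

# Vertical stripes concentrate on the zero-vertical-frequency row of the
# shifted 2-D spectrum; a narrow notch there removes them, while a small
# window around DC preserves overall brightness and coarse structure.
F = np.fft.fftshift(np.fft.fft2(noisy))
cy, cx = h // 2, w // 2
notch = np.ones((h, w))
notch[cy - 1:cy + 2, :] = 0.0              # suppress the stripe row ...
notch[cy - 1:cy + 2, cx - 2:cx + 3] = 1.0  # ... but keep the DC neighborhood
restored = np.real(np.fft.ifft2(np.fft.ifftshift(F * notch)))

err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((restored - clean) ** 2))
print(f"RMSE before {err_before:.3f} -> after {err_after:.3f}")
```

The paper's contribution is to tune the notch width and strength adaptively per volume and to run the whole pipeline on the GPU; the fixed notch above only illustrates the filtering principle.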
Wind turbine blade defect detection faces persistent challenges in separating small, low-contrast surface faults from complex backgrounds while maintaining reliability under variable illumination and viewpoints. Conventional image-processing pipelines struggle with scalability and robustness, and recent deep learning methods remain sensitive to class imbalance and acquisition variability. This paper introduces TurbineBladeDetNet, a convolutional architecture combining dual-attention mechanisms with multi-path feature extraction for detecting five distinct blade fault types. Our approach employs both channel-wise and spatial attention modules alongside an Albumentations-driven augmentation strategy to handle dataset imbalance and capture condition variability. The model achieves 97.14% accuracy, 98.65% precision, and 98.68% recall, yielding a 98.66% F1-score with a 0.0110 s inference time. Class-specific analysis shows uniformly high sensitivity and specificity; lightning damage reaches 99.80% for sensitivity, precision, and F1-score, and cracks achieve perfect precision and specificity with a 98.94% F1-score. Comparative evaluation against recent wind turbine inspection approaches indicates higher performance in both accuracy and F1-score. The resulting balance of sensitivity and specificity limits both missed defects and false alarms, supporting reliable deployment in routine unmanned aerial vehicle (UAV) inspection.
In recent years, three-dimensional reconstruction technologies that employ multiple cameras have continued to evolve significantly, enabling remote collaboration among users in extended reality (XR) environments. In addition, methods for deploying multiple cameras for motion capture of users (e.g., performers) are widely used in computer graphics. As the need to minimize and optimize the number of cameras grows in order to reduce costs, various technologies and research approaches focused on Optimal Camera Placement (OCP) are continually being proposed. However, as most existing studies assume homogeneous camera setups, there is a growing demand for studies on heterogeneous camera setups. For instance, technical demands keep emerging in scenarios with minimal camera configurations, especially regarding cost factors, the physical placement of cameras given the spatial structure, and image capture strategies for heterogeneous cameras such as high-resolution RGB cameras and depth cameras. In this study, we propose a pre-visualization and simulation method for the optimal placement of heterogeneous cameras in XR environments, accounting for both the specifications of heterogeneous cameras (e.g., field of view) and the physical configuration (e.g., wall configuration) of real-world spaces. The proposed method performs a visibility analysis by considering each camera's field-of-view volume, resolution, and unique characteristics, along with physical-space constraints. This enables the optimal position and rotation of each camera to be recommended, along with the minimum number of cameras required. In our experiments with heterogeneous camera combinations, the proposed method achieved 81.7% to 82.7% coverage of the target visual information using only two to three cameras. In contrast, a homogeneous camera setup required 11 cameras to reach 81.6% coverage. Accordingly, camera deployment resources can be reduced with the proposed approach.
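The coverage-driven recommendation step can be illustrated with a simplified 2-D sketch: sample target points, precompute visibility for candidate heterogeneous camera poses (a wide short-range "depth-like" cone and a narrow long-range "RGB-like" cone per pose), then greedily add cameras until a coverage goal is met. The geometry, fields of view, and ranges below are assumptions, far simpler than the paper's volumetric visibility analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample target points in a 10 m x 10 m floor plan.
pts = rng.random((400, 2)) * 10.0

def covered(cam_pos, cam_dir, fov_deg, max_range):
    """Boolean mask of target points inside a camera's 2-D field-of-view cone."""
    d = pts - cam_pos
    dist = np.linalg.norm(d, axis=1)
    cosang = np.clip(d @ cam_dir / np.maximum(dist, 1e-9), -1.0, 1.0)
    return (dist < max_range) & (np.degrees(np.arccos(cosang)) < fov_deg / 2)

# Heterogeneous candidates at 8 perimeter poses: a wide short-range
# "depth-like" cone and a narrow long-range "RGB-like" cone per pose.
cands = []
for x, y, dx, dy in [(0, 0, 1, 1), (10, 0, -1, 1), (0, 10, 1, -1),
                     (10, 10, -1, -1), (5, 0, 0, 1), (5, 10, 0, -1),
                     (0, 5, 1, 0), (10, 5, -1, 0)]:
    u = np.array([dx, dy], float)
    u /= np.linalg.norm(u)
    pos = np.array([x, y], float)
    cands.append(covered(pos, u, 90.0, 7.0))    # depth-like
    cands.append(covered(pos, u, 40.0, 14.0))   # RGB-like

# Greedy set cover: repeatedly add the camera covering the most currently
# uncovered points until the coverage goal is reached.
chosen, cov = [], np.zeros(len(pts), dtype=bool)
while cov.mean() < 0.8 and len(chosen) < len(cands):
    gains = [np.sum(~cov & c) for c in cands]
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break
    chosen.append(best)
    cov |= cands[best]
print(f"{len(chosen)} cameras for {cov.mean():.0%} coverage")
```

Extending the visibility test to 3-D frustums, wall occlusion, and per-camera resolution constraints recovers the kind of recommendation the paper's simulation performs.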
Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain detection results logically using artificial intelligence. We propose a method for classifying network attacks using graph models that explains the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes obtained by randomly masking neighbors and calculating their importance, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining the detections. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments and that the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provide a new direction for the field.
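The masking-based explanation can be sketched on a tiny graph: score an edge with a fixed stand-in for the trained model, then rank each neighbor by how much the score changes when that neighbor is masked out of the aggregation. The graph, features, and scorer below are all illustrative:

```python
# Tiny illustrative graph: scalar node features, one round of mean aggregation,
# and a fixed multiplicative edge scorer standing in for a trained graph model.
feat = {0: 1.0, 1: 4.0, 2: 0.5, 3: 3.0, 4: 0.2}
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}

def node_embed(v, masked=frozenset()):
    nbrs = [u for u in adj[v] if u not in masked]
    agg = sum(feat[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
    return feat[v] + 0.5 * agg

def edge_score(u, v, masked=frozenset()):
    return node_embed(u, masked) * node_embed(v, masked)

# Perturbation-based explanation for edge (0, 1): a neighbor's importance is
# how much the edge score changes when it is masked out of the aggregation.
base = edge_score(0, 1)
neighbors = set(adj[0] + adj[1]) - {0, 1}
importance = {n: abs(base - edge_score(0, 1, frozenset([n])))
              for n in neighbors}
top = max(importance, key=importance.get)
print("neighbor importance:", importance, "-> most influential:", top)
```

Collecting the highest-importance neighbors around a flagged edge yields the explanatory subgraph that the paper visualizes.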
Background There is insufficient evidence to provide recommendations for leisure-time physical activity among workers across various occupational physical activity levels. This study aimed to assess the association of leisure-time physical activity with cardiovascular and all-cause mortality across occupational physical activity levels. Methods This study utilized individual participant data from 21 cohort studies, comprising both published and unpublished data. Eligibility criteria included individual-level data on leisure-time and occupational physical activity (categorized as sedentary, low, moderate, and high) along with data on all-cause and/or cardiovascular mortality. A two-stage individual participant data meta-analysis was conducted, with each study analyzed separately using Cox proportional hazards models (Stage 1). These results were combined using random-effects models (Stage 2). Results Higher leisure-time physical activity levels were associated with lower all-cause and cardiovascular mortality risk across most occupational physical activity levels, for both males and females. Among males with sedentary work, high compared with sedentary leisure-time physical activity was associated with lower all-cause (hazard ratio (HR) = 0.77, 95% confidence interval (95% CI): 0.70-0.85) and cardiovascular mortality (HR = 0.76, 95% CI: 0.66-0.87) risk. Among males with high levels of occupational physical activity, high compared with sedentary leisure-time physical activity was associated with lower all-cause (HR = 0.84, 95% CI: 0.74-0.97) and cardiovascular mortality (HR = 0.79, 95% CI: 0.60-1.04) risk, while HRs for low and moderate levels of leisure-time physical activity ranged between 0.87 and 0.97 and were not statistically significant. Among females, most effects were similar but more imprecise, especially at the higher occupational physical activity levels. Conclusion Higher levels of leisure-time physical activity were generally associated with lower mortality risks. However, results for workers with moderate and high occupational physical activity levels, especially women, were more imprecise. Our findings suggest that workers may benefit from engaging in high levels of leisure-time physical activity, irrespective of their level of occupational physical activity.
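Stage 2 of such an analysis (pooling per-study hazard ratios with a random-effects model) can be sketched with DerSimonian-Laird weights. The three hazard ratios below reuse numbers quoted in the abstract purely as illustrative inputs, not as a reproduction of the actual pooled analysis:

```python
import math

# Illustrative stage-2 pooling with DerSimonian-Laird random-effects weights.
# Each tuple is (HR, lower 95% CI, upper 95% CI) for one study.
studies = [(0.77, 0.70, 0.85), (0.84, 0.74, 0.97), (0.79, 0.60, 1.04)]

y = [math.log(hr) for hr, lo, hi in studies]                 # log hazard ratios
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1.0 / s ** 2 for s in se]                               # fixed-effect weights

# Between-study variance tau^2 via the DerSimonian-Laird moment estimator.
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

w_re = [1.0 / (s ** 2 + tau2) for s in se]                   # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = (1.0 / sum(w_re)) ** 0.5
hr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
print(f"pooled HR {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

When between-study heterogeneity (tau squared) is zero, the random-effects weights reduce to the fixed-effect inverse-variance weights; larger heterogeneity pulls the weights toward equality across studies.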
Alzheimer’s disease (AD) is the most common form of dementia, affecting over 50 million people worldwide. This figure is projected to nearly double every 20 years, reaching 82 million by 2030 and 152 million by 2050 (Alzheimer’s Disease International). The apolipoprotein E ε4 (APOE4) allele is the strongest genetic risk factor for late-onset AD (after age 65 years). Apolipoprotein E, a lipid transporter, exists in three variants: ε2, ε3, and ε4. APOE ε2 (APOE2) is protective against AD, APOE ε3 (APOE3) is neutral, while APOE4 significantly increases the risk. Individuals with one copy of APOE4 have a 4-fold greater risk of developing AD, and those with two copies face an 8-fold risk compared with non-carriers. Even in cognitively normal individuals, APOE4 carriers exhibit brain metabolic and vascular deficits decades before amyloid-beta (Aβ) plaques and neurofibrillary tau tangles, the hallmark pathologies of AD, emerge (Reiman et al., 2001, 2005; Thambisetty et al., 2010). Notably, studies have demonstrated reduced glucose uptake, or hypometabolism, in AD-vulnerable brain regions of asymptomatic middle-aged APOE4 carriers, long before clinical symptoms arise (Reiman et al., 2001, 2005).
Funding: Project supported by the National Natural Science Foundation of China (No. 12402033) and the National Natural Science Foundation for Distinguished Young Scholars of China (No. 52225211).
Abstract: Many complex systems are frequently subject to uncertain disturbances, which can exert a profound effect on critical transitions (CTs), potentially resulting in catastrophic consequences. Consequently, it is of utmost importance to provide warnings for noise-induced CTs in various applications. Although capturing generic symptoms of transition behaviors from observational and simulated data poses a challenging problem, this work attempts to extract information regarding CTs from simulated data of a Gaussian white noise-induced tri-stable system. Using the extended dynamic mode decomposition (EDMD) algorithm, we first obtain finite-dimensional approximations of both the stochastic Koopman operator and its generator. Subsequently, the drift parameters and the noise intensity of the system are identified from the simulated data. Using the identified system, the parameter-dependent basin of the unsafe regime (PDBUR) is quantified, enabling data-driven early warning of Gaussian white noise-induced CTs. Finally, an error analysis is carried out to verify the effectiveness of the data-driven results. Our findings may serve as a paradigm for understanding and predicting noise-induced CTs in complex systems based on data.
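The EDMD step summarized above can be sketched in a few lines: simulate the SDE, lift the snapshots through a dictionary of observables, solve a least-squares problem for the Koopman matrix, and recover a generator approximation. The sketch below uses a 1-D bistable SDE with a monomial dictionary as an illustrative stand-in for the paper's tri-stable system; all parameters and names are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of a 1-D overdamped SDE dx = (x - x^3) dt + sigma dW.
# The bistable drift is only a stand-in for the paper's tri-stable system.
dt, sigma, n = 1e-3, 0.5, 100_000
x = np.empty(n)
x[0] = 0.1
for k in range(n - 1):
    x[k + 1] = x[k] + (x[k] - x[k] ** 3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def psi(v):
    """Dictionary of observables: monomials 1, v, ..., v^5."""
    return np.vander(v, 6, increasing=True)

# EDMD: least-squares Koopman matrix K with psi(x_{k+1}) ~ psi(x_k) @ K,
# then a finite-dimensional generator approximation L ~ (K - I) / dt.
K, *_ = np.linalg.lstsq(psi(x[:-1]), psi(x[1:]), rcond=None)
L = (K - np.eye(6)) / dt
```

The eigenvalues of `L` approximate the slow relaxation rates of the process, and the drift and diffusion coefficients can in principle be read off from how `L` acts on low-order monomials.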
Funding: Supported by the National Key R&D Program of China (No. 2023YFF1304002-05), the National Social Science Fund of China (No. 22BTJ005), and the National Natural Science Foundation of China (No. 32572049).
Abstract: Remote sensing plays a pivotal role in forest inventory by enabling efficient large-scale monitoring while minimizing fieldwork costs. However, missing values pose a critical challenge in remote sensing applications, as ignoring or mishandling such data gaps can introduce systematic bias into the estimation of target variables for natural resource monitoring. This can lead to cascading errors that propagate through forest and ecosystem management decisions, ultimately hindering progress toward sustainable forest management, biodiversity conservation, and climate change mitigation strategies. This study proposes and demonstrates a procedure that employs hybrid estimators to address the limitations of missing remotely sensed data in forest inventory, using Landsat 7 ETM+ SLC-off data, an archived source for forest resource monitoring, as a case in point. We compared forest inventory estimates from the hybrid estimator with those from a conventional model-based (CMB) estimator using Sentinel-2 data without missing values. Monte Carlo simulations revealed three key findings: (1) the hybrid estimator, leveraging missing-data remote sensing represented by Landsat 7 ETM+ SLC-off data, achieved a sampling precision of over 90%, meeting China's national standard for the National Forest Inventory (NFI); (2) the hybrid estimator demonstrated efficiency comparable to that of the CMB estimator; (3) the uncertainty associated with hybrid estimators was dominated by model parameter estimation, which could be effectively mitigated by slightly increasing the training sample size or refining the model specification. Overall, in forest inventory, the hybrid estimator can surmount the limitations posed by missing values in remotely sensed auxiliary data, effectively balancing cost-effectiveness and flexibility.
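The Monte Carlo evaluation of a model-based estimator under missing auxiliary data can be illustrated with a toy simulation. The sketch below is a crude stand-in, not the paper's hybrid estimator: it fits a linear model on a field sample, predicts over a synthetic population with simulated data gaps, and computes one common definition of percent sampling precision (100·(1 − relative standard error)). All numbers and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: auxiliary variable x (e.g., a spectral index) linearly
# related to the target y (e.g., stand volume); values are illustrative only.
N = 5000
x_pop = rng.uniform(0.0, 10.0, N)
y_pop = 5.0 + 2.0 * x_pop + rng.normal(0.0, 2.0, N)
true_mean = y_pop.mean()

def model_based_mean(n_sample=100, p_missing=0.3):
    """One Monte Carlo replicate: fit y ~ x on a field sample, predict over the
    population, and fill units with missing auxiliary data using the mean of
    the observed-unit predictions (a crude stand-in for the hybrid estimator)."""
    idx = rng.choice(N, n_sample, replace=False)
    A = np.column_stack([np.ones(n_sample), x_pop[idx]])
    beta, *_ = np.linalg.lstsq(A, y_pop[idx], rcond=None)
    preds = beta[0] + beta[1] * x_pop
    miss = rng.random(N) < p_missing          # simulate SLC-off-style gaps
    preds[miss] = preds[~miss].mean()
    return preds.mean()

est = np.array([model_based_mean() for _ in range(300)])
rel_se = est.std(ddof=1) / true_mean
precision = (1.0 - rel_se) * 100.0            # percent sampling precision
```

Repeating the replicate many times, as here, is exactly the Monte Carlo logic used to check whether a precision threshold such as 90% is met.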
Abstract: Data production elements are driving profound transformations in the real economy across production objects, methods, and tools, generating significant economic effects such as industrial structure upgrading. This paper aims to reveal the impact mechanism of data elements on the “three transformations” (high-end, intelligent, and green) in the manufacturing sector, theoretically elucidating the intrinsic mechanisms by which data elements influence these transformations. The study finds that data elements significantly enhance the high-end, intelligent, and green levels of China's manufacturing industry. In terms of impact pathways, data elements primarily influence the development of high-tech industries and overall green technological innovation, thereby affecting the high-end, intelligent, and green transformation of the industry.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600; in part by the National Natural Science Foundation of China under Grant 62071425; in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093; in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005; in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601; and in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
Abstract: Semantic communication (SemCom) aims to achieve high-fidelity information delivery with low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
Funding: Funded by the KAU Endowment (WAQF) at King Abdulaziz University, Jeddah, Saudi Arabia.
Abstract: Cancer deaths and new cases worldwide are projected to rise by 47% by 2040, with transitioning countries experiencing an even higher increase of up to 95%. Tumor severity is profoundly influenced by the timing, accuracy, and stage of diagnosis, which directly impacts clinical decision-making. Various biological entities, including genes, proteins, mRNAs, miRNAs, and metabolites, contribute to cancer development. The emergence of multi-omics technologies has transformed cancer research by revealing molecular alterations across multiple biological layers. This integrative approach supports the notion that cancer is fundamentally driven by such alterations, enabling the discovery of molecular signatures for precision oncology. This review explores the role of AI-driven multi-omics analyses in cancer medicine, emphasizing their potential to identify novel biomarkers and therapeutic targets, enhance understanding of tumor biology, and address integration challenges in clinical workflows. Network biology analyses identified ERBB2, KRAS, and TP53 as top hub genes in lung cancer based on Maximal Clique Centrality (MCC) scores. In contrast, TP53, ERBB2, ESR1, MYC, and BRCA1 emerged as central regulators in breast cancer, linked to cell proliferation, hormonal signaling, and genomic stability. The review also discusses how specific artificial intelligence (AI) algorithms can streamline the integration of heterogeneous datasets, facilitate interpretation of the tumor microenvironment, and support data-driven clinical strategies.
Funding: Supported by the Dong-A University Research Fund, No. 20230598.
Abstract: BACKGROUND: Hepatocellular carcinoma (HCC) remains a significant public health concern in South Korea even though incidence rates are declining. While medical travel for cancer treatment is common, its patterns and influencing factors for patients with HCC are unknown. AIM: To assess medical travel patterns and determinants, and their policy implications, among patients with newly diagnosed HCC in South Korea. METHODS: This retrospective cohort study used the National Health Insurance Service database to identify patients with newly diagnosed HCC from 2013 to 2021. Medical travel was defined as receiving initial treatment outside one's residential region. Patient characteristics and regional trends were analyzed, and factors influencing medical travel were identified using logistic regression analysis. RESULTS: Among 64,808 patients, 52.4% received treatment in the capital. This proportion increased to 67.4% when including the surrounding metropolitan area. Medical travel was significantly more common among younger and wealthier patients. Patients with a greater comorbidity burden or liver cirrhosis were less likely to travel. While geographic distance influenced travel patterns, high-volume academic centers in the capital attracted patients nationwide regardless of proximity. CONCLUSION: This nationwide study highlighted the centralization of HCC care in the capital. This observation indicates that regional cancer hubs should be strengthened and promoted for equitable healthcare access.
Abstract: The widespread use of rechargeable batteries in portable devices, electric vehicles, and energy storage systems has underscored the importance of accurately predicting their lifetimes. However, data scarcity often limits the accuracy of prediction models, a problem exacerbated by incomplete data resulting from issues such as sensor failures. To address these challenges, we propose a novel approach that accommodates data insufficiency by extracting additional information from incomplete data samples, which are usually discarded in existing studies. To fully unleash the predictive power of incomplete data, we investigate the Multiple Imputation by Chained Equations (MICE) method, which diversifies the training data by exploring potential data patterns. The experimental results demonstrate that the proposed method significantly outperforms the baselines in most of the considered scenarios, reducing the prediction root mean square error (RMSE) by up to 18.9%. Furthermore, we also observe that incorporating incomplete data benefits the explainability of the prediction model by facilitating feature selection.
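The chained-equations idea behind MICE can be shown in miniature: initialize missing entries with column means, then repeatedly regress each incomplete column on the others and refresh its missing values with the predictions. The sketch below is a minimal single-imputation variant written from scratch (full MICE draws multiple imputations with noise); the data and names are illustrative assumptions, not the paper's battery features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy battery-feature matrix: four correlated columns with ~15% values missing.
n = 400
z = rng.normal(size=n)
data = np.column_stack([z + rng.normal(0.0, 0.3, n) for _ in range(4)])
mask = rng.random(data.shape) < 0.15          # True where a value is "missing"
incomplete = data.copy()
incomplete[mask] = np.nan

def chained_impute(X, n_iter=10):
    """Minimal chained-equations imputation (MICE-style, single imputation):
    start from column means, then repeatedly regress each column on the
    others and refresh its missing entries with the regression predictions."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

imputed = chained_impute(incomplete)
rmse = np.sqrt(np.mean((imputed[mask] - data[mask]) ** 2))
```

Because the columns are correlated, the regression-based refills recover the missing values far better than plain mean imputation would, which is the mechanism that lets incomplete samples contribute to training.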
Abstract: Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial poses. In such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, with a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine various datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying research gaps in the field of face detection and recognition and outlining potential directions for future research.
Abstract: Electric Vehicle Charging Systems (EVCS) are increasingly vulnerable to cybersecurity threats as they integrate deeply into smart grids and Internet of Things (IoT) environments, raising significant security challenges. Most existing research primarily emphasizes network-level anomaly detection, leaving critical vulnerabilities at the host level underexplored. This study introduces a novel forensic analysis framework leveraging host-level data, including system logs, kernel events, and Hardware Performance Counters (HPC), to detect and analyze sophisticated cyberattacks such as cryptojacking, Denial-of-Service (DoS), and reconnaissance activities targeting EVCS. Using comprehensive forensic analysis and machine learning models, the proposed framework significantly outperforms existing methods, achieving an accuracy of 98.81%. The findings offer insights into distinct behavioral signatures associated with specific cyber threats, enabling improved cybersecurity strategies and actionable recommendations for robust EVCS infrastructure protection.
Abstract: Accurate capacity and State of Charge (SOC) estimation are crucial for ensuring the safety and longevity of lithium-ion batteries in electric vehicles. This study examines ten machine learning architectures, including Deep Belief Network (DBN), Bidirectional Recurrent Neural Network (BiDirRNN), Gated Recurrent Unit (GRU), and others, using the NASA B0005 dataset of 591,458 instances. Results indicate that the DBN excels in capacity estimation, achieving orders-of-magnitude lower error values and explaining over 99.97% of the predicted variable's variance. When computational efficiency is paramount, the Deep Neural Network (DNN) offers a strong alternative, delivering near-competitive accuracy with significantly reduced prediction times. The GRU achieves the best overall performance for SOC estimation, attaining an R^(2) of 0.9999, while the BiDirRNN provides a marginally lower error at a slightly higher computational speed. In contrast, Convolutional Neural Networks (CNN) and Radial Basis Function Networks (RBFN) exhibit relatively high error rates, making them less viable for real-world battery management. Analyses of the error distributions reveal that the top-performing models cluster most predictions within tight bounds, limiting the risk of overcharging or deep discharging. These findings highlight the trade-off between accuracy and computational overhead, offering valuable guidance for battery management system (BMS) designers seeking optimal performance under constrained resources. Future work may explore advanced data augmentation and domain adaptation techniques to enhance these models' robustness under diverse operating conditions.
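The two figures of merit quoted above, R^(2) and RMSE, are standard and easy to compute; a minimal sketch (the toy SOC trace is an assumption, not the NASA B0005 data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, as used to rank the battery models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: fraction of variance explained."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy SOC trace (fraction of full charge) and a near-perfect prediction.
soc_true = np.linspace(1.0, 0.2, 50)
soc_pred = soc_true + np.random.default_rng(0).normal(0.0, 0.001, 50)
```

An R^(2) of 0.9999, as reported for the GRU, corresponds to a residual variance four orders of magnitude below the variance of the SOC signal itself.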
Funding: Funded by the Deanship of Research and Graduate Studies at King Khalid University through the Large Research Project under Grant No. RGP2/417/46.
Abstract: Metaheuristic optimization methods are iterative search processes that aim to efficiently solve complex optimization problems. They explore the solution space efficiently, often without using gradient information, and draw on biologically and socially motivated heuristics. Metaheuristic optimization algorithms are increasingly applied to complex feature selection problems in high-dimensional medical datasets. Among these, Teaching-Learning-Based Optimization (TLBO) has proven effective for continuous design tasks by balancing exploration and exploitation phases. However, its binary version (BTLBO) suffers from limited exploitation ability, often converging prematurely or getting trapped in local optima, particularly when applied to discrete feature selection tasks. Previous studies reported that BTLBO yields lower classification accuracy and higher feature-subset variance than other hybrid methods in benchmark tests, motivating the development of hybrid approaches. This study proposes a novel hybrid algorithm, BTLBO-Cheetah Optimizer (BTLBO-CO), which integrates the global exploration strength of BTLBO with the local exploitation efficiency of the Cheetah Optimization (CO) algorithm. The objective is to enhance the feature selection process for cancer classification tasks involving high-dimensional data. The proposed BTLBO-CO algorithm was evaluated on six benchmark cancer datasets: 11 Tumors (T), Lung Cancer (LUC), Leukemia (LEU), Small Round Blue Cell Tumor or SRBCT (SR), Diffuse Large B-cell Lymphoma or DLBCL (DL), and Prostate Tumor (PT). The results demonstrate superior classification accuracy across all six datasets, achieving 93.71%, 96.12%, 98.13%, 97.11%, 98.44%, and 98.84%, respectively. These results validate the effectiveness of the hybrid approach in addressing diverse feature selection challenges using a Support Vector Machine (SVM) classifier.
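Binary metaheuristic feature selection boils down to searching over bit vectors (feature in/out) with a wrapper or filter fitness. The sketch below is a deliberately simple single-bit-flip local search with a Fisher-score fitness, standing in for both the BTLBO-CO hybrid and its SVM objective; the data, penalty weight, and iteration budget are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class "expression" data: only the first 5 of 30 features carry signal.
n, d, k_inf = 200, 30, 5
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :k_inf] += 1.5 * y[:, None]          # class-dependent shift on informative features

def fitness(bits):
    """Sum of per-feature Fisher scores minus a small per-feature penalty --
    a cheap filter-style surrogate for the SVM-based objective in the paper."""
    sel = np.flatnonzero(bits)
    if sel.size == 0:
        return -np.inf
    mu0, mu1 = X[y == 0][:, sel].mean(0), X[y == 1][:, sel].mean(0)
    v0, v1 = X[y == 0][:, sel].var(0), X[y == 1][:, sel].var(0)
    fisher = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-12)
    return float(fisher.sum() - 0.1 * sel.size)

# Single-bit-flip local search as a minimal stand-in for the BTLBO-CO hybrid:
bits = rng.random(d) < 0.5
best = fitness(bits)
for _ in range(3000):
    j = rng.integers(d)
    bits[j] ^= True                       # tentatively flip feature j in/out
    f = fitness(bits)
    if f >= best:
        best = f                          # keep the improving flip
    else:
        bits[j] ^= True                   # revert a worsening flip

selected = np.flatnonzero(bits)
```

A hybrid such as BTLBO-CO replaces this naive flip rule with BTLBO's teacher/learner moves for exploration and CO-style local moves for exploitation, but the encoding and fitness structure are the same.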
Funding: Supported by the Zhejiang Provincial Natural Science Foundation of China (No. ZCLQN25A0103).
Abstract: This paper studies estimates for the lower bound of the distance between unitary orbits of normal elements. We show that the distance between unitary orbits of normal elements of simple C^(*)-algebras of tracial rank no more than k has a lower bound. Furthermore, if k≤1 and the normal elements commute, then the lower bound can be improved. Another result establishes a connection involving the spectrum distance operator Dc between a C^(*)-algebra of stable rank one and its hereditary C^(*)-subalgebras.
Funding: Supported by the National Natural Science Foundation of China (No. 12261018) and the Universities Key Laboratory of Mathematical Modeling and Data Mining in Guizhou Province (No. 2023013).
Abstract: In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and of the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
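An impulsive-harvesting model of the kind studied above can be simulated by integrating the continuous dynamics between impulses and applying the harvest map at each impulse time. The sketch below uses plain (non-age-selective) proportional harvesting of a logistic population with illustrative parameters, not the paper's age-structured model; the post-harvest states converge to the positive periodic solution.

```python
import numpy as np

# Logistic growth dx/dt = r x (1 - x/K) with an impulsive harvest that removes
# a fraction h of the population every T time units (illustrative parameters).
r, K, h, T, dt = 1.0, 10.0, 0.3, 2.0, 1e-3

def simulate(x0=5.0, n_periods=50):
    """Forward-Euler integration between impulses; record post-harvest states."""
    x = x0
    steps = int(T / dt)
    post_harvest = []
    for _ in range(n_periods):
        for _ in range(steps):
            x += r * x * (1.0 - x / K) * dt   # continuous logistic growth
        x *= (1.0 - h)                        # impulsive harvest at t = kT
        post_harvest.append(x)
    return np.array(post_harvest)

traj = simulate()
```

The sequence of post-harvest states is exactly the Poincaré map of the impulsive system; its convergence to a positive fixed point corresponds to the asymptotically stable positive periodic solution of the paper.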
Abstract: Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review synthesizes recent progress in data-efficient deep learning approaches for addressing such “cold-start” forecasting problems. It primarily covers three interrelated domains (solar photovoltaic (PV), wind power, and electrical load forecasting) where data scarcity and operational variability are most critical, while also including representative studies on hydropower and carbon emission prediction to provide a broader systems perspective. To this end, we examined trends from over 150 predominantly peer-reviewed studies published between 2019 and mid-2025, highlighting advances in zero-shot and few-shot meta-learning frameworks that enable rapid model adaptation with minimal labeled data. Moreover, transfer learning approaches combined with spatiotemporal graph neural networks have been employed to transfer knowledge from existing energy assets to new, data-sparse environments, effectively capturing hidden dependencies among geographic features, meteorological dynamics, and grid structures. Synthetic data generation has further proven valuable for expanding training samples and mitigating overfitting in cold-start scenarios. In addition, large language models and explainable artificial intelligence (XAI), notably conversational XAI systems, have been used to interpret and communicate complex model behaviors in accessible terms, fostering operator trust from the earliest deployment stages. By consolidating methodological advances, unresolved challenges, and open-source resources, this review provides a coherent overview of deep learning strategies that can shorten the data-sparse ramp-up period of new energy infrastructures and accelerate the transition toward resilient, low-carbon electricity grids.
Abstract: Optical coherence tomography (OCT), particularly Swept-Source OCT, is widely employed in medical diagnostics and industrial inspection owing to its high-resolution imaging capabilities. However, Swept-Source OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, posing challenges for real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that integrates stripe-artifact suppression with parallel computing on a graphics processing unit (GPU). The approach employs a frequency-domain filtering algorithm with adaptive suppression parameters, dynamically adjusted through an image quality evaluation function and optimized using a convolutional neural network for complex frequency-domain feature learning. Additionally, a GPU-integrated 3D reconstruction framework is developed, enhancing data-processing throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On an RTX 4090 platform, the proposed system achieved an end-to-end delay of 94.36 ms, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
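The core of frequency-domain stripe suppression is that column-wise stripes concentrate their energy on a single line of the 2-D spectrum, so a notch mask on that line removes them while leaving the rest of the image intact. The sketch below uses a fixed-width mask on synthetic data as a stand-in for the paper's adaptive, CNN-tuned parameters; all sizes and values are illustrative assumptions.

```python
import numpy as np

# Synthetic B-scan: a smooth structure varying along depth (y) plus column-wise
# stripe artifacts that vary along x only; values are illustrative.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
clean = np.sin(2 * np.pi * yy / 40.0)
stripes = 0.8 * np.sin(2 * np.pi * xx / 8.0)
noisy = clean + stripes

def suppress_stripes(img, half_width=2):
    """Frequency-domain notch filter: pure vertical stripes concentrate on the
    ky = 0 line of the 2-D spectrum, so damp that band while keeping the
    low-frequency block around DC."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    mask = np.ones(F.shape)
    mask[cy - half_width:cy + half_width + 1, :] = 0.0          # stripe band
    mask[cy - half_width:cy + half_width + 1,
         cx - half_width:cx + half_width + 1] = 1.0             # keep near-DC block
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

filtered = suppress_stripes(noisy)
```

On a GPU, the same FFT-mask-inverse-FFT pipeline maps directly onto batched 2-D FFT kernels, which is what makes the method compatible with real-time volumetric throughput.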
Abstract: Wind turbine blade defect detection faces persistent challenges in separating small, low-contrast surface faults from complex backgrounds while maintaining reliability under variable illumination and viewpoints. Conventional image-processing pipelines struggle with scalability and robustness, and recent deep learning methods remain sensitive to class imbalance and acquisition variability. This paper introduces TurbineBladeDetNet, a convolutional architecture combining dual-attention mechanisms with multi-path feature extraction for detecting five distinct blade fault types. Our approach employs both channel-wise and spatial attention modules alongside an Albumentations-driven augmentation strategy to handle dataset imbalance and capture condition variability. The model achieves 97.14% accuracy, 98.65% precision, and 98.68% recall, yielding a 98.66% F1-score with an inference time of 0.0110 s. Class-specific analysis shows uniformly high sensitivity and specificity; lightning damage reaches 99.80% for sensitivity, precision, and F1-score, and crack detection achieves perfect precision and specificity with a 98.94% F1-score. Comparative evaluation against recent wind-turbine inspection approaches indicates higher performance in both accuracy and F1-score. The resulting balance of sensitivity and specificity limits both missed defects and false alarms, supporting reliable deployment in routine unmanned aerial vehicle (UAV) inspection.
Funding: Supported by the 2024 Research Fund of the University of Ulsan.
Abstract: In recent years, three-dimensional reconstruction technologies that employ multiple cameras have continued to evolve significantly, enabling remote collaboration among users in extended reality (XR) environments. In addition, methods for deploying multiple cameras for motion capture of users (e.g., performers) are widely used in computer graphics. As the need to minimize and optimize the number of cameras grows in order to reduce costs, various technologies and research approaches focused on Optimal Camera Placement (OCP) continue to be proposed. However, as most existing studies assume homogeneous camera setups, there is a growing demand for studies on heterogeneous camera setups. For instance, technical demands keep emerging in scenarios with minimal camera configurations, especially regarding cost factors, the physical placement of cameras given the spatial structure, and image capture strategies for heterogeneous cameras such as high-resolution RGB cameras and depth cameras. In this study, we propose a pre-visualization and simulation method for the optimal placement of heterogeneous cameras in XR environments, accounting for both the specifications of heterogeneous cameras (e.g., field of view) and the physical configuration (e.g., wall layout) of real-world spaces. The proposed method performs a visibility analysis by considering each camera's field-of-view volume, resolution, and unique characteristics, along with physical-space constraints. This approach enables the optimal position and rotation of each camera to be recommended, along with the minimum number of cameras required. In experiments with heterogeneous camera combinations, the proposed method achieved 81.7%~82.7% coverage of the target visual information using only 2~3 cameras, whereas a homogeneous single-type camera setup required 11 cameras for 81.6% coverage. Accordingly, we found that camera deployment resources can be reduced with the proposed approach.
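The visibility analysis at the heart of such an OCP method can be approximated by Monte Carlo sampling: draw points in the space and test whether each lies inside at least one camera's view cone and range. The 2-D sketch below ignores walls and occlusion and uses made-up camera specifications; it is an illustration of the coverage metric, not the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two heterogeneous cameras (position, unit view direction, half field-of-view
# angle, maximum useful range) on a 2-D floor plan; all values are illustrative.
cameras = [
    dict(pos=np.array([0.0, 0.0]), aim=np.array([1.0, 1.0]) / np.sqrt(2),
         half_fov=np.deg2rad(45.0), max_range=8.0),   # wide-angle RGB camera
    dict(pos=np.array([6.0, 0.0]), aim=np.array([-1.0, 1.0]) / np.sqrt(2),
         half_fov=np.deg2rad(30.0), max_range=5.0),   # narrower depth camera
]

def coverage(cams, n_samples=20_000, room=(6.0, 6.0)):
    """Monte Carlo visibility analysis: fraction of uniformly sampled room
    points inside at least one camera's view cone and range (walls and
    occlusion are ignored in this sketch)."""
    pts = rng.random((n_samples, 2)) * np.asarray(room)
    covered = np.zeros(n_samples, dtype=bool)
    for c in cams:
        v = pts - c["pos"]
        dist = np.linalg.norm(v, axis=1)
        cos_a = (v @ c["aim"]) / np.where(dist == 0, 1.0, dist)
        covered |= (dist <= c["max_range"]) & (cos_a >= np.cos(c["half_fov"]))
    return covered.mean()

cov = coverage(cameras)
```

An optimizer then searches over camera positions and rotations to maximize this coverage score with as few cameras as possible, which is the recommendation the proposed method produces.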
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) support program (IITP-2025-RS-2023-00259497), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Republic of Korea government (MSIT) (No. IITP-2025-RS-2023-00254129, Graduate School of Metaverse Convergence (Sungkyunkwan University)); and by the Basic Science Research Program of the National Research Foundation (NRF) funded by the Republic of Korea government (MSIT) (No. RS-2024-00346737).
Abstract: Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain detection results logically using artificial intelligence. We propose a method for classifying network attacks using graph models that explains the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes under random masking and calculate the importance of neighbors, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining the detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments and that the importance of graph neighbors can be calculated to analyze the results efficiently. This approach can contribute to real-world network security analyses and provide a new direction for the field.
Funding: The Trøndelag Health Study (HUNT) is a collaboration between the HUNT Research Centre (Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology), Trøndelag County Council, the Central Norway Regional Health Authority, and the Norwegian Institute of Public Health. The coordination of the European Prospective Investigation into Cancer and Nutrition - Spain study (EPIC) is financially supported by the International Agency for Research on Cancer (IARC) and by the Department of Epidemiology and Biostatistics, School of Public Health, Imperial College London, which has additional infrastructure support provided by the NIHR Imperial Biomedical Research Centre (BRC); supported by the Health Research Fund (FIS) - Instituto de Salud Carlos III (ISCIII), the Regional Governments of Andalucía, Asturias, Basque Country, Murcia and Navarra, and the Catalan Institute of Oncology - ICO (Spain); funded by The Netherlands Organisation for Health Research and Development, ZonMw (Grant No. 531-00141-3). Funding for the SHIP study has been provided by the Federal Ministry for Education and Research (BMBF; identification codes 01ZZ96030, 01ZZ0103, and 01ZZ0701); support from the Swedish Research Council (2018-02527 and 2019-00193); financed by the Helmholtz Zentrum München - German Research Center for Environmental Health, which is funded by the German Federal Ministry of Education and Research (BMBF) and by the State of Bavaria.
Abstract: Background: There is insufficient evidence to provide recommendations for leisure-time physical activity among workers across various occupational physical activity levels. This study aimed to assess the association of leisure-time physical activity with cardiovascular and all-cause mortality across occupational physical activity levels. Methods: This study utilized individual participant data from 21 cohort studies, comprising both published and unpublished data. Eligibility criteria included individual-level data on leisure-time and occupational physical activity (categorized as sedentary, low, moderate, and high) along with data on all-cause and/or cardiovascular mortality. A two-stage individual participant data meta-analysis was conducted, with separate analysis of each study using Cox proportional hazards models (Stage 1). These results were combined using random-effects models (Stage 2). Results: Higher leisure-time physical activity levels were associated with lower all-cause and cardiovascular mortality risk across most occupational physical activity levels, for both males and females. Among males with sedentary work, high compared to sedentary leisure-time physical activity was associated with lower all-cause (hazard ratio (HR) = 0.77, 95% confidence interval (95% CI): 0.70-0.85) and cardiovascular mortality (HR = 0.76, 95% CI: 0.66-0.87) risk. Among males with high levels of occupational physical activity, high compared to sedentary leisure-time physical activity was associated with lower all-cause (HR = 0.84, 95% CI: 0.74-0.97) and cardiovascular mortality (HR = 0.79, 95% CI: 0.60-1.04) risk, while HRs for low and moderate levels of leisure-time physical activity ranged between 0.87 and 0.97 and were not statistically significant. Among females, most effects were similar but more imprecise, especially at the higher occupational physical activity levels. Conclusion: Higher levels of leisure-time physical activity were generally associated with lower mortality risks. However, results for workers with moderate and high occupational physical activity levels, especially women, were more imprecise. Our findings suggest that workers may benefit from engaging in high levels of leisure-time physical activity, irrespective of their level of occupational physical activity.
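Stage 2 of such a two-stage meta-analysis pools the study-level log hazard ratios with a random-effects model; the classic DerSimonian-Laird estimator can be written in a few lines. The inputs below are hypothetical example values, not the paper's data.

```python
import numpy as np

def pool_random_effects(hr, ci_low, ci_high):
    """Stage-2 random-effects (DerSimonian-Laird) pooling of study-level hazard
    ratios given their 95% confidence intervals."""
    y = np.log(np.asarray(hr, dtype=float))               # log-HR per study
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from the CI width
    w = 1.0 / se**2                                       # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)                       # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)               # between-study variance
    w_re = 1.0 / (se**2 + tau2)                           # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(mu), (np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)), tau2

# Hypothetical study-level HRs and 95% CIs (illustrative, not the paper's data):
pooled, ci, tau2 = pool_random_effects(
    hr=[0.80, 0.95, 0.70], ci_low=[0.70, 0.85, 0.55], ci_high=[0.92, 1.06, 0.89])
```

The between-study variance tau^2 is what widens the pooled confidence interval relative to a fixed-effect analysis, which is why sparse strata (such as women with high occupational activity) come out more imprecise.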
Funding: Supported by the National Institute on Aging (NIH-NIA) grant R01AG054459 (to ALL).
Abstract: Alzheimer’s disease (AD) is the most common form of dementia, affecting over 50 million people worldwide. This figure is projected to nearly double every 20 years, reaching 82 million by 2030 and 152 million by 2050 (Alzheimer’s Disease International). The apolipoprotein ε4 (APOE4) allele is the strongest genetic risk factor for late-onset AD (after age 65 years). Apolipoprotein E, a lipid transporter, exists in three variants: ε2, ε3, and ε4. APOE ε2 (APOE2) is protective against AD, APOE ε3 (APOE3) is neutral, while APOE4 significantly increases the risk. Individuals with one copy of APOE4 have a 4-fold greater risk of developing AD, and those with two copies face an 8-fold risk compared to non-carriers. Even in cognitively normal individuals, APOE4 carriers exhibit brain metabolic and vascular deficits decades before amyloid-beta (Aβ) plaques and neurofibrillary tau tangles, the hallmark pathologies of AD, emerge (Reiman et al., 2001, 2005; Thambisetty et al., 2010). Notably, studies have demonstrated reduced glucose uptake, or hypometabolism, in brain regions vulnerable to AD in asymptomatic middle-aged APOE4 carriers, long before clinical symptoms arise (Reiman et al., 2001, 2005).