Journal Articles
405,222 articles found.
1. Design and Practice of an OKR-STEP Dual-Engine-Driven Blended Teaching Model
Authors: 马丽, 高敬礼, 李真, 吕海莲. 《计算机教育》 (Computer Education), 2026, Issue 3, pp. 334-340.
To address problems in current blended teaching such as unclear objectives, weak process management, insufficient practice, and one-dimensional assessment, this paper proposes a blended teaching model driven by the OKR-STEP dual engine. Taking a software engineering course as an example, it describes the model's implementation in terms of the OKR-STEP model's content, the construction of the course OKR system, and the design of the blended teaching model, and concludes with the results of the reform.
Keywords: OKR-STEP; project-driven; blended teaching; competence progression
2. On the Riemann-Hilbert problem for the reverse space-time nonlocal Hirota equation with step-like initial data
Authors: Bei-Bei Hu, Ling Zhang, Zu-Yi Shen, Ji Lin. Communications in Theoretical Physics, 2025, Issue 2, pp. 30-38.
In this paper, we use the Riemann-Hilbert (RH) method to investigate the Cauchy problem of the reverse space-time nonlocal Hirota equation with step-like initial data: q(z,0) = o(1) as z → -∞ and q(z,0) = δ + o(1) as z → ∞, where δ is an arbitrary positive constant. We show that the solution of the Cauchy problem can be determined by the solution of the corresponding matrix RH problem established on the plane of the complex spectral parameter λ. As an example, we construct an exact solution of the reverse space-time nonlocal Hirota equation in a special case via this RH problem.
Keywords: nonlocal Hirota equation; Cauchy problem; Riemann-Hilbert problem; step-like initial data
3. A small step towards the epistemic decentralization of science: A dataset of journals and publications indexed in African Journals Online
Author: Patricia Alonso-Álvarez. Journal of Data and Information Science, 2025, Issue 4, pp. 104-121.
Purpose: This paper examines African Journals Online (AJOL) as a bibliometric resource, providing a structured dataset of journal and publication metadata. In addition, it integrates AJOL data with OpenAlex to enhance metadata coverage and improve interoperability with other bibliometric sources. Design/methodology/approach: The journal list and publications indexed in AJOL were retrieved using web scraping techniques. This paper details the database construction process, highlighting its strengths and limitations, and presents a descriptive analysis of AJOL's indexed journals and publications. Findings: The publication analysis demonstrates steady growth in the number of publications over time but reveals significant disparities in their distribution across African countries. This paper presents an example of integrating both sources using author country data from OpenAlex. The analysis of author contributions reveals that African journals serve as both regional and international venues, confirming that they play a dual role in fostering regional and global research engagement. Research limitations: While AJOL contains relevant information for identifying and providing insights about African publications and journals, its metadata are limited, which also limits the kinds of analysis that can be performed with the database presented here; the integration with OpenAlex aims to overcome some of these limitations. Finally, although some automatic curation procedures have been performed, the metadata has not been manually curated, so errors or inaccuracies present in AJOL may be reproduced in this database. Practical implications: The database introduced in this article contributes to the accessibility of African scholarly publications by providing structured, accessible metadata derived from AJOL. It facilitates bibliometric analyses that are more representative of African research activities, complementing ongoing efforts to develop alternative data sources and infrastructure that better reflect the diversity of global knowledge production. Originality/value: This paper presents a novel database for bibliometric analysis and offers a detailed report of the retrieval and construction procedures. The inclusion of matched data with OpenAlex further enhances the database's utility. By showcasing AJOL's potential, this study contributes to the broader goal of fostering inclusivity and improving the representation of African research in global bibliometric analyses.
Keywords: decentralization of science; African Journals Online; African science; data paper
4. Spatio-Temporal Earthquake Analysis via Data Warehousing for Big Data-Driven Decision Systems
Authors: Georgia Garani, George Pramantiotis, Francisco Javier Moreno Arboleda. Computers, Materials & Continua, 2026, Issue 3, pp. 1963-1988.
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
Keywords: data warehouse; data analysis; big data; decision systems; seismology; data visualization
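The fact-centric vs. dimension-centric trade-off described in this abstract can be illustrated with a tiny in-memory sketch; the schema, ids, magnitudes, and region names below are invented placeholders, not the paper's warehouse:

```python
# Many-to-many between earthquake facts and affected regions, modeled two ways.

# Bridge-table design: an explicit link table between facts and the dimension.
facts = {1: {"mag": 6.1}, 2: {"mag": 5.4}}
regions = {10: "Attica", 11: "Thessaly"}
bridge = [(1, 10), (1, 11), (2, 11)]  # (fact_id, region_id) pairs

# Array-based design: each fact row carries its region ids directly,
# so fact-centric queries need no join.
facts_arr = {1: {"mag": 6.1, "regions": [10, 11]},
             2: {"mag": 5.4, "regions": [11]}}

# Fact-centric query: regions affected by fact 1 (array design answers directly).
via_bridge = [r for (f, r) in bridge if f == 1]
via_array = facts_arr[1]["regions"]
assert via_bridge == via_array == [10, 11]

# Dimension-centric query: facts touching region 11; here the bridge shines,
# while the array design must scan every fact row.
via_bridge = [f for (f, r) in bridge if r == 11]
via_array = [f for f, row in facts_arr.items() if 11 in row["regions"]]
assert via_bridge == via_array == [1, 2]
```

In a real warehouse these would be SQL tables; the point is that each design pays its join or scan cost on a different query shape, which is why the paper's hybrid schema keeps both representations.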
5. Optimal pricing approaches for data markets in market-operated data exchanges
Authors: Yangming Lyu, Linyi Qian, Zhixin Yang, Jing Yao, Xiaochen Zuo. Statistical Theory and Related Fields, 2026, Issue 1, pp. 23-45.
This work contributes to the theoretical foundation for pricing in data markets and offers practical insights for managing digital data exchanges in the era of big data. We propose a structured pricing model for data exchanges transitioning from quasi-public to market-oriented operations. To address the complex dynamics among data exchanges, suppliers, and consumers, the authors develop a three-stage Stackelberg game framework. In this model, the data exchange acts as a leader setting transaction commission rates, suppliers are intermediate leaders determining unit prices, and consumers are followers making purchasing decisions. Two pricing strategies are examined: the Independent Pricing Approach (IPA) and the novel Perfectly Competitive Pricing Approach (PCPA), which accounts for competition among data providers. Using backward induction, the study derives subgame-perfect equilibria and proves the existence and uniqueness of Stackelberg equilibria under both approaches. Extensive numerical simulations demonstrate that PCPA enhances data demander utility, encourages supplier competition, increases transaction volume, and improves the overall profitability and sustainability of data exchanges. Social welfare analysis further confirms PCPA's superiority in promoting efficient and fair data markets.
Keywords: data exchange; data market; digital economy; perfectly competitive pricing approach; Stackelberg game
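A toy backward-induction pass over a three-stage leader-follower structure like the one described above might look as follows; all functional forms (linear demand, zero costs) and parameter values are illustrative assumptions, not the paper's model:

```python
# Stage 3 -> 2 -> 1 backward induction in a toy exchange/supplier/consumer game.

def consumer_demand(p, a=10.0, b=1.0):
    # Stage 3: the follower's best response to the posted unit price p.
    return max(a - b * p, 0.0)

def supplier_price(r, a=10.0, b=1.0):
    # Stage 2: maximize (1 - r) * p * (a - b*p); the interior optimum
    # p* = a / (2b) is independent of the commission r in this toy setup.
    return a / (2 * b)

def exchange_revenue(r):
    # Stage 1: the leader's payoff at the induced equilibrium.
    p = supplier_price(r)
    return r * p * consumer_demand(p)

# Sweep commission rates; revenue here is linear in r, so the toy leader
# picks the highest feasible rate (real models add participation constraints).
best_r = max((r / 100 for r in range(1, 51)), key=exchange_revenue)
print(best_r, supplier_price(best_r), consumer_demand(supplier_price(best_r)))
```

The paper's analysis additionally proves existence and uniqueness of the equilibrium under both IPA and PCPA, which this sketch does not attempt.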
6. Rheological behaviors of step ladder-structured nitrocellulose in solution and gelatinization process
Authors: Yu Luan, Jiayi Du, Teng Ren, Chengkai Pu, Zhenggang Xiao. Defence Technology, 2026, Issue 2, pp. 110-124.
Step ladder-structured nitrocellulose (LNC) is a novel energetic binder prepared by chemically modifying nitrocellulose (NC) with the introduction of flexible polyethylene glycol (PEG-400) chain segments, giving it a regular structure and good bonding performance. The step-ladder structure addresses critical limitations of NC-based propellants, including low-temperature brittleness and high sensitivity, while enhancing process safety. Although the structural, thermal, and other properties of LNC have been investigated in our previous research, systematic studies of its rheological properties during solution and gelatinization are lacking. The relationship between the structural features and rheological properties of LNC is key to guiding its gelatinization and improving the properties of LNC-based propellants. Steady-state rheology flow experiments revealed that LNC exhibited shear thinning in different solutions, which decreased with increasing concentration. It has desirable solubility and dispersion in N,N-dimethylformamide (DMF) solvent, which may reduce the effect of solvents on the entanglement or orientation of LNC molecular chains. These results can be quantitatively described using the Herschel-Bulkley model. Dynamic viscoelastic studies identified a critical concentration-frequency point of 2.5 rad/s, a turning point in how concentration affects the loss factor (tan δ). For gelatinized systems, increasing the solvent content reduces the temperature sensitivity of the gelatinized materials. The viscosity-temperature correlation based on the Arrhenius equation allowed optimization of the solvent content through the derived equilibrium relationship. These structure-rheology relationships establish basic guidelines for the precision gelatinization of LNC-based propellant, provide theoretical support for replacing conventional NC with LNC, and guide the gelatinization process to improve the performance of gun propellants.
Keywords: step ladder-structured nitrocellulose; rheological properties; gelatinization
7. Effectiveness of a stepped self-care program for stroke survivors: A quasi-experimental study
Authors: Zihao Ruan, Dan Wang, Wenna Wang, Yongxia Mei, Hui Wang, Suyan Chen, Qiushi Zhang, Zhenxiang Zhang. International Journal of Nursing Sciences, 2026, Issue 1, pp. 45-52, I0004.
Objectives: This study aimed to evaluate the effectiveness of the stepped self-care program on the self-care, self-efficacy, and quality of life of stroke survivors. Methods: This quasi-experimental study allocated 110 stroke survivors from two neurology wards into an intervention group (n=55) who received the stepped self-care program and a control group (n=55) who received usual care from June to December 2023. The Self-Care of Stroke Inventory, Stroke Self-Efficacy Questionnaire, and the short version of the Stroke Specific Quality of Life Scale were administered at baseline (T0), immediately post-intervention (T1), and at 1-month (T2) and 3-month (T3) follow-ups. Data were analyzed using repeated-measures analyses of variance and generalized estimating equations. Results: A total of 48 participants in the intervention group and 50 in the control group completed the study. No statistically significant differences were observed at T0 in any of the measured indicators (all P>0.05). The study showed significant group, time, and group×time interaction effects across the assessed outcomes (all P<0.05). Between-group comparisons at T1, T2, and T3 indicated that the intervention group had significantly higher scores in self-care maintenance, self-care monitoring, self-care management, self-efficacy, and quality of life than the control group (all P<0.001). Conclusions: The stepped self-care program significantly improved self-care behaviors, self-efficacy, and quality of life among stroke survivors. These findings support broader implementation of this approach in post-discharge home self-care.
Keywords: quality of life; self-care; self-efficacy; stepped care program; stroke
8. Explainable Ensemble Learning Approach for Ovarian Cancer Diagnosis Using Clinical Data
Authors: Daniyal Asif, Nabil Kerdid, Muhammad Shoaib Arif, Mairaj Bibi. Computer Modeling in Engineering & Sciences, 2026, Issue 3, pp. 1050-1076.
Ovarian cancer (OC) is one of the leading causes of death related to gynecological cancer, with early diagnosis made difficult by the heterogeneous nature of tumor biomarkers. Machine learning (ML) has the potential to process complex datasets and support decision-making in OC diagnosis. Nevertheless, traditional ML models tend to be biased, prone to overfitting and noise, and less generalizable; moreover, their black-box nature reduces interpretability and limits practical clinical applicability. In this study, we introduce an explainable ensemble learning (EL) model, TreeX-Stack, based on a stacking architecture that employs tree-based learners, namely Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGBoost), as base learners and Logistic Regression (LR) as the meta-learner to enhance OC diagnosis. Local Interpretable Model-Agnostic Explanations (LIME) are used to explain individual predictions, making the model outputs more clinically interpretable and applicable. The model is trained on a dataset that includes demographic information, blood tests, general chemistry, and tumor markers. Extensive preprocessing includes handling missing data using iterative imputation with Bayesian Ridge and addressing multicollinearity by removing features with correlation coefficients above 0.7. Relevant features are then selected using the Boruta feature selection method. To obtain robust and unbiased performance estimates during hyperparameter tuning, nested cross-validation (CV) with grid search is employed, and all experiments are repeated five times to ensure statistical reliability. TreeX-Stack demonstrates excellent diagnostic performance, achieving an accuracy of 0.9027, a precision of 0.8673, a recall of 0.9391, and an F1-score of 0.9012. Feature-importance analyses using LIME and permutation importance highlight Human Epididymis Protein 4 (HE4) as the most significant biomarker for OC. The combination of high predictive performance and interpretability makes TreeX-Stack a reliable tool for clinical decision support in OC diagnosis.
Keywords: ovarian cancer; ensemble learning; machine learning; stacking; explainable artificial intelligence; medical data analysis; clinical data; HE4
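A minimal stacking sketch in the spirit of this abstract, using scikit-learn with tree-based base learners and a logistic-regression meta-learner; the synthetic dataset, hyperparameters, and the omission of XGBoost (an external dependency) are simplifications, not the paper's TreeX-Stack configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the clinical feature set.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=3, random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5)  # out-of-fold base-learner predictions feed the meta-learner
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

The paper additionally wraps tuning in nested cross-validation and explains predictions with LIME; both are orthogonal to the stacking mechanics shown here.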
9. Combining different climate datasets better reflects the response of warm-temperate forests to climate: a case study from Mt. Dongling, Beijing
Authors: Shengjie Wang, Haiyang Liu, Shuai Yuan, Chenxi Xu. Journal of Forestry Research, 2026, Issue 2, pp. 131-143.
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices: in some growing-season months, correlation coefficients differed significantly, in both magnitude and sign, from those based on actual data. The selection of different alternative climate datasets can bias assessments of forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
Keywords: climate data representativeness; alternative climate data selection; response differences; deciduous broad-leaf forest; warm temperate zone
10. Creation and Application of the Geo-STEP Model for Modern Geography
Author: 刘彦随. 《地理科学》 (Scientia Geographica Sinica, PKU Core), 2026, Issue 4, pp. 729-740.
Since the onset of the "Anthropocene", the combined effects of global climate change, intensive human activity, and information intelligence have continued to grow, and the global human-earth system and its territorial patterns are undergoing profound challenges of relationship reshaping and functional restructuring; modern geography faces an urgent need for theoretical innovation and paradigm transformation. This paper centers on the creation of a four-dimensional integrated model of geographical Science-Technology-Engineering-Practice (the Geo-STEP model), systematically explaining its theoretical content, four-dimensional interaction mechanisms, and innovative applications. The research shows that the Geo-STEP model is a systematic, comprehensive, and integrated methodological system for modern geography. Focusing on multi-dimensional linkage, multi-system coupling, and multi-scenario coordination, it builds an organic whole integrating geographical science, technology, engineering, and practice, with a transmission logic following the general paradigm of scientific cognition (S), technological innovation (T), engineering implementation (E), and practical feedback (P). It promotes three key transformations in modern geography: first, from theoretical research that explains phenomena to engineering practice that solves problems; second, from the independent development of single disciplines to interdisciplinary collaborative innovation; and third, from theoretical research to the integration of theory and practice oriented toward major strategies. The paper analyzes the Geo-STEP model's application scenarios in key fields such as territorial spatial planning, ecological protection and high-quality development of the Yellow River basin, urban-rural integration and rural revitalization, and geography education and teaching, preliminarily verifying its unique advantages and comprehensive capability for analyzing and solving complex human-earth system problems, and providing a theoretical reference and practical paradigm for optimizing and restructuring the geographical science system and disciplinary system under the spatio-temporal patterns of the "Anthropocene" and the "human-earth sphere".
Keywords: Geo-STEP model; human-earth system science; geographical engineering; geographical technology; geography discipline system
11. Construction and Application Practice of the Data-driven Comprehensive Management Platform for Regional Air Quality
Authors: Tongxing ZHANG, Yun WU, Yongwen LI. Meteorological and Environmental Research, 2026, Issue 1, pp. 21-28.
To address the severe challenges of PM2.5 and ozone co-control during the "14th Five-Year Plan" period and to enhance the precision and intelligence of air environment governance, it is imperative to build an efficient comprehensive management platform for regional air quality. In this paper, the specific practice in Zibo City, Shandong Province is taken as an example to systematically analyze the top-level design, technical implementation, and innovative application of a comprehensive management platform for regional air quality integrating perception monitoring, data fusion, early-warning analysis, source analysis, collaborative dispatching, and evaluation assessment. Through the construction of a "sky-air-ground" integrated three-dimensional monitoring network, the platform integrates multi-source heterogeneous environmental data and employs big data, cloud computing, artificial intelligence, CALPUFF/CMAQ, and other numerical model technologies to achieve comprehensive perception, precise prediction, intelligent source tracing, and closed-loop management of air pollution. The platform innovatively establishes a full-process closed-loop management mechanism of "data, early warning, disposition, evaluation", achieving a fundamental transformation in environmental supervision from passive response to active anticipation and from experience-based judgment to data-driven decision-making. The application results show that this platform significantly improves the scientific decision-making ability and collaborative execution efficiency of air pollution governance in Zibo City, providing a replicable and scalable comprehensive solution for similar industrial cities seeking continuous improvement of air quality.
Keywords: comprehensive management of air quality; big data; Internet of Things; closed-loop management; data driving; off-site supervision
12. tsRNADisease: a manually curated database of tsRNAs associated with human disease
Authors: Hui Yang, Shaoying Zhu, Huijun Wei, Wei Huang, Qi Chen, Yungang He, Kun Lv, Zhen Yang. Journal of Genetics and Genomics, 2026, Issue 3, pp. 537-543.
tRNA-derived small RNAs (tsRNAs), a class of regulatory small noncoding RNA, have been implicated in a wide variety of human diseases. Large numbers of tsRNA-disease associations have been identified in recent years from accumulating studies. However, repositories cataloging detailed information on tsRNA-disease associations are scarce. In this study, we provide the tsRNADisease database by integrating experimentally and computationally supported tsRNA-disease associations from manual curation of the literature and other related resources. tsRNADisease contains 5571 manually curated associations between 4759 tsRNAs and 166 diseases with experimental evidence from 346 studies. In addition, it contains 5013 predicted associations between 1297 tsRNAs and 111 diseases. tsRNADisease provides a user-friendly interface to browse, retrieve, and download data conveniently. This database can improve our understanding of tsRNA deregulation in diseases and serve as a valuable resource for investigating the mechanisms of disease-related tsRNAs. tsRNADisease is freely available at http://www.compgenelab.info/tsRNADisease.
Keywords: tsRNA; disease; cancer; data integration; database
13. Data-Driven Research Drives Earth System Science
Authors: Xing Yu, Shufeng Yang. Journal of Earth Science, 2026, Issue 1, pp. 361-367.
0 INTRODUCTION Earth science is a natural science concerned with the composition, dynamics, spatiotemporal evolution, and formation mechanisms of Earth materials (Chen and Yang, 2023). Traditional Earth science research has largely been discipline-based, relying on field investigations, data collection, experimental analyses, and data interpretation to study individual components of the Earth system.
Keywords: natural science; data interpretation; earth system science; field investigations; data; earth science; composition; study individual components; earth system; data-driven research
14. Photoacoustic-computed tomography 3D data compression method and system based on Wavelet-Transformer
Authors: Jialin Li, Tingting Li, Yiming Ma, Yi Shen, Mingjian Sun. Journal of Innovative Optical Health Sciences, 2026, Issue 1, pp. 110-125.
Photoacoustic-computed tomography is a novel imaging technique that combines high absorption contrast and deep tissue penetration capability, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging results in significant data volume, limiting the storage, transmission, and processing efficiency of the system. Therefore, there is an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic-computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method is built on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of reconstructed images reached 38.60 and 0.9583, effectively overcoming detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost, enhancing computational efficiency and image quality. These advantages support the development of photoacoustic-computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
Keywords: photoacoustic-computed tomography; data compression; Transformer
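Only the "wavelet hard coding" half of the described framework lends itself to a short sketch. The following one-level Haar transform with hard thresholding (all signal parameters are invented placeholders) shows how zeroing small detail coefficients trades reconstruction error for compression; the paper's Transformer-based soft decoder is not reproduced here:

```python
import numpy as np

def haar_forward(x):
    # One-level Haar transform over adjacent sample pairs.
    x = x.reshape(-1, 2)
    s = (x[:, 0] + x[:, 1]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2)   # detail (high-pass)
    return s, d

def haar_inverse(s, d):
    x = np.empty((s.size, 2))
    x[:, 0] = (s + d) / np.sqrt(2)
    x[:, 1] = (s - d) / np.sqrt(2)
    return x.ravel()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(40 * t) + 0.01 * rng.standard_normal(t.size)

s, d = haar_forward(signal)
d_hard = np.where(np.abs(d) > 0.05, d, 0.0)  # hard threshold: drop small details
recon = haar_inverse(s, d_hard)

kept = np.count_nonzero(d_hard) + s.size     # coefficients we would store
ratio = signal.size / kept
mse = float(np.mean((signal - recon) ** 2))
print(f"compression ratio {ratio:.2f}:1, reconstruction MSE {mse:.2e}")
```

Multi-level decomposition and entropy coding of the surviving coefficients would push the ratio far beyond this one-level toy; the learned soft decoder then recovers the detail that hard thresholding discards.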
15. Toward Secure and Auditable Data Sharing: A Cross-Chain CP-ABE Framework
Authors: Ye Tian, Zhuokun Fan, Yifeng Zhang. Computers, Materials & Continua, 2026, Issue 4, pp. 1509-1529.
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In performance and experimental evaluations against several advanced schemes, the results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
Keywords: data sharing; blockchain; attribute-based encryption; dynamic permissions
16. Design, Realization, and Evaluation of Faster End-to-End Data Transmission over Voice Channels
Authors: Jian Huang, Mingwei Li, Yulong Tian, Yi Yao, Hao Han. Computers, Materials & Continua, 2026, Issue 4, pp. 1650-1675.
With the popularization of new technologies, telephone fraud has become a primary means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificate through a voice channel for the recipient to verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers; for example, speech activity detection may quickly classify encoded signals as non-speech and reject the input waveforms. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them more easily transmitted and decoded in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in terms of robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root cause.
Keywords: deep learning; modulation; chirp; data over voice
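The 3-bits-per-symbol idea, varying a linear-frequency-modulation symbol's frequency band, sweep direction, and initial phase, can be sketched as follows; the sample rate, band edges, and symbol length are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

FS = 8000    # sample rate (Hz), a voice-band assumption
SYM = 0.05   # symbol duration (s)

def lfm_symbol(bits):
    b_band, b_dir, b_phase = bits
    f0 = 600.0 if b_band == 0 else 1400.0    # bit 0: which frequency band
    sweep = 400.0 if b_dir == 0 else -400.0  # bit 1: up- vs down-chirp ("shape")
    phase0 = 0.0 if b_phase == 0 else np.pi  # bit 2: initial phase
    t = np.arange(int(FS * SYM)) / FS
    # Instantaneous phase of a linear chirp: 2*pi*(f0*t + sweep*t^2/2)
    return np.cos(phase0 + 2 * np.pi * (f0 * t + 0.5 * sweep * t ** 2))

# Enumerate all 8 symbols: 3 bits -> 8 distinct waveforms.
symbols = {bits: lfm_symbol(bits) for bits in
           [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
assert len(symbols) == 8
assert all(v.size == int(FS * SYM) for v in symbols.values())
```

In the paper's system a learned demodulator classifies received symbols; a classical matched-filter bank over these eight templates would be the non-learned baseline.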
17. DeepClassifier: A Data Sampling-Based Hybrid BiLSTM-BiGRU Neural Network for Enhanced Type 2 Diabetes Prediction
作者 Abdullahi Abubakar Imam Sahalu Balarabe Junaidu +9 位作者 Hussaini Mamman Ganesh Kumar Abdullateef Oluwagbemiga Balogun Sunder Ali Khowaja Shuib Basri Luiz Fernando Capretz Asmah Husaini Hanif Abdul Rahman Usman Ali Fatoumatta Conteh 《Computer Modeling in Engineering & Sciences》 2026年第3期1017-1049,共33页
Artificial Intelligence(AI)in healthcare enables predicting diabetes using data-driven methods instead of the traditional ways of screening the disease,which include hemoglobin A1c(HbA1c),oral glucose tolerance test(O... Artificial Intelligence(AI)in healthcare enables predicting diabetes using data-driven methods instead of the traditional ways of screening the disease,which include hemoglobin A1c(HbA1c),oral glucose tolerance test(OGTT),and fasting plasma glucose(FPG)screening techniques,which are invasive and limited in scale.Machine learning(ML)and deep neural network(DNN)models that use large datasets to learn the complex,nonlinear feature interactions,but the conventional ML algorithms are data sensitive and often show unstable predictive accuracy.Conversely,DNN models are more robust,though the ability to reach a high accuracy rate consistently on heterogeneous datasets is still an open challenge.For predicting diabetes,this work proposed a hybrid DNN approach by integrating a bidirectional long short-term memory(BiLSTM)network with a bidirectional gated recurrent unit(BiGRU).A robust DL model,developed by combining various datasets with weighted coefficients,dense operations in the connection of deep layers,and the output aggregation using batch normalization and dropout functions to avoid overfitting.The goal of this hybrid model is better generalization and consistency among various datasets,which facilitates the effective management and early intervention.The proposed DNN model exhibits an excellent predictive performance as compared to the state-of-the-art and baseline ML and DNN models for diabetes prediction tasks.The robust performance indicates the possible usefulness of DL-based models in the development of disease prediction in healthcare and other areas that demand high-quality analytics. 展开更多
Keywords: diabetes; deep learning; prediction; BiLSTM; BiGRU; classification; data sampling
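The abstract's output-aggregation step, batch normalization plus dropout over the fused features of the two bidirectional branches, can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the fusion-by-concatenation choice, the shapes, and the dropout rate are assumptions.

```python
import numpy as np

def aggregate_features(bilstm_out, bigru_out, gamma, beta,
                       drop_rate=0.3, training=True, rng=None, eps=1e-5):
    """Fuse bidirectional features, then apply batch norm and dropout.

    bilstm_out, bigru_out: (batch, features) arrays from the two branches.
    gamma, beta: learned batch-norm scale/shift, shape (2 * features,).
    """
    x = np.concatenate([bilstm_out, bigru_out], axis=1)  # merge branches
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)              # normalize per feature
    x_bn = gamma * x_hat + beta
    if training:
        rng = rng or np.random.default_rng(0)
        mask = rng.random(x_bn.shape) >= drop_rate       # inverted dropout
        x_bn = x_bn * mask / (1.0 - drop_rate)
    return x_bn

out = aggregate_features(np.ones((4, 8)), np.zeros((4, 8)),
                         gamma=np.ones(16), beta=np.zeros(16), training=False)
print(out.shape)  # (4, 16)
```

At inference time (`training=False`) the dropout branch is skipped, matching the usual inverted-dropout convention where no rescaling is needed at test time.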
Prediction of carbon emissions with historical data
18
Authors: WANG Dawei, KUMAR Prashant, CAO Shijie 《Journal of Southeast University (English Edition)》 2026, No. 1, pp. 55-64 (10 pages)
Reducing carbon emissions is fundamental to achieving carbon neutrality. Existing studies have typically estimated emissions by predicting fossil fuel consumption across sectors under different socioeconomic scenarios; however, uncertainties in future development often lead to deviations from these assumptions. To address this limitation, this study proposes a data-driven approach for evaluating national carbon emissions using historical data. Countries with similar energy consumption patterns were selected as reference samples, and their emission pathways were analyzed to predict future emissions for countries that have not yet reached their peak. Key indicators, including peak levels, timing, plateau duration, and post-peak decline rates, were identified. The results indicate that the trends in unpeaked economies can be effectively assessed based on the emission patterns of countries with comparable energy structures. Applying this framework to China suggests a carbon peak between 2027 and 2030, in the range of 14.207 to 16.234 Gt, followed by a gradual decline from 2031 to 2036. Compared with the average results of existing studies, the predicted minimum and maximum emissions show error margins of 10.1% and 1.41%, respectively. This study proposes a top-down methodology that provides a transparent, reproducible, and empirical framework for forecasting carbon emission pathways, thereby offering a scientific basis for assessing countries that have not yet reached their emissions peak.
Keywords: carbon emissions; historical data; bootstrap; assessment; sustainable development
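The bootstrap idea behind the peak-range estimate, resampling peak indicators from reference countries with comparable energy structures, can be sketched with the standard library. The reference values below are hypothetical placeholders, not data from the paper, and the percentile-interval construction is one common bootstrap variant, not necessarily the authors' exact procedure.

```python
import random
import statistics

# Hypothetical peak emission levels (Gt) observed in reference countries
# with energy structures similar to the target economy.
reference_peak_gt = [14.8, 15.6, 14.2, 16.1, 15.0, 15.9, 14.5]

def bootstrap_peak_interval(samples, n_boot=5000, alpha=0.05, seed=42):
    """Bootstrap a (1 - alpha) percentile interval for the mean peak level."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

low, high = bootstrap_peak_interval(reference_peak_gt)
print(f"estimated peak range: {low:.2f}-{high:.2f} Gt")
```

The interval necessarily lies within the spread of the reference samples, which mirrors the paper's premise that an unpeaked economy's trajectory is bounded by the observed pathways of comparable countries.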
A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
19
Authors: Thierry Mugenzi, Cahit Perkgoz 《Computers, Materials & Continua》 2026, No. 1, pp. 1985-2005 (21 pages)
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep-learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest area under the receiver operating characteristic curve across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missingness mechanisms
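The three-part composite loss described in the abstract can be sketched in NumPy. The exact form of each term, the noise scale, and the weighting coefficients `lam_noise` and `lam_var` are illustrative assumptions reconstructed from the abstract's wording, not the paper's definitions.

```python
import numpy as np

def composite_loss(x_true, x_recon, miss_mask,
                   lam_noise=0.1, lam_var=0.05, rng=None):
    """Composite imputation loss sketch:
    (i) masked MSE on missing entries, (ii) a noise-aware term on
    observed entries, (iii) a variance penalty against collapsed
    (near-constant) reconstructions."""
    rng = rng or np.random.default_rng(0)
    # (i) guided, masked MSE: error only where values were missing
    n_missing = max(miss_mask.sum(), 1)
    masked_mse = np.sum(miss_mask * (x_true - x_recon) ** 2) / n_missing
    # (ii) noise-aware regularization: error on observed entries
    # measured against a noise-corrupted copy of the input
    noisy = x_true + rng.normal(0.0, 0.1, size=x_true.shape)
    obs_mask = 1.0 - miss_mask
    noise_term = np.sum(obs_mask * (noisy - x_recon) ** 2) / max(obs_mask.sum(), 1)
    # (iii) variance penalty: keep per-feature variance near the observed one
    var_term = np.mean((x_recon.var(axis=0) - x_true.var(axis=0)) ** 2)
    return masked_mse + lam_noise * noise_term + lam_var * var_term
```

A perfect reconstruction scores strictly lower than a collapsed all-zeros one, which is the behavior the guided mask and variance penalty are meant to enforce.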
ISTIRDA:An Efficient Data Availability Sampling Scheme for Lightweight Nodes in Blockchain
20
Authors: Jiaxi Wang, Wenbo Sun, Ziyuan Zhou, Shihua Wu, Jiang Xu, Shan Ji 《Computers, Materials & Continua》 2026, No. 4, pp. 685-700 (16 pages)
Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed–Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code rate adjustment to preserve soundness while lowering communication. The paper formalizes opening consistency and proves security with bounded error in the random oracle model, giving polylogarithmic verifier queries with no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
Keywords: blockchain scalability; data availability sampling; lightweight nodes
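The light-client sampling logic the abstract relies on follows standard DAS arithmetic rather than anything ISTIRDA-specific: if the erasure code forces an adversary to withhold at least half of the codeword symbols to make a block unrecoverable, each uniform random query hits a withheld symbol with probability at least 1/2, so confidence grows as 1 - 2^-s with the sample count s.

```python
def das_confidence(samples: int, withheld_fraction: float = 0.5) -> float:
    """Probability that at least one of `samples` uniform queries hits a
    withheld symbol, given the adversary must withhold at least
    `withheld_fraction` of codeword symbols to block reconstruction."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

def samples_needed(target: float, withheld_fraction: float = 0.5) -> int:
    """Smallest sample count giving at least `target` confidence."""
    s = 0
    while das_confidence(s, withheld_fraction) < target:
        s += 1
    return s

print(samples_needed(0.999999))  # 20 samples reach >= 99.9999% confidence
```

This is why a light client needs only a handful of constant-size queries: confidence improves exponentially in the number of samples, independent of block size, which is the property schemes like ISTIRDA exploit while they work to shrink the per-query proof cost.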