Journal Articles: 364,227 articles found
1. High-precision classification of benthic habitat sediments in shallow waters of islands by multi-source data
Authors: Qiuhua TANG, Ningning LI, Yujie ZHANG, Zhipeng DONG, Yongling ZHENG, Jingjing BAO, Jingyu ZHANG. 《Journal of Oceanology and Limnology》, 2026, No. 1, pp. 99-108
Benthic habitat mapping is an emerging discipline in the international marine field, providing an effective tool for marine spatial planning, marine ecological management, and decision-making applications. Seabed sediment classification is one of the main components of seabed habitat mapping. In response to the impact of remote sensing imaging quality and the limitations of acoustic measurement range, where a single data source does not fully reflect the substrate type, we propose a high-precision seabed habitat sediment classification method that integrates data from multiple sources. Based on WorldView-2 multispectral remote sensing imagery and multibeam bathymetry data, we constructed a random forest (RF) classifier with optimal feature selection. A seabed sediment classification experiment integrating optical and acoustic remote sensing data was carried out in the shallow water area of Wuzhizhou Island, Hainan, South China. Different seabed sediment types, such as sand, seagrass, and coral reefs, were effectively identified, with an overall classification accuracy of 92%. Experimental results show that the RF classifier optimized by fusing multi-source remote sensing data for feature selection outperformed classifiers trained on simple combinations of data sources, improving the accuracy of seabed sediment classification. The proposed method can therefore be effectively applied to high-precision seabed sediment classification and habitat mapping around islands and reefs.
Keywords: Wuzhizhou Island; marine remote sensing; coastal mapping; multi-spectral remote sensing; shallow water reef; seabed sediment classification; benthic habitat mapping; multi-source data fusion; random forest (RF)
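The fused-feature RF pipeline the abstract describes can be sketched as follows; the feature layout, synthetic data, and the importance-threshold selection rule are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)

# Hypothetical fused feature matrix: per-pixel multispectral band values
# concatenated with acoustic features (depth, slope, rugosity, backscatter).
n = 300
X_optical = rng.normal(size=(n, 8))    # stands in for 8 WorldView-2 bands
X_acoustic = rng.normal(size=(n, 4))   # stands in for multibeam-derived features
X = np.hstack([X_optical, X_acoustic])
# Toy labels (0=sand, 1=seagrass, 2=coral) driven by a few informative columns
y = (X[:, 0] + X[:, 8] > 0).astype(int) + (X[:, 1] > 1).astype(int)

# "Optimal feature selection" here: keep features above the median RF
# importance, then train the final RF on the reduced matrix.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0),
    threshold="median",
).fit(X, y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X), y)
acc = clf.score(selector.transform(X), y)   # training accuracy of the sketch
```

In the real workflow the columns would come from co-registered WorldView-2 pixels and multibeam grids rather than random numbers, and accuracy would be assessed on held-out ground-truth samples.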
2. A novel method for clustering cellular data to improve classification
Authors: Diek W. Wheeler, Giorgio A. Ascoli. 《Neural Regeneration Research》 (SCIE, CAS), 2025, No. 9, pp. 2697-2705
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
Keywords: cellular data clustering; dendrogram; data classification; Levene's one-tailed statistical test; unsupervised hierarchical clustering
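The stop-subdividing principle (cells must differ more between than within clusters) can be illustrated with a minimal sketch; the published protocol uses Levene's one-tailed test, which is replaced here by a plain mean-distance comparison:

```python
from itertools import combinations
from statistics import mean

def dist(a, b):
    # Euclidean distance between two feature vectors (cells)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def split_is_justified(cluster_a, cluster_b):
    """Accept a proposed split only if cells differ more between than
    within the two candidate subclusters. Assumes at least one subcluster
    has two or more cells; real use would apply a statistical test."""
    within = [dist(p, q) for c in (cluster_a, cluster_b)
              for p, q in combinations(c, 2)]
    between = [dist(p, q) for p in cluster_a for q in cluster_b]
    return mean(between) > mean(within)

a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]   # tight group near the origin
b = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]   # tight group far away
good = split_is_justified(a, b)            # well-separated groups: keep split
```

A split that merely bisects one tight cloud fails the criterion, so that branch of the dendrogram would stop subdividing.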
3. Research on Classification and Desensitization Strategies of Sensitive Educational Data
Authors: Chen Chen, Caixia Liu. 《Journal of Contemporary Educational Research》, 2025, No. 4, pp. 141-146
In the era of digital intelligence, data is a key element in promoting social and economic development. Educational data, as a vital component, not only supports teaching and learning but also contains much sensitive information. How to effectively categorize and protect sensitive data has become an urgent issue in educational data security. This paper systematically constructs a multi-dimensional classification framework for sensitive educational data and discusses its security protection strategy from the aspects of identification and desensitization, aiming to provide new ideas for the security management of sensitive educational data and to help build an educational data security ecosystem in the era of digital intelligence.
Keywords: data security; sensitive data; data classification; data desensitization
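A minimal sketch of level-based desensitization, assuming a hypothetical three-level scheme and illustrative field names (the paper's actual framework is multi-dimensional and richer):

```python
# Per-level masking rules: higher sensitivity, stronger masking.
RULES = {
    "high":   lambda v: "*" * len(v),                        # suppress entirely
    "medium": lambda v: v[0] + "*" * (len(v) - 2) + v[-1] if len(v) > 2 else "**",
    "low":    lambda v: v,                                   # leave as-is
}
# Hypothetical field-to-level classification for a student record.
FIELD_LEVEL = {"id_number": "high", "phone": "medium", "major": "low"}

def desensitize(record):
    """Mask each field according to its assigned sensitivity level;
    unclassified fields default to the lowest level."""
    return {k: RULES[FIELD_LEVEL.get(k, "low")](v) for k, v in record.items()}

masked = desensitize({"id_number": "330102200001011234",
                      "phone": "13800138000",
                      "major": "Physics"})
```

A production system would drive `FIELD_LEVEL` from the classification framework's output rather than a hard-coded table.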
4. Audiovisual Art Event Classification and Outreach Based on Web Extracted Data
Authors: Andreas Giannakoulopoulos, Minas Pergantis, Aristeidis Lamprogeorgos, Stella Lampoura. 《Journal of Software Engineering and Applications》, 2025, No. 1, pp. 24-43
The World Wide Web provides a wealth of information about everything, including contemporary audio and visual art events, which are discussed on media outlets, blogs, and specialized websites alike. This information can become a robust source of real-world data and form the basis of an objective, data-driven analysis. In this study, a methodology for automatically collecting information about audio and visual art events from a large array of websites is presented in detail. The process uses cutting-edge Semantic Web, Web Search, and Generative AI technologies to convert website documents into a collection of structured data. The value of the methodology is demonstrated by creating a large dataset of audiovisual events in Greece. The collected information includes event characteristics, estimated metrics based on their text descriptions, outreach metrics based on the media that reported them, and a multi-layered classification of these events by type, subject, and method. This dataset is openly provided to the general and academic public through a Web application. Moreover, each event's outreach is evaluated using these quantitative metrics; the results are analyzed with an emphasis on classification popularity, and useful conclusions are drawn concerning the importance of artistic subjects, methods, and media.
Keywords: web data extraction; art events; classification; artistic outreach; online media
5. Experiments on image data augmentation techniques for geological rock type classification with convolutional neural networks (cited by 2)
Authors: Afshin Tatar, Manouchehr Haghighi, Abbas Zeinijahromi. 《Journal of Rock Mechanics and Geotechnical Engineering》, 2025, No. 1, pp. 106-125
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving F1-scores of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, improvements over the baseline results; the weighted average F1-score across all classes and techniques is 0.9886. Conversely, methods like Distort decrease performance below the baseline, yielding F1-scores of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
Keywords: deep learning (DL); image analysis; image data augmentation; convolutional neural networks (CNNs); geological image analysis; rock classification; rock thin section (RTS) images
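The Equalize augmentation the study highlights is plain histogram equalization; a minimal, library-free sketch on a flat list of 8-bit gray values:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize 8-bit grayscale values: map each value through
    the cumulative histogram so intensities spread over the full range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                      # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A low-contrast patch clustered in [100, 103] spreads across the full range
out = equalize([100, 100, 101, 101, 102, 102, 103, 103])
# → [0, 0, 85, 85, 170, 170, 255, 255]
```

Image libraries (e.g. Pillow's `ImageOps.equalize`) apply the same mapping per channel over a whole image.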
6. Dynamic Data Classification Strategy and Security Management in Higher Education: A Case Study of Wenzhou Medical University
Authors: Chunyan Yang, Feng Chen, Jiahao He. 《教育技术与创新》 (Educational Technology and Innovation), 2025, No. 1, pp. 1-10
In the context of the rapid development of digital education, the security of educational data has become an increasing concern. This paper explores strategies for the classification and grading of educational data and constructs a higher-education data security management and control model centered on the integration of medical and educational data. By implementing a multi-dimensional strategy of dynamic classification, real-time authorization, and secure execution according to educational data security levels, dynamic access control effectively enhances the security and controllability of educational data, providing a secure foundation for data sharing and openness.
Keywords: data classification strategy; dynamic classification; data security management
7. LAMOST Spectral Data Processing: Classification, Redshift Measurement, and Data Product Creation
Authors: Xiao Kong, A-Li Luo. 《Research in Astronomy and Astrophysics》, 2025, No. 5, pp. 12-18
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) has become a crucial resource in astronomical research, offering a vast amount of spectral data for stars, galaxies, and quasars. This paper presents the data processing methods used by LAMOST, focusing on the classification and redshift measurement of large spectral data sets through template matching, as well as the creation of data products. Additionally, it details the construction of the Multiple Epoch Catalogs by integrating LAMOST spectral data with photometric data from Gaia and Pan-STARRS, and explains the creation of both low- and medium-resolution data products.
Keywords: catalogs; methods: data analysis; techniques: spectroscopic; surveys
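Template-matching redshift measurement of the kind described can be sketched as a chi-square grid search; the Gaussian-line template and trial grid here are toy assumptions, not LAMOST's actual templates or pipeline:

```python
import math

def template(wavelength, line=6563.0, width=5.0):
    # Rest-frame toy template: a Gaussian emission line on a flat continuum
    return 1.0 + 2.0 * math.exp(-((wavelength - line) / width) ** 2)

def measure_redshift(obs_wl, obs_flux, z_grid):
    """Try each trial redshift, de-redshift the observed wavelengths,
    and keep the z that minimizes the squared residuals to the template."""
    best_z, best_chi2 = None, float("inf")
    for z in z_grid:
        chi2 = sum((f - template(w / (1 + z))) ** 2
                   for w, f in zip(obs_wl, obs_flux))
        if chi2 < best_chi2:
            best_z, best_chi2 = z, chi2
    return best_z

true_z = 0.02
wl = [6500.0 + i for i in range(400)]
flux = [template(w / (1 + true_z)) for w in wl]   # noiseless mock spectrum
z_grid = [i / 1000 for i in range(50)]            # trial z from 0.000 to 0.049
z_hat = measure_redshift(wl, flux, z_grid)
```

Real pipelines compare against template libraries per object class (star, galaxy, quasar) and weight residuals by flux uncertainties.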
8. A Comparative Study of Data Representation Techniques for Deep Learning-Based Classification of Promoter and Histone-Associated DNA Regions
Authors: Sarab Almuhaideb, Najwa Altwaijry, Isra Al-Turaiki, Ahmad Raza Khan, Hamza Ali Rizvi. 《Computers, Materials & Continua》, 2025, No. 11, pp. 3095-3128
Many bioinformatics applications require determining the class of a newly sequenced deoxyribonucleic acid (DNA) sequence, making DNA sequence classification an integral step in bioinformatics analysis, where large biomedical datasets are transformed into valuable knowledge. Existing methods rely on a feature extraction step and suffer from high computational time requirements. In contrast, newer approaches leveraging deep learning have shown significant promise in enhancing accuracy and efficiency. In this paper, we investigate the performance of various deep learning architectures for DNA sequence classification: Convolutional Neural Network (CNN), CNN-Long Short-Term Memory (CNN-LSTM), CNN-Bidirectional Long Short-Term Memory (CNN-BiLSTM), Residual Network (ResNet), and InceptionV3. Various numerical and visual data representation techniques are used to represent the input datasets, including label encoding, k-mer sentence encoding, k-mer one-hot vectors, Frequency Chaos Game Representation (FCGR), and 5-Color Map (ColorSquare). Three datasets are used for training: H3, H4, and the DNA Sequence Dataset (Yeast, Human, Arabidopsis thaliana). Experiments determine which combination of DNA representation and deep learning architecture yields the best performance for the classification task. Our results indicate that a hybrid CNN-LSTM network trained on DNA sequences represented as one-hot encoded k-mer sequences performs best, achieving an accuracy of 92.1%.
Keywords: DNA sequence classification; deep learning; data visualization
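The k-mer one-hot representation, one of the encodings compared, can be sketched directly; with k=3 each overlapping k-mer becomes a 64-dimensional indicator vector:

```python
from itertools import product

def kmer_one_hot(seq, k=3):
    """Encode a DNA sequence as a list of one-hot vectors over all 4**k
    possible k-mers, one vector per overlapping window."""
    vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    vectors = []
    for i in range(len(seq) - k + 1):
        v = [0] * len(vocab)
        v[vocab[seq[i:i + k]]] = 1
        vectors.append(v)
    return vectors

enc = kmer_one_hot("ACGTAC", k=3)   # 4 overlapping 3-mers, each 64-dimensional
```

A CNN-LSTM model would consume this as a (sequence_length - k + 1) x 64 matrix per sequence.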
9. Bird Species Classification Using Image Background Removal for Data Augmentation
Authors: Yu-Xiang Zhao, Yi Lee. 《Computers, Materials & Continua》, 2025, No. 7, pp. 791-810
Bird species classification is not only a challenging topic in artificial intelligence but also a domain closely related to environmental protection and ecological research. Additionally, performing edge computing on low-level devices using small neural networks is an important research direction. In this paper, we use the EfficientNetV2B0 model for bird species classification, applying transfer learning on a dataset of 525 bird species. We also employ the BiRefNet model to remove backgrounds from images in the training set; the background-removed images are mixed with the original training set as a form of data augmentation, helping the model focus on key features. By combining data augmentation with transfer learning, we trained a highly accurate and efficient bird species classification model. The training process is divided into a transfer learning stage and a fine-tuning stage: in the transfer learning stage, only the newly added custom layers are trained, while in the fine-tuning stage all pre-trained layers except the batch normalization layers are fine-tuned. According to the experimental results, the proposed model not only has a size advantage over other models but also outperforms them on various metrics, achieving an accuracy of 99.54% and a precision of 99.62%, demonstrating both lightweight design and high accuracy. To confirm the credibility of the results, we interpret the model with heatmaps, which show that it clearly highlights the image feature areas, and we perform 10-fold cross-validation. Finally, this paper proposes a model with low training cost and high accuracy, making it suitable for deployment on edge computing devices to provide lighter and more convenient services.
Keywords: bird species classification; edge computing; EfficientNet; BiRefNet; data augmentation
10. A dynamic load-balanced scheme based on flow classification for software-defined data center
Authors: YU Yang, JING Zhengjun, CHEN Qianmin, WANG Xiumei. 《High Technology Letters》, 2025, No. 4, pp. 338-346
Aiming at problems such as low throughput and unbalanced load in data center networks caused by traditional multipath routing strategies, a dynamic load balancing strategy for flow classification, oriented to the Fat-Tree topology and based on the software-defined network (SDN) architecture, is proposed, named DLB-FC. Multi-index evaluation methods covering link state information and network traffic characteristics are considered. The DLB-FC mechanism dynamically adjusts the flow classification threshold to differentiate between large and small flows, and selects different forwarding paths to meet the transmission performance requirements of different flow characteristics. On this basis, an SDN simulation platform is built for performance testing. The simulation results show that the DLB-FC algorithm can dynamically distinguish large flows from small flows and achieve load balancing effectively. Compared with the equal-cost multi-path (ECMP), global first fit (GFF), and minimum total delay load routing (MTDLR) algorithms, the DLB-FC scheme improves the network throughput and link utilization of the data center network, and also reduces transmission delay with better load balance.
Keywords: data center; software-defined network; load balance; multi-index evaluation; Fat-Tree
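The large/small ("elephant"/"mice") flow split with an adaptive threshold can be sketched as follows; the percentile rule is an illustrative stand-in for DLB-FC's richer multi-index evaluation:

```python
def dynamic_threshold(byte_counts, percentile=0.9):
    """Set the elephant-flow threshold from the recent traffic sample
    itself, so the split adapts as the traffic mix changes."""
    ordered = sorted(byte_counts)
    return ordered[int(percentile * (len(ordered) - 1))]

def classify_flows(byte_counts, percentile=0.9):
    t = dynamic_threshold(byte_counts, percentile)
    return [("large" if b > t else "small") for b in byte_counts]

# Mostly small flows with two heavy hitters (byte counts are illustrative)
flows = [120, 90, 100, 80, 15000, 110, 95, 130, 20000, 105]
labels = classify_flows(flows)
```

In an SDN controller, large flows would then be routed on least-loaded paths while small flows take low-latency defaults.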
11. Legume Cowpea Leaves Classification for Crop Phenotyping Using Deep Learning and Big Data
Authors: Vijaya Choudhary, Paramita Guha, Giovanni Pau. 《Journal on Big Data》, 2025, No. 1, pp. 1-14
Crop phenotyping plays a critical role in precision agriculture by enabling the accurate assessment of plant traits, supporting improved crop management, breeding programs, and yield optimization. However, cowpea leaves present unique challenges for automated phenotyping due to their diverse shapes, complex vein structures, and variations caused by environmental conditions. This research presents a deep learning-based approach to the classification of cowpea leaf images to support crop phenotyping tasks. Given the limited availability of annotated datasets, data augmentation techniques were employed to artificially expand the original small dataset while preserving essential leaf characteristics. Various image processing methods were applied to enrich the dataset, ensuring better feature representation without significant information loss. A deep neural network, specifically the MobileNet architecture, was utilized for its efficiency in capturing multi-scale features and handling image data with limited computational resources. The model trained on the augmented dataset achieved an accuracy of 94.12% on the cowpea leaf classification task. These results demonstrate the effectiveness of data augmentation in enhancing model generalization and learning capabilities.
Keywords: cowpea leaves; leaf classification; image processing; smart agriculture; supervised learning; big data
12. Enhancing Medical Image Classification with BSDA-Mamba: Integrating Bayesian Random Semantic Data Augmentation and Residual Connections
Authors: Honglin Wang, Yaohua Xu, Cheng Zhu. 《Computers, Materials & Continua》, 2025, No. 6, pp. 4999-5018
Medical image classification is crucial in disease diagnosis, treatment planning, and clinical decision-making. We introduce a novel medical image classification approach, named BSDA-Mamba, that integrates Bayesian Random Semantic Data Augmentation (BSDA) with a Vision Mamba-based model for medical image classification (MedMamba), enhanced by residual connection blocks. BSDA augments medical image data semantically, enhancing the model's generalization ability and classification performance. MedMamba, a deep learning-based state space model, excels at capturing long-range dependencies in medical images. By incorporating residual connections, BSDA-Mamba further improves feature extraction capabilities. Through comprehensive experiments on eight medical image datasets, we demonstrate that BSDA-Mamba outperforms existing models in accuracy, area under the curve, and F1-score. Our results highlight BSDA-Mamba's potential as a reliable tool for medical image analysis, particularly in handling diverse imaging modalities from X-rays to MRI. The open-sourcing of our model's code and datasets will facilitate the reproduction and extension of our work.
Keywords: deep learning; medical image classification; data augmentation; visual state space model
13. Innovative Machine Learning Approaches for Drinking Water Quality Classification: Addressing Data Imbalances with Custom SMOTE Sampling Strategy
Authors: Borislava Toleva, Ivan Ivanov, Kalina Kitova. 《Journal of Environmental & Earth Sciences》, 2025, No. 3, pp. 262-273
This study demonstrates the complexity and importance of water quality as a measure of the health and sustainability of ecosystems that directly influence biodiversity, human health, and the world economy. The predictability of water quality thus plays a crucial role in managing ecosystems, making informed decisions, and ensuring proper environmental management. This study addresses these challenges by proposing an effective machine learning methodology applied to the public "Water Quality" dataset. The methodology models the dataset for classification analysis with high values of evaluation metrics such as accuracy, sensitivity, and specificity, and rests on two approaches: (a) the SMOTE method to deal with unbalanced data and (b) carefully applied classical machine learning models. This paper uses Random Forests, Decision Trees, XGBoost, and Support Vector Machines because they can handle large and skewed datasets and provide high accuracy in water quality classification. A key contribution of this work is the use of custom sampling strategies within the SMOTE approach, which significantly enhanced performance metrics and improved class-imbalance handling. The results demonstrate significant improvements in predictive performance, achieving the highest reported metrics with the XGBoost model: accuracy (98.92% vs. 96.06%), sensitivity (98.3% vs. 71.26%), and F1-score (98.37% vs. 79.74%). These improvements underscore the effectiveness of the custom SMOTE sampling strategies in addressing class imbalance. The findings contribute to environmental management by enabling ecology specialists to develop more accurate strategies for monitoring, assessing, and managing drinking water quality, ensuring better ecosystem and public health outcomes.
Keywords: data modeling; class imbalance; SMOTE; machine learning; classification; model estimation; water quality dataset
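The SMOTE idea with a custom sampling target can be sketched without the imbalanced-learn library; the random-pair interpolation below simplifies real SMOTE's nearest-neighbour step, and the target size is illustrative:

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by interpolating between two
    randomly chosen minority samples (real SMOTE interpolates toward a
    nearest neighbour; this sketch picks the partner at random)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        lam = rng.random()   # position along the segment between a and b
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

# Custom sampling strategy: grow the minority class to a chosen target size
# rather than full parity with the majority class (values illustrative).
minority = [(5.1, 0.2), (4.9, 0.3), (5.3, 0.1), (5.0, 0.25)]
target = 10
new_points = smote_like(minority, target - len(minority))
```

With imbalanced-learn, the same idea is expressed by passing a per-class dict as SMOTE's `sampling_strategy`.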
14. Spatio-Temporal Earthquake Analysis via Data Warehousing for Big Data-Driven Decision Systems
Authors: Georgia Garani, George Pramantiotis, Francisco Javier Moreno Arboleda. 《Computers, Materials & Continua》, 2026, No. 3, pp. 1963-1988
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach to managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization, providing a foundation for integrating seismic analysis into broader big-data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
Keywords: data warehouse; data analysis; big data; decision systems; seismology; data visualization
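The array-based versus bridge-table trade-off can be illustrated with toy in-memory tables (the schema contents are hypothetical): fact-centric lookups read the embedded array directly, while dimension-centric lookups scan the bridge:

```python
# Two ways to attach multiple affected regions to one earthquake fact row.

# Bridge-table design: a separate table of (quake_id, region_id) pairs.
bridge_facts = {1: {"mag": 6.4}, 2: {"mag": 5.1}}
bridge = [(1, "A"), (1, "B"), (2, "B")]

# Array-based design: the region keys live inside the fact row itself,
# so a fact-centric query needs no join.
array_facts = {1: {"mag": 6.4, "regions": ["A", "B"]},
               2: {"mag": 5.1, "regions": ["B"]}}

def regions_of(quake_id):
    # Fact-centric query: one lookup, no join (array design's strength)
    return array_facts[quake_id]["regions"]

def quakes_in(region_id):
    # Dimension-centric query: scan the bridge pairs (bridge design's strength)
    return sorted(q for q, r in bridge if r == region_id)
```

The paper's hybrid schema keeps both representations so each query class hits its faster path.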
15. Adaptive feature selection method for high-dimensional imbalanced data classification
Authors: WU Jianzhen, XUE Zhen, ZHANG Liangliang, YANG Xu. 《Journal of Measurement Science and Instrumentation》, 2025, No. 4, pp. 612-624
Data collected in fields such as cybersecurity and biomedicine often exhibit high dimensionality and class imbalance. To address the problem of low classification accuracy for minority-class samples arising from numerous irrelevant and redundant features in high-dimensional imbalanced data, we propose a novel feature selection method named AMF-SGSK, based on an adaptive multi-filter and subspace-based gaining-sharing knowledge. First, a balanced dataset is obtained by random under-sampling. Second, combining the feature importance score with the AUC score of each filter method, we propose a concept called feature hardness to judge the importance of a feature, which adaptively selects the essential features. Finally, the optimal feature subset is obtained by gaining-sharing knowledge in multiple subspaces. This approach effectively achieves dimensionality reduction for high-dimensional imbalanced data. Experiments on 30 benchmark imbalanced datasets show that AMF-SGSK outperforms eight commonly used algorithms, including BGWO and IG-SSO, in terms of F1-score, AUC, and G-mean. The mean values of F1-score, AUC, and G-mean for AMF-SGSK are 0.950, 0.967, and 0.965, respectively, the highest among all algorithms, and the mean G-mean is higher than those of IG-PSO, ReliefF-GWO, and BGOA by 3.72%, 11.12%, and 20.06%, respectively. Furthermore, the selected feature ratio is below 0.01 across the ten selected datasets, further demonstrating the method's overall superiority over competing approaches. AMF-SGSK adaptively removes irrelevant and redundant features and effectively improves the classification accuracy of high-dimensional imbalanced data, providing scientific and technological references for practical applications.
Keywords: high-dimensional imbalanced data; adaptive feature selection; adaptive multi-filter; feature hardness; gaining-sharing knowledge based algorithm; metaheuristic algorithm
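The feature-hardness idea, combining each filter's per-feature importance scores with that filter's AUC, might look like the following sketch; the AUC-weighted average is an assumed combination rule, not the paper's exact formula:

```python
def feature_hardness(filter_scores, filter_auc):
    """Combine per-feature importance scores from several filter methods,
    weighting each filter by its (normalized) AUC so filters that rank
    features well contribute more to the final ranking."""
    n_features = len(next(iter(filter_scores.values())))
    hardness = [0.0] * n_features
    total_auc = sum(filter_auc.values())
    for name, scores in filter_scores.items():
        w = filter_auc[name] / total_auc
        for i, s in enumerate(scores):
            hardness[i] += w * s
    return hardness

# Two hypothetical filters scoring three features, with their AUCs
scores = {"chi2": [0.9, 0.1, 0.4], "relieff": [0.8, 0.2, 0.3]}
auc = {"chi2": 0.75, "relieff": 0.85}
h = feature_hardness(scores, auc)   # feature 0 ranks highest, feature 1 lowest
```

Features whose hardness exceeds an adaptive cutoff would then seed the subspace search stage.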
16. Real-world data and evidence: pioneering frontiers in precision oncology
Authors: Jingxin JIANG, Weiwei PAN, Liyang SUN, Liwei PANG, Hailang CHEN, Jian HUANG, Wuzhen CHEN. 《Journal of Zhejiang University-Science B (Biomedicine & Biotechnology)》, 2026, No. 1, pp. 44-57
Real-world studies (RWSs) have emerged as a transformative force in oncology research, complementing traditional randomized controlled trials (RCTs) by providing comprehensive insights into cancer care within routine clinical settings. This review examines the evolving landscape of RWSs in oncology, focusing on their implementation, methodological considerations, and impact on precision medicine. We systematically analyze how RWSs leverage diverse data sources, including electronic health records (EHRs), insurance claims, and patient registries, to generate evidence that bridges the gap between controlled clinical trials and real-world clinical practice. The review underscores the key contributions of RWSs, including capturing therapeutic outcomes in traditionally underrepresented populations, expanding drug indications, and evaluating long-term safety and effectiveness in routine clinical settings. While acknowledging significant challenges, including data quality variability and privacy concerns, we discuss how emerging technologies like artificial intelligence are helping to address these limitations. The integration of RWSs with traditional clinical research is revolutionizing the paradigm of precision oncology and enabling more personalized treatment approaches based on real-world evidence.
Keywords: real-world study (RWS); precision oncology; real-world data (RWD); study design; data characterization
17. Toward Secure and Auditable Data Sharing: A Cross-Chain CP-ABE Framework
Authors: Ye Tian, Zhuokun Fan, Yifeng Zhang. 《Computers, Materials & Continua》, 2026, No. 4, pp. 1509-1529
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-encryption scheme for multi-chain collaboration. The scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent assumption (q-PBDHE). In performance and experimental evaluations against several advanced schemes, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys while preserving security.
Keywords: data sharing, blockchain, attribute-based encryption, dynamic permissions
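The scheme's access-control side can be illustrated with a plain access-structure evaluation. The sketch below is a minimal, hypothetical model of how a CP-ABE policy tree (AND/OR gates over attributes) is checked against a user's attribute set; it shows only the policy logic, not the pairing-based cryptography, multi-authority key issuance, or revocation mechanism described in the paper.

```python
def satisfies(policy, attrs):
    """Evaluate a simple AND/OR access-structure tree against a user's
    attribute set. Illustrative of the access-policy side of CP-ABE only;
    the tree encoding ("AND"/"OR"/"ATTR" tuples) is an assumption made
    for this sketch, not the paper's construction."""
    op, children = policy
    if op == "ATTR":
        return children in attrs          # leaf: does the user hold this attribute?
    results = [satisfies(c, attrs) for c in children]
    return all(results) if op == "AND" else any(results)

# Hypothetical policy: doctor AND (cardiology OR oncology)
policy = ("AND", [("ATTR", "doctor"),
                  ("OR", [("ATTR", "cardiology"), ("ATTR", "oncology")])])
```

In a real CP-ABE deployment, decryption succeeds only if the key's attributes satisfy the ciphertext's policy; this tree check is the combinatorial core of that condition.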
Design,Realization,and Evaluation of Faster End-to-End Data Transmission over Voice Channels
18
Authors: Jian Huang, Mingwei Li, Yulong Tian, Yi Yao, Hao Han. Computers, Materials & Continua, 2026, Issue 4, pp. 1650-1675 (26 pages)
With the popularization of new technologies, telephone fraud has become a main means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificate through a voice channel so that the recipient can verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech and reject the input waveform. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear-frequency-modulation signals, making them more easily transmitted and decoded in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position-remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in robustness and data transfer rate. This technology establishes foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for strong caller authentication and preventing telephone fraud at its root.
Keywords: deep learning, modulation, chirp, data over voice
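The 3-bits-per-symbol chirp idea can be sketched numerically. The mapping below (two base frequency bands, sweep direction, binary phase offset) and all parameter values are assumptions chosen for illustration; the paper's actual symbol alphabet, sample rate, and band plan are not given in the abstract.

```python
import numpy as np

def chirp_symbol(bits3, fs=8000, dur=0.05):
    """Encode a 3-bit value as one linear-chirp symbol.

    Hypothetical mapping: bit 2 selects the frequency band, bit 1 the
    sweep direction (up/down), bit 0 a 0/pi phase offset. The bands sit
    inside the ~300-3400 Hz telephone voice passband.
    """
    b_freq, b_dir, b_phase = (bits3 >> 2) & 1, (bits3 >> 1) & 1, bits3 & 1
    f0 = 600.0 if b_freq == 0 else 1400.0
    f1 = f0 + 400.0
    if b_dir:                               # swap endpoints -> downward sweep
        f0, f1 = f1, f0
    phase = 0.0 if b_phase == 0 else np.pi
    t = np.arange(int(fs * dur)) / fs
    # linear chirp: instantaneous phase 2*pi*(f0*t + ((f1-f0)/(2*dur))*t^2)
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t ** 2) + phase)

# three symbols back to back, e.g. the bit groups 000, 101, 111
sig = np.concatenate([chirp_symbol(b) for b in (0b000, 0b101, 0b111)])
```

At these assumed parameters each symbol lasts 50 ms, i.e. a raw rate of 60 bit/s before framing overhead; the paper's demodulator replaces matched filtering with a MobileNetV3-Small classifier.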
A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
19
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026, Issue 1, pp. 1985-2005 (21 pages)
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest area under the receiver operating characteristic curve across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation, autoencoder, deep learning, missing mechanisms
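The three-part composite loss can be sketched as follows. The weighting coefficients and the exact forms of the noise-aware term and variance penalty are assumptions for illustration; the abstract names the three components but not their formulas.

```python
import numpy as np

def composite_loss(x_true, x_recon, miss_mask, x_noisy_recon=None,
                   alpha=1.0, beta=0.1, gamma=0.01):
    """Composite imputation loss, a sketch of the paper's three components:
    masked MSE + noise-aware regularization + variance penalty.
    alpha/beta/gamma and the penalty forms are illustrative assumptions."""
    # (i) guided, masked MSE: error counted only on the (originally missing)
    # entries flagged by miss_mask
    m = miss_mask.astype(float)
    masked_mse = np.sum(m * (x_true - x_recon) ** 2) / max(m.sum(), 1.0)
    # (ii) noise-aware term: reconstruction from a corrupted copy of the input
    # should stay close to the clean reconstruction
    noise_term = 0.0
    if x_noisy_recon is not None:
        noise_term = np.mean((x_recon - x_noisy_recon) ** 2)
    # (iii) variance penalty: discourage collapsed, near-constant outputs by
    # matching per-feature variance of the reconstruction to the data
    var_pen = np.mean((x_recon.var(axis=0) - x_true.var(axis=0)) ** 2)
    return alpha * masked_mse + beta * noise_term + gamma * var_pen
```

In training, `x_recon` would be the autoencoder output and the loss would be minimized with the usual gradient machinery; a perfect reconstruction drives all three terms to zero.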
ISTIRDA:An Efficient Data Availability Sampling Scheme for Lightweight Nodes in Blockchain
20
Authors: Jiaxi Wang, Wenbo Sun, Ziyuan Zhou, Shihua Wu, Jiang Xu, Shan Ji. Computers, Materials & Continua, 2026, Issue 4, pp. 685-700 (16 pages)
Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling a small number of random codeword symbols. Built on ISTIR, an improved Reed-Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code-rate adjustment to preserve soundness while lowering communication. The paper formalizes opening consistency and proves security with bounded error in the random oracle model, achieving polylogarithmic verifier queries without a trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing the bandwidth and latency pressure on lightweight nodes.
Keywords: blockchain scalability, data availability sampling, lightweight nodes
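The generic sampling argument behind DAS, a light client gains confidence by drawing uniform random symbols of the erasure-coded block, can be sketched with a one-line bound. This is the standard textbook calculation, not ISTIRDA's ISTIR-specific soundness analysis.

```python
import math

def samples_needed(confidence, withheld_fraction):
    """Number of uniform random symbol samples a light client needs so that,
    if at least `withheld_fraction` of the extended codeword is unavailable,
    at least one sample hits a missing symbol with probability >= confidence.

    Generic DAS bound: P(all s samples hit available symbols) <= (1 - f)^s,
    so s >= log(1 - confidence) / log(1 - f).
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - withheld_fraction))
```

For a rate-1/2 code, where an adversary must withhold at least half the extended symbols to prevent reconstruction, about 7 samples already give 99% confidence; this constant, small sample count is what makes availability checking feasible for lightweight nodes.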