Journal Articles
432,073 articles found
1. SIRT Visualization and Hotspot Analysis Based on the Web of Science Database for the Treatment of Liver Neoplasms
Authors: Guangyuan Zhao, Mengyun Bai, Dengxiang Liu — Proceedings of Anticancer Research, 2026, Issue 1, pp. 33-47 (15 pages)
Selective internal radiation therapy (SIRT) using yttrium-90 has been used to treat hepatocellular carcinoma, intrahepatic cholangiocarcinoma, and other malignant tumors that have spread locally to the liver. The authors applied a bibliometric approach, using "Yttrium 90 AND Liver Neoplasms" as the search query, and retrieved the relevant English-language literature from the Web of Science Core Collection database through November 30, 2025. EndNote and Excel were used for literature management and statistical analysis. VOSviewer and CiteSpace were used for social network and chronological analysis of countries, institutions, authors, and keywords, as well as for author co-citation and keyword burst analysis. The aim of this study was to serve as a reference for future research by systematically reviewing the international literature on yttrium-90 treatment of liver neoplasms and summarizing the research status and hot trends in this field. In recent years, the research focus has increasingly shifted toward high-quality, multi-center clinical trials that combine SIRT and targeted systemic therapy with hepatectomy after tumor downstaging. This approach is likely to remain a significant research trend in the field.
Keywords: Yttrium-90; liver neoplasms; Web of Science; CiteSpace; VOSviewer; bibliometric method; therapeutics
2. Bibliometric analysis of literature regarding ostomy research based on the Web of Science database
Authors: Lan Gao, Xiu-Zhen Cao, Ying Zhang, Tai-Fang Liu, Ai-Hua Zhang — Frontiers of Nursing (CAS), 2018, Issue 3, pp. 193-198 (6 pages)
Objective: To analyze the publication status and research hotspots of Science Citation Index (SCI)-indexed ostomy literature worldwide, and to provide references for scientific research and clinical work in the stoma care field. Methods: Based on the Web of Science core database and its built-in analysis function, HistCite software and Excel were used to study the published research on ostomy patients. Results: A total of 1,262 articles were published between 1910 and 2016 by authors from 48 countries and regions and 1,347 research institutions, appearing in 321 journals, with 4,048 first authors and co-authors; globally, the number of authors showed slow growth each year. The USA held a clear lead, and Canada and Turkey were also active; China's publication output ranked 15th in the world. The journal that published most often was the Journal of Wound Ostomy and Continence Nursing. The disciplines most often crossed were surgery and nursing, and this interdisciplinarity should be considered important. The most prolific author in the field was Grant, and the most highly cited article was entitled "Living with a stoma: a review of the literature". Conclusions: Global stoma-related research is developing steadily. The research hotspot is nursing before and after stoma surgery. China and the USA are leading countries in this research and should follow recent trends to improve the depth and breadth of work in the field.
Keywords: Web of Science; ostomy; stoma; literature measurement; nursing
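Counts like the per-country and per-year tallies reported in this abstract reduce to simple aggregation over exported bibliographic records. A minimal sketch with invented records (the `country` and `year` field names are assumptions for illustration, not the actual Web of Science export schema):

```python
from collections import Counter

# Invented bibliographic records; field names are hypothetical,
# not the real Web of Science export format.
records = [
    {"country": "USA", "year": 2014},
    {"country": "USA", "year": 2015},
    {"country": "Canada", "year": 2015},
    {"country": "Turkey", "year": 2016},
    {"country": "USA", "year": 2016},
]

by_country = Counter(r["country"] for r in records)  # publications per country
by_year = Counter(r["year"] for r in records)        # publications per year
print(by_country.most_common(1))  # → [('USA', 3)]
```

The same pattern extends to journals, institutions, and author counts by swapping the grouped field.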
3. Bibliometric analysis of Wendan decoction research based on the Web of Science database
Authors: Yu-Feng Zhang, Ting Liu, Zhong-Ping Pu, Hai-Bing Hua — Medical Data Mining, 2023, Issue 3, pp. 10-18 (9 pages)
Objective: With the development of Wendan decoction (WDD) and the growing literature on WDD, we aimed to present an insight into WDD research using bibliometric analysis. Methods: We retrieved data from the Web of Science database from 2008 to 2022. Publication trends were examined using data on journals, authors, institutions, international collaborations, citations, and keywords from the online analytical platform Bibliometric (http://bibliometric.com). Results: Over the years, an increase was observed in the use of keywords including depression, insomnia, Alzheimer's disease, schizophrenia, and hippocampus. The analysis revealed that topics related to WDD have become increasingly prevalent in the last five years, and that the internationally recognized application of WDD is mainly in neuropsychiatric diseases. Conclusion: We charted WDD research progress by identifying and evaluating WDD-focused articles. Our analysis revealed the trend of topics from 2008 to 2022, which can assist scholars in identifying future trends.
Keywords: bibliometric analysis; neuropsychiatric diseases; Web of Science; Wendan decoction
4. Combining different climate datasets better reflects the response of warm-temperate forests to climate: a case study from Mt. Dongling, Beijing
Authors: Shengjie Wang, Haiyang Liu, Shuai Yuan, Chenxi Xu — Journal of Forestry Research, 2026, Issue 2, pp. 131-143 (13 pages)
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing-season months, correlation coefficients, both in magnitude and sign, differed significantly from those based on actual data. The selection of different alternative climate datasets can therefore bias assessments of forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. The scientific and rational selection of alternative climate data is thus essential for dendroecological and climatological research.
Keywords: climate data representativeness; alternative climate data selection; response differences; deciduous broad-leaf forest; warm temperate zone
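The correlations between tree-ring width indices and climate series discussed above are typically Pearson coefficients. A minimal sketch of the computation, with invented placeholder series (not data from the study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ring_width = [0.8, 1.1, 0.9, 1.3, 1.0, 1.4]      # hypothetical ring-width index
station_precip = [310, 420, 330, 500, 380, 520]  # hypothetical station precipitation
print(round(pearson_r(ring_width, station_precip), 3))
```

Running the same comparison against each candidate dataset (station vs. gridded) is what exposes the sign and magnitude differences the abstract describes.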
5. Bibliometric analysis about compassion fatigue based on Web of Science database (cited: 1)
Authors: Xi-Liang Kang, Xiao Gao — Frontiers of Nursing (CAS), 2019, Issue 2, pp. 151-156 (6 pages)
Objective: To analyze the foreign literature on compassion fatigue (CF) and to provide a basis for further improving the level of research in this field. Methods: Based on the Web of Science core database and its built-in analysis function, HistCite software was used to study the published research on CF. Results: A total of 652 papers were retrieved, and literature output showed a year-by-year increasing trend. The United States ranked first, and China ranked eleventh. The Oncology Nursing Forum had the largest number of articles (4.0%). The main research direction was nursing (29.6%), and the main output type was articles (73.2%). The most highly cited keywords were "compassion" and "fatigue". Figley was the most prolific author in the field, and the main research institutions were universities. The most highly cited article was entitled "Compassion fatigue: Psychotherapists' chronic lack of self-care". Conclusions: In recent years, interest in CF research has remained high. Research in this field in China is still at an early stage and needs to be further explored and promoted.
Keywords: compassion fatigue; Web of Science; literature measurement
6. The Logic and Architecture of Future Data Systems
Authors: Jinghai Li, Li Guo — Engineering, 2025, Issue 4, pp. 14-15 (2 pages)
This article presents views on the future development of data science, with a particular focus on its importance to artificial intelligence (AI). After discussing the challenges of data science, it elucidates a possible approach to tackling these challenges by clarifying the logic and principles of data related to the multi-level complexity of the world. Finally, urgently required actions are briefly outlined.
Keywords: data science; artificial intelligence; future data systems; architecture
7. Diversity, Complexity, and Challenges of Viral Infectious Disease Data in the Big Data Era: A Comprehensive Review (cited: 1)
Authors: Yun Ma, Lu-Yao Qin, Xiao Ding, Ai-Ping Wu — Chinese Medical Sciences Journal, 2025, Issue 1, pp. 29-44, I0005 (17 pages)
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent need to optimize these analytical methods to handle and utilize the information more effectively. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left many researchers uncertain about which data to select, how to access them, and how to utilize them most effectively. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time, and evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Keywords: viral infectious diseases; big data; data diversity and complexity; data standardization; artificial intelligence; data analysis
8. Visualization analysis of research hotspots and trends in the relationship between age-related sarcopenia and mitochondria based on the Web of Science database
Authors: 余若如, 冯诚, 郑淑煜, 吴磊, 勇入琳 — 数理医药学杂志, 2026, Issue 1, pp. 34-44 (11 pages)
Objective: To analyze the status, hotspots, and development trends of research on the relationship between age-related sarcopenia and mitochondria from 2008 to 2024, and to provide a reference for subsequent research in this field. Methods: Literature on the relationship between age-related sarcopenia and mitochondria indexed in the Web of Science Core Collection from January 1, 2008 to December 31, 2024 was retrieved. The Bibliometrix package in R 4.2.0 was used for quantitative and visual analysis of publishing countries, collaboration networks, authors, institutions, journals, highly cited literature, keywords, and citation frequency, and the H-index was used to assess the academic influence of authors. Results: A total of 1,219 articles were included, and the annual publication volume generally increased from 2008 to 2024. The top three countries by cumulative publications were the United States, China, and Italy; the top three journals were Journal of Cachexia, Sarcopenia and Muscle, International Journal of Molecular Sciences, and Experimental Gerontology; the top six authors by H-index were Marzettie E, Calvani R, Picca A, Van Remmen H, Leeuwenbugh C, and Bernabel R; the most cited article was "Sarcopenia: aging-related loss of muscle mass and function"; and the five most frequent keywords were skeletal muscle, sarcopenia, oxidative stress, exercise, and expression. Conclusion: Research on the relationship between age-related sarcopenia and mitochondria is developing well. Future work should strengthen cross-country, cross-institution, and interdisciplinary collaboration, with particular attention to the effects of mitochondrial fusion proteins on mitochondrial function and to dietary and exercise interventions for age-related sarcopenia.
Keywords: age-related sarcopenia; mitochondria; Web of Science; bibliometrics; visualization analysis
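The H-index used above to rank author influence has a simple definition: the largest h such that the author has h papers each cited at least h times. A minimal sketch (the citation counts are made up):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # paper at this rank still clears the threshold
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
print(h_index([10, 8, 5, 4, 3]))  # → 4
```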
9. Design, Realization, and Evaluation of Faster End-to-End Data Transmission over Voice Channels
Authors: Jian Huang, Mingwei Li, Yulong Tian, Yi Yao, Hao Han — Computers, Materials & Continua, 2026, Issue 4, pp. 1650-1675 (26 pages)
With the popularization of new technologies, telephone fraud has become a major means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificates through a voice channel so that the recipient can verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers; for example, speech activity detection may quickly classify encoded signals as non-speech and reject the input waveform. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them easier to transmit and decode in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root cause.
Keywords: deep learning; modulation; chirp; data over voice
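The 3-bits-per-symbol idea can be illustrated by synthesizing linear chirp symbols in which sweep direction, bandwidth, and initial phase each carry one bit. This is a toy sketch under invented parameters (sample rate, frequency band, and symbol length are assumptions here), not the paper's actual modulator:

```python
import math

SAMPLE_RATE = 8000  # assumed narrowband voice-channel rate (Hz)
SYMBOL_LEN = 400    # assumed samples per symbol (50 ms)

def chirp_symbol(bits):
    """Map 3 bits to a linear chirp: (sweep direction, bandwidth, initial phase)."""
    b_dir, b_bw, b_ph = bits
    f0 = 500.0                              # assumed start frequency (Hz)
    bw = 1000.0 if b_bw else 500.0          # bandwidth bit selects sweep width
    if b_dir:                               # direction bit: up- or down-chirp
        f0, bw = f0 + bw, -bw
    phase0 = math.pi if b_ph else 0.0       # phase bit: 0 or pi offset
    k = bw / (SYMBOL_LEN / SAMPLE_RATE)     # sweep rate (Hz per second)
    samples = []
    for n in range(SYMBOL_LEN):
        t = n / SAMPLE_RATE
        # instantaneous phase of a linear chirp: 2*pi*(f0*t + k*t^2/2) + phase0
        samples.append(math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t) + phase0))
    return samples

up = chirp_symbol((0, 0, 0))
down = chirp_symbol((1, 0, 0))
```

A demodulator (a CNN in the paper) then recovers the three bits by distinguishing the eight resulting waveform shapes.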
10. Research on the Optimal Allocation of Community Elderly Care Service Resources Based on Big Data Technology
Authors: Shuying Li — Journal of Clinical and Nursing Research, 2026, Issue 1, pp. 241-246 (6 pages)
With the accelerating aging of China's population, the demand for community elderly care services has become diversified and personalized. However, problems such as an insufficient total volume of care service resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core strengths in data collection, analysis and mining, and accurate prediction, provides a new solution for allocating community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimization strategies such as building a big data analysis platform and accurately identifying the elderly's care needs, aiming to provide operable path references for the construction of community elderly care service systems, promote the elderly care service goal of "adequate support and proper care for the elderly", and boost the high-quality development of China's elderly care service industry.
Keywords: big data technology; community; elderly care; service resources
11. Constructions of Control Sequence Set for Hierarchical Access in Data Link Network
Authors: Niu Xianhua, Ma Jiabei, Zhou Enzhi, Wang Yaoxuan, Zeng Bosen, Li Zhiping — China Communications, 2026, Issue 1, pp. 67-80 (14 pages)
As an important resource in a data link, time slots should be strategically allocated to enhance transmission efficiency and resist eavesdropping, especially given the tremendous increase in the number of nodes and diverse communication needs. It is crucial to design control sequences with robust randomness and conflict-freeness to properly support differentiated access control in the data link. In this paper, we propose a hierarchical access control scheme based on control sequences to achieve high utilization of time slots and differentiated access control. A theoretical bound on the hierarchical control sequence set is derived to characterize the constraints on the parameters of the sequence set. Moreover, two classes of optimal hierarchical control sequence sets satisfying the theoretical bound are constructed, both of which enable the scheme to achieve maximum utilization of time slots. Compared with the fixed time slot allocation scheme, our scheme reduces the symbol error rate by up to 9%, indicating a significant improvement in anti-interference and anti-eavesdropping capability.
Keywords: control sequence; data link; hierarchical access control; theoretical bound
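The conflict-freeness property named above can be checked mechanically: in every frame, no two nodes may be assigned the same time slot. A toy checker with invented cyclic-shift sequences (not the paper's optimal constructions):

```python
def is_conflict_free(sequences):
    """sequences[i][f] = time slot used by node i in frame f.
    Returns True if no two nodes share a slot in any frame."""
    frames = len(sequences[0])
    for f in range(frames):
        slots = [seq[f] for seq in sequences]
        if len(slots) != len(set(slots)):  # duplicate slot => collision
            return False
    return True

# Cyclic-shift toy sequences over 4 slots: node i uses slot (i + f) mod 4 in frame f
seqs = [[(i + f) % 4 for f in range(4)] for i in range(3)]
print(is_conflict_free(seqs))  # → True
```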
12. Multi-Time Scale Optimization Scheduling of Data Center Considering Workload Shift and Refrigeration Regulation
Authors: Luyao Liu, Xiao Liao, Yiqian Li, Shaofeng Zhang — Energy Engineering, 2026, Issue 2, pp. 451-486 (36 pages)
Data center industries have been facing huge energy challenges due to escalating power consumption and the associated carbon emissions. In the context of carbon neutrality, integrating data centers with renewable energy has become a prevailing trend. To advance renewable energy integration in data centers, it is imperative to thoroughly explore the data centers' operational flexibility. Computing workloads and refrigeration systems are recognized as two promising flexible resources for power regulation within data center micro-grids. This paper identifies and categorizes delay-tolerant computing workloads into three types (long-running non-interruptible, long-running interruptible, and short-running) and develops mathematical time-shifting models for each. Additionally, this paper examines the thermal dynamics of the computer room and derives a time-varying temperature model coupled to refrigeration power. Building on these models, this paper proposes a two-stage, multi-time scale optimization scheduling framework that jointly coordinates computing workload time-shifting in day-ahead scheduling and refrigeration power control in intra-day dispatch to mitigate renewable variability. A case study demonstrates that the framework effectively enhances renewable-energy utilization, improves the operational economy of the data center micro-grid, and mitigates the impact of renewable power uncertainty. The results highlight the potential of coordinated computing workload and thermal system flexibility to support greener, more cost-effective data center operation.
Keywords: data center; renewable energy; load shift; multi-time scale optimization
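The day-ahead workload time-shifting idea can be illustrated with a greedy toy: each delay-tolerant job is placed in the hour, within its allowed window, that has the most renewable headroom left. This is a deliberate simplification invented here (the paper formulates a two-stage optimization, not this greedy rule), with made-up numbers:

```python
def schedule_jobs(renewable, jobs):
    """Greedily place each (name, energy, window) job at the hour in its
    window with the largest remaining renewable headroom."""
    headroom = list(renewable)           # remaining renewable energy per hour
    plan = {}
    for name, energy, window in jobs:
        hour = max(window, key=lambda h: headroom[h])
        headroom[hour] -= energy
        plan[name] = hour
    return plan

renewable = [5, 20, 35, 10]              # hypothetical hourly renewable supply (kWh)
jobs = [
    ("batch-a", 15, [0, 1, 2]),          # delay-tolerant: may run in hours 0-2
    ("batch-b", 15, [1, 2, 3]),
]
print(schedule_jobs(renewable, jobs))    # → {'batch-a': 2, 'batch-b': 1}
```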
13. Research on the Independent Status of the Data Utilization Right
Authors: Zhang Ying — Contemporary Social Sciences, 2026, Issue 1, pp. 140-155 (16 pages)
Among the "three data rights," the data utilization right has been persistently overlooked, like a neglected "middle child" in the data rights family. However, it is precisely during the stages of processing and utilization that data undergoes its transformations and that its economic value is ultimately created. A series of recent policy documents on treating data as a factor of production have emphasized that building a scientific data property rights system requires a fair and efficient mechanism for benefit distribution, one that gives reasonable preference to creators of data value and use value in the income generated by data elements. Constrained by the inertial thinking of property right logic, the data utilization right is often regarded as a "transitional fulcrum" whereby holders of data resources authorize the operators of data products to realize data value. In the future structural design and implementation of the coordination mechanism for the property right system against the backdrop of the data factor-oriented reform, establishing data processing and utilization as an independent right will require two core initiatives: first, attaching importance to the independent protection of benefit distribution; second, implementing risk regulation for data security through optimized governance. These two initiatives will be key to optimizing the data factor governance system and accelerating the release of data value.
Keywords: utilization right; data property rights structure; benefit distribution; risk regulation
14. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim — Computers, Materials & Continua, 2026, Issue 1, pp. 247-274 (28 pages)
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning from non-encrypted to fully encrypted equipment. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. Their effectiveness was then comparatively analyzed from various perspectives using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall on the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (Encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
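The simplest of the class-imbalance sampling techniques mentioned above is random oversampling: minority-class records are duplicated at random until the classes balance. A minimal sketch with made-up flow-metadata rows (the paper compares eight techniques, not necessarily this one):

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

# Hypothetical flow-metadata rows: (duration, bytes), with imbalanced labels
x = [(0.1, 120), (0.2, 80), (0.3, 4000), (0.4, 90), (0.5, 100)]
y = ["benign", "benign", "attack", "benign", "benign"]
x2, y2 = random_oversample(x, y)
print(Counter(y2))  # both classes now have 4 samples
```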
15. Drive-by spatial offset detection for high-speed railway bridges based on fusion analysis of multi-source data from comprehensive inspection train
Authors: Chuang Wang, Jiawang Zhan, Nan Zhang, Yujie Wang, Xinxiang Xu, Zhihang Wang, Zhen Ni — Railway Engineering Science, 2026, Issue 1, pp. 128-148 (21 pages)
The spatial offset of a bridge has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and an inability to identify lateral deformations. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to adaptively decompose non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to fuse features from the multi-source signals and accurately predict the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, supporting their long-term operational safety.
Keywords: high-speed railway bridge; drive-by inspection; spatial offset; multi-source data fusion; deep learning
16. Research on Classification and Desensitization Strategies of Sensitive Educational Data
Authors: Chen Chen, Caixia Liu — Journal of Contemporary Educational Research, 2025, Issue 4, pp. 141-146 (6 pages)
In the era of digital intelligence, data is a key element in promoting social and economic development. Educational data, as a vital component, not only supports teaching and learning but also contains much sensitive information. How to effectively categorize and protect sensitive data has become an urgent issue in educational data security. This paper systematically constructs a multi-dimensional classification framework for sensitive educational data and discusses its security protection strategy from the aspects of identification and desensitization, aiming to provide new ideas for the security management of sensitive educational data and to help build an educational data security ecosystem in the era of digital intelligence.
Keywords: data security; sensitive data; data classification; data desensitization
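Desensitization of the kind discussed above is often implemented as field masking. A minimal sketch (the field names and masking rule are invented here for illustration, not taken from the paper's framework):

```python
import re

def mask_digits(value):
    """Replace all but the last 4 digits with '*'."""
    return re.sub(r"\d(?=\d{4})", "*", value)

def desensitize(record, sensitive_fields=("phone", "id_number")):
    """Return a copy of a student record with sensitive fields masked."""
    out = dict(record)
    for field in sensitive_fields:
        if field in out:
            out[field] = mask_digits(out[field])
    return out

student = {"name": "Li Hua", "phone": "13812345678", "id_number": "20250104321"}
print(desensitize(student)["phone"])  # → *******5678
```

Classification comes first in practice: only fields tagged sensitive by the classification framework are routed through masking rules like this.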
17. GranuSAS: Software for rapid particle size distribution analysis from small angle scattering data
Authors: Qiaoyu Guo, Fei Xie, Xuefei Feng, Zhe Sun, Changda Wang, Xuechen Jiao — Chinese Physics B, 2026, Issue 2, pp. 216-225 (10 pages)
Small angle x-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of the inverse problem in SAXS data analysis often reduces the accuracy of conventional methods. This article proposes a user-friendly software package for PSD analysis, GranuSAS, which employs an algorithm that integrates truncated singular value decomposition (TSVD) with the Chahine method. The approach uses TSVD for data preprocessing, generating a set of initial solutions with noise suppression; a high-quality initial solution is then selected via the L-curve method. This candidate solution is iteratively refined by the Chahine algorithm, enforcing constraints such as non-negativity and improving physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, offers a suggestion for model selection in material systems. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that the software delivers satisfactory accuracy and reliable computational efficiency, providing an easy-to-use tool for researchers in materials science and helping them fully exploit the potential of SAXS in nanoparticle characterization.
Keywords: small angle x-ray scattering; data analysis software; particle size distribution; inverse problem
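The Chahine refinement step described above is a multiplicative update that keeps the solution non-negative: each size bin is rescaled by the kernel-weighted ratio of measured to modeled intensity. A toy sketch for a tiny linear system y = K f (the kernel and data are invented; GranuSAS's actual kernels come from scattering form factors):

```python
def chahine_iterate(K, y, f, steps=50):
    """Chahine-style multiplicative update for a non-negative system y = K f."""
    m, n = len(K), len(K[0])
    for _ in range(steps):
        model = [sum(K[j][i] * f[i] for i in range(n)) for j in range(m)]
        for i in range(n):
            # scale bin i by the kernel-weighted measured/modeled ratio
            num = sum(K[j][i] * y[j] / model[j] for j in range(m))
            den = sum(K[j][i] for j in range(m))
            f[i] *= num / den
    return f

# Tiny invented 2x2 problem with known answer f = [2, 3]
K = [[1.0, 0.0], [0.0, 1.0]]
y = [2.0, 3.0]
print(chahine_iterate(K, y, [1.0, 1.0]))  # → [2.0, 3.0]
```

Starting from a TSVD/L-curve initial guess rather than a flat one, as GranuSAS does, mainly speeds up and stabilizes this refinement.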
18. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab — Computers, Materials & Continua, 2026, Issue 1, pp. 2274-2301 (28 pages)
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, in which training data portions increased from 5% to 50%; using just 10% of the data achieved near-peak performance, with an R² of 85.49%, highlighting an effective trade-off between performance and computational cost. These findings demonstrate the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
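Two of the feature families this abstract combines can be sketched in a few lines. The function names and the length-ratio feature are illustrative assumptions, not the paper's actual feature set; the embedding-based family (which would require a sentence encoder) is omitted:

```python
import math

def cosine_similarity(u, v):
    # Vector-based feature: cosine of the angle between two
    # term-frequency (or TF-IDF) vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def length_ratio(essay, reference):
    # Text-based feature: closeness of the essay's word count
    # to a reference answer's word count (1.0 = identical length).
    a, b = len(essay.split()), len(reference.split())
    return min(a, b) / max(a, b) if max(a, b) else 0.0

def hybrid_features(essay_vec, ref_vec, essay, reference):
    """Combine one vector-based and one text-based feature into the
    kind of feature row a Random Forest regressor could consume."""
    return [cosine_similarity(essay_vec, ref_vec),
            length_ratio(essay, reference)]
```

In the paper's setup, rows like these (plus embedding similarities) would feed the RF model evaluated under 5-fold cross-validation.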
Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
19
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. 《Computers, Materials & Continua》 2026, Issue 1, pp. 389-411 (23 pages)
Objective expertise evaluation of individuals, as a prerequisite for team formation, has been a long-standing desideratum in large software development companies. With the rapid advancements in machine learning methods, and given the reliable data already stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise using metadata from task-tracking systems. We mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like models to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
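The two expertise categories formalized in this abstract can be sketched as simple aggregations over completed tasks. The schema (`tech`, `complexity`, `completed`), the exponential recency decay, and the half-life value are all hypothetical illustrations, not the paper's actual formalization:

```python
from datetime import date

def technology_expertise(tasks, tech, today=date(2025, 1, 1),
                         half_life_days=365.0):
    """Toy technology-specific score: sum of task complexities over
    completed tasks tagged with `tech`, decayed by task age so that
    recent work counts more than old work."""
    score = 0.0
    for task in tasks:
        if tech in task["tech"]:
            age = (today - task["completed"]).days
            score += task["complexity"] * 0.5 ** (age / half_life_days)
    return score

def general_expertise(tasks, techs, **kw):
    # Toy general score: overall industry knowledge modelled as the
    # sum of the technology-specific scores across all technologies.
    return sum(technology_expertise(tasks, t, **kw) for t in techs)
```

In the paper's pipeline, the per-task technology tags would come from the BERT-like classifier rather than being hand-labelled as here.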
Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
20
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. 《Medical Data Mining》 2026, Issue 1, pp. 32-45 (14 pages)
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and new obstacles. The huge and diversified nature of these datasets cannot always be managed with traditional data analysis methods. As a consequence, deep learning has emerged as a powerful tool for analysing multi-omics data owing to its ability to handle complex, non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are applied in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has proven effective in disease classification, biomarker identification, gene network learning, and prediction of therapeutic efficacy. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then discuss future directions, including the combination of omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complex disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
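As a toy illustration of the compress-and-reconstruct idea behind the autoencoders this review surveys, here is a single-layer linear autoencoder trained by plain gradient descent on rank-1 data. All names and data are invented; real multi-omics autoencoders are deep, nonlinear, and trained on high-dimensional omics matrices:

```python
import random

def train_linear_autoencoder(data, lr=0.01, epochs=500):
    """Linear encoder (d -> 1) and decoder (1 -> d) fitted by SGD on
    the squared reconstruction error; the 1-D latent z is the
    'compressed' representation of each sample."""
    random.seed(0)
    d = len(data[0])
    w_enc = [random.uniform(-0.1, 0.1) for _ in range(d)]
    w_dec = [random.uniform(-0.1, 0.1) for _ in range(d)]
    for _ in range(epochs):
        for x in data:
            z = sum(w_enc[k] * x[k] for k in range(d))        # encode
            err = [w_dec[k] * z - x[k] for k in range(d)]     # decode error
            # Gradients of 0.5 * ||x_hat - x||^2 w.r.t. both maps.
            g_dec = [err[k] * z for k in range(d)]
            g_common = sum(err[k] * w_dec[k] for k in range(d))
            for k in range(d):
                w_dec[k] -= lr * g_dec[k]
                w_enc[k] -= lr * g_common * x[k]
    return w_enc, w_dec

def recon_loss(data, w_enc, w_dec):
    # Total squared reconstruction error over the dataset.
    total = 0.0
    for x in data:
        z = sum(w_enc[k] * x[k] for k in range(len(x)))
        total += sum((w_dec[k] * z - x[k]) ** 2 for k in range(len(x)))
    return total
```

Because the toy data below lie on a line, a 1-D latent suffices and the reconstruction error shrinks toward zero; in the multi-omics setting the latent space instead captures shared structure across omics layers.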