Journal Articles
96,916 articles found
Introduction and Engineering Application of the ASME Code Case 3029 Method for Calculating High-Temperature Allowable Compressive Stress
1
Author: 马忠明. 《化工设备与管道》 (PKU Core), 2026, Issue 1, pp. 24-30 (7 pages)
This paper reviews the failure modes that pressure vessels operating under high-temperature creep conditions may exhibit and, in light of current engineering design practice, identifies the technical bottleneck in China's pressure vessel standard system when determining allowable compressive stress under creep conditions. On this basis, ASME Code Case 3029 is introduced, with a brief overview of its scope of application, development history, background, and engineering significance. Using an actual structure from an engineering design project as an example, the paper walks through the application procedure of the method and its points of attention, and, considering the practical needs of pressure vessel engineering design, offers an outlook on the future formulation or revision of China's standard system.
Keywords: Code Case 3029; creep buckling; instability; pressure vessel; allowable stress
Applying ChatGPT + VS Code to Map Development in High School Geography: The Case of "Domestic Population Migration"
2
Authors: 王凌宇, 白絮飞. 《中国信息技术教育》, 2026, Issue 1, pp. 81-84 (4 pages)
The application of artificial intelligence in secondary-school geography teaching is an unmistakable trend. Current research focuses on its two roles as a "learning assistant" for students and a "teaching assistant" for teachers. Existing practice has limitations, however: as a learning assistant, improper use by students can breed dependence and weaken independent thinking; as a teaching assistant, overly broad prompts for generating lesson plans tend to produce mismatched or plausible-but-wrong results that teachers must rework. Conversely, when teachers supply detailed, precise prompts targeting a specific module of the lesson design, the AI output becomes markedly more accurate and practical, and thus of greater research value. Starting from the "teaching assistant" role, this paper therefore sets aside full lesson-plan generation and focuses on the "map development" module of lesson preparation: the AI generates map code that is then run in third-party software, quickly producing the maps teachers need and improving both preparation efficiency and teaching quality.
Keywords: ChatGPT; VS Code; artificial intelligence; secondary-school geography; map development
Research on Applying VS Code to Build a Website for Red-and-Green Porcelain Culture
3
Authors: 李萍, 杨冬梅. 《办公自动化》, 2026, Issue 6, pp. 1-3 (3 pages)
Red-and-green porcelain culture, an important part of traditional Chinese culture, has a long history and distinctive artistic value. Its rise broke the dominance of monochrome-glaze high-fired porcelain, and it has continued to evolve throughout history. In modern society, however, cultural diversification and the impact of industrialization pose many challenges to its development. Against this background, this paper uses VS Code and JavaScript-related technologies to carry out in-depth design and research on a red-and-green porcelain culture website, offering a reference for building interactive websites on the subject and a concrete, practicable way to spread this culture.
Keywords: red-and-green porcelain culture; JavaScript; VS Code; website
Rateless Polar Codes with Unequal Error Protection Property
4
Authors: Cui Chen, Xiang Wei, Ma Siwei, Guo Qing. China Communications, 2026, Issue 1, pp. 10-23 (14 pages)
Mobile communications are reaching into every aspect of daily life, necessitating high-efficiency data transmission and support for diverse data types and communication scenarios. Polar codes have emerged as a promising solution due to their outstanding error-correction performance and low complexity. Unequal error protection (UEP) applies non-uniform error safeguarding to distinct data segments, striking a fine balance between error resilience and resource allocation that ultimately enhances system performance and efficiency. In this paper, we propose a novel class of UEP rateless polar codes. The codes are designed via matrix extension of polar codes, with mapping and duplication operations crafted to achieve the UEP property while preserving the overall performance of conventional polar codes. Superior UEP performance is attained without significant modifications to conventional polar codes, making the scheme readily compatible with existing polar codes. A theoretical analysis of block error rate and throughput efficiency is conducted; to the best of our knowledge, this is the first theoretical performance analysis of UEP rateless polar codes. Simulation results show that the proposed codes significantly outperform existing polar coding schemes in both block error rate and throughput efficiency.
Keywords: matrix extension; polar codes; rateless coding; unequal error protection
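As background for the matrix-extension construction described in the abstract, conventional polar encoding multiplies the message vector by the n-fold Kronecker power of the 2x2 Arikan kernel. A minimal sketch of this standard construction (not the paper's UEP extension):

```python
import numpy as np

def polar_generator(n: int) -> np.ndarray:
    """n-fold Kronecker power of the Arikan kernel F = [[1,0],[1,1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def polar_encode(u: np.ndarray, n: int) -> np.ndarray:
    """Encode a length-2^n binary vector: x = u @ G (mod 2)."""
    return (u @ polar_generator(n)) % 2

# For N = 4, putting a single 1 in the last (most reliable) position
# yields the all-ones codeword.
x = polar_encode(np.array([0, 0, 0, 1], dtype=np.uint8), 2)
```

The UEP scheme in the paper extends this generator matrix and adds mapping/duplication on top; this sketch only shows the baseline being extended.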
Efficient Polar Codes with Low Complexity for Correcting Insertions/Deletions in DPPM
5
Authors: Li Leran, Liu Yuan, Yuan Ye, Xiahou Wenqian, Chen Maonan. China Communications, 2026, Issue 1, pp. 24-33 (10 pages)
Differential pulse-position modulation (DPPM) achieves a good compromise between power and bandwidth requirements; however, its output sequence suffers undetectable insertions and deletions. This paper proposes a successive cancellation (SC) decoding scheme for polar codes based on the weighted Levenshtein distance (WLD) to correct insertions/deletions in DPPM systems. In this method, the WLD is used to compute transfer probabilities recursively and obtain likelihood ratios, and a low-complexity SC decoding method is built around the error characteristics of the DPPM system. The proposed SC decoding scheme is further extended to list decoding, which improves error-correction performance. Simulation results show that the scheme effectively corrects insertions/deletions in the DPPM system, enhancing its reliability and performance.
Keywords: DPPM; insertions/deletions; polar codes; SC decoding
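The weighted Levenshtein distance at the heart of the decoder above can be illustrated by the classic dynamic program with separate insertion, deletion, and substitution weights (an illustrative sketch; the paper's recursion operates on transfer probabilities, not raw edit costs):

```python
def weighted_levenshtein(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0):
    """Minimum total edit weight turning sequence a into sequence b."""
    m, n = len(a), len(b)
    # d[i][j] = cost of converting a[:i] into b[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,    # delete a[i-1]
                          d[i][j - 1] + w_ins,    # insert b[j-1]
                          d[i - 1][j - 1] + sub)  # substitute or match
    return d[m][n]
```

Asymmetric weights let the metric reflect that insertions and deletions have different likelihoods in a DPPM channel.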
Integration of Large Language Models (LLMs) and Static Analysis for Improving the Efficacy of Security Vulnerability Detection in Source Code
6
Authors: José Armando Santas Ciavatta, Juan Ramón Bermejo Higuera, Javier Bermejo Higuera, Juan Antonio Sicilia Montalvo, Tomás Sureda Riera, Jesús Pérez Melero. Computers, Materials & Continua, 2026, Issue 3, pp. 351-390 (40 pages)
Artificial intelligence (AI) continues to expand rapidly, particularly with the emergence of generative pre-trained transformers (GPT) built on the transformer architecture, which have revolutionized data processing and enabled significant improvements across applications. This study investigates security vulnerability detection in source code using a range of large language models (LLMs). Our primary objective is to evaluate their effectiveness relative to Static Application Security Testing (SAST) by applying techniques such as prompt personas, structured outputs, and zero-shot prompting to the selected LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash, Gemini 2.0 Flash, Mistral 7B Instruct, Phi 38b Mini 128K instruct, Qwen 2.5 Coder, StartCoder 27B), compared with and combined with Find Security Bugs. The evaluation uses a selected dataset containing known vulnerabilities, with results interpreted for different scenarios according to software criticality (business-critical, non-critical, minimum effort, best effort). In detail, the study investigates whether large language models outperform traditional static analysis tools, whether combining LLMs with SAST tools leads to an improvement, and whether local models running on an ordinary computer can produce reliable results. Summarizing the main conclusions: although results improve with LLM size for business-critical software, the best results were obtained by SAST analysis. This differs in the "non-critical", "best effort", and "minimum effort" scenarios, where the combination of LLM (Gemini) + SAST obtained better results.
Keywords: AI + SAST; secure code; LLM benchmarking; LLM vulnerability detection
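One way to read the scenario-dependent results above is that combining detectors trades precision for recall. A hypothetical sketch of merging finding sets by criticality (the union/intersection policy here is our illustration, not the paper's exact procedure):

```python
def merge_findings(sast: set, llm: set, scenario: str) -> set:
    """Combine SAST and LLM vulnerability findings.

    Business-critical software favors recall (union: flag anything
    either tool reports); a minimum-effort review favors precision
    (intersection: only findings both tools agree on).
    """
    if scenario == "business-critical":
        return sast | llm
    if scenario == "minimum-effort":
        return sast & llm
    raise ValueError(f"unknown scenario: {scenario}")

# Example: tools overlap on one CWE and disagree on two others.
alerts = merge_findings({"CWE-89", "CWE-79"}, {"CWE-79", "CWE-502"},
                        "business-critical")
```

Under this policy a business-critical audit reviews all three findings, while a minimum-effort pass reviews only the agreed-upon one.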
Gradient-Guided Assembly Instruction Relocation for Adversarial Attacks Against Binary Code Similarity Detection
7
Authors: Ran Wei, Hui Shu. Computers, Materials & Continua, 2026, Issue 1, pp. 1372-1394 (23 pages)
Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging semantic encoding for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and thereby protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational cost and offers limited perturbation diversity. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since instruction token IDs are discrete and non-differentiable, gradients are computed in the continuous embedding space to evaluate each token's influence; the most critical tokens are identified by the L2 norm of their embedding gradients. A mapping between instructions and their corresponding tokens then aggregates token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length and locating critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% against state-of-the-art BCSD models; when incorporated into training data, it also enhances model robustness, improving AUROC by 5.9%.
Keywords: assembly instruction relocation; adversarial attack; binary code similarity detection
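The gradient-ranking pipeline described above can be sketched in three steps: score each token by the L2 norm of its embedding gradient, sum token scores per instruction via the token-to-instruction map, then slide a fixed-width window to find the most influential contiguous segment. Function and variable names here are our assumptions, not AIMA's actual interface:

```python
import numpy as np

def token_importance(grads: np.ndarray) -> np.ndarray:
    """L2 norm of each token's embedding gradient (shape: tokens x dim)."""
    return np.linalg.norm(grads, axis=1)

def instruction_importance(tok_scores, tok_to_insn, n_insns):
    """Aggregate token scores to instructions via the token->insn map."""
    scores = np.zeros(n_insns)
    for t, insn in enumerate(tok_to_insn):
        scores[insn] += tok_scores[t]
    return scores

def best_window(insn_scores: np.ndarray, width: int) -> int:
    """Start index of the contiguous window with the largest total score."""
    sums = np.convolve(insn_scores, np.ones(width), mode="valid")
    return int(np.argmax(sums))

# Toy example: 3 tokens over 2 instructions, then window selection.
tok = token_importance(np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]]))
insn = instruction_importance(tok, [0, 0, 1], n_insns=2)
start = best_window(np.array([0.0, 1.0, 5.0, 4.0, 0.0]), width=2)
```

The selected window is what would then be relocated outside the function body via a jump.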
Improving the MCUCN code to simulate ultracold neutron storage and transportation in superfluid 4He
8
Authors: Xue-Fen Han, Fei Shen, Bin Zhou, Xiao-Xiao Cai, Tian-Cheng Yi, Zhi-Liang Hu, Song-Lin Wang, Tian-Jiao Liang, Robert Golub. Nuclear Science and Techniques, 2026, Issue 3, pp. 235-246 (12 pages)
The ultracold neutron (UCN) transport code MCUCN, designed initially for simulating UCN transport from a solid deuterium (SD2) source and for neutron electric dipole moment experiments, could not accurately simulate UCN storage and transportation in a superfluid 4He (SFHe, He-II) source. This limitation arose from the absence of a 4He upscattering mechanism and of 3He absorption, and because the source energy distribution provided in MCUCN differs from that of an SFHe source. This study enhances MCUCN to address these constraints, explicitly incorporating the 4He upscattering effect, 3He absorption, losses caused by impurities on the converter wall, the UCN source energy distribution in SFHe, and transmission through a negative optical potential. A Python-based visualization code for intermediate states and results was also developed. To validate these enhancements, we systematically compared simulations of the Lujan Center Mark3 UCN system by MCUCN and the improved code (iMCUCN) against UCNtransport simulations, and compared MCUCN and iMCUCN simulations of the SUN1 system against measurement results. The study demonstrates that iMCUCN effectively simulates the storage and transportation of ultracold neutrons in He-II.
Keywords: ultracold neutron; storage; transportation; improved MCUCN code; upscattering effect; absorption by 3He
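As standard background for the loss channels listed above (upscattering, 3He absorption, wall losses): independent loss rates add, so the stored-UCN lifetime obeys 1/tau_total = sum(1/tau_i). A toy illustration with made-up numbers, not values from the paper:

```python
import math

def combined_lifetime(taus):
    """Total storage lifetime when independent loss channels with
    individual lifetimes taus act simultaneously: 1/tau = sum(1/tau_i)."""
    return 1.0 / sum(1.0 / t for t in taus)

def survival_fraction(t, taus):
    """Fraction of UCNs surviving after storage time t (exponential decay)."""
    return math.exp(-t / combined_lifetime(taus))

# e.g. an upscattering-limited 600 s channel plus an absorption-limited
# 300 s channel combine to a 200 s storage lifetime.
tau = combined_lifetime([600.0, 300.0])
```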
TLS Blind Recognition Algorithm of LDPC Codes
9
Authors: Ning Xiaoyan, Sun Jingjing, Wang Zhenduo, Sun Zhiguo. China Communications, 2026, Issue 2, pp. 112-121 (10 pages)
Blind recognition of low-density parity-check (LDPC) codes has attracted growing attention with the development of military and civil communications. However, when the parity-check matrix has relatively high row weights, existing blind recognition algorithms based on a candidate set generally perform poorly. In this paper, we propose a blind recognition method for LDPC codes, the tangent-function-assisted least squares (TLS) method, which improves recognition performance by constructing a new cost function. To characterize the degree of constraint between received vectors and parity-check vectors, a feature function based on the tangent function is constructed, and a cost function based on least squares is established from the feature-function values satisfying the parity-check relationship; the minimum average value in TLS is then obtained over the candidate set. Numerical analysis and simulation show that the recognition performance of the TLS algorithm is consistent with theoretical results and better than that of existing algorithms.
Keywords: blind recognition; cost function; least squares; low-density parity-check (LDPC) codes
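The candidate-set idea above can be illustrated with the basic parity-consistency statistic: a true parity-check vector h satisfies h . r = 0 (mod 2) for noiseless codewords r, so its empirical violation rate over received words sits well below the 0.5 of a random vector. This is a generic sketch; the paper's TLS method replaces this raw statistic with a tangent-based feature function and a least-squares cost:

```python
import numpy as np

def violation_rate(h: np.ndarray, received: np.ndarray) -> float:
    """Fraction of received hard-decision words r with h . r = 1 (mod 2).
    Near 0 for a true parity-check vector, near 0.5 for a random one."""
    return float(np.mean((received @ h) % 2))

# Single parity-check code of length 4: every codeword has even weight,
# so the all-ones vector is a true parity check.
R = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 0, 0]])
h_true = np.array([1, 1, 1, 1])
```

Ranking candidates by such a statistic (or a smoothed feature of it) is what lets blind recognition pick out the parity-check structure from received data alone.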
Beyond Accuracy: Evaluating and Explaining the Capability Boundaries of Large Language Models in Syntax-Preserving Code Translation
10
Authors: Yaxin Zhao, Qi Han, Hui Shu, Yan Guang. Computers, Materials & Continua, 2026, Issue 2, pp. 1371-1394 (24 pages)
Large language models (LLMs) are increasingly applied to code translation. However, existing evaluation methodologies suffer from two major limitations: (1) high overlap between test data and pretraining corpora, which introduces significant bias into performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capability. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static-feature sensitivity evaluation framework. The dataset controls complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. The evaluation framework introduces a correlation-driven analysis based on static program features, enabling predictive modeling of translation success from two perspectives: code form complexity (e.g., code length and character density) and semantic modeling complexity (e.g., syntactic depth, control-flow nesting, and type-system complexity). Empirical evaluations of representative LLMs, including Qwen2.5-72B and Llama3.3-70B, show that even state-of-the-art models achieve over 80% compilation success on simple samples but drop sharply below 40% accuracy on complex cases. Correlation analysis further indicates that semantic modeling complexity alone is correlated with up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that mark clear capability boundaries. Departing from the traditional accuracy-centric evaluation paradigm, this study systematically characterizes LLM translation capability through the lens of static program features, providing actionable insights for model refinement and training strategy development.
Keywords: large language models (LLMs); code translation; compiler testing; program analysis; complexity-based evaluation
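One of the static features named above, syntactic depth, can be sketched with Python's own ast module; the exact feature definitions and thresholds in TCode are the paper's and are not reproduced here:

```python
import ast

def syntactic_depth(source: str) -> int:
    """Maximum nesting depth of the AST parsed from source."""
    def depth(node: ast.AST) -> int:
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)
    return depth(ast.parse(source))

# A flat assignment versus a loop containing a conditional assignment.
flat = syntactic_depth("x = 1")
nested = syntactic_depth("for i in range(3):\n    if i:\n        x = i + 1")
```

Features like this, computed per sample, are what the framework correlates with translation success to expose capability boundaries.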
Integrating Attention Mechanism with Code Structural Affinity and Execution Context Correlation for Automated Bug Repair
11
Authors: Jinfeng Ji, Geunseok Yang. Computers, Materials & Continua, 2026, Issue 3, pp. 1708-1725 (18 pages)
Automated program repair (APR) techniques have shown significant potential for mitigating the cost and complexity of debugging by automatically generating corrective patches for software defects. Despite considerable progress, existing approaches frequently lack contextual awareness of runtime behavior and of the structural intricacies of buggy source code. This paper proposes a novel APR approach that integrates attention mechanisms within an autoencoder-based framework, explicitly exploiting structural code affinity and execution-context correlation derived from stack-trace analysis. A preprocessing pipeline first transforms code segments and stack traces into tokenized representations. The BM25 ranking algorithm is then employed to quantify structural code affinity and execution-context correlation, identifying syntactically and semantically similar buggy code snippets and relevant runtime error contexts from extensive repositories. These extracted features are encoded via an attention-enhanced autoencoder designed to capture the patterns and correlations essential for effective patch generation. In rigorous comparisons against DeepFix, a state-of-the-art APR system, on a dataset of 53,478 student-developed C programs, our model achieves a bug-repair success rate of approximately 62.36%, a statistically significant improvement of over 6% compared to the baseline; K-fold cross-validation further confirms the consistency, robustness, and reliability of the method across diverse subsets of the dataset. These findings highlight the advantage of integrating attention-based learning with code-structural and execution-context features in APR, improving both accuracy and practical applicability. Future work will extend the model to other programming languages, systematically optimize hyperparameters, and explore alternative feature representations to further enhance debugging efficiency.
Keywords: automated bug repair; autoencoder; buggy code analysis; stack-trace similarity; machine learning for debugging
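The BM25 retrieval step described above, used to find structurally similar buggy code and related stack traces, follows the standard Okapi formula. A self-contained sketch (the k1 and b values are common defaults, not the paper's settings):

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Standard Okapi BM25 score of a tokenized doc for a token query."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)  # smoothed IDF
        t = tf[term]
        score += idf * t * (k1 + 1) / (t + k1 * (1 - b + b * len(doc) / avgdl))
    return score

# Toy corpus of tokenized crash descriptions.
corpus = [["null", "pointer", "deref"], ["index", "out", "of", "bounds"]]
```

Ranking repository snippets by this score against the tokenized buggy code is what surfaces candidate repair contexts for the autoencoder.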
Cognitive Erasure-Coded Data Update and Repair for Mitigating I/O Overhead
12
Authors: Bing Wei, Ming Zhong, Qian Chen, Yi Wu, Yubin Li. Computers, Materials & Continua, 2026, Issue 2, pp. 1706-1725 (20 pages)
In erasure-coded storage systems, updating data requires parity maintenance, which often leads to significant I/O amplification due to "write-after-read" operations; scattered parity placement further increases disk-seek overhead during repair, degrading system performance. To address these challenges, this paper proposes a Cognitive Update and Repair Method (CURM) that leverages machine learning to classify files into write-only, read-only, and read-write categories, enabling tailored update and repair strategies. For write-only and read-write files, CURM employs a data-difference mechanism combined with fine-grained I/O scheduling to minimize redundant reads and mitigate I/O amplification. For read-write files, CURM additionally reserves disk space adjacent to parity blocks, supporting parallel reads and reducing seek overhead during repair. We implement CURM in a prototype system, the Cognitive Update and Repair File System (CURFS), and conduct extensive experiments using real-world Network File System (NFS) and Microsoft Research (MSR) workloads on a 25-node cluster. Experimental results demonstrate that CURM improves data-update throughput by up to 82.52%, reduces recovery time by up to 47.47%, and decreases long-term storage overhead by more than 15% compared with state-of-the-art methods including Full Logging (FL), Parity Logging (PL), Parity Logging with Reserved space (PLR), and PARIX. These results validate the effectiveness of CURM in enhancing both update and repair performance, providing a scalable and efficient solution for large-scale erasure-coded storage systems.
Keywords: erasure coding; machine learning; cognitive update and repair; I/O amplification mitigation; seek-efficient recovery
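The data-difference mechanism mentioned above corresponds to the classic delta-based parity update for XOR parity: instead of re-reading every data block in the stripe, the new parity is the old parity XORed with the difference between old and new data. A sketch for single-parity (RAID-5-style) striping; CURM's scheduling and placement policies sit on top of this primitive:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Delta update: P' = P xor (D_old xor D_new). Only one data block
    and the parity block are read, instead of the whole stripe."""
    return xor_blocks(old_parity, xor_blocks(old_data, new_data))

# Two-block stripe: parity covers d1 and d2, then d1 is rewritten.
d1, d2 = b"\x01\x02", b"\x10\x20"
parity = xor_blocks(d1, d2)
new_d1 = b"\x0f\x02"
new_parity = update_parity(parity, d1, new_d1)
```

The delta-updated parity matches a full recomputation over the new stripe, which is exactly why the "write-after-read" cost can be confined to the modified block.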
Construction of a Maritime Knowledge Graph Using GraphRAG for Entity and Relationship Extraction from Maritime Documents (Cited: 4)
13
Authors: Yi Han, Tao Yang, Meng Yuan, Pinghua Hu, Chen Li. Journal of Computer and Communications, 2025, Issue 2, pp. 68-93 (26 pages)
In the international shipping industry, digital intelligence transformation has become essential, with both governments and enterprises actively working to integrate diverse datasets. The domain of maritime and shipping is characterized by a vast array of document types, filled with complex, large-scale, and often chaotic knowledge and relationships. Effectively managing these documents is crucial for developing a Large Language Model (LLM) in the maritime domain, enabling practitioners to access and leverage valuable information. A Knowledge Graph (KG) offers a state-of-the-art solution for enhancing knowledge retrieval, providing more accurate responses and enabling context-aware reasoning. This paper presents a framework for utilizing maritime and shipping documents to construct a knowledge graph using GraphRAG, a hybrid tool combining graph-based retrieval and generation capabilities. The extraction of entities and relationships from these documents and the KG construction process are detailed. Furthermore, the KG is integrated with an LLM to develop a Q&A system, demonstrating that the system significantly improves answer accuracy compared to traditional LLMs. Additionally, the KG construction process is up to 50% faster than conventional LLM-based approaches, underscoring the efficiency of our method. This study provides a promising approach to digital intelligence in shipping, advancing knowledge accessibility and decision-making.
Keywords: maritime knowledge graph; GraphRAG; entity and relationship extraction; document management
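Once entities and relationships are extracted, the knowledge graph can be as simple as (head, relation, tail) triples indexed for neighborhood lookup. A minimal sketch of the storage and retrieval side only (the extraction itself is LLM-driven in the paper and not reproduced here; the example facts are invented):

```python
from collections import defaultdict

class TripleStore:
    """Toy knowledge graph: (head, relation, tail) triples with a head index."""
    def __init__(self):
        self.by_head = defaultdict(list)

    def add(self, head: str, relation: str, tail: str) -> None:
        self.by_head[head].append((relation, tail))

    def neighbors(self, head: str) -> list:
        """Facts about an entity, e.g. to assemble Q&A context for an LLM."""
        return self.by_head[head]

kg = TripleStore()
kg.add("Port of Shanghai", "located_in", "China")
kg.add("Port of Shanghai", "handles", "container cargo")
```

Feeding `neighbors(entity)` into the prompt is the basic mechanism by which a KG grounds LLM answers in extracted facts.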
GaiaDoc: A Tool Focused on Business Requirements for Code Documentation with a CMMI Compliant and RUP Based on Requirement Flow
14
Authors: Ferreira-Luz-Junior Humberto, Miranda-Barros Rodolfo. 《通讯和计算机(中英文版)》, 2013, Issue 5, pp. 593-602 (10 pages)
Keywords: source code files; business requirements; CMMI; RUP; tools; documentation; Rational Unified Process; Capability Maturity Model Integration
Figma2Code: Automatic Code Generation from Figma Design Drafts (Cited: 1)
15
Authors: 朱琳, 封颖超杰, 朱航, 王斯加, 朱闽峰, 喻晨昊, 张钰荟, 许达兴, 赵德明, 冯玉君, 陈为. 《计算机辅助设计与图形学学报》 (PKU Core), 2025, Issue 2, pp. 321-329 (9 pages)
Design tools are widely used to improve the efficiency of user-interface design, yet turning design drafts into code remains time-consuming and laborious. To address the code-usability and reproduction-accuracy problems of existing design-to-code approaches, this paper proposes Figma2Code, an automatic code generation method built on the Figma design tool. First, node and layer optimization improves the quality of the design draft's metadata; second, components are recognized through semantic understanding of metadata annotations together with image recognition; next, a general-purpose intermediate data structure is built to represent the optimized metadata and the recognized component attributes, supporting code generation in multiple languages; finally, usable code is generated from templates, with function extraction and element-loop output further improving code usability. Quantitative evaluation of the reproduction-style accuracy of the generated code and a qualitative, expert-based assessment of code usability demonstrate the effectiveness of the proposed method.
Keywords: design-to-code; reverse engineering; user interface; deep learning
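The intermediate data structure plus template generation described above can be sketched as a dict-shaped IR rendered to HTML. The field names (`type`, `cls`, `children`, `color`) are our invention for illustration, not Figma2Code's actual schema:

```python
def render(node: dict) -> str:
    """Render a toy intermediate-representation node to HTML."""
    if node["type"] == "text":
        return f'<span style="color:{node.get("color", "#000")}">{node["content"]}</span>'
    if node["type"] == "container":
        inner = "".join(render(c) for c in node.get("children", []))
        return f'<div class="{node.get("cls", "")}">{inner}</div>'
    raise ValueError(f'unsupported node type: {node["type"]}')

# A container holding one text node, as a component recognizer might emit.
ir = {"type": "container", "cls": "card",
      "children": [{"type": "text", "content": "Hello", "color": "#333"}]}
html = render(ir)
```

Because the IR is independent of the output language, swapping the template function is all that is needed to target a different framework, which is the point of the intermediate layer.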
Artificial Intelligence-Powered Legal Document Processing for Medical Negligence Cases: A Critical Review (Cited: 1)
16
Authors: Gobind Naidu, Vicknesh Krishnan. International Journal of Intelligence Science, 2025, Issue 1, pp. 10-55 (46 pages)
This critical review assesses the application of artificial intelligence to legal document handling in medical negligence cases, with a view to identifying its transformative potential, open issues, and ethical concerns. The review consolidates findings on AI's impact on efficiency, accuracy, and justice delivery in the legal profession: studies report faster document review and improved accuracy of reviewed documents, with time savings estimated at 60%. The review also outlines problems that continue to characterize AI, including data quality, algorithmic bias, and opaque decision-making. Ethical issues related to patient autonomy, justice, and non-maleficence are assessed, with particular focus on patient privacy, fair process, and potential unfairness to patients. The review of AI innovations finds that regulation lags behind AI development, leaving unsettled questions about legal responsibility for AI and user control over AI-generated results and findings in legal proceedings. Future avenues presented include explainable AI (XAI) for legal purposes, federated learning for resolving privacy issues, and the need to foster adaptive regulation. Finally, the review advocates collaboration among legal subject-matter experts, legal informatics specialists, ethicists, and policymakers to develop solutions for implementing AI in medical negligence claims. It reasons that AI can deeply affect the practice of law, but must do so in a way that respects justice and the rights of individuals.
Keywords: artificial intelligence; medical negligence; legal document processing; ethical implications; regulatory frameworks
Shift in Translation: A Case Study of Translating NFPA 1 Fire Code into Chinese
17
Authors: Fang Chen, Xinlu Xing, Huili Wang. Journal of Contemporary Educational Research, 2025, Issue 2, pp. 1-15 (15 pages)
National fire codes, mandated by government authorities to tackle technical challenges in fire prevention and control, establish fundamental standards for construction practice. International collaboration in fire protection technologies has given China access to a wealth of documents and codes, which are crucial for crafting regulations and developing a robust, scientific framework for fire code formulation. However, translation of these codes into Chinese has been inadequate, diminishing the benefits of technological exchange and collaborative learning; this underscores the need for comprehensive research into code translation, striving for higher-quality translations guided by established translation theories. In this study, we translated the initial segment of the NFPA 1 Fire Code into Chinese and examined both source and target texts through the lens of translation shift theory, introduced by Catford. The study culminates in identifying four key shifts across linguistic levels (lexis, sentences, and groups) needed to ensure an accurate and precise translation of fire codes. It offers a thorough and lucid explanation of how the translator applies Catford's theory to solve technical challenges in translating the NFPA 1 Fire Code, and establishes essential standards for construction translation practice.
Keywords: fire code; code document; J.C. Catford; translation shift theory
Correction: Deep Learning-Enhanced Brain Tumor Prediction via Entropy-Coded BPSO in CIELAB Color Space
18
Authors: Mudassir Khalil, Muhammad Imran Sharif, Ahmed Naeem, Muhammad Umar Chaudhry, Hafiz Tayyab Rauf, Adham E. Ragab. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, p. 1461
In the article "Deep Learning-Enhanced Brain Tumor Prediction via Entropy-Coded BPSO in CIELAB Color Space" by Mudassir Khalil, Muhammad Imran Sharif, Ahmed Naeem, Muhammad Umar Chaudhry, Hafiz Tayyab Rauf, and Adham E. Ragab (Computers, Materials & Continua, 2023, Vol. 77, No. 2, pp. 2031-2047, DOI: 10.32604/cmc.2023.043687, URL: https://www.techscience.com/cmc/v77n2/54831), there was an error in the affiliation of author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST4 2DE, UK", the affiliation should be "Independent Researcher, Bradford, BD8 0HS, UK".
Keywords: deep; code; CIELAB
Malicious Document Detection Based on GGE Visualization
19
Authors: Youhe Wang, Yi Sun, Yujie Li, Chuanqi Zhou. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1233-1254 (22 pages)
With the development of anti-virus technology, malicious documents have gradually become the main pathway of Advanced Persistent Threat (APT) attacks, making the development of effective malicious document classifiers particularly urgent. Current detection methods based on document structure and behavioral features face feature-engineering challenges: they offer limited accuracy, consume substantial resources, and usually detect only documents in specific formats, lacking versatility and adaptability. To address these problems, this paper proposes a novel detection method that visualizes documents as GGE images (Grayscale, Grayscale matrix, Entropy). The GGE method renders the document's raw byte sequence as a grayscale image and its information-entropy sequence as an entropy image, converts the gray-level co-occurrence matrix, with the texture and spatial information it stores, into a grayscale-matrix image, and fuses the three into a GGE color image. A Convolutional Block Attention Module-EfficientNet-B0 (CBAM-EfficientNet-B0) model is then used for classification, applying transfer learning from a model pre-trained on ImageNet to the feature extraction of GGE images. Experimental results show that the GGE method outperforms other approaches, is suitable for detecting malicious documents in different formats, achieves accuracies of 99.44% and 97.39% on Portable Document Format (PDF) and Office datasets respectively, and consumes little time during detection, making it effective for real-time malicious document detection.
Keywords: malicious document; visualization; EfficientNet-B0; convolutional block attention module; GGE image
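The first two GGE channels described above can be sketched directly: the byte sequence reshaped into a grayscale array, and a windowed Shannon-entropy sequence. The window size and zero-padding are our choices for illustration; the paper additionally builds the GLCM channel and fuses all three into a color image:

```python
import math
import numpy as np

def bytes_to_grayscale(data: bytes, width: int = 4) -> np.ndarray:
    """Pad and reshape a byte sequence into a (rows, width) grayscale image."""
    pad = (-len(data)) % width
    arr = np.frombuffer(data + b"\x00" * pad, dtype=np.uint8)
    return arr.reshape(-1, width)

def entropy_sequence(data: bytes, window: int = 4) -> list:
    """Shannon entropy (bits) of each non-overlapping window of bytes."""
    out = []
    for i in range(0, len(data) - window + 1, window):
        chunk = data[i:i + window]
        counts = {b: chunk.count(b) for b in set(chunk)}
        out.append(-sum(c / window * math.log2(c / window)
                        for c in counts.values()))
    return out
```

Uniform regions produce zero entropy while packed or encrypted payloads approach the maximum, which is what makes the entropy channel discriminative for malware.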
Chinese Documentaries
20
China Today, 2025, Issue 1, p. 73
This video series is the first experimental-psychology documentary made in China. It focuses on analyzing professional theories to raise people's general understanding of basic psychology. By combining innovative audiovisual narrative with psychological experiments, it zooms in on real human nature through discussion of social hot topics from the perspectives of social psychology, cognitive psychology, and personality psychology, in order to help people find answers to their current psychological difficulties.
Keywords: China; documentary; innovative