Abstract: The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing, raising significant concerns about the integrity and authenticity of academic work. In light of this, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings in detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer design, each tested with static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM-Word2Vec pipeline. Some runs overfitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work misread as AI-generated or engineered text that evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
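The abstract's closing recommendation, that detector scores feed a human-led process rather than trigger penalties directly, can be sketched as a simple triage step. The function, thresholds, and decision labels below are illustrative assumptions, not the paper's actual workflow.

```python
# Hypothetical triage sketch: route a detector's AI-probability into a
# review workflow instead of an automatic penalty. Thresholds are
# invented for illustration.
def triage(ai_probability, lower=0.30, upper=0.90):
    """Map a detector's AI-probability to a workflow decision."""
    if ai_probability >= upper:
        return "refer_to_committee"     # strong signal: expert review, not auto-penalty
    if ai_probability >= lower:
        return "request_clarification"  # ambiguous: ask the author for drafts or notes
    return "no_action"                  # weak signal: treat as human-authored

print(triage(0.97))  # refer_to_committee
print(triage(0.55))  # request_clarification
print(triage(0.10))  # no_action
```

Even a near-97%-accurate classifier misfires on a meaningful fraction of submissions, which is why the middle band exists: it converts model uncertainty into a request for evidence rather than a verdict.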
Funding: Supported by the Macao Science and Technology Development Fund (FDCT) (No. 0071/2023/RIB3) and the Joint Research Funding Program between the Macao Science and Technology Development Fund (FDCT) and the Department of Science and Technology of Guangdong Province (FDCT-GDST) (No. 0003-2024-AGJ).
Abstract: This study examines the differences between human-generated and AI-generated texts in IELTS Writing Task 2, focusing on lexical resourcefulness, grammatical accuracy, and contextual appropriateness. We analyzed 20 essays, 10 written by Chinese university students who had achieved IELTS writing scores of 5.5 to 6.0 and 10 generated by ChatGPT-4 Turbo, using a mixed-methods approach that combined corpus-based tools (NLTK, SpaCy, AntConc) with qualitative content analysis. Results showed that the AI texts exhibited superior grammatical accuracy (error rates of 0.4%–3% for AI vs. 20%–26% for the students) but higher lexical repetition (17.2%–23.25% for AI vs. 17.68% for the students) and weaker contextual adaptability (3.33/10–3.69/10 for AI vs. 3.23/10–4.14/10 for the students). While AI's grammatical precision supports its utility as a corrective tool, human writers outperformed AI in lexical diversity and task-specific nuance. The findings advocate a hybrid pedagogical model that leverages AI's strengths in error detection while retaining human instruction for advanced lexical and contextual skills. Limitations include the small corpus and the single-AI-model focus, suggesting future research with diverse datasets and longitudinal designs.
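One common way corpus tools quantify the lexical repetition the study reports is the type-token ratio (TTR): distinct words divided by total words, where a lower ratio means more repetition. The sketch below uses a deliberately simplified tokenizer and invented example sentences; it is not the study's actual measurement pipeline.

```python
# Minimal type-token ratio: one possible lexical-repetition measure.
# Tokenization here is a crude regex, unlike NLTK/SpaCy pipelines.
import re

def type_token_ratio(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

human = "The city funds schools, parks, museums and libraries generously."
ai = "The city funds schools. The city funds parks. The city funds libraries."

print(round(type_token_ratio(human), 2))  # 1.0  (no repeated words)
print(round(type_token_ratio(ai), 2))     # 0.5  (half the tokens repeat)
```

TTR is length-sensitive, so real comparisons across essays of different lengths typically use normalized variants, but the basic contrast above mirrors the repetition gap the study measured.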
Abstract: This conceptual study proposes a pedagogical framework that integrates Generative Artificial Intelligence tools (AIGC) and Chain-of-Thought (CoT) reasoning, grounded in the cognitive apprenticeship model, for the Pragmatics and Translation course within Master of Translation and Interpreting (MTI) programs. A key feature involves CoT reasoning exercises, which require students to articulate their step-by-step translation reasoning. This explication of cognitive processes enhances pragmatic awareness, translation-strategy development, and critical reflection on linguistic choices and context. Hypothetical activities exemplify its application, including comparative analysis of AI and human translations to examine pragmatic nuances, and guided exercises in which students analyze or critique the reasoning traces generated by Large Language Models (LLMs). Ethically grounded, the framework positions AI as a supportive tool, ensuring that human translators retain the central decision-making role and promoting critical evaluation of machine-generated suggestions. Potential challenges, such as AI biases, ethical concerns, and overreliance, are addressed through strategies including bias-awareness discussions, rigorous accuracy verification, and a strong emphasis on human accountability. Future research will involve piloting the framework to empirically evaluate its impact on learners' pragmatic competence and translation skills, followed by iterative refinements to advance evidence-based translation pedagogy.
Abstract: Class Title: Radiological Imaging Methods: A Comprehensive Overview. Purpose: This GPT-generated paper provides an overview of the different forms of radiological imaging, the diagnostic capabilities they offer, and recent advances in the field. Materials and Methods: The paper surveys conventional radiography, digital radiography, panoramic radiography, computed tomography, and cone-beam computed tomography. Recent advances in radiological imaging, such as imaging diagnosis and modern computer-aided diagnosis systems, are also discussed. Results: The paper details the differences between the imaging techniques, the benefits of each, and current advances in the field that aid in the diagnosis of medical conditions. Conclusion: Radiological imaging is an extremely important tool in modern medicine for assisting medical diagnosis. This work provides an overview of the imaging techniques used, recent advances, and their potential applications.
Abstract: AI-generated images are a prime example of AI-generated content, and this paper discusses the controversy over their copyrightability. Starting from the general technical principles behind AI's deep learning for model training, and from the way AI-generated images are produced and revised according to users' prompt instructions and parameter settings, the paper analyzes the initial legal viewpoint that, because AI-generated images have no human creator, they are ineligible for copyright. It goes on to examine the rapid development of AI image-generation technology and the gradual adoption of more open attitudes toward the copyrightability of AI-generated images under the influence of approaches that favor promoting technological advancement. On this basis, the paper further analyzes the criteria for assessing the copyrightability of AI-generated images, using measures such as originality, human authorship, and intellectual achievement, aiming to clarify the legal basis for their copyrightability and to enhance the copyright protection system.
Abstract: Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practices. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
Abstract: To address the problems of incomplete characterization of textual relations, poor guidance from latent representations, and low generation quality in controllable long-text generation, this paper proposes a new GSPT-CVAE model (Graph Structured Processing, Single Vector, and Potential Attention Computing Transformer-Based Conditioned Variational Autoencoder). The model obtains a more comprehensive representation of textual relations through graph-structured processing of the input text, and derives an effective latent representation by weighted merging of the resulting vector sequence into a single vector. When the latent representation guides text generation, the model combines traditional embeddings with latent-attention computation so that the latent representation fully steers the generated text, improving the controllability and effectiveness of generation. Experimental results show that the model has excellent representation-learning ability and can learn rich, useful textual-relationship representations. The model also achieves satisfactory effectiveness and controllability, generating long texts that match the given constraints. Its ROUGE-1 F1 score is 0.243, its ROUGE-2 F1 score is 0.041, its ROUGE-L F1 score is 0.22, and its PPL-Word score is 34.303, giving the GSPT-CVAE model an advantage over the baseline model. The paper also compares the model with state-of-the-art generative models such as T5, GPT-4, and Llama2, and the experimental results show that GSPT-CVAE remains competitive.
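For readers unfamiliar with the headline metric above, ROUGE-1 F1 is the harmonic mean of unigram precision and recall between a generated text and a reference. The sketch below shows the arithmetic only; published evaluations use a dedicated ROUGE implementation, and the example sentences are invented.

```python
# Minimal ROUGE-1 F1: unigram-overlap F-measure between a candidate
# and a reference, using clipped counts via Counter intersection.
from collections import Counter

def rouge1_f1(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat",
                      "the cat lay on the mat"), 3))  # 0.833
```

A score of 0.243, as reported for GSPT-CVAE, thus means roughly a quarter of unigrams overlap (balancing precision and recall), which is typical for open-ended long-text generation where many distinct continuations are valid.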
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [RS-2021-II211341, Artificial Intelligence Graduate School Program (Chung-Ang University)] and by the Chung-Ang University Graduate Research Scholarship in 2024.
Abstract: Restoring texts corrupted by visually perturbed homoglyph characters presents significant challenges to conventional Natural Language Processing (NLP) systems, primarily due to ambiguities arising from characters that appear visually similar yet differ semantically. Traditional text-restoration methods struggle with these homoglyph perturbations owing to limitations such as a lack of contextual understanding and difficulty handling cases where one character maps to multiple candidates. To address these issues, we propose an Optical Character Recognition (OCR)-assisted masked Bidirectional Encoder Representations from Transformers (BERT) model specifically designed for homoglyph-perturbed text restoration. Our method integrates OCR preprocessing with a character-level BERT architecture: OCR preprocessing transforms visually perturbed characters into their approximate alphabetic equivalents, significantly reducing multi-correspondence ambiguities, and the character-level BERT then leverages bidirectional contextual information to resolve the remaining ambiguities by predicting the intended characters from surrounding semantic cues. Extensive experiments on realistic phishing-email datasets demonstrate that the proposed method significantly outperforms existing restoration techniques, including OCR-based, dictionary-based, and traditional BERT-based approaches, achieving word-level restoration accuracy of up to 99.59% in fine-tuned settings. Our approach also performs robustly in zero-shot scenarios and remains effective under low-resource conditions. Further evaluations across multiple downstream tasks, such as part-of-speech tagging, chunking, toxic-comment classification, and homoglyph detection under severe visual perturbation (up to 40%), confirm the method's generalizability and applicability. The proposed hybrid approach, combining OCR preprocessing with character-level contextual modeling, represents a scalable and practical solution for mitigating visually adversarial text attacks, thereby enhancing the security and reliability of NLP systems in real-world applications.
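The first stage described above, mapping visually confusable characters back to approximate alphabetic equivalents, can be illustrated with a small lookup table. The table below is a tiny hand-picked subset of confusables for illustration, not the paper's actual OCR component, which handles far more characters and the one-to-many cases left to the BERT stage.

```python
# Illustrative homoglyph normalization: map a few visually confusable
# characters to their Latin approximations before contextual modeling.
HOMOGLYPHS = {
    "\u0430": "a",  # Cyrillic а
    "\u0435": "e",  # Cyrillic е
    "\u043e": "o",  # Cyrillic о
    "1": "l",       # digit one vs lowercase L
    "0": "o",       # digit zero vs lowercase o
    "@": "a",       # symbol substitution common in phishing text
}

def normalize(text):
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

print(normalize("p@ssw\u043erd"))  # password
```

Note the residual ambiguity a pure table cannot fix: "1" could be "l" or "i" depending on the word, which is exactly the kind of case the character-level BERT resolves from context.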
Funding: Supported by the Special Project of the Shanghai Municipal Commission of Economy and Information Technology for Promoting High-Quality Industrial Development (No. 2024-GZL-RGZN-02011), the Shanghai City Digital Transformation Project (No. 202301002), and the Project of Shanghai Shenkang Hospital Development Center (No. SHDC22023214).
Abstract: Surgical site infections (SSIs) are the most common healthcare-related infections in patients with lung cancer. Constructing a lung cancer SSI risk-prediction model requires extracting relevant risk factors from lung cancer case texts, which involves two text-structuring tasks: attribute discrimination and attribute extraction. This article proposes a joint model, Multi-BGLC, for these two tasks, using bidirectional encoder representations from transformers (BERT) as the encoder and fine-tuning a decoder composed of a graph convolutional neural network (GCNN), long short-term memory (LSTM), and a conditional random field (CRF) on cancer case data. The GCNN handles attribute discrimination, whereas the LSTM and CRF handle attribute extraction. Experiments verified the effectiveness and accuracy of the model against other baseline models.
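Attribute extraction of the kind a CRF layer performs is usually framed as BIO tagging: the model labels each token Begin/Inside/Outside, and spans are decoded afterward. The sketch below shows only that decoding step; the tag names and tokens are hypothetical, not drawn from the paper's case data.

```python
# Decode BIO tags (as a CRF output layer would emit) into labeled
# attribute spans. Tags and example tokens are invented for illustration.
def bio_spans(tokens, tags):
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):                   # new span begins
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)                    # span continues
        else:                                      # "O" or malformed tag
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

tokens = ["smoker", ",", "surgery", "duration", "210", "min"]
tags = ["B-RISK", "O", "B-RISK", "I-RISK", "B-VALUE", "I-VALUE"]
print(bio_spans(tokens, tags))
```

The decoded (label, span) pairs are what a downstream SSI risk model would consume as structured risk factors.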
Abstract: We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization of German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive summarization in the healthcare field. The research hypothesis was that large language models can produce high-quality abstractive summaries of German technical healthcare texts even when not specifically trained in that language. Through experiments, the research questions explore how transformer language models handle complex syntactic constructs, the difference in performance between models trained on English and on German, and the impact of translating the source text into English before summarizing. We evaluated four PLMs (GPT-3, a translation-based approach also utilizing GPT-3, a German-language model, and a domain-specific biomedical model). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and result quality, which was manually assessed on five aspects. The results show that text-summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study demonstrates that text-summarization models can simplify the search for pre-existing German knowledge in various domains.
Abstract: On January 14, Heimtextil kicked off the new trade fair year with over 3,000 exhibitors from 65 countries. With steady growth, the leading trade fair for home and contract textiles and textile design is strongly positioned, making it a reliable platform for international participants. At the opening, architect and designer Patricia Urquiola presented her installation 'among-us' at Heimtextil.
Abstract: Since the launch of a digitization project for the protection and utilization of ancient texts in the Sakya Monastery of the Xizang Autonomous Region in 2012, significant efforts and achievements have been made in ancient text preservation.