In the realm of Multi-Label Text Classification (MLTC), the dual challenges of extracting rich semantic features from text and discerning inter-label relationships have spurred innovative approaches. Many studies in semantic feature extraction have turned to external knowledge to augment the model's grasp of textual content, often overlooking intrinsic textual cues such as label statistical features. These endogenous insights, in contrast, naturally align with the classification task. To complement the prevailing external focus with intrinsic knowledge, we introduce a novel Gate-Attention mechanism that integrates statistical features of the text itself into its semantic representation, enhancing the model's capacity to understand and represent the data. Additionally, to address the intricate task of mining label correlations, we propose a Dual-end enhancement mechanism that mitigates the information loss and erroneous transmission inherent in traditional long short-term memory propagation. Extensive experiments on the AAPD and RCV1-2 datasets confirm the efficacy of both the Gate-Attention mechanism and the Dual-end enhancement mechanism. Our final model outperforms the baseline model, attesting to its robustness. These findings underscore the importance of taking into account not just external knowledge but also the inherent intricacies of textual data when crafting effective MLTC models.
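The gating idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual architecture: the function name, vector shapes, and parameters are all hypothetical. A learned gate decides, per dimension, how much of a label-statistics vector to blend into the semantic representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(semantic, stats, W, b):
    """Blend semantic features with statistical features via a learned gate.

    semantic: (d,) semantic feature vector from a text encoder
    stats:    (d,) statistical feature vector (e.g., label-frequency cues)
    W:        (d, 2d) gate projection matrix, b: (d,) bias
    """
    gate = sigmoid(W @ np.concatenate([semantic, stats]) + b)  # values in (0, 1)
    return gate * semantic + (1.0 - gate) * stats  # convex blend per dimension

rng = np.random.default_rng(0)
d = 8
h = rng.normal(size=d)               # stand-in semantic features
s = rng.normal(size=d)               # stand-in statistical features
W = rng.normal(size=(d, 2 * d)) * 0.1
b = np.zeros(d)
fused = gate_fuse(h, s, W, b)

# Because the gate is a convex weight in (0, 1), each fused component lies
# between the corresponding semantic and statistical components.
assert fused.shape == (d,)
assert np.all(fused >= np.minimum(h, s) - 1e-9)
assert np.all(fused <= np.maximum(h, s) + 1e-9)
```

In a trained model, `W` and `b` would be learned jointly with the classifier; here they are random, so the sketch only demonstrates the fusion mechanics.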
Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI); they are designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article provides an in-depth analysis of LLMs' current and potential impact on clinical practice. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
Aiming at the problems of incomplete characterization of text relations, poor guidance from potential representations, and low generation quality in controllable long text generation, this paper proposes a new GSPT-CVAE model (Graph-Structured Processing, Single Vector, and Potential Attention Computing Transformer-Based Conditional Variational Autoencoder). The model obtains a more comprehensive representation of textual relations by graph-structured processing of the input text, and obtains a single-vector representation by weighted merging of the resulting vector sequences, yielding an effective potential representation. When this potential representation guides text generation, the model combines traditional embedding with potential attention computation to fully exploit the representation's guiding role, improving the controllability and effectiveness of generation. Experimental results show that the model has excellent representation learning ability and learns rich, useful textual relationship representations. It also achieves satisfactory effectiveness and controllability, generating long texts that match the given constraints. The model attains a ROUGE-1 F1 score of 0.243, a ROUGE-2 F1 score of 0.041, a ROUGE-L F1 score of 0.22, and a PPL-Word score of 34.303, giving GSPT-CVAE a certain advantage over the baseline model. Comparisons with state-of-the-art generative models such as T5, GPT-4, and Llama2 show that GSPT-CVAE remains competitive.
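For readers unfamiliar with the PPL-Word figure reported above, per-word perplexity is the exponential of the average negative log-probability the model assigns to each token; lower is better. The sketch below uses made-up probabilities purely to illustrate the formula, not the GSPT-CVAE model's outputs.

```python
import math

def perplexity(token_probs):
    """Per-word perplexity: exp(-(1/N) * sum(log p(w_i)))."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# A model assigning uniform probability 1/50 to every token has perplexity 50:
# exp(-mean(log(1/50))) = exp(log 50) = 50.
assert abs(perplexity([1 / 50] * 10) - 50.0) < 1e-9
# A model that predicts every token with certainty has perplexity 1.
assert perplexity([1.0] * 5) == 1.0
```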
Surgical site infections (SSIs) are the most common healthcare-related infections in patients with lung cancer. Constructing a lung cancer SSI risk prediction model requires extracting relevant risk factors from lung cancer case texts, which involves two text structuring tasks: attribute discrimination and attribute extraction. This article proposes a joint model, Multi-BGLC, for these two tasks, using bidirectional encoder representations from transformers (BERT) as the encoder and fine-tuning a decoder composed of a graph convolutional neural network (GCNN), long short-term memory (LSTM), and conditional random field (CRF) on cancer case data. The GCNN is used for attribute discrimination, whereas the LSTM and CRF are used for attribute extraction. Experiments verified the effectiveness and accuracy of the model compared with other baseline models.
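The CRF decoding step used in attribute extraction pipelines like the one above is typically Viterbi search over per-token emission scores and a tag-transition matrix. The sketch below is a generic illustration with toy scores and a hypothetical BIO tag set, not the Multi-BGLC model's actual parameters.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Find the highest-scoring tag sequence.

    emissions:   (T, K) per-token scores for each of K tags
    transitions: (K, K) score of moving from tag i to tag j
    """
    T, K = emissions.shape
    dp = emissions[0].copy()                 # best score ending in each tag
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        scores = dp[:, None] + transitions + emissions[t][None, :]  # (K, K)
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0)
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

tags = ["O", "B-ATTR", "I-ATTR"]             # hypothetical BIO scheme
emis = np.array([[0.1, 2.0, 0.0],            # token 1: strongly B-ATTR
                 [0.0, 0.1, 2.0],            # token 2: strongly I-ATTR
                 [2.0, 0.0, 0.1]])           # token 3: strongly O
trans = np.zeros((3, 3))
trans[0, 2] = -10.0                          # penalize the invalid O -> I-ATTR move
best = viterbi(emis, trans)
assert [tags[i] for i in best] == ["B-ATTR", "I-ATTR", "O"]
```

In a real system the emission scores would come from the BERT+LSTM encoder and the transition matrix would be learned; the transition penalty shows how a CRF enforces valid tag sequences that independent per-token classification cannot.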
In its 2023 global health statistics, the World Health Organization noted that noncommunicable diseases (NCDs) remain the leading cause of disease burden worldwide, with cardiovascular diseases (CVDs) resulting in more deaths than the three other major NCDs combined. In this study, we developed a method that can comprehensively detect which CVDs are present in a patient. Specifically, we propose a multi-label classification method that utilizes photoplethysmography (PPG) signals and physiological characteristics from public datasets to classify four types of CVDs and related conditions: hypertension, diabetes, cerebral infarction, and cerebrovascular disease. Our approach achieves its highest classification performance when encompassing the broadest range of disease categories, thereby offering a more comprehensive assessment of human health. We employ a multi-label classification strategy to simultaneously predict the presence or absence of multiple diseases: we first apply the Savitzky-Golay (S-G) filter to the PPG signals to reduce noise and then transform them into statistical features. We integrate the processed PPG signals with individual physiological features as a multimodal input, thereby expanding the learned feature space. Notably, even a simple machine learning method achieves relatively high accuracy with this approach: a maximum F1-score of 0.91, a minimum Hamming loss of 0.04, and an accuracy of 0.95. Our method thus represents an effective and rapid solution for detecting multiple diseases simultaneously, which is beneficial for comprehensively managing CVDs.
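The pipeline described above (smooth, featurize, fuse, classify per label) can be sketched with off-the-shelf tools. Everything here is a stand-in: the data is random noise, the four summary statistics and the logistic-regression classifier are illustrative choices, not the study's exact configuration.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(42)
n, length = 200, 256
raw = rng.normal(size=(n, length))            # stand-in PPG segments

# 1. Savitzky-Golay smoothing to reduce noise.
smooth = savgol_filter(raw, window_length=11, polyorder=3, axis=1)

# 2. Reduce each segment to simple statistical features.
stats = np.stack(
    [smooth.mean(1), smooth.std(1), smooth.min(1), smooth.max(1)], axis=1
)

# 3. Fuse with individual physiological attributes (e.g., age, BMI, heart rate)
#    into one multimodal feature vector.
physio = rng.normal(size=(n, 3))
X = np.hstack([stats, physio])

# 4. Binary-relevance multi-label classification: one classifier per disease.
Y = rng.integers(0, 2, size=(n, 4))           # 4 disease labels, 0/1 each
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X)
assert pred.shape == (n, 4)                   # one 0/1 prediction per label
```

`MultiOutputClassifier` fits an independent classifier per label, which matches the "simple machine learning method" framing; metrics such as Hamming loss and per-label F1 would then be computed on `pred` against held-out labels.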
We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization of German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive summarization in the healthcare field. The research hypothesis was that large language models could produce high-quality abstractive summaries of German technical healthcare texts even without being specifically trained in that language. Through experiments, the research questions explore how transformer language models handle complex syntactic constructs, the difference in performance between models trained in English and in German, and the impact of translating the source text into English before summarization. We evaluated four PLM approaches: GPT-3, a translation-based approach also utilizing GPT-3, a German-language model, and a domain-specific biomedical model. The evaluation measured informativeness using three metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and result quality was manually assessed on five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study demonstrates that text summarization models can simplify the search for pre-existing German knowledge in various domains.
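As a reference point for the ROUGE-based informativeness metrics mentioned above, here is a deliberately simplified ROUGE-1 F1 implementation: whitespace tokenization only, with none of the stemming or stopword handling that full ROUGE toolkits apply.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of clipped unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# "the cat sat" vs. "the cat sat on the mat": 3 overlapping unigrams,
# precision 3/3 = 1.0, recall 3/6 = 0.5, so F1 = 2/3.
score = rouge1_f1("the cat sat", "the cat sat on the mat")
assert abs(score - 2 / 3) < 1e-9
```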
On January 14, Heimtextil kicked off the new trade fair year with over 3,000 exhibitors from 65 countries. With steady growth, the leading trade fair for home and contract textiles and textile design is strongly positioned, making it a reliable platform for international participants. At the opening, architect and designer Patricia Urquiola presented her installation "among-us" at Heimtextil.
The application of legal texts in the context of digital television relies on several normative instruments, ranging from international treaties, such as those of the ITU (International Telecommunication Union), to national regulations defining the obligations of audiovisual operators and the modalities of consumer support. Many countries have introduced specific laws and regulations to organize the gradual switch-off of analog broadcasting and encourage the adoption of new digital standards. Consequently, the digitization of Guinea's broadcasting network cannot proceed without taking the legal framework into account: the allocation of resources and the broadcasting players. The regulatory texts on analog and digital broadcasting reveal the relationships between the different communication management structures. With digital broadcasting, a new service appears: the multiplex.
With the rapid development of web technology, Social Networks (SNs) have become one of the most popular platforms for users to exchange views and express their emotions. More and more people comment on trending topics in SNs, producing a large amount of emotion-laden text. Textual Emotion Cause Extraction (TECE) aims to automatically extract the causes of a given emotion in text, an important research issue in natural language processing. It differs from the earlier tasks of emotion recognition and emotion classification: rather than stopping at shallow emotion classification, it traces the emotion to its source. In this paper, we provide a survey of TECE. First, we introduce the development and classification of TECE. Then, we discuss existing methods and key factors. Finally, we enumerate the challenges and development trends for TECE.
This study investigates translation strategies for Chinese cultural terms in academic texts through a case study of Chapter 7 of "Jade Myth Belief and Chinese Spirit". Using a qualitative research approach based on a cultural context framework and a cognitive model, the study analyzes translation challenges and solutions in rendering cultural terms related to jade mythology and archaeological concepts. The research identifies three primary translation strategies: transliteration with annotation, domestication with explanation, and cognitive-based translation. The findings reveal that effective translation requires a balance between maintaining academic precision and preserving cultural authenticity. The study demonstrates that successful translation of cultural terms in academic contexts demands a sophisticated understanding of both source and target cultural contexts, along with careful consideration of the academic audience's needs. This research contributes to the field by providing practical insights for translators working with Chinese cultural texts in academic settings and by proposing an approach to handling complex cultural terminology.
Purpose: Policies have often, albeit inadvertently, overlooked certain scientific insights, especially in the handling of complex events. This study aims to systematically uncover and evaluate pivotal scientific insights that have been underrepresented in policy documents by leveraging extensive datasets of policy texts and scholarly publications.
Design/methodology/approach: This article introduces a research framework for excavating scientific insights overlooked by policy, encompassing four integral parts: data acquisition and preprocessing, identification of overlooked content through thematic analysis, discovery of overlooked content via keyword analysis, and comprehensive analysis and discussion of the overlooked content. Leveraging this framework, the research conducts an in-depth exploration of the scientific content overlooked by policies during the COVID-19 pandemic.
Findings: During the COVID-19 pandemic, scientific information in four domains was overlooked by policy: the psychological state of the populace, environmental issues, the role of computer technology, and public relations. These findings indicate a systematic underrepresentation of important scientific insights in policy.
Research limitations: This study is subject to two key limitations. First, the text analysis method, which relies on pre-extracted keywords and thematic structures, may not fully capture the nuanced context and complexity of scientific insights in policy documents. Second, the focus on a limited set of case studies restricts the broader applicability of the conclusions across diverse situations.
Practical implications: The study introduces a quantitative framework using text analysis to identify overlooked scientific content in policy, bridging the gap between science and policy. It also highlights scientific information overlooked during COVID-19, promoting more evidence-based and robust policies through improved science-policy integration.
Originality/value: This paper provides new ideas and methods for excavating scientific information overlooked by policy, deepens the understanding of the interaction between policy and science during the COVID-19 period, and lays the foundation for the more rational use of scientific information in policy-making.
Based on Katharina Reiss's text typology theory from the German Functionalist School, this paper systematically explores the strategic differences and practical methods in mutual translation between Chinese and Japanese for operative, informative, expressive, and informative+expressive text types. By comparatively analyzing the linguistic functions, communicative features, and cultural-cognitive differences of these four text types, the study proposes targeted translation strategies: operative texts require tone adjustment, localized adaptation, and pragmatic compensation to maximize appeal; informative texts emphasize sentence restructuring and terminology standardization to ensure efficient information transfer; expressive texts focus on formal imitation and rhetorical reproduction to preserve the original aesthetic value; and informative+expressive texts require flexible handling of idioms, proverbs, and stylistic forms to balance informational accuracy and aesthetic form. The study validates the applicability of functionalist theory to Chinese-Japanese translation through empirical case studies, while also revealing the prevalence of mixed text functions and the consequent demand for strategic flexibility from translators. These findings not only expand the explanatory dimension of Reiss's theory but also provide an operational methodological framework for cross-cultural translation practice, holding practical significance for promoting in-depth communication between China and Japan.
Funding (MLTC study): supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024), the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2020040, ZDYF2021GXJS003), the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012), the Hainan Provincial Natural Science Foundation of China (Grant Nos. 620MS021, 621QN211), and the Science and Technology Development Center of the Ministry of Education Industry-University-Research Innovation Fund (2021JQR017).
Funding (lung cancer SSI study): supported by the Special Project of the Shanghai Municipal Commission of Economy and Information Technology for Promoting High-Quality Industrial Development (No. 2024-GZL-RGZN-02011), the Shanghai City Digital Transformation Project (No. 202301002), and the Project of Shanghai Shenkang Hospital Development Center (No. SHDC22023214).
Funding (CVD multi-label classification study): supported by the National Science and Technology Council (NSTC) (Grant Nos. NSTC 112-2221-E-019-023, NSTC 113-2221-E-019-039) and the Taiwan University of Science and Technology.
Funding (TECE survey): partially supported by the National Natural Science Foundation of China under Grant No. 62372121, the Ministry of Education Humanities and Social Science Project under Grant No. 20YJAZH118, the National Key Research and Development Program of China under Grant No. 2020YFB1005804, and the MOE Project at the Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies.
Funding (cultural-term translation study): sponsored by the Humanities and Social Sciences Project of the Ministry of Education under Grant No. 24YJCZH443, the Shanghai Philosophy and Social Science Planning Project under Grant No. 2024EYY015, and the Shanghai Municipal Philosophy and Social Sciences Planning Project under Grant No. 2024EYY011.
Funding (policy-insight study): financially supported by the Ningbo University of Technology New Faculty Research Fund, the 2023 Interdisciplinary Innovation Research Cultivation Program of the School of Interdisciplinary Studies, RUC, and the Key Project of the National Social Science Foundation of China (21ATQ008).
Funding (text typology study): supported by the Fund from the Professional Degree Graduate Practice Base Project of the University of Shanghai for Science and Technology (Project No. 1025115004005).