This paper presents a project aimed at developing a trilingual visual dictionary for aircraft maintenance professionals and students. The project addresses the growing demand for accurate communication and technical terminology in the aviation industry, particularly in Brazil and China. The study employs a corpus-driven approach, analyzing a large corpus of aircraft maintenance manuals to extract key technical terms and their collocates. Drawing on specialized subcorpora and a comparative analysis, the paper demonstrates challenges in, and solutions to, the identification of high-frequency keywords, and explores their contextual use in aviation documentation, emphasizing the need for clear and accurate technical communication. By incorporating these findings into a trilingual visual dictionary, the project aims to enhance the understanding and use of aviation terminology.
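The extraction step described above — counting term frequencies and the words that co-occur with a technical term — can be sketched as follows. This is a minimal illustration with a toy whitespace tokenizer and hypothetical maintenance-manual sentences; real corpus work would add stopword filtering, lemmatization, and keyness statistics.

```python
from collections import Counter

def keywords_and_collocates(corpus, term, window=2, top_n=3):
    """Count overall term frequencies and the words co-occurring with
    `term` within +/- `window` tokens, across a list of sentences."""
    freq = Counter()
    collocates = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        freq.update(tokens)
        for i, tok in enumerate(tokens):
            if tok == term:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                collocates.update(t for t in tokens[lo:hi] if t != term)
    return freq.most_common(top_n), collocates.most_common(top_n)

# Hypothetical example sentences, not from the actual corpus.
corpus = [
    "remove the hydraulic pump before inspection",
    "install the hydraulic pump after the leak check",
    "the hydraulic reservoir feeds the pump",
]
top_terms, top_collocates = keywords_and_collocates(corpus, "pump")
print(top_terms)
print(top_collocates)
```

In practice the high-frequency function words dominating these counts would be filtered out, leaving domain terms such as "hydraulic" as the salient collocates.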
In daily life, keyword spotting plays an important role in human-computer interaction. However, noise often interferes with the extraction of time-frequency information, and achieving both computational efficiency and recognition accuracy on resource-constrained devices such as mobile terminals remains a major challenge. To address this, we propose a novel time-frequency dual-branch parallel residual network, which integrates a Dual-Branch Broadcast Residual module and a Time-Frequency Coordinate Attention module. The time-domain and frequency-domain branches are designed in parallel to extract temporal and spectral features independently, effectively avoiding the information loss caused by serial stacking while enhancing information flow and multi-scale feature fusion. In terms of training strategy, a curriculum learning approach is introduced to progressively improve model robustness, moving from easy to difficult tasks. Experimental results demonstrate that the proposed method consistently outperforms existing lightweight models under various signal-to-noise ratio (SNR) conditions, achieving superior far-field recognition performance on the Google Speech Commands V2 dataset. Notably, the model maintains stable performance even in low-SNR environments such as -10 dB, and generalizes well to SNR conditions unseen during training, validating its robustness to novel noise scenarios. Furthermore, the proposed model has significantly fewer parameters, making it highly suitable for deployment on resource-limited devices. Overall, the model achieves a favorable balance between performance and parameter efficiency, demonstrating strong potential for practical applications.
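The SNR conditions mentioned above (e.g. -10 dB, where noise energy is ten times the keyword energy) are typically produced by scaling a noise recording before mixing it with the clean utterance. A minimal sketch of that mixing step, using a synthetic tone and deterministic pseudo-noise rather than real audio:

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that 10*log10(P_signal / P_noise) equals `snr_db`,
    then add it to `signal` sample by sample."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = math.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(signal, noise)]

# 1 s of a 440 Hz tone at 16 kHz, plus deterministic pseudo-noise
signal = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
noise = [((t * 2654435761) % 1000) / 500.0 - 1.0 for t in range(16000)]
mixed = mix_at_snr(signal, noise, -10)  # keyword 10 dB below the noise

# verify the achieved SNR of the mixture
p_sig = sum(s * s for s in signal) / len(signal)
p_res = sum((m - s) ** 2 for m, s in zip(mixed, signal)) / len(mixed)
achieved_snr = 10 * math.log10(p_sig / p_res)
print(round(achieved_snr, 3))
```

Training at a curriculum of decreasing `snr_db` values, as the paper's easy-to-difficult strategy suggests, only requires sweeping this one parameter.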
This study presents a reflective bibliometric review of 1457 peer-reviewed articles published in the Journal of Psychology in Africa (2008-2024, 17 years), using a Meta-Editorial Mapping Framework (MEMF) analysis. The MEMF integrates citation metrics, keyword novelty ratios, TF-IDF weighting, and cluster-based topic modeling to trace long-term thematic trends and editorial evolution. Findings reveal sustained attention to foundational domains such as mental health, education, and identity, alongside a gradual integration of emergent themes including digital well-being, organizational behavior, and post-pandemic adaptation. Articles with moderate topical novelty (40%-60% new keywords) achieved the highest citation and usage metrics, suggesting that integrative innovation enhances scholarly impact. Clustering analyses indicate that the journal's content forms overlapping conceptual domains rather than isolated silos. These insights inform editorial strategy, authorial positioning, and the future design of regional academic platforms. Moreover, the findings support the use of the MEMF as a replicable tool for meta-editorial analysis across disciplinary and geographic boundaries.
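Two of the MEMF ingredients named above, TF-IDF weighting and the keyword novelty ratio, have compact standard definitions. The sketch below is a textbook implementation under assumed inputs (keyword lists per article), not the study's actual pipeline:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weight of term t in document d: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def novelty_ratio(keywords, prior_keywords):
    """Share of an article's keywords not previously seen in the journal."""
    new = [k for k in keywords if k not in prior_keywords]
    return len(new) / len(keywords)

# Hypothetical keyword lists for two articles.
docs = [
    ["mental", "health", "identity"],
    ["mental", "health", "digital", "wellbeing"],
]
weights = tf_idf(docs)
ratio = novelty_ratio(["digital", "wellbeing", "mental", "health"],
                      {"mental", "health", "identity"})
print(weights[1]["digital"], ratio)
```

A ratio of 0.5, as here, falls inside the 40%-60% band the study associates with peak citation impact.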
To save local storage, users store their data on a cloud server, which offers convenient internet services. To guarantee data privacy, users encrypt the data before uploading it to the cloud server. Since encryption reduces data availability, public-key encryption with keyword search (PEKS) was developed to enable retrieval of encrypted data without decrypting it. However, most PEKS schemes cannot resist quantum computing attacks, because their underlying hardness assumptions are number-theoretic problems that can be solved efficiently on quantum computers. Besides, traditional PEKS schemes have an inherent security issue: they cannot resist inside keyword guessing attacks (KGA). In this attack, a malicious server guesses the keywords encapsulated in a search token by exhaustively computing keyword ciphertexts and running the test algorithm between the token and each ciphertext. In this paper, we propose a lattice-based PEKS scheme that resists quantum computing attacks. To resist inside KGA, the scheme incorporates a lattice-based signature technique into the encryption of keywords, preventing a malicious server from forging a valid ciphertext. Finally, simulation experiments are conducted to demonstrate the performance of the proposed scheme, and comparison results with other searchable schemes are reported.
In this paper, an improved algorithm, the web-based keyword weight algorithm (WKWA), is presented to weight keywords in web documents. WKWA takes into account the representational features of web documents and the advantages of the TF*IDF, TFC, and ITC algorithms, making it more appropriate for web documents. The algorithm is then applied to an improved vector space model (IVSM). A real system has been implemented for calculating semantic similarities of web documents, and four experiments have been carried out: keyword weight calculation, feature item selection, semantic similarity calculation, and WKWA time performance. The results demonstrate that the accuracy of both keyword weighting and semantic similarity is improved.
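In a vector space model like the IVSM above, the semantic similarity of two documents is typically the cosine of their keyword-weight vectors. A minimal sketch with hypothetical weights (e.g. a title occurrence weighted higher than a body occurrence, in the spirit of using web-document representation features):

```python
import math

def cosine_similarity(weights_a, weights_b):
    """Cosine of two sparse keyword-weight vectors (dicts keyword -> weight)."""
    dot = sum(w * weights_b.get(k, 0.0) for k, w in weights_a.items())
    norm_a = math.sqrt(sum(w * w for w in weights_a.values()))
    norm_b = math.sqrt(sum(w * w for w in weights_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical WKWA-style weights: "keyword" boosted in doc1 (title hit).
doc1 = {"keyword": 2.0, "weight": 1.0, "web": 1.0}
doc2 = {"keyword": 1.0, "weight": 1.0}
sim = cosine_similarity(doc1, doc2)
print(round(sim, 4))
```

The specific weighting formula is where WKWA differs from plain TF*IDF; the similarity computation on top of the weights is standard.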
Keywords with retrieval value clearly convey a paper's main contents, enhance the influence of academic periodicals, highlight specific information, and generalize information. However, because some authors and editors ignore the value of keywords, canonical guidance is lacking, and editorial knowledge structures are limited, academic papers exhibit problems with keywords: papers contain too many or too few keywords, or keywords are too general or colloquial. Following the principles of academic rigor, accuracy, logic, and comprehensiveness, this paper discusses how to choose keywords properly.
This paper improves a keyword-based search strategy over encrypted cloud data and presents a method that uses different strategies on the client and the server to improve search efficiency. The client uses Chinese and English to construct keyword synonyms, establishes fuzzy-syllable word and synonym sets for the keywords, and implements a fuzzy search strategy over the encrypted cloud data. The server, by analyzing the user's query request, provides candidate keywords for the user to choose from, and topic words and secondary words are picked out. The system matches topic words against historical queries in time order, so the result of a new query request can be obtained directly. Analysis of the simulation experiment shows that the fuzzy search strategy makes better use of historical results while preserving privacy, achieving efficient data search, saving search time, and improving search efficiency.
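The client-side expansion described above — a synonym set plus a fuzzy set per keyword — can be sketched with the wildcard technique common in fuzzy searchable-encryption literature, where `*` marks one edited position so a misspelling and the stored word share a wildcard form. The synonym table and keywords here are hypothetical, and this omits the encryption layer entirely:

```python
def fuzzy_set(word):
    """Wildcard fuzzy set for edit distance 1: '*' stands for one
    substituted position or one inserted character."""
    s = {word}
    for i in range(len(word)):
        s.add(word[:i] + "*" + word[i + 1:])  # one substitution
    for i in range(len(word) + 1):
        s.add(word[:i] + "*" + word[i:])      # one insertion
    return s

def expand_query(keyword, synonyms):
    """Client-side expansion: the keyword, its synonyms, and the fuzzy
    set of every term, all sent (in encrypted form) as one query."""
    terms = {keyword} | synonyms.get(keyword, set())
    expanded = set()
    for t in terms:
        expanded |= fuzzy_set(t)
    return expanded

synonyms = {"cloud": {"remote"}}  # hypothetical bilingual synonym table
q = expand_query("cloud", synonyms)
# A stored misspelling "claud" matches via the shared form "cl*ud",
# because fuzzy_set("claud") also contains "cl*ud".
print("cl*ud" in q, "remote" in q)
```

Because both the index and the query are expanded the same way, matching reduces to set intersection, which is what makes caching historical query results straightforward.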
Funding: The authors would like to thank the support of the Fundamental Research Funds for the Central Universities (No. 30918012204). The authors also gratefully acknowledge the helpful comments and suggestions of other researchers, which have improved the presentation.
Funding: Project supported by the Science Foundation of Shanghai Municipal Commission of Science and Technology (Grant No. 055115001).