Let G be a group. The family of all sets which are closed in every Hausdorff group topology of G forms the family of closed sets of a T_(1) topology M_(G) on G, called the Markov topology. Similarly, the family of all algebraic subsets of G forms a family of closed sets for another T_(1) topology Z_(G) on G, called the Zariski topology. A subgroup H of G is said to be Markov (resp. Zariski) embedded if the equality M_(G|H) = M_(H) (resp. Z_(G|H) = Z_(H)) holds. It is proved that an arbitrary subgroup of a free group is both Zariski and Markov embedded in it.
Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications, this technology faces various problems such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for tampering blind-detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed from the texture feature of local binary patterns (LBP), the direct current (DC) coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray level co-occurrence matrix (GLCM) for detecting the tampered region, and the recovery code (RC) is designed according to the average grayscale value of pixels in image blocks for recovering the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of the image, the authentication-code comparison method and a threshold judgment method are used to perform two rounds of tampering detection and to blindly recover the tampered content. Experimental results show that the algorithm has good transparency, strong blind-detection capability, and good self-recovery performance against four types of malicious attacks and some conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB respectively, which demonstrates its advantages over other related algorithms from recent years.
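A hedged sketch of the two ingredients named above, LSB embedding and a recovery code derived from block-average grayscale. The function names and the 5-bit quantization width are illustrative assumptions, not the paper's exact design:

```python
def set_lsb(pixel: int, bit: int) -> int:
    """Embed one watermark bit into the least significant bit of an 8-bit pixel."""
    return (pixel & ~1) | (bit & 1)

def recovery_code(block: list[int], bits: int = 5) -> int:
    """Derive a recovery code from a block's average grayscale, quantized to `bits` bits
    (a hypothetical RC design following the abstract's description)."""
    avg = sum(block) // len(block)
    return avg >> (8 - bits)
```

In this style of scheme, the RC of one block is typically embedded into the LSBs of a distant block ("staggered"), so a tampered region can be rebuilt from codes stored elsewhere.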
In a global navigation satellite system denial environment, cross-view geo-localization based on image retrieval presents a critical visual localization solution for Unmanned Aerial Vehicle (UAV) systems. The essence of cross-view geo-localization is matching images that contain the same geographical targets across disparate platforms, such as UAV-view and satellite-view images. However, images of the same geographical targets may suffer from occlusions and geometric distortions due to variations in the capturing platform, view, and timing. Existing methods predominantly extract features by segmenting feature maps, which overlooks the holistic semantic distribution and structural information of objects, resulting in loss of image information. To address these challenges, a dilated neighborhood attention Transformer is employed as the feature extraction backbone, and Multi-feature representations based on Multi-scale Hierarchical Contextual Aggregation (MMHCA) is proposed. In MMHCA, the multi-scale hierarchical contextual aggregation method extracts contextual information from local to global across various granularity levels, establishing feature associations between contextual information and the global and local information in the image. Subsequently, the multi-feature representations method is used to obtain rich discriminative feature information, bolstering the robustness of the model in scenarios characterized by positional shifts, varying distances, and scale ambiguities. Comprehensive experiments on the extensively used University-1652 and SUES-200 benchmarks indicate that the MMHCA method surpasses existing techniques, showing outstanding results in UAV localization and navigation.
This study proposes a learner profile framework based on multi-feature fusion, aiming to enhance the precision of personalized learning recommendations by integrating learners' static attributes (e.g., demographic data and historical academic performance) with dynamic behavioral patterns (e.g., real-time interactions and interests evolving over time). The research employs Term Frequency-Inverse Document Frequency (TF-IDF) for semantic feature extraction, integrates the Analytic Hierarchy Process (AHP) for feature weighting, and introduces a time decay function inspired by Newton's law of cooling to dynamically model changes in learners' interests. Empirical results demonstrate that this framework effectively captures the dynamic evolution of learners' behaviors and provides context-aware learning resource recommendations. The study introduces a novel paradigm for learner modeling in educational technology, combining methodological innovation with a scalable technical architecture, thereby laying a foundation for the development of adaptive learning systems.
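The Newton's-cooling idea can be sketched as an exponential decay w(t) = w0 * exp(-k*t) applied to interest weights; the decay constant and hour-based time unit below are illustrative assumptions, not values from the study:

```python
import math

def decayed_interest(w0: float, t_hours: float, k: float = 0.05) -> float:
    """Interest weight decayed in the style of Newton's law of cooling:
    w(t) = w0 * exp(-k * t), so older interactions count for less."""
    return w0 * math.exp(-k * t_hours)
```

A recent interaction thus outweighs an identical one observed days earlier, which is how the framework keeps the profile tracking evolving interests.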
The traditional EnFCM (Enhanced Fuzzy C-Means) algorithm considers only grayscale features in image segmentation, yielding unsatisfactory results when the algorithm is used for remote sensing woodland image segmentation and extraction. An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion was proposed. Firstly, histogram equalization was applied to improve the image contrast. Secondly, the texture and edge features of the image were extracted, and a multi-feature fused pixel image was generated using PCA. The fused feature was then used as a feature constraint to measure the difference between pixels instead of a single grayscale feature. Finally, an improved feature distance metric calculated the similarity between pixel points and cluster centers to complete the cluster segmentation. Experimental results showed that the error was between 1.5% and 4.0% compared with forested areas hand-drawn by experts, giving high-accuracy segmentation and extraction results.
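The PCA fusion step can be illustrated as projecting each pixel's feature vector (grayscale, texture, edge) onto its top principal component. This is a dependency-free sketch using power iteration, an illustrative reconstruction rather than the authors' code:

```python
def pca_fuse(rows: list[list[float]], iters: int = 100) -> list[float]:
    """Project per-pixel feature vectors onto their top principal component
    via power iteration on the channel scatter matrix (pure Python)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    # scatter matrix across feature channels (covariance up to a constant factor)
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # fused scalar feature per pixel: projection onto the dominant direction
    return [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
```

The fused scalar then replaces the single grayscale value inside the EnFCM distance computation.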
Digital watermarking must balance imperceptibility, robustness, complexity, and security. To address the challenge of computational efficiency in trellis-based informed embedding, we propose a modified watermarking framework that integrates fuzzy c-means (FCM) clustering into the generation of block codewords for labeling trellis arcs. The system incorporates a parallel trellis structure, controllable embedding parameters, and a novel informed embedding algorithm with reduced complexity. Two types of embedding schemes, memoryless and memory-based, are designed to flexibly trade off between imperceptibility and robustness. Experimental results demonstrate that the proposed method outperforms existing approaches in bit error rate (BER) and computational complexity under various attacks, including additive noise, filtering, JPEG compression, cropping, and rotation. The integration of FCM enhances robustness by increasing the codeword distance while preserving perceptual quality. Overall, the proposed framework is suitable for real-time and secure watermarking applications.
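For reference, the FCM ingredient itself is the standard alternating update of soft memberships and weighted centers. A minimal one-dimensional sketch (generic FCM, not the paper's trellis-specific variant; initialization and iteration count are my own choices):

```python
def fuzzy_c_means(xs: list[float], c: int = 2, m: float = 2.0, iters: int = 60):
    """Minimal 1-D fuzzy c-means: returns (centers, memberships[c][n])."""
    centers = [min(xs), max(xs)] if c == 2 else list(xs[:c])
    n = len(xs)
    U = [[0.0] * n for _ in range(c)]
    for _ in range(iters):
        # membership update: u_ij proportional to d_ij^(-2/(m-1))
        for j, x in enumerate(xs):
            inv = [(abs(x - ck) + 1e-12) ** (-2.0 / (m - 1.0)) for ck in centers]
            s = sum(inv)
            for i in range(c):
                U[i][j] = inv[i] / s
        # center update: weighted mean with weights u^m
        for i in range(c):
            w = [U[i][j] ** m for j in range(n)]
            centers[i] = sum(wj * xj for wj, xj in zip(w, xs)) / sum(w)
    return centers, U
```

In the proposed framework, clustering of this kind groups block features so that codewords assigned to trellis arcs end up farther apart, which is what improves the BER under attack.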
In shale gas reservoir stimulation, proppants are essential for sustaining fracture conductivity. However, increasing closure stress causes proppants to embed into the rock matrix, leading to a progressive decline in fracture permeability and conductivity. Furthermore, rock creep contributes to long-term reductions in fracture performance. To elucidate the combined effects of proppant embedding and rock creep on sustained conductivity, this study conducted controlled experiments examining conductivity decay in propped fractures under varying closure stresses, explicitly accounting for both mechanisms. An embedded discrete fracture model was developed to simulate reservoir production under different conductivity decay scenarios while evaluating the influence of proppant parameters on fracture performance. The results demonstrate that fracture conductivity diminishes rapidly with increasing stress, yet at 50 MPa the decline becomes less pronounced. Simulated production profiles show strong agreement with actual gas well data, confirming the model's accuracy and predictive capability. These findings suggest that employing a high proppant concentration with a smaller particle size (5 kg/m^(2), 70/140 mesh) is effective for maintaining long-term fracture conductivity and enhancing shale gas recovery. This study provides a rigorous framework for optimizing proppant selection and designing stimulation strategies that maximize reservoir performance over time.
Tibetan medical named entity recognition (Tibetan MNER) involves extracting specific types of medical entities from unstructured Tibetan medical texts. Tibetan MNER provides important data support for work related to Tibetan medicine. However, existing Tibetan MNER methods often struggle to comprehensively capture multi-level semantic information, failing to sufficiently extract multi-granularity features and effectively filter out irrelevant information, which ultimately impacts the accuracy of entity recognition. This paper proposes an improved embedding representation method called syllable-word-sentence embedding. By leveraging features at different granularities and using un-scaled dot-product attention to focus on key features during feature fusion, the syllable-word-sentence embedding is integrated into the Transformer, enhancing the specificity and diversity of feature representations. The model leverages multi-level and multi-granularity semantic information, thereby improving the performance of Tibetan MNER. We evaluate the proposed model on datasets from various domains. The results indicate that the model effectively identifies three types of entities in the Tibetan news dataset we constructed, achieving an F1 score of 93.59%, an improvement of 1.24% over the vanilla FLAT. Additionally, results on the Tibetan medical dataset we developed show that it is effective in identifying five kinds of medical entities, with an F1 score of 71.39%, a 1.34% improvement over the vanilla FLAT.
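"Un-scaled dot-product attention" is ordinary softmax(QK^T)V with the usual 1/sqrt(d) factor deliberately omitted. A minimal sketch, with list-of-lists matrices for self-containment (not the paper's implementation):

```python
import math

def unscaled_attention(Q, K, V):
    """softmax(Q K^T) V, deliberately WITHOUT the usual 1/sqrt(d) scaling."""
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in K]
        mx = max(scores)                      # subtract max for numerical stability
        w = [math.exp(s - mx) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

Dropping the scaling sharpens the softmax for large dot products, which matches the abstract's goal of focusing on key features during fusion.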
A complete examination of Large Language Models' (LLMs') strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies covering 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods such as parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning such as Woodpecker's post-remedy corrections are highlighted. The investigation's scope, aims, and methods are described. The work reveals that domain-specialized, fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP advance remote sensing. Parameter-efficient tuning technologies like LoRA incur minimal computing cost with comparable performance, demonstrating the necessity of adaptive models across multiple domains. The proposed taxonomy helps identify optimal domain-specific models, explains domain-specific fine-tuning, and presents quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
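To make the parameter-efficiency claim about LoRA concrete: LoRA freezes a weight matrix W and trains only a low-rank update ΔW = B·A, so the trainable count drops from d_out*d_in to r*(d_in + d_out). A small illustrative sketch:

```python
def lora_delta(B: list[list[float]], A: list[list[float]]) -> list[list[float]]:
    """Low-rank update ΔW = B @ A, with B of shape (d_out, r) and A of shape (r, d_in);
    during fine-tuning only A and B are trained while W stays frozen."""
    r, d_in = len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
            for i in range(len(B))]

def lora_params(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Trainable parameter counts: full fine-tuning vs. LoRA's r * (d_in + d_out)."""
    return d_out * d_in, r * (d_in + d_out)
```

For a 1024x1024 layer at rank 8, LoRA trains 16,384 parameters instead of about a million, which is why the survey finds its computing cost minimal at comparable performance.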
Multimodal sentiment analysis aims to understand emotions from text, speech, and video data. However, current methods often overlook the dominant role of text and suffer from feature loss during integration. Given the varying importance of each modality across different contexts, a central challenge in multimodal sentiment analysis lies in maximizing the use of rich intra-modal features while minimizing information loss during the fusion process. In response to these limitations, we propose a novel framework that integrates spatial position encoding and fusion embedding modules. In our model, text is treated as the core modality, while speech and video features are selectively incorporated through a position-aware fusion process. The spatial position encoding strategy preserves the internal structural information of the speech and visual modalities, enabling the model to capture localized intra-modal dependencies that are often overlooked. This design enhances the richness and discriminative power of the fused representation, enabling more accurate and context-aware sentiment prediction. Finally, we conduct comprehensive evaluations on two widely recognized standard datasets, CMU-MOSI and CMU-MOSEI, to validate the performance of the proposed model. The experimental results demonstrate that our model performs well on sentiment analysis tasks.
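One common way to inject positional structure, which the abstract's "spatial position encoding" plausibly builds on, is the sinusoidal encoding; the exact variant used by the paper is not specified, so this is background only:

```python
import math

def positional_encoding(num_positions: int, dim: int) -> list[list[float]]:
    """Standard sinusoidal position encoding:
    PE[pos, 2i] = sin(pos / 10000^(2i/dim)), PE[pos, 2i+1] = cos(...)."""
    pe = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe
```

Adding such a matrix to speech or visual feature sequences lets the fusion module distinguish where in the utterance or clip a feature occurred, preserving the localized dependencies the abstract describes.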
Network virtualization is the development trend and an inevitable requirement of hybrid wireless sensor networks (HWSNs). Low mapping efficiency and service interruptions caused by mobility seriously affect the reliability of sensing tasks and ultimately the long-term revenue of infrastructure providers. In response to these problems, this paper proposes an efficient virtual network embedding algorithm with a reliable service guarantee. Based on the topological attributes of nodes, a method for evaluating the resource importance degree of the physical network is proposed, and nodes with rich resources are selected to improve embedding efficiency. Then, a method for evaluating the reliability degree of the physical network is proposed to predict the probability of mobile sensors providing uninterrupted service. Simulation results show that the proposed algorithm improves the acceptance rate of virtual sensor network (VSN) embedding requests and the long-term revenue of infrastructure providers.
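A toy sketch of the "select resource-rich nodes first" idea: score each physical node by combining its free resources with a topological attribute (here, degree), then embed onto the highest-scoring nodes. The scoring formula is my own illustration, not the paper's metric:

```python
def node_importance(adj: dict[str, set[str]], cpu: dict[str, float]) -> dict[str, float]:
    """Toy resource-importance score: free CPU weighted by connectivity (degree)."""
    return {n: cpu[n] * len(neigh) for n, neigh in adj.items()}

def rank_nodes(adj: dict[str, set[str]], cpu: dict[str, float]) -> list[str]:
    """Candidate order for virtual network embedding: descending importance."""
    imp = node_importance(adj, cpu)
    return sorted(imp, key=imp.get, reverse=True)
```

The paper's actual evaluation additionally predicts a reliability degree for mobile sensors; a production metric would fold that probability into the same ranking.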
Named Entity Recognition (NER) is vital in natural language processing for the analysis of news texts, as it accurately identifies entities such as locations, persons, and organizations, which is crucial for applications like news summarization and event tracking. However, NER in the news domain faces challenges due to insufficient annotated data, complex entity structures, and strong context dependencies. To address these issues, we propose a new Chinese named entity recognition method that integrates transfer learning with word embeddings. Our approach leverages the ERNIE pre-trained model for transfer learning and general language representations, and incorporates the Soft-lexicon word embedding technique to handle varied entity structures. This dual strategy enhances the model's understanding of context and boosts its ability to process complex texts. Experimental results show that our method achieves an F1 score of 94.72% on a news dataset, surpassing baseline methods by 3%–4%, thereby confirming its effectiveness for Chinese named entity recognition in the news domain.
Constructing an in vitro vascularized liver tissue model that closely simulates the human liver is crucial for promoting cell proliferation, mimicking physiological heterogeneous structures, and recreating the cellular microenvironment. However, the layer-by-layer printing method is significantly constrained by the rheological properties of the bioink, making it challenging to form complex three-dimensional vascular structures in low-viscosity soft materials. To overcome this limitation, we developed a cross-linkable biphasic embedding medium by mixing low-viscosity biomaterials with gelatin microgel. This medium possesses yield stress and self-healing properties, facilitating efficient and continuous three-dimensional shaping of sacrificial ink within it. By adjusting the printing speed, we controlled the filament diameter over a range from 250 μm to 1000 μm, ensuring precise control over ink deposition locations and filament shapes. Using the in situ endothelialization method, we constructed complex vascular structures and ensured close adhesion between hepatocytes and endothelial cells. In vitro experiments demonstrated that the vascularized liver tissue model exhibited enhanced protein synthesis and metabolic function compared to mixed liver tissue. We also investigated the impact of varying vascular densities on liver tissue function. Transcriptome sequencing revealed that liver tissues with higher vascular density exhibited upregulated gene expression in metabolic and angiogenesis-related pathways. In summary, this method is adaptable to various materials, allowing the rheological properties of the supporting bath and the tissue's porosity to be modified using microgels, thus enabling precise regulation of the liver tissue microenvironment. Additionally, it facilitates the rapid construction of three-dimensional vascular structures within liver tissue. The resulting vascularized liver tissue model exhibits enhanced biological functionality, opening new opportunities for biomedical applications.
In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. Ablation studies further highlight the impact and contribution of each component of the proposed hierarchical attention structure.
The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing. This situation raises significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings in detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed, a single-layer approach and a dual-layer design, each tested with static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM Word2Vec pipeline. Some runs over-fitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work is misread as AI-generated or engineered text evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
Objective: To explore the effects of acupoint catgut embedding combined with auricular point pressing with beans on symptom management self-efficacy and quality of life in patients with nonalcoholic steatohepatitis (NASH) of the liver depression and spleen deficiency type. Methods: Sixty patients with NASH of the liver depression and spleen deficiency type admitted to our hospital from January 2021 to December 2023 were selected and divided into an acupoint catgut embedding group (n=30) and a combined group (n=30) using the envelope lottery method. The acupoint catgut embedding group received acupoint catgut embedding intervention, while the combined group additionally received auricular point pressing with beans. The two groups were compared in terms of TCM syndrome scores, symptom management self-efficacy [Chronic Disease Self-Efficacy Scale (CDSES)], and quality of life [Chronic Liver Disease Questionnaire (CLDQ)]. Results: After the intervention, the combined group had lower TCM syndrome scores for both primary and secondary symptoms compared to the acupoint catgut embedding group (P<0.05). The combined group also had higher scores in all dimensions and in the total score of the CDSES (P<0.05). Similarly, the combined group had higher scores in all dimensions and in the total score of the CLDQ (P<0.05). Conclusion: Acupoint catgut embedding combined with auricular point pressing with beans can effectively improve TCM symptoms, enhance symptom management self-efficacy, and improve quality of life in patients with NASH of the liver depression and spleen deficiency type.
Chinese Clinical Named Entity Recognition (CNER) is a crucial step in extracting medical information and is of great significance in promoting medical informatization. However, CNER poses challenges due to the specificity of clinical terminology, the complexity of Chinese text semantics, and the uncertainty of Chinese entity boundaries. To address these issues, we propose an improved CNER model based on multi-feature fusion and multi-scale local context enhancement. The model fuses multi-feature representations of pinyin, radical, Part of Speech (POS), and word boundary with BERT deep contextual representations to enhance the semantic representation of the text for more effective entity recognition. Furthermore, to address the model's limitation of focusing only on global features, we incorporate Convolutional Neural Networks (CNNs) with various kernel sizes to capture multi-scale local features of the text and enhance the model's comprehension of it. Finally, we integrate the obtained global and local features and employ a multi-head attention (MHA) mechanism to strengthen the model's focus on characters associated with medical entities, boosting performance. We obtained F1 scores of 92.74% and 87.80% on the two CNER benchmark datasets CCKS2017 and CCKS2019, respectively. The results demonstrate that our model outperforms the latest CNER models, showcasing its outstanding overall performance. The proposed CNER model thus has important application value in constructing clinical medical knowledge graphs and intelligent Q&A systems.
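The multi-scale idea, convolutions with several kernel sizes, each pooled to a scale-specific feature, can be sketched in one dimension. A toy illustration of the mechanism, not the paper's network:

```python
def conv1d(seq: list[float], kernel: list[float]) -> list[float]:
    """Valid-mode 1-D convolution (cross-correlation) over a feature sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def multi_scale(seq: list[float], kernels: list[list[float]]) -> list[float]:
    """Apply kernels of several sizes and max-pool each, yielding one
    feature per scale, the essence of multi-scale local context capture."""
    return [max(conv1d(seq, k)) for k in kernels]
```

Concatenating the per-scale features with BERT's global representation is what gives the model both local and global views of a clinical sentence.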
Aerodynamic surrogate modeling mostly relies only on integrated loads data obtained from simulation or experiment, neglecting the valuable distributed physical information on the surface. To make full use of both integrated and distributed loads, a modeling paradigm called heterogeneous data-driven aerodynamic modeling is presented. The essential idea is to incorporate the physical information of distributed loads as additional constraints within end-to-end aerodynamic modeling. For such heterogeneous data, a novel and easily applicable physical feature embedding modeling framework is designed. This framework extracts low-dimensional physical features from the pressure distribution and then effectively enhances the modeling of the integrated loads via feature embedding. The proposed framework can be coupled with multiple feature extraction methods, and its generalization across different airfoils is verified through a transonic case. Compared with traditional direct modeling, the proposed framework can reduce testing errors by almost 50%. Given the same prediction accuracy, it can save more than half of the training samples. Furthermore, visualization analysis reveals a significant correlation between the discovered low-dimensional physical features and the heterogeneous aerodynamic loads, which shows the interpretability and credibility of the superior performance offered by the proposed deep learning framework.
In this paper, we introduce the notion of embedding tensors on 3-Hom-Lie algebras and show that embedding tensors naturally induce 3-Hom-Leibniz algebras. Moreover, the cohomology theory of embedding tensors on 3-Hom-Lie algebras is defined. As an application, we show that if two linear deformations of an embedding tensor on a 3-Hom-Lie algebra are equivalent, then their infinitesimals belong to the same cohomology class in the first cohomology group.
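For orientation, the underlying classical (non-Hom) notion can be stated as follows; the Hom-twisted version in the paper modifies these identities with the structure maps, so this is background only, not the paper's exact definition:

```latex
% An embedding tensor on a 3-Lie algebra (\mathfrak{g}, [\cdot,\cdot,\cdot])
% with representation (V, \rho) is a linear map T : V \to \mathfrak{g} with
[Tu,\, Tv,\, Tw] \;=\; T\bigl(\rho(Tu, Tv)\,w\bigr), \qquad u, v, w \in V,
% which induces a 3-Leibniz bracket on V:
\{u, v, w\} \;:=\; \rho(Tu, Tv)\,w .
```

The induced bracket is the mechanism by which an embedding tensor turns the representation space into a (3-Hom-)Leibniz algebra, which is the first result the abstract states.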
A novel image encryption scheme based on parallel compressive sensing and edge detection embedding technology is proposed to improve visual security. Firstly, the plain image is sparsely represented using the discrete wavelet transform. Then, the coefficient matrix is scrambled and compressed to obtain a size-reduced image using the Fisher–Yates shuffle and parallel compressive sensing. Subsequently, to increase the security of the proposed algorithm, the compressed image is re-encrypted through permutation and diffusion to obtain a noise-like secret image. Finally, an adaptive embedding method based on edge detection for different carrier images is proposed to generate a visually meaningful cipher image. To improve the plaintext sensitivity of the algorithm, the counter mode is combined with the hash function to generate keys for chaotic systems. Additionally, an effective permutation method is designed to scramble the pixels of the compressed image in the re-encryption stage. The simulation results and analyses demonstrate that the proposed algorithm performs well in terms of visual security and decryption quality.
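The Fisher–Yates scrambling step can be sketched as a key-seeded permutation with an exact inverse obtained by replaying the same key stream. This uses Python's `random.Random` as a stand-in for the scheme's chaotic key generator, which is an assumption for illustration:

```python
import random

def scramble(pixels: list[int], key: int) -> list[int]:
    """Key-driven Fisher–Yates permutation of a flattened pixel sequence."""
    out = list(pixels)
    rng = random.Random(key)
    for i in range(len(out) - 1, 0, -1):
        j = rng.randint(0, i)            # inclusive bounds, per Fisher–Yates
        out[i], out[j] = out[j], out[i]
    return out

def unscramble(pixels: list[int], key: int) -> list[int]:
    """Invert the permutation by regenerating the same swap sequence
    from the key and undoing the swaps in reverse order."""
    rng = random.Random(key)
    swaps = [(i, rng.randint(0, i)) for i in range(len(pixels) - 1, 0, -1)]
    out = list(pixels)
    for i, j in reversed(swaps):
        out[i], out[j] = out[j], out[i]
    return out
```

Because each swap is its own inverse, replaying the swaps backwards restores the original order exactly, so decryption only needs the key, not the permutation table.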
Funding: Supported by the Grant-in-Aid for Scientific Research (C) by the Japan Society for the Promotion of Science (20K03615).
Funding: supported by the Postgraduate Research&Practice Innovation Program of Jiangsu Province,China(Grant No.SJCX24_1332),the Jiangsu Province Education Science Planning Project in 2024(Grant No.B-b/2024/01/122),and the High-Level Talent Scientific Research Foundation of Jinling Institute of Technology,China(Grant No.jit-b-201918).
Abstract: Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright.However,in practical applications,this technology faces various problems such as severe image distortion,inaccurate localization of the tampered regions,and difficulty in recovering content.Given these shortcomings,a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed.The multi-feature watermarking authentication code(AC)is constructed using the texture feature of local binary patterns(LBP),the direct coefficient of the discrete cosine transform(DCT)and the contrast feature of the gray level co-occurrence matrix(GLCM)for detecting the tampered region,and the recovery code(RC)is designed according to the average grayscale value of pixels in image blocks for recovering the tampered content.The optimal pixel adjustment process(OPAP)and least significant bit(LSB)algorithms are used to embed the recovery code and authentication code into the image in a staggered manner.When checking the integrity of the image,an authentication code comparison method and a threshold judgment method are used to perform two rounds of tampering detection on the image and blindly recover the tampered content.Experimental results show that this algorithm has good transparency and strong blind-detection and self-recovery performance against four types of malicious attacks and some conventional signal processing operations.When resisting copy-paste,text addition,cropping and vector quantization attacks under a tampering rate(TR)of 10%,the average tampering detection rate is up to 94.09%,and the peak signal-to-noise ratios(PSNR)of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB,respectively,which demonstrates its excellent advantages compared with other related algorithms in recent years.
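The LSB embedding step mentioned in this abstract can be sketched as follows. This is a minimal illustration of plain LSB substitution only, not the paper's actual OPAP/staggered scheme; the function names and the 4-pixel block are hypothetical:

```python
def embed_lsb(pixel: int, bit: int) -> int:
    """Replace the least significant bit of an 8-bit pixel with a watermark bit."""
    return (pixel & 0xFE) | (bit & 1)

def extract_lsb(pixel: int) -> int:
    """Recover the embedded bit from a watermarked pixel."""
    return pixel & 1

# Embed one block's authentication bits into its pixels; each pixel
# changes by at most 1 gray level, which is why LSB embedding is
# nearly imperceptible.
block = [154, 201, 77, 96]
auth_bits = [1, 0, 1, 1]
watermarked = [embed_lsb(p, b) for p, b in zip(block, auth_bits)]
recovered = [extract_lsb(p) for p in watermarked]
```

During authentication, the extracted bits are recomputed from the block's features and compared; a mismatch flags the block as tampered.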
Funding: supported by the National Natural Science Foundation of China(Nos.12072027,62103052,61603346 and 62103379),the Henan Key Laboratory of General Aviation Technology,China(No.ZHKF-230201),the Funding for the Open Research Project of the Rotor Aerodynamics Key Laboratory,China(No.RAL20200101),the Key Research and Development Program of Henan Province,China(Nos.241111222000 and 241111222900),the Key Science and Technology Program of Henan Province,China(No.232102220067),and the Scholarship Funding from the China Scholarship Council(No.202206030079).
Abstract: In a global navigation satellite system denial environment,cross-view geo-localization based on image retrieval presents an exceedingly critical visual localization solution for Unmanned Aerial Vehicle(UAV)systems.The essence of cross-view geo-localization resides in matching images containing the same geographical targets from disparate platforms,such as UAV-view and satellite-view images.However,images of the same geographical targets may suffer from occlusions and geometric distortions due to variations in the capturing platform,view,and timing.The existing methods predominantly extract features by segmenting feature maps,which overlooks the holistic semantic distribution and structural information of objects,resulting in loss of image information.To address these challenges,a dilated neighborhood attention Transformer is employed as the feature extraction backbone,and Multi-feature representations based on Multi-scale Hierarchical Contextual Aggregation(MMHCA)is proposed.In the proposed MMHCA method,the multi-scale hierarchical contextual aggregation method is utilized to extract contextual information from local to global across various granularity levels,establishing feature associations of contextual information with global and local information in the image.Subsequently,the multi-feature representations method is utilized to obtain rich discriminative feature information,bolstering the robustness of the model in scenarios characterized by positional shifts,varying distances,and scale ambiguities.Comprehensive experiments conducted on the extensively utilized University-1652 and SUES-200 benchmarks indicate that the MMHCA method surpasses the existing techniques,showing outstanding results in UAV localization and navigation.
Funding: This work is supported by the Ministry of Education of Humanities and Social Science projects in China(No.20YJCZH124)and Guangdong Province Education and Teaching Reform Project No.640:Research on the Teaching Practice and Application of Online Peer Assessment Methods in the Context of Artificial Intelligence.
Abstract: This study proposes a learner profile framework based on multi-feature fusion,aiming to enhance the precision of personalized learning recommendations by integrating learners'static attributes(e.g.,demographic data and historical academic performance)with dynamic behavioral patterns(e.g.,real-time interactions and evolving interests over time).The research employs Term Frequency-Inverse Document Frequency(TF-IDF)for semantic feature extraction,integrates the Analytic Hierarchy Process(AHP)for feature weighting,and introduces a time decay function inspired by Newton's law of cooling to dynamically model changes in learners'interests.Empirical results demonstrate that this framework effectively captures the dynamic evolution of learners'behaviors and provides context-aware learning resource recommendations.The study introduces a novel paradigm for learner modeling in educational technology,combining methodological innovation with a scalable technical architecture,thereby laying a foundation for the development of adaptive learning systems.
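A time decay function "inspired by Newton's law of cooling" typically means an exponential decay of interest weight with elapsed time. A minimal sketch under that assumption (the decay rate and function name are illustrative, not taken from the paper):

```python
import math

def interest_weight(days_elapsed: float, decay_rate: float = 0.05) -> float:
    """Exponential decay weight in the spirit of Newton's law of cooling:
    an interest signal 'cools' toward zero as time passes, so recent
    interactions count more than stale ones."""
    return math.exp(-decay_rate * days_elapsed)

# Weight two interactions with the same topic: one from today, one 30 days old.
recent = interest_weight(0)    # full weight
stale = interest_weight(30)    # strongly attenuated
```

In a profile pipeline, such weights would multiply the TF-IDF scores of interaction features before AHP-weighted fusion.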
Funding: supported by the National Natural Science Foundation of China(No.61761027)and the Gansu Young Doctor's Fund for Higher Education Institutions(No.2021QB-053).
Abstract: The traditional EnFCM(Enhanced Fuzzy C-Means)algorithm only considers grey-scale features in image segmentation,resulting in less than satisfactory results when the algorithm is used for remote sensing woodland image segmentation and extraction.An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion was therefore proposed.Firstly,histogram equalization was applied to improve the image contrast.Secondly,the texture and edge features of the image were extracted,and a multi-feature fused pixel image was generated using the PCA technique.Moreover,the fused feature was used as a feature constraint to measure the difference between pixels instead of a single grey-scale feature.Finally,an improved feature distance metric calculated the similarity between the pixel points and the cluster center to complete the cluster segmentation.The experimental results showed that the error was between 1.5% and 4.0% compared with the forested area counted by experts'hand-drawing,showing that a high-accuracy segmentation and extraction result could be obtained.
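The PCA fusion step, where per-pixel feature vectors (grey level, texture, edge) are projected onto their leading principal component to produce a single fused pixel image, can be sketched as follows. This is a generic SVD-based PCA, not the paper's exact pipeline:

```python
import numpy as np

def pca_fuse(features: np.ndarray, n_components: int = 1) -> np.ndarray:
    """Project per-pixel feature vectors (n_pixels x n_features) onto
    their top principal components, yielding a fused feature per pixel."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy example: 5 pixels, each described by (grey, texture, edge) features.
rng = np.random.default_rng(0)
feats = rng.random((5, 3))
fused = pca_fuse(feats)   # one fused value per pixel
```

The fused channel preserves the direction of maximum joint variance, so pixel differences measured on it reflect all three features at once instead of grey level alone.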
Funding: funded by the National Science and Technology Council,Taiwan,under grant numbers NSTC 114-2221-E-167-005-MY3 and NSTC 113-2221-E-167-006-.
Abstract: Digital watermarking must balance imperceptibility,robustness,complexity,and security.To address the challenge of computational efficiency in trellis-based informed embedding,we propose a modified watermarking framework that integrates fuzzy c-means(FCM)clustering into the generation of block codewords for labeling trellis arcs.The system incorporates a parallel trellis structure,controllable embedding parameters,and a novel informed embedding algorithm with reduced complexity.Two types of embedding schemes,memoryless and memory-based,are designed to flexibly trade off between imperceptibility and robustness.Experimental results demonstrate that the proposed method outperforms existing approaches in bit error rate(BER)and computational complexity under various attacks,including additive noise,filtering,JPEG compression,cropping,and rotation.The integration of FCM enhances robustness by increasing the codeword distance while preserving perceptual quality.Overall,the proposed framework is suitable for real-time and secure watermarking applications.
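The FCM clustering at the heart of the codeword generation can be sketched with a minimal 1-D fuzzy c-means loop. This is the textbook algorithm only (fuzzifier m=2, Euclidean distance); the paper's codeword labeling built on top of it is not reproduced:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and a membership
    matrix U (n_samples x c) whose rows each sum to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)       # weighted means
        dist = np.abs(data[:, None] - centers[None, :]) + 1e-12
        inv = dist ** (-2.0 / (m - 1))                  # membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated groups of scalar "codeword features".
data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, u = fcm(data)
```

Clustering codewords this way lets arcs of the trellis be labeled with representatives that are far apart, which is how FCM can increase the effective codeword distance.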
Funding: supported by the National Natural Science Foundation of China(Nos.52204051 and 52304046).
Abstract: In shale gas reservoir stimulation,proppants are essential for sustaining fracture conductivity.However,increasing closure stress causes proppants to embed into the rock matrix,leading to a progressive decline in fracture permeability and conductivity.Furthermore,rock creep contributes to long-term reductions in fracture performance.To elucidate the combined effects of proppant embedding and rock creep on sustained conductivity,this study conducted controlled experiments examining conductivity decay in propped fractures under varying closure stresses,explicitly accounting for both mechanisms.An embedded discrete fracture model was developed to simulate reservoir production under different conductivity decay scenarios,while evaluating the influence of proppant parameters on fracture performance.The results demonstrate that fracture conductivity diminishes rapidly with increasing stress,yet at 50 MPa the decline becomes less pronounced.Simulated production profiles show strong agreement with actual gas well data,confirming the model's accuracy and predictive capability.These findings suggest that employing a high proppant concentration with a smaller particle size(5 kg/m^(2),70/140 mesh)is effective for maintaining long-term fracture conductivity and enhancing shale gas recovery.This study provides a rigorous framework for optimizing proppant selection and designing stimulation strategies that maximize reservoir performance over time.
Funding: supported in part by the National Science and Technology Major Project(Grant 2022ZD0116100),in part by the National Natural Science Foundation Key Project(Grant 62436006),in part by the National Natural Science Foundation Youth Fund(Grant 62406257),in part by the Xizang Autonomous Region Natural Science Foundation General Project(Grant XZ202401ZR0031),in part by the National Natural Science Foundation of China(Grant 62276055),in part by the Sichuan Science and Technology Program(Grant 23ZDYF0755),and in part by the Xizang University'High-Level Talent Training Program'Project(Grant 2022-GSP-S098).
Abstract: Tibetan medical named entity recognition(Tibetan MNER)involves extracting specific types of medical entities from unstructured Tibetan medical texts.Tibetan MNER provides important data support for work related to Tibetan medicine.However,existing Tibetan MNER methods often struggle to comprehensively capture multi-level semantic information,failing to sufficiently extract multi-granularity features and effectively filter out irrelevant information,which ultimately impacts the accuracy of entity recognition.This paper proposes an improved embedding representation method called syllable-word-sentence embedding.By leveraging features at different granularities and using un-scaled dot-product attention to focus on key features for feature fusion,the syllable-word-sentence embedding is integrated into the Transformer,enhancing the specificity and diversity of feature representations.The model leverages multi-level and multi-granularity semantic information,thereby improving the performance of Tibetan MNER.We evaluate our proposed model on datasets from various domains.The results indicate that the model effectively identified three types of entities in the Tibetan news dataset we constructed,achieving an F1 score of 93.59%,which represents an improvement of 1.24% compared to the vanilla FLAT.Additionally,results from the Tibetan medical dataset we developed show that it is effective in identifying five kinds of medical entities,with an F1 score of 71.39%,which is a 1.34% improvement over the vanilla FLAT.
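Un-scaled dot-product attention is ordinary attention with the familiar 1/sqrt(d) factor omitted, so sharper score differences survive into the softmax. A minimal NumPy sketch (toy queries/keys/values, not the paper's syllable-word-sentence features):

```python
import numpy as np

def unscaled_attention(q, k, v):
    """Dot-product attention WITHOUT the 1/sqrt(d) scaling, used here to
    fuse features of different granularities by their raw similarity."""
    scores = q @ k.T                                   # (n_q, n_k) raw similarities
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over keys
    return weights @ v, weights

q = np.array([[1.0, 0.0]])                      # one query feature
k = np.array([[1.0, 0.0], [0.0, 1.0]])          # two granularity-level keys
v = np.array([[10.0, 0.0], [0.0, 10.0]])        # their value vectors
out, w = unscaled_attention(q, k, v)
```

Because the score gap is not shrunk by sqrt(d), the matching key dominates the fused output more strongly than in scaled attention of the same dimension.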
Abstract: A complete examination of Large Language Models'strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance,strengths,and weaknesses.This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies.In this research,50 studies on 25+LLMs,including GPT-3,GPT-4,Claude 3.5,DeepKet,and hybrid multimodal frameworks like ContextDET and GeoRSCLIP,are thoroughly reviewed.We propose an LLM application taxonomy by grouping techniques by task focus:healthcare,chemistry,sentiment analysis,agent-based simulations,and multimodal integration.Advanced methods like parameter-efficient tuning(LoRA),quantum-enhanced embeddings(DeepKet),retrieval-augmented generation(RAG),and safety-focused models(GalaxyGPT)are evaluated for dataset requirements,computational efficiency,and performance measures.Frameworks for ethical issues,data-limited hallucinations,and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted.The investigation's scope,aims,and methods are described.The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications.In medical text normalization,ChatGPT-4 outperforms previous models,while multimodal frameworks such as GeoRSCLIP improve remote sensing.Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance,demonstrating the necessity for adaptive models in multiple domains.The goals are to discover the optimum domain-specific models,explain domain-specific fine-tuning,and present quantum and multimodal LLMs to address scalability and cross-domain issues.The framework helps academics and practitioners identify,adapt,and innovate LLMs for different purposes.This work advances the field of efficient,interpretable,and ethical LLM application research.
Funding: supported by the Collaborative Tackling Project of the Yangtze River Delta Sci-Tech Innovation Community(Nos.2024CSJGG01503 and 2024CSJGG01500),the Guangxi Key Research and Development Program(No.AB24010317),and the Jiangxi Provincial Key Laboratory of Electronic Data Control and Forensics(Jiangxi Police College)(No.2025JXJYKFJJ002).
Abstract: Multimodal sentiment analysis aims to understand emotions from text,speech,and video data.However,current methods often overlook the dominant role of text and suffer from feature loss during integration.Given the varying importance of each modality across different contexts,a central and pressing challenge in multimodal sentiment analysis lies in maximizing the use of rich intra-modal features while minimizing information loss during the fusion process.In response to these critical limitations,we propose a novel framework that integrates spatial position encoding and fusion embedding modules to address these issues.In our model,text is treated as the core modality,while speech and video features are selectively incorporated through a unique position-aware fusion process.The spatial position encoding strategy preserves the internal structural information of the speech and visual modalities,enabling the model to capture localized intra-modal dependencies that are often overlooked.This design enhances the richness and discriminative power of the fused representation,enabling more accurate and context-aware sentiment prediction.Finally,we conduct comprehensive evaluations on two widely recognized standard datasets in the field,CMU-MOSI and CMU-MOSEI,to validate the performance of the proposed model.The experimental results demonstrate that our model exhibits good performance and effectiveness for sentiment analysis tasks.
Funding: supported by the National Natural Science Foundation of China(61901071,61871062,61771082,U20A20157),the Science and Natural Science Foundation of Chongqing,China(cstc2020jcyjzdxmX0024),the University Innovation Research Group of Chongqing(CXQT20017),and the Scientific and Technological Research Program of Chongqing Municipal Education Commission(No.KJZD-K201901301).
Abstract: Network virtualization is the development trend and an inevitable requirement of hybrid wireless sensor networks(HWSNs).Low mapping efficiency and service interruption caused by mobility seriously affect the reliability of sensing tasks and ultimately affect the long-term revenue of the infrastructure providers.In response to these problems,this paper proposes an efficient virtual network embedding algorithm with a reliable service guarantee.Based on the topological attributes of nodes,a method for evaluating the physical network resource importance degree is proposed,and the nodes with rich resources are selected to improve embedding efficiency.Then,a method for evaluating the physical network reliability degree is proposed to predict the probability of mobile sensors providing uninterrupted services.The simulation results show that the proposed algorithm improves the acceptance rate of virtual sensor network(VSN)embedding requests and the long-term revenue of the infrastructure providers.
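A common way to score "resource importance" from topological attributes is to combine a node's free capacity with the bandwidth of its adjacent links, then embed onto the highest-scoring nodes first. The scoring formula and all names below are illustrative assumptions, not the paper's actual metric:

```python
def importance(node, cpu, adj, bandwidth):
    """Toy resource-importance score: a node's free CPU times the total
    bandwidth of its adjacent links (a common topology-aware heuristic)."""
    return cpu[node] * sum(bandwidth[frozenset((node, nbr))] for nbr in adj[node])

# A 3-node physical substrate: node 'a' is the hub.
cpu = {"a": 4.0, "b": 2.0, "c": 1.0}
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
bandwidth = {frozenset(("a", "b")): 10.0, frozenset(("a", "c")): 5.0}

# Rank physical nodes so the embedding tries resource-rich nodes first.
ranked = sorted(cpu, key=lambda n: importance(n, cpu, adj, bandwidth), reverse=True)
```

Mapping virtual nodes in this order concentrates requests on well-provisioned substrate nodes, which is what raises the VSN request acceptance rate.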
Funding: funded by the Advanced Research Project(30209040702).
Abstract: Named Entity Recognition(NER)is vital in natural language processing for the analysis of news texts,as it accurately identifies entities such as locations,persons,and organizations,which is crucial for applications like news summarization and event tracking.However,NER in the news domain faces challenges due to insufficient annotated data,complex entity structures,and strong context dependencies.To address these issues,we propose a new Chinese named entity recognition method that integrates transfer learning with word embeddings.Our approach leverages the ERNIE pre-trained model for transfer learning and obtaining general language representations,and incorporates the Soft-lexicon word embedding technique to handle varied entity structures.This dual strategy enhances the model's understanding of context and boosts its ability to process complex texts.Experimental results show that our method achieves an F1 score of 94.72% on a news dataset,surpassing baseline methods by 3%–4%,thereby confirming its effectiveness for Chinese named entity recognition in the news domain.
Funding: supported by the National Natural Science Foundation of China(No.52275294)and the National Key Research and Development Program of China(No.2018YFA0703000).
Abstract: Constructing an in vitro vascularized liver tissue model that closely simulates the human liver is crucial for promoting cell proliferation,mimicking physiological heterogeneous structures,and recreating the cellular microenvironment.However,the layer-by-layer printing method is significantly constrained by the rheological properties of the bioink,making it challenging to form complex three-dimensional vascular structures in low-viscosity soft materials.To overcome this limitation,we developed a cross-linkable biphasic embedding medium by mixing low-viscosity biomaterials with gelatin microgel.This medium possesses yield stress and self-healing properties,facilitating efficient and continuous three-dimensional shaping of sacrificial ink within it.By adjusting the printing speed,we controlled the filament diameter,achieving a range from 250μm to 1000μm,and ensuring precise control over ink deposition locations and filament shapes.Using the in situ endothelialization method,we constructed complex vascular structures and ensured close adhesion between hepatocytes and endothelial cells.In vitro experiments demonstrated that the vascularized liver tissue model exhibited enhanced protein synthesis and metabolic function compared to mixed liver tissue.We also investigated the impact of varying vascular densities on liver tissue function.Transcriptome sequencing revealed that liver tissues with higher vascular density exhibited upregulated gene expression in metabolic and angiogenesis-related pathways.In summary,this method is adaptable to various materials,allowing the rheological properties of the supporting bath and the tissue's porosity to be modified using microgels,thus enabling precise regulation of the liver tissue microenvironment.Additionally,it facilitates the rapid construction of three-dimensional vascular structures within liver tissue.The resulting vascularized liver tissue model exhibits enhanced biological functionality,opening new opportunities for biomedical applications.
Funding: supported by the National Science and Technology Council(NSTC),Taiwan,under Grant Numbers 112-2622-E-029-009 and 112-2221-E-029-019.
Abstract: In the domain of knowledge graph embedding,conventional approaches typically transform entities and relations into continuous vector spaces.However,parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations.In particular,resource-intensive embeddings often lead to increased computational costs,and may limit scalability and adaptability in practical environments,such as in low-resource settings or real-world applications.This paper explores an approach to knowledge graph representation learning that leverages small,reserved entity and relation sets for parameter-efficient embedding.We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets,thereby reducing model complexity.Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions.The ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
Abstract: The increasing fluency of advanced language models,such as GPT-3.5,GPT-4,and the recently introduced DeepSeek,challenges the ability to distinguish between human-authored and AI-generated academic writing.This situation is raising significant concerns regarding the integrity and authenticity of academic work.In light of the above,the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory(BiLSTM)networks enhanced with pre-trained GloVe(Global Vectors for Word Representation)embeddings to detect AI-generated scientific abstracts drawn from the AI-GA(Artificial Intelligence Generated Abstracts)dataset.Two core BiLSTM variants were assessed:a single-layer approach and a dual-layer design,each tested under static or adaptive embeddings.The single-layer model achieved nearly 97% accuracy with trainable GloVe,occasionally surpassing the deeper model.Despite these gains,neither configuration fully matched the 98.7% benchmark set by an earlier LSTM-Word2Vec pipeline.Some runs over-fitted when embeddings were fine-tuned,whereas static embeddings offered a slightly lower yet stable accuracy of around 96%.This lingering gap reinforces a key ethical and procedural concern:relying solely on automated tools,such as Turnitin's AI-detection features,to penalize individuals risks unjust outcomes.Misclassifications occur whether legitimate work is misread as AI-generated or engineered text evades detection,demonstrating that these classifiers should not stand as the sole arbiters of authenticity.A more comprehensive approach is warranted,one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
Abstract: Objective:To explore the effects of acupoint catgut embedding combined with auricular point pressing with beans on symptom management self-efficacy and quality of life in patients with nonalcoholic steatohepatitis(NASH)of liver depression and spleen deficiency type.Methods:Sixty patients with NASH of liver depression and spleen deficiency type admitted to our hospital from January 2021 to December 2023 were selected and divided into an acupoint catgut embedding group(n=30)and a combined group(n=30)using the envelope lottery method.The acupoint catgut embedding group received acupoint catgut embedding intervention,while the combined group received auricular point pressing with beans on the basis of the acupoint catgut embedding intervention.The two groups were compared in terms of TCM syndrome scores,symptom management self-efficacy[Chronic Disease Self-Efficacy Scale(CDSES)],and quality of life[Chronic Liver Disease Questionnaire(CLDQ)].Results:After intervention,the combined group had lower TCM syndrome scores for both primary and secondary symptoms compared to the acupoint catgut embedding group(P<0.05).The combined group also had higher scores in all dimensions and total score of the CDSES compared to the acupoint catgut embedding group(P<0.05).Similarly,the combined group had higher scores in all dimensions and total score of the CLDQ compared to the acupoint catgut embedding group(P<0.05).Conclusion:Acupoint catgut embedding combined with auricular point pressing with beans can effectively improve TCM symptoms,enhance symptom management self-efficacy,and improve quality of life in patients with NASH of liver depression and spleen deficiency type.
Funding: This study was supported by the National Natural Science Foundation of China(61911540482 and 61702324).
Abstract: Chinese Clinical Named Entity Recognition(CNER)is a crucial step in extracting medical information and is of great significance in promoting medical informatization.However,CNER poses challenges due to the specificity of clinical terminology,the complexity of Chinese text semantics,and the uncertainty of Chinese entity boundaries.To address these issues,we propose an improved CNER model based on multi-feature fusion and multi-scale local context enhancement.The model simultaneously fuses multi-feature representations of pinyin,radical,Part of Speech(POS),and word boundary with BERT deep contextual representations to enhance the semantic representation of text for more effective entity recognition.Furthermore,to address the model's limitation of focusing only on global features,we incorporate Convolutional Neural Networks(CNNs)with various kernel sizes to capture multi-scale local features of the text and enhance the model's comprehension of the text.Finally,we integrate the obtained global and local features and employ a multi-head attention mechanism(MHA)to enhance the model's focus on characters associated with medical entities,hence boosting the model's performance.We obtained 92.74% and 87.80% F1 scores on the two CNER benchmark datasets,CCKS2017 and CCKS2019,respectively.The results demonstrate that our model outperforms the latest models in CNER,showcasing its outstanding overall performance.The proposed CNER model thus has important application value in constructing clinical medical knowledge graphs and intelligent Q&A systems.
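The multi-scale local feature idea, running windows of several widths over a token embedding sequence and concatenating the results, can be sketched with mean-pooling "kernels" in NumPy. Real implementations use learned CNN filters; the mean kernel here is a stand-in to show the shapes:

```python
import numpy as np

def multi_scale_features(seq, kernel_sizes=(2, 3, 4)):
    """Multi-scale local context: pool a (seq_len, d) embedding sequence
    over windows of several widths and concatenate per-token results."""
    feats = []
    for k in kernel_sizes:
        padded = np.pad(seq, ((0, k - 1), (0, 0)))  # zero-pad the tail
        # windows[i] is the sequence shifted by i tokens; their mean is a
        # width-k sliding average anchored at each token.
        windows = np.stack([padded[i:i + len(seq)] for i in range(k)])
        feats.append(windows.mean(axis=0))
    return np.concatenate(feats, axis=1)

seq = np.arange(12, dtype=float).reshape(6, 2)   # 6 tokens, embedding dim 2
out = multi_scale_features(seq)                   # (6, 6): three scales of dim 2
```

Each token thus carries context at widths 2, 3 and 4 simultaneously, mirroring how parallel CNN branches with different kernel sizes feed the fusion layer.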
Funding: supported by the National Natural Science Foundation of China(Nos.92152301 and 12072282).
Abstract: Aerodynamic surrogate modeling mostly relies only on integrated loads data obtained from simulation or experiment,while neglecting and wasting the valuable distributed physical information on the surface.To make full use of both integrated and distributed loads,a modeling paradigm,called heterogeneous data-driven aerodynamic modeling,is presented.The essential concept is to incorporate the physical information of distributed loads as additional constraints within end-to-end aerodynamic modeling.For such heterogeneous data,a novel and easily applicable physical feature embedding modeling framework is designed.This framework extracts low-dimensional physical features from the pressure distribution and then effectively enhances the modeling of the integrated loads via feature embedding.The proposed framework can be coupled with multiple feature extraction methods,and its well-performing generalization capabilities over different airfoils are verified through a transonic case.Compared with traditional direct modeling,the proposed framework can reduce testing errors by almost 50%.Given the same prediction accuracy,it can save more than half of the training samples.Furthermore,visualization analysis has revealed a significant correlation between the discovered low-dimensional physical features and the heterogeneous aerodynamic loads,which shows the interpretability and credibility of the superior performance offered by the proposed deep learning framework.
Funding: Supported by the Scientific Research Foundation for the Science&Technology Innovation Talent Team of the Intelligent Computing and Monitoring of Guizhou Province(Grant No.QJJ[2023]063),the Science and Technology Program of Guizhou Province(Grant Nos.ZK[2023]025,QKHZC[2023]372 and ZK[2022]031),the National Natural Science Foundation of China(Grant No.12161013),the Scientific Research Foundation of Guizhou University of Finance and Economics(Grant No.2022KYYB08),and the Doctoral Research Start-Up Fund of Guiyang University(Grant No.GYU-KY-2024).
Abstract: In this paper,we introduce the notion of embedding tensors on 3-Hom-Lie algebras and show that embedding tensors naturally induce 3-Hom-Leibniz algebras.Moreover,the cohomology theory of embedding tensors on 3-Hom-Lie algebras is defined.As an application,we show that if two linear deformations of an embedding tensor on a 3-Hom-Lie algebra are equivalent,then their infinitesimals belong to the same cohomology class in the first cohomology group.
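For intuition, the mechanism can be illustrated in the simpler binary Lie-algebra setting (a sketch only; the paper's 3-Hom-Lie definition involves ternary brackets and twist maps, which are omitted here): given a Lie algebra $\mathfrak{g}$ and a representation $(V,\rho)$, an embedding tensor is a linear map $T\colon V\to\mathfrak{g}$ satisfying

```latex
\[
  [Tu,\,Tv] \;=\; T\bigl(\rho(Tu)\,v\bigr), \qquad u,v \in V,
\]
% and it induces a (generally non-antisymmetric) Leibniz bracket on V:
\[
  u \circ v \;=\; \rho(Tu)\,v .
\]
```

The induced bracket satisfies the Leibniz identity but not antisymmetry, which is the binary analogue of how embedding tensors on 3-Hom-Lie algebras induce 3-Hom-Leibniz algebras.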
Funding: supported by the Key Area R&D Program of Guangdong Province(Grant No.2022B0701180001),the National Natural Science Foundation of China(Grant No.61801127),the Science Technology Planning Project of Guangdong Province,China(Grant Nos.2019B010140002 and 2020B111110002),and the Guangdong-Hong Kong-Macao Joint Innovation Field Project(Grant No.2021A0505080006).
Abstract: A novel image encryption scheme based on parallel compressive sensing and edge detection embedding technology is proposed to improve visual security. Firstly, the plain image is sparsely represented using the discrete wavelet transform. Then, the coefficient matrix is scrambled and compressed to obtain a size-reduced image using the Fisher–Yates shuffle and parallel compressive sensing. Subsequently, to increase the security of the proposed algorithm, the compressed image is re-encrypted through permutation and diffusion to obtain a noise-like secret image. Finally, an adaptive embedding method based on edge detection for different carrier images is proposed to generate a visually meaningful cipher image. To improve the plaintext sensitivity of the algorithm, the counter mode is combined with the hash function to generate keys for chaotic systems. Additionally, an effective permutation method is designed to scramble the pixels of the compressed image in the re-encryption stage. The simulation results and analyses demonstrate that the proposed algorithm performs well in terms of visual security and decryption quality.
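The Fisher–Yates scrambling step can be made key-driven by replacing the random number generator with a chaotic sequence, so the same key reproduces (and can invert) the permutation. A minimal sketch; the logistic map, the parameter r=3.99 and the key value are illustrative assumptions, not the paper's actual chaotic system:

```python
def logistic_sequence(x0: float, n: int, r: float = 3.99):
    """Chaotic logistic-map sequence x_{k+1} = r*x_k*(1-x_k), used as a
    keyed pseudo-random source in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def keyed_shuffle(pixels, x0: float):
    """Fisher-Yates shuffle driven by the logistic map instead of an RNG:
    swap position i with a chaos-chosen position j in [0, i]."""
    out = list(pixels)
    chaos = logistic_sequence(x0, len(out))
    for i in range(len(out) - 1, 0, -1):
        j = int(chaos[i] * (i + 1)) % (i + 1)   # map chaos value into [0, i]
        out[i], out[j] = out[j], out[i]
    return out

scrambled = keyed_shuffle(range(16), x0=0.4213)
```

Because every swap is determined by the key x0, a receiver holding the key can regenerate the same swap sequence and apply it in reverse to unscramble.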