Funding: The National Natural Science Foundation of P.R. China (42075130); Nari Technology Co., Ltd. (4561655965).
Abstract: Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture to lighten the original backbone and improve feature aggregation, including replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at the intersection over union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy, making it well-suited for performance-constrained environments.
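For readers unfamiliar with the substitution, the saving from depth-wise convolution can be seen in a few lines of PyTorch. The sketch below is illustrative only and is not taken from the YOLOv5ST code; channel counts are assumptions.

```python
import torch.nn as nn

# Illustrative sketch: replacing a standard 3x3 convolution with a
# depth-wise separable one, as in the backbone-lightening step above.
def standard_conv(c_in, c_out):
    return nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)  # ~c_in*c_out*9 weights

def depthwise_separable_conv(c_in, c_out):
    return nn.Sequential(
        # depth-wise: one 3x3 filter per input channel (groups=c_in)
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),
        # point-wise: 1x1 convolution mixes channels
        nn.Conv2d(c_in, c_out, kernel_size=1),
    )  # ~c_in*9 + c_in*c_out weights: roughly 8-9x fewer for large c_out
```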
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity; Chongqing Municipal Education Commission Grants for Major Science and Technology Project (KJZD-M202301901); and the Science and Technology Research Project of Jiangxi Department of Education (GJJ2201049).
Abstract: Text perception is crucial for understanding the semantics of outdoor scenes, making it a key requirement for building intelligent systems for driver assistance or autonomous driving. Text information in car-mounted videos can assist drivers in making decisions. However, car-mounted video text images pose challenges such as complex backgrounds, small fonts, and the need for real-time detection. We propose a robust Car-mounted Video Text Detector (CVTD), a lightweight text detection model based on ResNet18 for feature extraction, capable of detecting text in arbitrary shapes. Our model efficiently extracts global text positions through Coordinate Attention Threshold Activation (CATA) and strengthens feature representation by stacking two Feature Pyramid Enhancement Fusion Modules (FPEFM), which integrate local text features with global position information. The enhanced feature maps, when acted upon by Text Activation Maps (TAM), effectively distinguish text foreground from non-text regions. Additionally, we collected and annotated a dataset containing 2200 images of Car-mounted Video Text (CVT) under various road conditions for training and evaluating our model's performance. We further tested our model on four other challenging public natural scene text detection benchmark datasets, demonstrating its strong generalization ability and real-time detection speed. This model holds potential for practical applications in real-world scenarios.
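CATA builds on the coordinate attention mechanism, which factorizes channel attention into two direction-aware poolings so long-range positional information is preserved. Below is a minimal PyTorch sketch of the standard coordinate attention block (Hou et al., 2021); the threshold-activation variant is the paper's own addition and is not reproduced here.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of the standard coordinate attention block CATA extends."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.pool_h(x)                      # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # height-wise weights
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # width-wise weights
        return x * ah * aw
```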
Abstract: The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing. This situation raises significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings in detecting AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer design, each tested under static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM-Word2Vec pipeline. Some runs over-fitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications cut both ways: legitimate work may be misread as AI-generated, while engineered text may evade detection, demonstrating that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
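The static-versus-adaptive comparison above comes down to whether the embedding matrix is frozen during training. A minimal PyTorch sketch of the single-layer variant follows, assuming a GloVe weight matrix has already been loaded; hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Single-layer BiLSTM over (optionally frozen) GloVe vectors."""
    def __init__(self, glove_weights, hidden=128, freeze=True):
        super().__init__()
        # glove_weights: FloatTensor (vocab_size, dim), e.g. from glove.6B.300d
        # freeze=True  -> "static" embeddings; freeze=False -> "adaptive"
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=freeze)
        self.lstm = nn.LSTM(glove_weights.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # human vs. AI-generated

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))  # (B, T, 2*hidden)
        pooled = out.mean(dim=1)                   # average over time steps
        return self.head(pooled).squeeze(-1)       # logit; use BCEWithLogitsLoss
```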
Abstract: Scene text detection has advanced rapidly in recent years. We propose Mask Text Detector, a novel algorithm applicable to text of arbitrary shapes. Built on Mask R-CNN, it replaces the original RPN layer with an anchor-free method for generating proposal boxes, reducing hyperparameters, model parameters, and computation. We also propose LQCS (Localization Quality and Classification Score) joint regression, which couples coordinate quality with the classification score, eliminating the inconsistency between them at prediction time. To help the network distinguish hard samples, we combine traditional edge detection algorithms and propose a Socle-Mask branch to generate segmentation masks. This module extracts texture features separately along the horizontal and vertical directions and adds a channel self-attention mechanism that lets the network select channel features autonomously. Extensive experiments on three challenging datasets (Total-Text, CTW1500, and ICDAR2015) verify that the algorithm achieves strong text detection performance.
Funding: Supported by the National Natural Science Foundation of China (Nos. U1536121, 61370195).
Abstract: In recent years, images have played a more and more important role in our daily life and social communication. To some extent, the textual information contained in pictures is an important factor in understanding the content of the scenes themselves. The more accurate the text detection of natural scenes is, the more accurate our semantic understanding of the images will be. Thus, scene text detection has become a hot spot in the domain of computer vision. In this paper, we present a modified text detection network based on further research and improvement of the Connectionist Text Proposal Network (CTPN) proposed by previous researchers. To extract deeper features that are less affected by different images, we use a Residual Network (ResNet) to replace the Visual Geometry Group Network (VGGNet) used in the original network. Meanwhile, to enhance the robustness of the models to multiple languages, we train on the multi-lingual scene text detection and script identification datasets (MLT) of the 2017 International Conference on Document Analysis and Recognition (ICDAR2017). Apart from that, an attention mechanism is used to obtain a more reasonable weight distribution. The proposed models achieve a 0.91 F1-score on the ICDAR2011 test set, better than CTPN trained on the same datasets by about 5%.
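The VGG-to-ResNet substitution amounts to truncating a ResNet so its output stride matches the feature map CTPN's head expects. A minimal PyTorch sketch under that assumption; the ResNet depth and cut point here are illustrative, not taken from the paper, and the detection head would still need its input channels adapted.

```python
import torch.nn as nn
import torchvision.models as models

class ResNetBackbone(nn.Module):
    """Hypothetical drop-in replacement for CTPN's VGG16 feature extractor."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Keep layers through layer3 so the output stride (1/16) matches
        # the stride of VGG16's conv5 features used by the original CTPN.
        self.features = nn.Sequential(*list(resnet.children())[:-3])

    def forward(self, x):
        return self.features(x)  # (B, 1024, H/16, W/16)
```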
Funding: Supported in part by the National Natural Science Foundation of China (61302041, 61363044, 61562053, 61540042) and the Applied Basic Research Foundation of Yunnan Provincial Science and Technology Department (2013FD011, 2016FD039).
Abstract: Text in natural scene images usually carries abundant semantic information. However, due to variations of text and complexity of background, detecting text in scene images becomes a critical and challenging task. In this paper, we present a novel method to detect text in scene images. Firstly, we decompose scene images into background and text components using morphological component analysis (MCA), which reduces the adverse effects of complex backgrounds on the detection results. In order to improve the performance of image decomposition, two discriminative dictionaries of background and text are learned from the training samples. Moreover, Laplacian sparse regularization is introduced into our proposed dictionary learning method, which improves the discrimination of the dictionary. Based on the text dictionary and the sparse-representation coefficients of text, we can construct the text component. After that, the text in the query image can be detected by applying certain heuristic rules. The results of experiments show the effectiveness of the proposed method.
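For orientation, a common form of Laplacian-regularized discriminative dictionary learning is sketched below; the paper's exact objective and notation may differ.

```latex
\min_{D, A}\ \underbrace{\lVert X - DA \rVert_F^2}_{\text{reconstruction}}
\;+\; \lambda \lVert A \rVert_1
\;+\; \gamma\, \operatorname{tr}\!\left(A L A^{\top}\right)
```

Here X stacks the training samples as columns, D is the dictionary, A holds the sparse codes, and L is a graph Laplacian over the samples; the trace term encourages similar samples to receive similar codes, which is what makes the learned dictionary discriminative.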
Funding: Project supported by the OMRON and SJTU Collaborative Foundation under the PVS project (2005.03–2005.10).
Abstract: This paper proposes a learning-based method for text detection and text segmentation in natural scene images. First, the input image is decomposed into multiple connected components (CCs) by the Niblack clustering algorithm. Then all the CCs, including text CCs and non-text CCs, are verified on their text features by a two-stage classification module, where most non-text CCs are discarded by an attentional cascade classifier and the remaining CCs are further verified by an SVM. All the accepted CCs are output to produce a text-only binary image. Experiments with many images in different scenes showed satisfactory performance of our proposed method.
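A minimal sketch of such a two-stage CC verification pipeline, assuming OpenCV and scikit-learn: the coarse geometric rules below stand in for the attensional cascade, and feature_fn is a hypothetical feature extractor, neither taken from the paper.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def detect_text_ccs(binary_img, svm: SVC, feature_fn):
    """Two-stage verification: cheap rules reject most CCs, an SVM verifies the rest."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_img)
    accepted = []
    for i in range(1, n):  # label 0 is background
        x, y, w, h, area = stats[i]
        # Stage 1: cascade-style rejection via coarse geometric rules
        if area < 20 or not (0.1 < w / h < 10):
            continue
        # Stage 2: SVM verdict on richer texture/shape features
        feats = feature_fn(binary_img[y:y + h, x:x + w])
        if svm.predict([feats])[0] == 1:
            accepted.append((x, y, w, h))
    return accepted
```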
Abstract: Text embedded in images is one of many important cues for indexing and retrieval of images and videos. In this paper, we present a novel method of detecting text aligned either horizontally or vertically, in which a pyramid structure is used to represent an image and the features of the text are extracted using the SUSAN edge detector. Text regions at each level of the pyramid are identified according to autocorrelation analysis. New techniques are introduced to split the text regions into basic ones and merge them into text lines. By evaluating the method on a set of images, we obtain very good text detection performance.
Funding: Supported by the National High Technology Research and Development Program of China (No. 2012AA011005).
Abstract: Topic models such as Latent Dirichlet Allocation (LDA) have been successfully applied to many text mining tasks for extracting topics embedded in corpora. However, existing topic models generally cannot discover bursty topics that experience a sudden increase during a period of time. In this paper, we propose a new topic model named Burst-LDA, which simultaneously discovers topics and reveals their burstiness by explicitly modeling each topic's burst states with a first-order Markov chain and using the chain to generate the topic proportions of documents in a logistic-normal fashion. A Gibbs sampling algorithm is developed for the posterior inference of the proposed model. Experimental results on a news data set show our model can efficiently discover bursty topics, outperforming the state-of-the-art method.
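One way to read the generative step described above, in hedged form (the symbols are illustrative, not the paper's notation): each topic k carries a binary burst state evolving as a Markov chain over time epochs, and a document's topic proportions are drawn logistic-normally with a mean shifted by the burst state.

```latex
s_{k,t} \mid s_{k,t-1} \sim \operatorname{Markov}(\pi), \qquad s_{k,t} \in \{0, 1\}
\eta_{d,k} \sim \mathcal{N}\!\left(\mu_k + b_k\, s_{k,t_d},\ \sigma_k^2\right), \qquad
\theta_d = \operatorname{softmax}(\eta_d)
```

Here t_d is the time stamp of document d, and a boost b_k > 0 raises topic k's expected share in documents written while the topic is bursting.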
Funding: Supported by the National Natural Science Foundation of China (Nos. 61300163, 61125106 and 61300162) and the Jiangsu Key Laboratory of Big Data Analysis Technology.
Abstract: This paper proposes a new two-phase approach to robust text detection by integrating visual appearance and geometric reasoning rules. In the first phase, geometric rules are used to achieve a higher recall rate. Specifically, a robust stroke width transform (RSWT) feature is proposed to better recover the stroke width by additionally considering the cross of two strokes and the continuousness of the letter border. In the second phase, a classification scheme based on visual appearance features is used to reject the false alarms while keeping the recall rate. To learn a better classifier from multiple visual appearance features, a novel classification method called double soft multiple kernel learning (DS-MKL) is proposed. DS-MKL is motivated by a novel kernel margin perspective on multiple kernel learning and can effectively suppress the influence of noisy base kernels. Comprehensive experiments on the benchmark ICDAR2005 competition dataset demonstrate the effectiveness of the proposed two-phase text detection approach over state-of-the-art approaches, with a performance gain of up to 4.4% in terms of F-measure.
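For context, standard multiple kernel learning combines base kernels (here, one per appearance feature) through a learned convex combination; DS-MKL's "double soft" margin changes how the weights are regularized, which is the paper's contribution and is not reproduced here.

```latex
K(x, x') = \sum_{m=1}^{M} \beta_m K_m(x, x'), \qquad
\beta_m \ge 0, \quad \sum_{m=1}^{M} \beta_m = 1
```

Noisy base kernels are suppressed when their weights \beta_m are driven toward zero during training.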
Abstract: In today's real world, an important research area in image processing is scene text detection and recognition. Scene text can be in different languages, fonts, sizes, colours, orientations and structures. Moreover, the aspect ratios and layouts of a scene text may differ significantly. All these variations appear as significant challenges for the detection and recognition algorithms designed for text in natural scenes. In this paper, a new intelligent method for detecting text from natural scenes and recognizing it by applying the newly proposed Conditional Random Field-based fuzzy rules incorporated Convolutional Neural Network (CR-CNN) has been proposed. Moreover, we have recommended a new text detection method for detecting the exact text from the input natural scene images. For enhancing the performance of the edge detection process, image pre-processing activities such as edge detection and color modeling have been applied in this work. In addition, we have generated new fuzzy rules for making effective decisions in the processes of text detection and recognition. The experiments have been conducted using standard benchmark datasets such as the ICDAR 2003, the ICDAR 2011, the ICDAR 2005 and the SVT, and have achieved better detection accuracy in text detection and recognition. Using these datasets, five different experiments have been conducted for evaluating the proposed model. We have also compared the proposed system with other classifiers such as the SVM, the MLP and the CNN. In these comparisons, the proposed model has achieved better classification accuracy when compared with the other existing works.
Funding: This work was funded by the Deanship of Scientific Research at Jouf University (Kingdom of Saudi Arabia) under Grant No. DSR-2021-02-0392.
Abstract: Detecting and recognizing text from natural scene images presents a challenge because the image quality depends on the conditions in which the image is captured, such as viewing angles, blurring, sensor noise, etc. In this paper, a prototype for text detection and recognition from natural scene images is proposed. This prototype is based on the Raspberry Pi 4 and a Universal Serial Bus (USB) camera and embeds our text detection and recognition model, which was developed using the Python language. Our model uses the deep learning Efficient and Accurate Scene Text Detector (EAST) model for text localization and detection, and Tesseract-OCR as an Optical Character Recognition (OCR) engine for text recognition. Our prototype is controlled from a computer through the Virtual Network Computing (VNC) tool via a wireless connection. The experiment results show that the recognition rate for images captured through the camera by our prototype can reach 99.75% with low computational complexity. Furthermore, our prototype outperforms the Tesseract software in terms of the recognition rate. Besides, it provides the same recognition rate as the EasyOCR software on the Raspberry Pi 4 board with a huge decrease in execution time, by an average of 89%.
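A minimal sketch of such an EAST-plus-Tesseract pipeline, assuming OpenCV's DNN module and the pytesseract binding; the frozen model filename is the commonly distributed EAST checkpoint rather than a file named by the paper, and box decoding with NMS is omitted for brevity.

```python
import cv2
import pytesseract

# Stage 1: EAST localizes text via OpenCV's DNN module.
net = cv2.dnn.readNet("frozen_east_text_detection.pb")
image = cv2.imread("scene.jpg")
# EAST expects dimensions that are multiples of 32 and the usual mean subtraction.
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# ... decode `scores`/`geometry` into rotated boxes and apply NMS ...

# Stage 2: Tesseract recognizes each detected region, e.g.:
# text = pytesseract.image_to_string(image[y:y + h, x:x + w])
```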
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20200717012.
Abstract: Segmentation-based scene text detection has drawn a great deal of attention, as it can describe text instances with arbitrary shapes based on pixel-level prediction. However, most segmentation-based methods suffer from complex post-processing to separate text instances which are close to each other, resulting in considerable time consumption during the inference procedure. A label enhancement method is proposed in this paper to construct two kinds of training labels for segmentation-based scene text detection. The label distribution learning (LDL) method is used to overcome the problem brought by pure shrunk text labels, which might result in suboptimal detection performance. The experimental results on three benchmarks demonstrate that the proposed method can consistently improve performance without sacrificing inference speed.
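The "shrunk text labels" mentioned above are usually built by offsetting each annotated polygon inward, as in PSENet/DBNet; a hedged sketch of that standard construction (the paper's LDL labels are built on top of such masks):

```latex
d = \frac{A\,(1 - r^2)}{L}
```

where A is the polygon's area, L its perimeter, and r the shrink ratio; the polygon is offset inward by d (typically with the Vatti clipping algorithm) to form the positive-text mask, which keeps nearby instances separated at the cost of under-covering the true text region.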
Funding: This work is supported by the National Natural Science Foundation of China (61872231, 61701297).
Abstract: Scene text detection is an important step in a scene text reading system. Two problems remain in existing text detection methods: (1) the small receptive fields of shallow convolutional layers are not sufficiently sensitive to target areas in the image; (2) the deep convolutional layers lose a lot of spatial feature information. Therefore, detecting scene text remains a challenging issue. In this work, we design an effective text detector named Adaptive Multi-Scale HyperNet (AMSHN) to improve text detection performance. Specifically, AMSHN enhances the sensitivity to target semantics in shallow features with a new attention mechanism, which strengthens regions of interest in the image and weakens regions of no interest. In addition, it reduces the loss of spatial features by fusing features along multiple paths, which significantly improves the detection performance. Experimental results on the Robust Reading Challenge on Reading Chinese Text on Signboard (ReCTS) dataset show that the proposed method achieves state-of-the-art results, which proves the ability of our detector in both specialized and general applications.
Funding: The work was supported by the National Natural Science Foundation of China (61972062, 62306060), the Basic Research Project of Liaoning Province (2023JH2/101300191), the Liaoning Doctoral Research Start-Up Fund Project (2023-BS-078), and the Dalian Academy of Social Sciences (2023dlsky028).
Abstract: Aiming at the challenges associated with the absence of a labeled dataset for Yi characters and the complexity of Yi character detection and recognition, we present a deep learning-based approach for Yi character detection and recognition. In the detection stage, an improved Differentiable Binarization Network (DBNet) framework is introduced to detect Yi characters, in which Omni-dimensional Dynamic Convolution (ODConv) is combined with the ResNet-18 feature extraction module to obtain multi-dimensional complementary features, thereby improving the accuracy of Yi character detection. Then, the feature pyramid network fusion module is used to further extract Yi character image features, improving target recognition at different scales. The resulting feature map is passed through a head network to produce two maps of the same size as the original image: a probability map and an adaptive threshold map. These maps are then subjected to a differentiable binarization process, yielding an approximate binarization map that helps to identify the boundaries of the text boxes. Finally, the text detection boxes are generated in the post-processing stage. In the recognition stage, an improved lightweight MobileNetV3 framework is used to recognize the detected character regions, where the original Squeeze-and-Excitation (SE) block is replaced by the efficient Shuffle Attention (SA), which integrates spatial and channel attention, improving the accuracy of Yi character recognition. Meanwhile, the use of depthwise separable convolution and the inverted residual structure reduces the number of parameters and the computation of the model, so that the model can better exploit contextual information and improve the accuracy of text recognition. The experimental results illustrate that the proposed method achieves good results in detecting and recognizing Yi characters, with detection and recognition accuracy rates of 97.5% and 96.8%, respectively. We have also compared the detection and recognition algorithms proposed in this paper with other typical algorithms; in these comparisons, the proposed model achieves better detection and recognition results with consistent reliability.
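The differentiable binarization step has a standard closed form in the original DBNet, which the improved framework described above presumably retains: with P the probability map, T the adaptive threshold map, and k an amplification factor (50 in the original DBNet),

```latex
\hat{B}_{i,j} = \frac{1}{1 + e^{-k\,\left(P_{i,j} - T_{i,j}\right)}}
```

This soft step function makes the binarization differentiable during training; at inference, text-box boundaries are read off the binarized map in post-processing, as described above.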
Abstract: In end-to-end text spotting for complex natural scenes, text is hard to distinguish from the background, and the positional information from text detection does not match the semantic information from recognition, so the correlation between detection and recognition cannot be exploited effectively. To address this problem, this paper proposes MSIDA (Multi-party Synergetic explicit Information with Dual-domain Awareness text spotting), an end-to-end scene text spotting method that strengthens text-region features and edge textures and exploits the synergy between detection and recognition features to improve end-to-end performance. First, a Dual-Domain Awareness (DDA) module fusing text spatial and orientation information is designed to enhance the visual features of text instances. Second, a Multi-party Explicit Information Synergy (MEIS) module is proposed to extract explicit information from the encoded features and generate candidate text instances by matching and aligning the position, classification, and character information used for detection and recognition. Finally, the synergetic features guide a learnable query sequence through the decoder to obtain the detection and recognition results. Compared with the recent DeepSolo (Decoder with explicit points Solo) method, MSIDA improves accuracy by 0.8%, 0.8%, and 0.4% on the Total-Text, ICDAR 2015, and CTW1500 datasets, respectively. Code and datasets are available at https://github.com/msida2024/MSIDA.git.