Fund: Project supported by the China-US Million Books Digital Library Project.
Abstract: This paper briefly introduces the main ideas of a sustainable, open-architecture OCR system and then describes the construction of an optical character recognition (OCR) center built on computer clusters, whose purpose is to dynamically improve the recognition precision of the digitized texts of the million volumes of books produced by the China-US Million Books Digital Library (CADAL) Project. The practice of this center should provide a helpful reference for other digital library projects.
Abstract: Distributed speech recognition (DSR) applications have certain quality-of-service (QoS) requirements in terms of latency, packet loss rate, and similar metrics. To deliver quality-guaranteed DSR applications over wired or wireless links, QoS mechanisms must be provided. We put forward an RTP/RSVP transmission scheme with a DSR-specific payload and QoS parameters, realized by modifying the present WAP protocol stack. Simulation results show that this scheme provides adequate network bandwidth to sustain real-time transport of DSR data over both wired and wireless channels.
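A minimal sketch of carrying a DSR feature payload over RTP. The 12-byte header layout follows RFC 3550; the dynamic payload type 97 and the raw feature bytes are illustrative assumptions, not the paper's actual DSR payload format:

```python
import struct

def build_rtp_packet(payload: bytes, seq: int, timestamp: int,
                     ssrc: int, payload_type: int = 97) -> bytes:
    """Pack a minimal 12-byte RTP header (RFC 3550) in front of a
    DSR feature payload. Payload type 97 is a dynamic type chosen
    here only for illustration."""
    version = 2
    byte0 = version << 6          # V=2, P=0, X=0, CC=0
    byte1 = payload_type & 0x7F   # M=0, 7-bit payload type
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload

pkt = build_rtp_packet(b"\x01\x02\x03", seq=1, timestamp=160, ssrc=0xDEADBEEF)
print(len(pkt))  # 15 = 12-byte header + 3-byte payload
```

In the scheme described above, such packets would additionally be subject to an RSVP reservation so the stated bandwidth guarantee holds end to end.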
Fund: Supported in part by the National Science Foundation of China under Grants U22B2027, 62172297, 62102262, 61902276, and 62272311; the Tianjin Intelligent Manufacturing Special Fund Project under Grant 20211097; the China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) under Grant AD23026096 (Application Number 2022AC20001); the Hainan Provincial Natural Science Foundation of China under Grant 622RC616; and the CCF-Nsfocus Kunpeng Fund Project under Grant CCF-NSFOCUS202207.
Abstract: Web application fingerprint recognition is an effective security technology designed to identify and classify web applications, thereby enhancing the detection of potential threats and attacks. Traditional fingerprint recognition methods, which rely on preannotated feature matching, face inherent limitations due to the ever-evolving nature and diverse landscape of web applications. In response to these challenges, this work proposes an innovative web application fingerprint recognition method founded on clustering techniques. The method involves extensive data collection from the Tranco List, employing adjusted feature selection built upon Wappalyzer and noise reduction through truncated SVD dimensionality reduction. The core of the methodology lies in the application of the unsupervised OPTICS clustering algorithm, eliminating the need for preannotated labels. By transforming web applications into feature vectors and leveraging clustering algorithms, our approach accurately categorizes diverse web applications, providing comprehensive and precise fingerprint recognition. The experimental results, which are obtained on a dataset featuring various web application types, affirm the efficacy of the method, demonstrating its ability to achieve high accuracy and broad coverage. This novel approach not only distinguishes between different web application types effectively but also demonstrates superiority in terms of classification accuracy and coverage, offering a robust solution to the challenges of web application fingerprint recognition.
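The pipeline described above (feature vectors → truncated SVD → unsupervised OPTICS) can be sketched with scikit-learn. The per-site technology tags below are invented toy data, and the DBSCAN-style cluster extraction with `eps=1.2` is one illustrative parameter choice, not the paper's configuration:

```python
from sklearn.cluster import OPTICS
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical Wappalyzer-style technology tags per site (toy data)
sites = {
    "site-a": ["nginx", "php", "wordpress"],
    "site-b": ["nginx", "php", "wordpress", "jquery"],
    "site-c": ["iis", "asp.net"],
    "site-d": ["iis", "asp.net", "jquery"],
}

# 1. Turn tag sets into binary feature vectors
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(sites.values()).astype(float)

# 2. Truncated SVD for noise reduction / dimensionality reduction
X_red = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# 3. Density-based clustering with no preannotated labels
labels = OPTICS(min_samples=2, cluster_method="dbscan", eps=1.2).fit_predict(X_red)
print(dict(zip(sites, labels)))
```

Sites sharing a technology stack land in the same cluster, which is the fingerprint the method assigns.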
Abstract: Four parameters of the chemical bond have been used to span a feature space that separates quasicrystal-forming Al alloys from alloys without quasicrystal formation, with good results. Since the first quasicrystal-forming system, the Al-Mn system, was discovered by Shechtman in 1984 [1], a series of quasicrystal-forming binary alloy systems have been found. Most of these systems are Al-containing systems. Bancel has indicated that there are three factors affecting the formability of quasicrystals [2]: (1) the electrochemical factor (this factor can be
Abstract: The purpose of this paper is to develop a mobile Android application, "Car Log", that gives users the ability to track all the costs for a vehicle and to add fuel cost data by taking a photo of the cash receipt from the gas station where the refueling was performed. OCR (optical character recognition) is the conversion of images of typed, handwritten, or printed text into machine-encoded text. Once the text is machine-encoded, it can be used in further machine processes, such as translation, or be extracted and transformed text-to-speech, helping people with simple everyday tasks. Users of the application can also enter completely different costs, grouped into categories and other charges. The Car Log application can quickly and easily visualize, edit, and add different costs for a car. It also supports multiple profiles, so data can be entered for all the cars in a single family, for example, or in a small business. The test results are positive; thus, we intend to develop the application further into a cloud-ready one.
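Once the receipt photo has been OCR'd into plain text, the fuel entry still has to be parsed out of it. A minimal sketch of that post-OCR step, with entirely hypothetical field labels (a real parser would need per-station receipt templates):

```python
import re

def parse_fuel_receipt(ocr_text: str) -> dict:
    """Pull litres and total price out of OCR'd receipt text.
    The 'Litres'/'Total' labels are hypothetical examples."""
    litres = re.search(r"(?i)litres?\s*[:=]?\s*([\d.]+)", ocr_text)
    total = re.search(r"(?i)total\s*[:=]?\s*([\d.]+)", ocr_text)
    return {
        "litres": float(litres.group(1)) if litres else None,
        "total": float(total.group(1)) if total else None,
    }

sample = "SHELL STATION\nLitres: 32.5\nTOTAL: 61.90\nThank you"
print(parse_fuel_receipt(sample))  # {'litres': 32.5, 'total': 61.9}
```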
Fund: Supported by the National Natural Science Foundation of China (No. 61502256).
Abstract: In this paper, we propose a video search system that uses face recognition as its search-indexing feature. As deployments of video cameras have increased greatly in recent years, face recognition is a perfect fit for finding targeted individuals within vast amounts of video data. However, search performance depends on the quality of the face images recorded in the video signal. Since surveillance cameras record subjects without fixed poses, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system processes real-life videos very efficiently and is robust to various kinds of face occlusion. It can therefore relieve human reviewers from sitting in front of monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
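The reconstruction idea — fit subspace coefficients from the unoccluded pixels only, then project back to recover the whole face — can be sketched with plain PCA. This is a least-squares stand-in for the paper's fuzzy PCA weighting, on random stand-in data rather than real face images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 "face" vectors of 100 pixels each
faces = rng.normal(size=(50, 100))
mean = faces.mean(axis=0)
# Eigenfaces via SVD of the centered training data
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
B = Vt[:10]                      # top-10 basis vectors (10 x 100)

def reconstruct(face: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Estimate subspace coefficients from visible pixels only
    (mask == 1), then reconstruct the full face vector."""
    vis = mask.astype(bool)
    coeffs, *_ = np.linalg.lstsq(B[:, vis].T, (face - mean)[vis], rcond=None)
    return mean + coeffs @ B

face = faces[0].copy()
mask = np.ones(100)
mask[40:60] = 0                  # occlude 20 pixels
restored = reconstruct(face * mask, mask)
print(restored.shape)  # (100,)
```

The occluded region is filled in with whatever the low-dimensional face subspace predicts from the visible pixels, which is the core of occlusion-robust matching.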
Fund: Supported by the National 2011 Collaborative Innovation Center of Wireless Communication Technologies under Grant 2242022k60006.
Abstract: This paper presents a comprehensive framework that enables communication scene recognition through deep learning and multi-sensor fusion. This study addresses the challenge that current communication scene recognition methods struggle to adapt in dynamic environments, as they typically rely on post-response mechanisms that fail to detect scene changes before users experience latency. The proposed framework leverages data from multiple smartphone sensors, including acceleration sensors, gyroscopes, magnetic field sensors, and orientation sensors, to identify different communication scenes, such as walking, running, cycling, and various modes of transportation. Extensive experimental comparison with existing methods on the open-source SHL-2018 dataset confirmed the superior performance of our approach in terms of F1 score and processing speed. Additionally, tests using a Microsoft Surface Pro tablet and a self-collected Beijing-2023 dataset validated the framework's efficiency and generalization capability. The results show that our framework achieved an F1 score of 95.15% on SHL-2018 and 94.6% on Beijing-2023, highlighting its robustness across different datasets and conditions. Furthermore, the computational complexity and power consumption of the algorithm are moderate, making it suitable for deployment on mobile devices.
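A common first step in this kind of sensor-fusion pipeline is to slice the raw inertial stream into fixed-length windows and summarize each window as a feature vector before classification. The sketch below shows that windowing for an accelerometer stream; the window length and the four statistics are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def window_features(acc: np.ndarray, win: int = 100) -> np.ndarray:
    """Turn an (N, 3) accelerometer stream into per-window feature
    vectors [mean, std, min, max] of the acceleration magnitude."""
    mag = np.linalg.norm(acc, axis=1)       # combined x/y/z magnitude
    n_win = len(mag) // win
    mag = mag[: n_win * win].reshape(n_win, win)
    return np.stack([mag.mean(1), mag.std(1), mag.min(1), mag.max(1)], axis=1)

rng = np.random.default_rng(1)
acc = rng.normal(0, 1, size=(1000, 3))      # 10 windows of 100 samples
feats = window_features(acc)
print(feats.shape)  # (10, 4)
```

Each row would then be fed (along with gyroscope, magnetometer, and orientation features) to the scene classifier.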
Fund: Supported by King Saud University, Riyadh, Saudi Arabia, under the Ongoing Research Funding Program (ORF-2025-951).
Abstract: Human activity recognition is a significant area of research in artificial intelligence, with applications in surveillance, healthcare, sports, and human-computer interaction. The article benchmarks the performance of a You Only Look Once version 11-based (YOLOv11-based) architecture for multi-class human activity recognition. The dataset consists of 14,186 images across 19 activity classes, from dynamic activities such as running and swimming to static activities such as sitting and sleeping. Preprocessing included resizing all images to 512×512 pixels, annotating them in YOLO's bounding-box format, and applying data augmentation methods such as flipping, rotation, and cropping to enhance model generalization. The proposed model was trained for 100 epochs with adaptive learning rate methods and hyperparameter optimization, achieving a mAP@0.5 of 74.93% and a mAP@0.5-0.95 of 64.11%, outperforming previous versions of YOLO (v10, v9, and v8) and general-purpose architectures such as ResNet50 and EfficientNet. It exhibited improved precision and recall for all activity classes, with high precision values of 0.76 for running, 0.79 for swimming, 0.80 for sitting, and 0.81 for sleeping, and was tested for real-time deployment with an inference time of 8.9 ms per image, making it computationally light. YOLOv11's improvements are attributed to architectural advancements such as a richer feature extraction process, better attention modules, and an anchor-free detection mechanism. While YOLOv10 was extremely stable in static activity recognition, YOLOv9 performed well in dynamic environments but suffered from overfitting, and YOLOv8, while a decent baseline, failed to differentiate between overlapping static activities. The experimental results determine the proposed YOLOv11 to be the most appropriate model, providing an ideal balance between accuracy, computational efficiency, and robustness for real-world deployment. Nevertheless, certain issues remain to be addressed, particularly in discriminating between visually similar activities and in the reliance on publicly available datasets. Future research will include 3D data and multimodal sensor inputs, such as depth and motion information, to enhance recognition accuracy and generalizability in challenging real-world environments.
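The mAP@0.5 metric quoted above counts a prediction as a true positive when its box overlaps a ground-truth box with intersection-over-union of at least 0.5. A minimal sketch of that overlap test (generic detection-metric code, not from the benchmarked model):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes --
    the overlap test behind the mAP@0.5 threshold."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two boxes overlapping on half their width share 1/3 of their union
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # ≈ 0.333
```

mAP@0.5-0.95 averages the same matching over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two figures.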
Abstract: When instrument transformer nameplate images are scanned for input, the nameplate image almost always exhibits some degree of tilt, and this tilt ultimately lowers character recognition accuracy. To address this problem, a method is proposed that obtains the image tilt angle via the Hough transform and then improves optical character recognition (OCR) accuracy by rotating the image to correct it: the original image is first binarized to obtain the nameplate's outline; a Hough-transform-based method then finds the horizontal line segments on the nameplate; the horizontal tilt angle of these segments is computed; and the image is restored using this angle. Experimental results show that the method computes the image tilt angle quickly and raises OCR recognition accuracy to above 95%.
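The geometric core of the correction step can be sketched without any image library: take the endpoints of a nominally horizontal segment found by the Hough transform, derive the tilt angle, and rotate by its negative to undo the skew. A minimal sketch (the segment coordinates are invented; a real implementation would use an OpenCV-style Hough transform and image warp):

```python
import math

def skew_angle(x1: float, y1: float, x2: float, y2: float) -> float:
    """Tilt angle in degrees of a nominally horizontal segment,
    such as one returned by a Hough line transform on the
    binarized nameplate image."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def rotate_point(x, y, angle_deg, cx, cy):
    """Rotate (x, y) about (cx, cy) by -angle_deg to undo the tilt;
    applying this to every pixel coordinate restores the image."""
    a = math.radians(-angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# A segment rising 100 px over 1000 px corresponds to ~5.7° of tilt
angle = skew_angle(0, 0, 1000, 100)
print(round(angle, 1))  # 5.7
```

Rotating the segment's far endpoint by the detected angle brings it back onto the horizontal axis, which is exactly the restoration the method performs on the whole nameplate.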