Journal Articles: 10 results found
1. Sign Language Recognition and Classification Model to Enhance Quality of Disabled People
Authors: Fadwa Alrowais, Saud S. Alotaibi, Sami Dhahbi, Radwa Marzouk, Abdullah Mohamed, Anwer Mustafa Hilal. Computers, Materials & Continua (SCIE, EI), 2022, No. 11, pp. 3419-3432 (14 pages).
Sign language recognition is an effective way for disabled people to communicate with others, helping them convey the intended information through sign language without difficulty. Recent advances in computer vision and image processing can be leveraged to detect and classify the signs used by disabled people effectively. Metaheuristic optimization algorithms can be designed to fine-tune the hyperparameters of Deep Learning (DL) models, since these hyperparameters considerably affect classification results. With this motivation, the study designs an Optimal Deep Transfer Learning Driven Sign Language Recognition and Classification (ODTL-SLRC) model for disabled people, aimed at recognizing and classifying the sign languages they use. The ODTL-SLRC technique employs an EfficientNet model to generate useful feature vectors, fine-tunes the EfficientNet hyperparameters with the HGSO algorithm, and uses a Bidirectional Long Short-Term Memory (BiLSTM) network for sign language classification. The technique was experimentally validated on a benchmark dataset, and the comparative analysis established its superior performance over recent approaches.
Keywords: sign language; image processing; computer vision; disabled people; deep learning; parameter tuning
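Under the assumptions below (layer sizes, 16-frame inputs, and a random-search stand-in for the HGSO tuner, none of which come from the paper), a minimal Keras sketch of an EfficientNet-feature + BiLSTM sign classifier of the kind the abstract describes might look like this:

    # Sketch of an EfficientNet -> BiLSTM sign classifier (assumed shapes and layer sizes).
    import random
    import tensorflow as tf

    NUM_CLASSES = 26          # assumption: one class per sign
    SEQ_LEN, IMG = 16, 224    # assumption: 16 frames per sample, 224x224 RGB

    def build_model(lstm_units=128, dropout=0.3, lr=1e-3):
        # Frozen EfficientNetB0 backbone turns each frame into a pooled feature vector.
        backbone = tf.keras.applications.EfficientNetB0(include_top=False, pooling="avg")
        backbone.trainable = False
        frames = tf.keras.Input(shape=(SEQ_LEN, IMG, IMG, 3))
        feats = tf.keras.layers.TimeDistributed(backbone)(frames)
        x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units))(feats)
        x = tf.keras.layers.Dropout(dropout)(x)
        out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
        model = tf.keras.Model(frames, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    # Stand-in for HGSO hyperparameter tuning: sample a few candidates and keep the one
    # with the best validation accuracy (the paper uses a metaheuristic, not random search).
    candidates = [dict(lstm_units=random.choice([64, 128, 256]),
                       dropout=round(random.uniform(0.1, 0.5), 2),
                       lr=10 ** random.uniform(-4, -2)) for _ in range(5)]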
2. Sign Language to Sentence Formation: A Real Time Solution for Deaf People
Authors: Muhammad Sanaullah, Muhammad Kashif, Babar Ahmad, Tauqeer Safdar, Mehdi Hassan, Mohd Hilmi Hasan, Amir Haider. Computers, Materials & Continua (SCIE, EI), 2022, No. 8, pp. 2501-2519 (19 pages).
Communication is a basic human need for exchanging thoughts and interacting with society. Hearing people usually converse through spoken languages, whereas deaf people cannot, so Sign Language (SL) is their medium for conversation and interaction with society. In SL, every word is expressed by a specific gesture, and a gesture consists of a sequence of performed signs. Hearing people observe these signs to distinguish single from multiple gestures, which correspond to singular and plural words respectively; the signs for singular words such as I, eat, drink and home differ from those for plural words such as school, cars and players. Special training is required to gain enough knowledge and practice to differentiate and understand every gesture or sign appropriately. Numerous studies have articulated computer-based solutions for understanding a single gesture performed with a single hand, but complete understanding of such communication is only possible when a computer-based SL solution differentiates between these gestures in real-world environments. Hence, there is still a need for a system that automates this kind of communication with such special people. This research focuses on facilitating the deaf community by capturing gestures in video format, mapping and differentiating them as single or multiple gestures used in words, and converting them into the corresponding words or sentences within a reasonable time, providing a real-time solution for deaf people to communicate and interact with society.
Keywords: sign language; machine learning; conventional neural network; image processing; deaf community
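The pipeline sketched in the abstract (sample video frames, classify each as a gesture, collapse repeated predictions, map gestures to words) could be wired together roughly as follows; classify_frame and the GESTURE_TO_WORD lexicon are hypothetical placeholders, not part of the paper:

    # Sketch: video frames -> per-frame gesture labels -> words (placeholder classifier/lexicon).
    import cv2

    GESTURE_TO_WORD = {"g_eat": "eat", "g_drink": "drink", "g_home": "home"}  # hypothetical lexicon

    def classify_frame(frame):
        """Placeholder for a trained gesture classifier; returns a gesture label for one frame."""
        raise NotImplementedError

    def video_to_sentence(path, step=5):
        cap = cv2.VideoCapture(path)
        labels, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % step == 0:                      # sample every `step`-th frame
                labels.append(classify_frame(frame))
            i += 1
        cap.release()
        # Collapse runs of identical predictions into single gestures, then map to words.
        gestures = [g for j, g in enumerate(labels) if j == 0 or g != labels[j - 1]]
        return " ".join(GESTURE_TO_WORD.get(g, "?") for g in gestures)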
3. Continuous Arabic Sign Language Recognition in User Dependent Mode
Authors: K. Assaleh, T. Shanableh, M. Fanaswala, F. Amin, H. Bajaj. Journal of Intelligent Learning Systems and Applications, 2010, No. 1, pp. 19-27 (9 pages).
Arabic Sign Language recognition is an emerging field of research. Previous attempts at automatic vision-based recognition of Arabic Sign Language mainly focused on finger spelling and recognizing isolated gestures. In this paper we report the first continuous Arabic Sign Language recognition system, building on existing research in feature extraction and pattern recognition. Developing the presented work required collecting a continuous Arabic Sign Language database, which we designed and recorded in cooperation with a sign language expert; we intend to make the collected database available to the research community. Our system, based on spatio-temporal feature extraction and hidden Markov models, achieves an average word recognition rate of 94% despite a high-perplexity vocabulary and unrestricted grammar. We compare the proposed work against existing sign language techniques based on accumulated image difference and motion estimation, and the experimental results show that it outperforms existing solutions in terms of recognition accuracy.
Keywords: pattern recognition; motion analysis; image/video processing; sign language
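A compact sketch of the word-level HMM approach the abstract describes, using hmmlearn: one Gaussian HMM is trained per word on spatio-temporal feature sequences, and an unknown sequence is assigned to the word whose model scores it highest. Feature extraction and continuous-sentence decoding are assumed to happen elsewhere.

    # Sketch: one Gaussian HMM per sign-language word over spatio-temporal feature sequences.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_word_models(training_data, n_states=5):
        """training_data: dict mapping word -> list of (T_i, D) feature arrays."""
        models = {}
        for word, seqs in training_data.items():
            X = np.vstack(seqs)                      # hmmlearn expects concatenated sequences
            lengths = [len(s) for s in seqs]         # plus the length of each sequence
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[word] = m
        return models

    def recognize(models, seq):
        """Return the word whose HMM gives the test sequence the highest log-likelihood."""
        return max(models, key=lambda w: models[w].score(seq))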
4. Mexican Sign Language Recognition Using Jacobi-Fourier Moments
Authors: Francisco Solís, Carina Toxqui, David Martínez. Engineering (Scientific Research Publishing), 2015, No. 10, pp. 700-705 (6 pages).
This work introduces a system for recognizing static signs in Mexican Sign Language (MSL) using Jacobi-Fourier Moments (JFMs) and Artificial Neural Networks (ANN). The original color images of static signs are cropped, segmented and converted to grayscale; to reduce computational cost, 64 JFMs are calculated to represent each image. The JFMs are then sorted to select a subset that improves recognition, according to a metric we propose based on a ratio between dispersion measures. Testing a Multilayer Perceptron on this subset of JFMs in the WEKA software reached a recognition rate of 95%.
Keywords: Mexican Sign Language; Jacobi-Fourier moments; digital image processing
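Assuming the 64 Jacobi-Fourier Moments per image are already computed, the classification stage could be sketched as below; the between/within-class dispersion ratio is a stand-in for the authors' own selection metric, and scikit-learn replaces WEKA:

    # Sketch: rank precomputed JFM features by a dispersion ratio, then train an MLP on the best ones.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def dispersion_ratio(X, y):
        """Per-feature ratio of between-class variance to mean within-class variance (stand-in metric)."""
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
        return means.var(axis=0) / (within + 1e-12)

    def train_on_best_features(X, y, k=32):
        idx = np.argsort(dispersion_ratio(X, y))[::-1][:k]   # keep the k most discriminative JFMs
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
        clf.fit(X[:, idx], y)
        return clf, idx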
5. Research on a Scheme for Integrating Artificial Intelligence Technologies into the Railway Cloud Video Conferencing System
Author: Mao Jian. 《铁道通信信号》 (Railway Signalling & Communication), 2025, No. 8, pp. 58-64 (7 pages).
The railway cloud video conferencing system plays an important role in daily railway production and operation. A survey and analysis of the current system shows that its functions are limited and its level of intelligence is low. Based on the system's usage requirements, a practical technical scheme is proposed: business application servers, storage servers and computing servers are added to the internal and external service networks of the existing railway cloud video conferencing system, and panoramic conference cameras, microphone arrays and other devices are added in the terminal access area; AI-based speech processing, image processing, natural language processing and multimodal fusion technologies are applied to the railway video conferencing system to realize intelligent functions such as real-time translation with subtitles, speaker direction identification, and automatic generation, classification and archiving of meeting minutes; and equipment selection recommendations for the main system devices are given from the perspectives of domestic independent controllability and system security. Once deployed, the scheme can greatly improve conference efficiency and system service quality, and provides technical support for the future upgrading and high-quality development of the railway cloud video conferencing system.
Keywords: artificial intelligence; cloud video conferencing system; speech processing; image processing; natural language processing; multimodal fusion
6. A Mobile and Extensible Audio System and Its Application: A Case Study of Audio Production for the Hearing-Impaired Version of the 2025 CMG Spring Festival Gala
Author: Fu Yu. 《演艺科技》 (Entertainment Technology), 2025, No. 1, pp. 17-21 (5 pages).
Based on the architecture, signal flow, monitoring functions and other characteristics of the audio system of Studio 12 at China Media Group, this article analyzes the audio production of the hearing-impaired (vertical-screen) version of the 2025 CMG Spring Festival Gala, focusing on signal delay handling. Drawing on broadcast practice, it further discusses deeper upgrades of the technology and optimizations of the production model such as lightweight deployment and intelligent monitoring and maintenance.
Keywords: hearing-impaired version of the 2025 CMG Spring Festival Gala; accessible broadcasting; live video + subtitles + sign language performance; IP-based dual-redundancy architecture; real-time monitoring; AI simultaneous subtitle system; real-time speech transcription; delay
7. Stroke Information Extraction Technology Based on Real-Time Video
Authors: Meng Shibin, Zhou Mingquan. 《计算机工程》 (Computer Engineering; EI, CAS, CSCD, Peking University Core), 2005, No. 6, pp. 151-153 (3 pages).
This paper describes a method for extracting stroke information from real-time video. A video input device captures the motion trajectory of the user's finger, which is then processed, simplified and analyzed in real time; useless information is filtered out according to the characteristics of the finger trajectory and of strokes, and the strokes are extracted. Gesture controls defined through the recognized stroke information enable simple human-computer interaction.
Keywords: stroke recognition; gesture control; real-time video processing; edge detection
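An OpenCV sketch of the fingertip-trajectory capture the abstract describes: skin-colored pixels are segmented, the topmost point of the largest contour is taken as the fingertip, and the accumulated trajectory is simplified into a stroke polyline. The HSV thresholds and the topmost-point heuristic are illustrative assumptions, not the paper's exact method.

    # Sketch: track a fingertip via skin-color segmentation and simplify its trajectory into a stroke.
    import cv2
    import numpy as np

    LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin range
    UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

    def capture_stroke(device=0, max_frames=200):
        cap = cv2.VideoCapture(device)
        trajectory = []
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                hand = max(contours, key=cv2.contourArea)
                tip = tuple(hand[hand[:, :, 1].argmin()][0])   # topmost contour point ~ fingertip
                trajectory.append(tip)
        cap.release()
        pts = np.array(trajectory, dtype=np.float32).reshape(-1, 1, 2)
        # Simplify the raw trajectory into a stroke polyline (filters jitter and useless points).
        return cv2.approxPolyDP(pts, 3.0, False) if len(pts) else pts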
8. A Sign Language Recognition Method Based on Low-Resolution Video Images
Authors: Yan Yan, Liu Rong. 《计算机应用与软件》 (Computer Applications and Software; CSCD), 2016, No. 9, pp. 151-153 (3 pages).
In practice, large numbers of low-resolution sign language video images need to be recognized, but they carry relatively limited discriminative information and recognition efficiency is low; a sign language recognition method is therefore proposed. The method extracts the target region using real-time skin-color features, computes two recognition features of the target region, its centroid and boundary chain code, uses the dynamic time warping algorithm to identify the start and end frames of each gesture in turn, and reconstructs the sign language words from the recognition results. In experiments on the public sign language dataset of the University of South Florida, compared with an existing method, the proposed method recognized 21 more correct sign language words and one fewer incorrect word, and eliminated interference from incomplete sign language words, demonstrating its effectiveness.
Keywords: sign language recognition; dynamic time warping; digital image processing
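The dynamic time warping step at the core of the method fits in a few lines; below is a generic NumPy implementation that aligns two sequences of per-frame feature vectors (e.g., centroid and chain-code descriptors) and returns the cumulative alignment cost. The local distance and the template-matching usage shown in the comment are assumptions for illustration.

    # Sketch: plain dynamic time warping over two sequences of per-frame feature vectors.
    import numpy as np

    def dtw_distance(a, b):
        """a, b: arrays of shape (T, D); returns the cumulative DTW alignment cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])       # local Euclidean distance
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Usage: label an unknown gesture by its closest template under DTW.
    # best_word = min(templates, key=lambda w: dtw_distance(features, templates[w]))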
9. A Sign Language Key-Frame Extraction Algorithm Based on Compressed Sensing and SURF Features (cited by 11)
Authors: Wang Min, Li Zeyang, Wang Chun, Shi Xinyuan. 《激光与光电子学进展》 (Laser & Optoelectronics Progress; CSCD, Peking University Core), 2018, No. 5, pp. 184-191 (8 pages).
To recognize real-time, large-vocabulary, continuous sign language video efficiently and accurately, a key-frame extraction algorithm based on compressed sensing and Speeded-Up Robust Features (SURF) is proposed. Compressed sensing reduces the sign language video to low-dimensional, multi-scale frame image features, and sub-shot segmentation is completed with an adaptive threshold so that the large volume of sign language frames can be processed; SURF feature points are then matched, and a similarity curve between frames is plotted, from which the key frames are extracted. In the preprocessing stage, gesture regions are extracted by adaptive color detection in HSV space. Experiments verify that the key frames extracted by the proposed algorithm are highly accurate and that the algorithm can handle large amounts of complex data.
Keywords: image processing; image feature extraction; compressed sensing; Speeded-Up Robust Features; key frame; gesture detection
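A rough sketch of the two stages the abstract describes: frames are reduced with a fixed random measurement matrix (a common compressed-sensing-style projection, assumed here) for adaptive-threshold sub-shot segmentation, and feature matching between frames gives a similarity score for key-frame selection. ORB stands in for SURF because SURF requires the non-free opencv-contrib build; that substitution and the thresholds are illustrative, not the paper's.

    # Sketch: compressed-sensing-style frame reduction + ORB match similarity (ORB stands in for SURF).
    import cv2
    import numpy as np

    K, SIDE = 128, 64
    PHI = np.random.default_rng(0).standard_normal((K, SIDE * SIDE)) / np.sqrt(K)  # fixed measurement matrix

    def compress(gray):
        """Reduce a grayscale frame to K random measurements."""
        return PHI @ cv2.resize(gray, (SIDE, SIDE)).astype(np.float32).ravel()

    def subshot_boundaries(frames):
        """Split where the distance between consecutive compressed frames exceeds an adaptive threshold."""
        d = np.array([np.linalg.norm(compress(frames[i]) - compress(frames[i - 1]))
                      for i in range(1, len(frames))])
        thresh = d.mean() + d.std()                  # simple adaptive threshold (assumption)
        return [i + 1 for i in np.where(d > thresh)[0]]

    _ORB = cv2.ORB_create()
    _BF = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def match_similarity(f1, f2):
        """Similarity as the number of ORB feature matches between two grayscale frames."""
        _, d1 = _ORB.detectAndCompute(f1, None)
        _, d2 = _ORB.detectAndCompute(f2, None)
        return 0 if d1 is None or d2 is None else len(_BF.match(d1, d2))

    # Within each sub-shot, the frame least similar to its neighbours can be kept as a key frame.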
10. Sign Language Semantic Recognition Based on an Optimized Fully Convolutional Network (cited by 2)
Authors: Wang Min, Hao Jing, Yao Chenhong, Shi Qiqi. 《激光与光电子学进展》 (Laser & Optoelectronics Progress; CSCD, Peking University Core), 2018, No. 11, pp. 208-214 (7 pages).
Traditional sign language feature extraction algorithms rely only on low-level features for recognition and have difficulty capturing high-level semantic features, which leads to ambiguity in understanding sign language. To address this, the idea of image semantic analysis is introduced into sign language recognition and an optimized fully convolutional network (FCN) algorithm is proposed. The FCN extracts semantic features from sign language images, and a discriminative random field performs semantic labeling as post-smoothing to restore detail between pixels, completing the recognition. Experimental results show that the proposed algorithm is robust and learns semantic features effectively; comparison with traditional algorithms shows that it recognizes sign language accurately, with an average recognition rate of 97.41%.
Keywords: image processing; image semantics; sign language recognition; fully convolutional network; discriminative random field
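A minimal PyTorch sketch of the FCN stage described in the abstract, using torchvision's FCN-ResNet50 (a recent torchvision is assumed) to produce per-pixel class scores for a sign image; the discriminative-random-field smoothing is only indicated by a comment, and the class count and preprocessing are assumptions.

    # Sketch: fully convolutional network producing per-pixel semantic labels for a sign image.
    import torch
    from torchvision.models.segmentation import fcn_resnet50

    NUM_CLASSES = 21                      # assumption: number of semantic classes in the sign dataset

    model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
    model.eval()

    def segment(image_tensor):
        """image_tensor: float tensor of shape (3, H, W), already normalized."""
        with torch.no_grad():
            out = model(image_tensor.unsqueeze(0))["out"]   # (1, NUM_CLASSES, H, W) score map
        labels = out.argmax(dim=1).squeeze(0)               # per-pixel class indices
        # A discriminative/conditional random field would smooth `labels` here to restore pixel detail.
        return labels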