Abstract: Sign language fills the communication gap for people with hearing and speech impairments. It includes two visual modalities: manual gestures, consisting of hand movements, and non-manual gestures, incorporating body movements such as head motion, facial expressions, eye gaze, and shoulder shrugging. Previously, the two gesture types have been detected separately; identifying them separately may yield better accuracy, but much communicative information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures together so the full, detailed message is conveyed to others. Our proposed system, the Sign Language Action Transformer Network (SLATN), localizes hand, body, and facial gestures in video sequences. We employ a Transformer-style architecture as a "base network" to extract features from the spatiotemporal domain. The model automatically learns to track individual persons and their action context across multiple frames. Furthermore, a "head network" attends to hand movements and facial expressions simultaneously, which is often crucial to understanding sign language, using its attention mechanism to create tight bounding boxes around classified gestures. The model is compared with traditional activity recognition methods: it not only runs faster but also achieves better accuracy, reaching an overall testing accuracy of 82.66% with a computational performance of 94.13 giga floating-point operations per second (G-FLOPS). Another contribution is a newly created dataset of Pakistan Sign Language Manual and Non-Manual (PkSLMNM) gestures.
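The attention mechanism at the heart of such a Transformer-style head network is scaled dot-product attention. As a minimal illustration (not the SLATN implementation, whose details the abstract does not give), it can be sketched with plain Python lists:

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, on plain lists.

    queries, keys, values: lists of equal-length float vectors.
    Returns one attended output vector per query.
    """
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # similarity of this query with every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        # softmax over the scores (subtract max for numerical stability)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # attended output: weighted sum of the value vectors
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

With a query equidistant from all keys, the weights are uniform and the output is the mean of the values; as a query aligns with one key, the output concentrates on that key's value.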
Abstract: Action recognition is an important topic in computer vision. Recently, deep learning technologies have been successfully applied to many problems, including recognition in video data. However, most existing deep-learning-based recognition frameworks are not optimized for actions in surveillance videos. In this paper, we propose a novel method for recognizing different types of actions in outdoor surveillance videos. The proposed method first introduces motion compensation to improve human target detection. It then uses three different types of deep models, with single and sequenced images as inputs, to recognize different types of actions. Finally, the predictions from the different models are fused with a linear model. Experimental results show that the proposed method works well on real surveillance videos.
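The final fusion step can be sketched as a weighted linear combination of the per-class score vectors from each model. This is a minimal illustration under the assumption that the linear model reduces to one non-negative weight per model (the abstract does not specify how the weights are fitted):

```python
def fuse_predictions(model_scores, weights):
    """Linearly fuse per-class score vectors from several models.

    model_scores: one score vector per model (each a list of class scores).
    weights: one weight per model, e.g. fitted on a validation set.
    """
    n_classes = len(model_scores[0])
    fused = [0.0] * n_classes
    for w, scores in zip(weights, model_scores):
        for c in range(n_classes):
            fused[c] += w * scores[c]
    return fused

def predict(model_scores, weights):
    """Return the index of the highest fused class score."""
    fused = fuse_predictions(model_scores, weights)
    return max(range(len(fused)), key=fused.__getitem__)
```

For example, fusing two models' scores for two classes with weights 0.7 and 0.3 simply averages the models' votes with those proportions before taking the argmax.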
Abstract: An interpenetrating polymer network (IPN) thin film, containing the –C=O group in one network and the terminal –N=C=O group in another, was formed on an aluminum substrate to reinforce the adhesion between the IPN and aluminum through interfacial reactions. The film was obtained by dip-pulling the pretreated aluminum substrate into viscosity-controlled IPN precursors, followed by a thinning treatment to bring the IPN film to a suitable thickness. The interfacial interactions and the adhesion strength of the IPN on the pretreated aluminum substrate were investigated by X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR), and strain–stress (σ–ε) measurements. The XPS and FTIR results indicated that the contents of the elements N, O, and Al varied with depth in the IPN. An interfacial reaction occurred between the –N=C=O group of the IPN and the AlO(OH) of the pretreated aluminum. The increased force constant of the –C=O double bond and the shift of the –C=O stretching vibration absorption peak to lower frequency both verified the formation of a hydrogen bond between the –OH group in AlO(OH) and the –C=O group in the IPN. The adhesion measurements indicated that the larger the amount of –N=C=O groups in the IPN, the higher the shear strength between the IPN thin film and the aluminum substrate.
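The link the abstract draws between the C=O force constant and the position of the stretching absorption peak follows from the diatomic harmonic-oscillator approximation, which relates the IR wavenumber to the force constant $k$ and the reduced mass $\mu$:

```latex
\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}},
\qquad
\mu_{\mathrm{C{=}O}} = \frac{m_{\mathrm{C}}\, m_{\mathrm{O}}}{m_{\mathrm{C}} + m_{\mathrm{O}}}
```

Since $\mu$ is fixed for the C=O pair, any shift of the stretching peak reflects a change in the effective force constant of the bond, which is why the FTIR peak position serves as evidence for hydrogen bonding at the interface.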
Funding: Supported by the National High Technology Research and Development Program of China (863 Program, 2015AA016306), the National Natural Science Foundation of China (61231015), the Internet of Things Development Funding Project of the Ministry of Industry in 2013 (No. 25), the Technology Research Program of the Ministry of Public Security (2016JSYJA12), and the Natural Science Foundation of Hubei Province (2014CFB712)
Abstract: Action recognition is important for understanding human behaviors in video, and video representation is the basis of action recognition. This paper provides a new video representation based on convolutional neural networks (CNNs). To capture human motion information in one CNN, we take both optical flow maps and gray images as input, and combine multiple convolutional features by max pooling across frames. In another CNN, we input a single color frame to capture context information. Finally, we take the top fully connected layer vectors as the video representation and train the classifiers with a linear support vector machine. The experimental results show that the representation integrating optical flow maps and gray images is more discriminative than representations depending on only one element. On the most challenging datasets, HMDB51 and UCF101, this video representation obtains competitive performance.
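The "max pooling across frames" step combines per-frame convolutional feature vectors into one video-level vector by taking the element-wise maximum. A minimal sketch of that pooling operation:

```python
def max_pool_across_frames(frame_features):
    """Combine per-frame feature vectors by element-wise max pooling.

    frame_features: list of feature vectors (one per frame), all the
    same length. Returns a single video-level feature vector in which
    each dimension keeps its strongest response over time.
    """
    n_dims = len(frame_features[0])
    return [max(frame[d] for frame in frame_features) for d in range(n_dims)]
```

The pooled vector is order-invariant: it records which features fired strongly anywhere in the clip, which is what the linear SVM then classifies.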
Abstract: In this paper, we propose a novel game-theoretic solution to the multi-path routing problem in wireless ad hoc networks comprising selfish nodes with hidden information and hidden actions. By incorporating a suitable traffic allocation policy, the proposed mechanism yields Nash equilibria in which each node honestly reveals its true cost, and a forwarding subgame perfect equilibrium in which each node provides forwarding service at its declared service reliability. Based on the generalised second-price auction, this mechanism effectively alleviates the over-payment of the well-known VCG mechanism. Its effectiveness is demonstrated through simulations.
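The generalised second-price idea, applied to forwarding nodes that declare costs, can be sketched as a reverse auction: the cheapest nodes are selected, and each selected node is paid the next-higher declared cost rather than its own bid. This is an illustrative sketch only; the paper's actual payment and traffic-allocation rules are not given in the abstract:

```python
def reverse_gsp_payments(declared_costs, num_paths):
    """Reverse generalised second-price rule (illustrative sketch).

    declared_costs: dict mapping node id -> declared forwarding cost.
    num_paths: number of forwarding nodes (paths) to select.
    Returns {selected_node: payment}, where each winner is paid the
    next-higher declared cost, so no winner is paid its own bid.
    """
    ranked = sorted(declared_costs.items(), key=lambda kv: kv[1])
    payments = {}
    for i in range(min(num_paths, len(ranked) - 1)):
        node, _ = ranked[i]
        payments[node] = ranked[i + 1][1]  # next bidder's declared cost
    return payments
```

Paying the next-higher bid, rather than a VCG payment computed over the whole allocation, is what bounds the over-payment the abstract refers to.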
Abstract: Using the patent citation network as a carrier, and starting from the basic characteristics of knowledge genes (stability, heredity, and variability), this paper proposes a knowledge-gene extraction method based on subject-action-object (SAO) triples. A connectivity algorithm is applied to analyze patent citation relations, mine the knowledge flows of inheritance and development between citing and cited patents, and establish knowledge evolution trajectories. Text parsing techniques are used to extract SAO triples from patent claims. Semantic processing based on the lexical database WordNet computes semantic similarity, merges synonymous SAO triples, and draws a knowledge-gene map. From the US patent database, 5,073 patents in the data mining field granted between 1975 and 1999 were collected, and their regional and annual distributions were analyzed. Patent citation relations were queried from the NBER (National Bureau of Economic Research) patent dataset, and the network analysis software Pajek was used to construct the patent citation network as the experimental sample for validating the proposed knowledge-gene extraction method. The experimental results show that the extracted SAO triples exhibit the knowledge-gene characteristics of stability, heredity, and variability, and can therefore serve as one representation of knowledge genes.
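The step of merging synonymous SAO triples can be sketched as clustering triples under a similarity predicate. In the paper the predicate is a WordNet-based semantic-similarity threshold; the sketch below accepts any predicate, so a simple exact-match stand-in is used here for illustration:

```python
def merge_sao_triples(triples, similar):
    """Merge synonymous subject-action-object triples.

    triples: list of (subject, action, object) string tuples.
    similar: predicate(triple_a, triple_b) -> bool; in the paper this
    is a WordNet semantic-similarity threshold.
    Returns a list of [representative_triple, merged_count] pairs.
    """
    merged = []
    for t in triples:
        for entry in merged:
            if similar(entry[0], t):
                entry[1] += 1  # fold this triple into an existing group
                break
        else:
            merged.append([t, 1])  # start a new group
    return merged
```

For instance, treating triples with the same subject and object as synonymous merges "(system, extract, triple)" and "(system, extracts, triple)" into one knowledge-gene candidate with count 2.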
Abstract: Objective: To explore the commonly used Chinese herbal medicines for treating ulcerative colitis and their mechanisms of action through data mining and network pharmacology. Methods: Literature on the oral administration of Chinese herbal medicine for ulcerative colitis was retrieved from databases including CNKI, VIP, Wanfang Medical, PubMed, and EMBASE, and the herbs reported in the literature were extracted and analyzed by frequency. The Traditional Chinese Medicine Systems Pharmacology (TCMSP) database and analysis platform was used to screen the active ingredients and related targets of the five most frequently used herbs; the GeneCards database was used to retrieve gene targets of ulcerative colitis, which were mapped against the herb-related targets to obtain the potential therapeutic targets; and Cytoscape was used to construct an "ingredient-target" network diagram. The potential targets were imported into the STRING database to obtain protein-protein interaction (PPI) relations; Cytoscape was used to visualize the PPI network and screen core targets. R software was used to perform Gene Ontology (GO) enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis on the targets. Results: A total of 213 valid articles were retrieved, involving 134 different herbs, mainly tonifying herbs, heat-clearing herbs, qi-regulating herbs, and exterior-releasing herbs; the six most frequently used herbs were Huanglian (Coptidis Rhizoma), Gancao (Glycyrrhizae Radix), Baizhu (Atractylodis Macrocephalae Rhizoma), Huangqin (Scutellariae Radix), Huangqi (Astragali Radix), and Baishao (Paeoniae Radix Alba). From the TCMSP database, 55 active ingredients and 212 targets were obtained for five herbs (excluding Gancao); 1,226 target genes were retrieved from GeneCards, and mapping them against the ingredient targets yielded 103 potential therapeutic targets. GO enrichment analysis identified 1,865 biological processes, 39 cellular components, and 98 molecular functions; KEGG analysis enriched 123 related pathways. Conclusion: Chinese herbal medicine treats ulcerative colitis through complex pathways involving multiple targets and multiple pathways, exhibiting a pattern of "small dispersion, large concentration".
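The mapping step that produced the 103 potential targets is a set intersection between the herb-ingredient targets and the disease targets. A minimal sketch with hypothetical gene symbols:

```python
def potential_targets(herb_targets, disease_targets):
    """Intersect herb ingredient targets with disease gene targets.

    The intersection is the set of potential therapeutic targets,
    as in the GeneCards mapping step; returned sorted for stable output.
    """
    return sorted(set(herb_targets) & set(disease_targets))
```

With herb targets ["TNF", "IL6", "EGFR"] and disease targets ["IL6", "TNF", "TP53"] (hypothetical examples), the shared targets ["IL6", "TNF"] are the candidates fed into the STRING PPI analysis.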
Abstract: This paper studies robot soccer action selection based on Q-learning. The robots learn to activate particular behaviors given their current situation and a reward signal. We adopt neural networks to implement Q-learning because of their generalization properties and limited computer memory requirements.
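The Q-learning rule underlying such behavior selection is the standard temporal-difference update. A tabular sketch is shown below; the paper replaces the table with a neural network approximator, whose details the abstract does not give:

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    q: dict mapping (state, action) -> value; unseen pairs default to 0.
    actions: the action set available in next_state.
    Returns the updated Q(state, action).
    """
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

Starting from an empty table, a single step with reward 1 moves Q(s, a) from 0 toward the target by the learning rate alpha.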
Abstract: Convolutional neural networks, which have achieved outstanding performance in image recognition, have been extensively applied to action recognition. The mainstream approaches to video understanding can be categorized into two-dimensional and three-dimensional convolutional neural networks. Although three-dimensional convolutional filters can learn the temporal correlation between different frames by extracting the features of multiple frames simultaneously, they result in an explosive number of parameters and high computational cost. Methods based on two-dimensional convolutional neural networks use fewer parameters; they often incorporate optical flow to compensate for their inability to learn temporal relationships. However, calculating the corresponding optical flow incurs additional computational cost and requires another model to learn the optical flow features. We propose an action recognition framework based on a two-dimensional convolutional neural network; it was therefore necessary to compensate for the missing temporal relationships. To expand the temporal receptive field, we propose a multi-scale temporal shift module, which is combined with a temporal feature difference extraction module to extract the difference between the features of different frames. Finally, the model is compressed to make it more compact. We evaluated our method on two major action recognition benchmarks: the HMDB51 and UCF-101 datasets. Before compression, the proposed method achieved an accuracy of 72.83% on the HMDB51 dataset and 96.25% on the UCF-101 dataset. Following compression, the accuracy remained strong, at 72.19% on HMDB51 and 95.57% on UCF-101. The final model was more compact than most related works.
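A temporal shift module mixes information across frames at zero extra FLOPs by displacing a fraction of the feature channels along the time axis; a multi-scale version applies the shift at several strides. The sketch below is a minimal single-direction illustration (the paper's exact channel split and shift directions are not given in the abstract):

```python
def temporal_shift(frames_features, shift_fraction=0.25, stride=1):
    """Shift the first fraction of channels back in time by `stride` frames.

    frames_features: list over time of channel lists, shape [T][C].
    Returns a new [T][C] list in which the first C*shift_fraction channels
    of frame t come from frame t - stride (zeros at the sequence start),
    so a 2D CNN sees neighboring-frame information for free.
    """
    n_shift = int(len(frames_features[0]) * shift_fraction)
    out = []
    for t, frame in enumerate(frames_features):
        src = frames_features[t - stride] if t - stride >= 0 else None
        shifted = [src[c] if src is not None else 0.0 for c in range(n_shift)]
        out.append(shifted + frame[n_shift:])
    return out
```

Calling this with several strides (e.g. 1 and 2) on different channel groups yields the multi-scale variant: each stride widens the temporal receptive field of the subsequent 2D convolutions by a different amount.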