Receptor tyrosine kinases (RTKs) play a crucial role in cellular signal transduction pathways. RTKs mediate cellular communication, transmit signals to adjacent cells, and regulate cellular functions such as growth, differentiation, metabolism and motility. RTKs include growth factor receptors such as the epidermal growth factor receptor (EGFR), insulin-like growth factor-1 receptor (IGF-1R), platelet-derived growth factor receptor (PDGFR), fibroblast growth factor receptor (FGFR) and vascular endothelial growth factor receptor (VEGFR), which initiate and regulate cell growth and proliferation. The RAS/MAPK and PI3K/AKT pathways are the major pathways through which RTKs act. Dysregulation of these RTKs and their pathways often leads to diseases such as Noonan syndrome, Legius syndrome, cardiofaciocutaneous (CFC) syndrome and different types of cancer. Point mutations and overexpression of receptors, together with mutations in Ras, account for roughly 30% of human cancers. Overexpression of growth factor receptors also leads to several types of cancer, including glioblastoma, thyroid cancer, colon cancer and non-small cell lung cancer. PTEN mutations in the PI3K/AKT pathway often lead to carcinomas of the thyroid, skin, large intestine, eye and bone. These RTKs are therefore often used as targets for cancer therapies. Clinicians use various types of small-molecule tyrosine kinase inhibitors, including ATP-competitive, allosteric and covalent inhibitors such as afatinib, crizotinib, erlotinib, icotinib, lapatinib and lenvatinib, in the treatment and management of different carcinomas.
The interaction between humans and machines has become an issue of concern in recent years. Besides facial expressions and gestures, speech has proven to be one of the most promising modalities for automatic emotion recognition. Affective computing aims to support HCI (Human-Computer Interaction) at a psychological level, allowing computers to adjust their responses to human needs. The recognition of emotion is therefore pivotal in high-level interactions. Each emotion has distinctive properties that enable us to recognize it. The acoustic signal produced for an identical expression or sentence changes largely as a direct result of biophysical changes (for example, the stress-induced narrowing of the larynx) triggered by emotions. This connection between acoustic cues and emotions has made speech emotion recognition one of the trending subjects of the affective computing area. The main goal of a speech emotion recognition algorithm is to infer the emotional state of a speaker from recorded speech signals. This research presents the results of applying k-NN and OVA-SVM to MFCC features, with and without a feature selection approach. MFCC features were first extracted from the audio signal to characterize the properties of emotional speech. Second, nine basic statistical measures were calculated from the MFCCs, yielding 117-dimensional feature vectors used to train the classifiers for seven emotion classes (anger, happiness, disgust, fear, sadness, boredom and neutral). Classification was then done in four steps. First, all 117 features were classified using both classifiers. Second, the better classifier was identified, the features were scaled to [-1, 1], and classification was repeated. In the third step, whichever option (with or without feature scaling) performed better in the second step was carried forward, and classification was done for each of the basic statistical measures separately. Finally, in the fourth step, the combination of statistical measures giving the best performance was derived using the forward feature selection method. Experiments were carried out using k-NN with different k values and a linear OVA-based SVM classifier with different parameter values. The Berlin emotional speech database for the German language was used to test the proposed methodology, and recognition rates as high as 60% were achieved for recognizing emotion from voice signals with the set of statistical measures (median, maximum, mean, inter-quartile range, skewness). OVA-SVM performs better than k-NN, and the use of the feature selection technique yields a higher recognition rate.
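The 117-dimensional feature vector described above (nine statistical measures over each of 13 MFCC coefficients) can be sketched as follows. This is a minimal illustration, not the authors' code: the abstract names only median, maximum, mean, inter-quartile range and skewness, so the remaining four measures here (minimum, standard deviation, range, kurtosis) are assumptions, and the 13×200 MFCC matrix is a random stand-in for a real utterance.

```python
import numpy as np
from scipy import stats

def mfcc_statistics(mfcc):
    """Nine statistical measures per MFCC coefficient.

    For a (13, T) MFCC matrix this yields a 13 * 9 = 117-dimensional
    vector. Only five of the nine measures are named in the abstract;
    the other four are illustrative assumptions.
    """
    funcs = [
        lambda x: np.mean(x, axis=1),
        lambda x: np.median(x, axis=1),
        lambda x: np.min(x, axis=1),
        lambda x: np.max(x, axis=1),
        lambda x: np.std(x, axis=1),
        lambda x: stats.iqr(x, axis=1),        # inter-quartile range
        lambda x: np.ptp(x, axis=1),           # range (max - min)
        lambda x: stats.skew(x, axis=1),
        lambda x: stats.kurtosis(x, axis=1),
    ]
    # Each function returns one value per coefficient; concatenate all nine.
    return np.concatenate([f(mfcc) for f in funcs])

# Hypothetical 13 x 200 MFCC matrix standing in for one utterance
# (in practice this would come from an MFCC extractor such as librosa).
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(13, 200))
features = mfcc_statistics(mfcc)
print(features.shape)  # (117,)
```

Computing statistics over the time axis gives a fixed-length vector regardless of utterance duration, which is what allows a conventional classifier such as k-NN or an SVM to be trained on variable-length speech.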
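The scaling and classification steps of the pipeline can likewise be sketched with scikit-learn. The data here is random stand-in data (210 utterances, 117 features, 7 classes), and the specific estimators (MinMaxScaler, LinearSVC wrapped in a one-vs-rest scheme) and the value of C are assumptions consistent with, but not taken from, the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

# Hypothetical stand-in data: 210 utterances x 117 features, 7 emotion classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(210, 117))
y = rng.integers(0, 7, size=210)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scale features to [-1, 1], fitting the scaler on training data only
# so that no information leaks from the test set.
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_tr)

# Linear SVM in a one-vs-all (OVA) scheme; C is a tunable parameter.
clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000))
clf.fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))
```

The forward feature selection step could be implemented over the nine statistical measures by greedily adding the measure (a block of 13 features) that most improves validation accuracy, e.g. with scikit-learn's SequentialFeatureSelector, stopping when no addition helps.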