Journal Articles
31 articles found
Step-by-step to success: Multi-stage learning driven robust audiovisual fusion network for fine-grained bird species classification
1
Authors: Shanshan Xie, Jiangjian Xie, Yang Liu, Lianshuai Sha, Ye Tian, Jiahua Dong, Diwen Liang, Kaijun Pan, Junguo Zhang. Avian Research, 2025, Issue 4, pp. 818-831 (14 pages)
Bird monitoring and protection are essential for maintaining biodiversity, and fine-grained bird classification has become a key focus in this field. Audio-visual modalities provide critical cues for this task, but robust feature extraction and efficient fusion remain major challenges. We introduce a multi-stage fine-grained audiovisual fusion network (MSFG-AVFNet) for fine-grained bird species classification, which addresses these challenges through two key components: (1) the audiovisual feature extraction module, which adopts a multi-stage finetuning strategy to provide high-quality unimodal features, laying a solid foundation for modality fusion; (2) the audiovisual feature fusion module, which combines a max pooling aggregation strategy with a novel audiovisual loss function to achieve effective and robust feature fusion. Experiments were conducted on the self-built AVB81 and the publicly available SSW60 datasets, which contain data from 81 and 60 bird species, respectively. Comprehensive experiments demonstrate that our approach achieves notable performance gains, outperforming existing state-of-the-art methods. These results highlight its effectiveness in leveraging audiovisual modalities for fine-grained bird classification and its potential to support ecological monitoring and biodiversity research.
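The max pooling aggregation named in the abstract can be illustrated with a minimal NumPy sketch: each variable-length unimodal sequence is collapsed into a fixed-size clip-level embedding before fusion. The embedding dimension, frame counts, and concatenation step here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical unimodal features: 50 audio frames and 30 video frames,
# each embedded in a 128-dimensional space
audio_frames = rng.standard_normal((50, 128))
video_frames = rng.standard_normal((30, 128))

# Max pooling over time collapses each variable-length sequence
# into a single fixed-size clip-level embedding
audio_vec = audio_frames.max(axis=0)   # shape (128,)
video_vec = video_frames.max(axis=0)   # shape (128,)

# Concatenate the pooled embeddings into one audiovisual feature
fused = np.concatenate([audio_vec, video_vec])
print(fused.shape)  # (256,)
```

Pooling before fusion makes the fused representation independent of how many frames each modality contributes, which is one reason max pooling is a common aggregation choice for clips of unequal length.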
Keywords: Audiovisual modality; Bird species classification; Feature fusion; Fine-grained
Effect of Cross-Modal Perceptual Training on Audiovisual Integration in Older Adults
2
Authors: Hang Ping, Shujing Li, Canting Xiong, Yan Li, Xiaoyu Li, Xiaofeng Fang, Yan Li. Open Journal of Therapy and Rehabilitation, 2024, Issue 4, pp. 347-355 (9 pages)
Background: Previous studies have demonstrated the plasticity of perceptual sensitivity and compensatory mechanisms of audiovisual integration (AVI) in older adults. However, the impact of perceptual training on audiovisual integrative abilities remains unclear. Methods: This study randomly assigned 40 older adults to either a training or a control group. The training group underwent a five-day audiovisual perceptual program, while the control group received no training. Participants completed simultaneous judgment (SJ) and audiovisual detection tasks before and after training. Results: Findings indicated improved perceptual sensitivity to audiovisual synchrony in the training group, with AVI significantly higher at post-test than at pre-test (9.95% vs. 13.87%). No significant change was observed in the control group (9.61% vs. 10.77%). Conclusion: These results suggest that cross-modal perceptual training may be an effective cognitive intervention to ease unimodal sensory dysfunction.
Keywords: Perceptual training; Audiovisual integration; Simultaneous judgment task; Audiovisual detection task; Older adults
Audiovisual speech recognition based on a deep convolutional neural network (cited by 1)
3
Authors: Shashidhar Rudregowda, Sudarshan Patilkulkarni, Vinayakumar Ravi, Gururaj H.L., Moez Krichen. Data Science and Management, 2024, Issue 1, pp. 25-34 (10 pages)
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using mel-frequency cepstral coefficients, and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified using a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with training and testing accuracies of 93.67% and 91.53%, respectively, over 200 epochs. The training accuracy for visual speech recognition using the Indian English dataset was 77.48% and the test accuracy was 76.19% using 60 epochs. After integration, the accuracies of audiovisual speech recognition using the Indian English dataset for training and testing were 94.67% and 91.75%, respectively.
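The audio branch described above (MFCC features classified by a 1-D CNN) starts from a standard feature-extraction step. Below is a self-contained NumPy sketch of MFCC computation; all parameter values (window size, hop, filter and coefficient counts) are chosen for illustration and are not taken from the paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # Frame the signal and apply a Hann window
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power spectrum

    # Triangular mel filterbank, equally spaced on the mel scale
    def hz2mel(f): return 2595 * np.log10(1 + f / 700)
    def mel2hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = mel2hz(np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_mel = np.log(power @ fbank.T + 1e-10)

    # DCT-II decorrelates the log filterbank energies -> cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T  # shape: (n_frames, n_ceps)

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s test tone
feats = mfcc(tone)
print(feats.shape)  # (61, 13)
```

The resulting (frames × coefficients) matrix is exactly the kind of input a 1-D convolutional classifier consumes, with convolutions sliding along the time axis.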
Keywords: Audiovisual speech recognition; Custom dataset; 1D convolutional neural network (CNN); Deep CNN (DCNN); Long short-term memory (LSTM); Lipreading; Dlib; Mel-frequency cepstral coefficient (MFCC)
Research on the Collaborative Governance of Social Responsibility in Online Audiovisual Enterprises
4
Authors: Chuying Kang, Muhammad Zaffwan Idris, Juan Liu. Social Networking, 2024, Issue 1, pp. 1-13 (13 pages)
This paper aims to analyze the present conditions of the social responsibility ecosystem in online audiovisual enterprises in the digital age. It focuses on the governance of social responsibility in these enterprises and conducts an in-depth analysis of the problems and influencing factors related to the social responsibility aberrations of online audiovisual enterprises. Drawing upon social responsibility theory and collaborative governance theory, this research constructs a social responsibility guidance and governance system guided by the public, supported by the voluntary fulfillment of responsibilities by online audiovisual enterprises, and based on the collaborative participation of diverse stakeholders. It explores and optimizes the implementation pathways of this system, providing theoretical support and practical guidance for promoting the sustainable development of online audiovisual enterprises. Furthermore, it aims to contribute to the creation of a harmonious Internet ecosystem.
Keywords: Online audiovisual enterprises; Social responsibility; Collaborative governance
Neural Integration of Audiovisual Sensory Inputs in Macaque Amygdala and Adjacent Regions (cited by 3)
5
Authors: Liang Shan, Liu Yuan, Bo Zhang, Jian Ma, Xiao Xu, Fei Gu, Yi Jiang, Ji Dai. Neuroscience Bulletin (SCIE, CAS, CSCD), 2023, Issue 12, pp. 1749-1761 (13 pages)
Integrating multisensory inputs to generate accurate perception and guide behavior is among the most critical functions of the brain. Subcortical regions such as the amygdala are involved in sensory processing including vision and audition, yet their roles in multisensory integration remain unclear. In this study, we systematically investigated the function of neurons in the amygdala and adjacent regions in integrating audiovisual sensory inputs using a semi-chronic multi-electrode array and multiple combinations of audiovisual stimuli. From a sample of 332 neurons, we showed the diverse response patterns to audiovisual stimuli and the neural characteristics of bimodal over unimodal modulation, which could be classified into four types with differentiated regional origins. Using the hierarchical clustering method, neurons were further clustered into five groups and associated with different integrating functions and sub-regions. Finally, regions distinguishing congruent and incongruent bimodal sensory inputs were identified. Overall, visual processing dominates audiovisual integration in the amygdala and adjacent regions. Our findings shed new light on the neural mechanisms of multisensory integration in the primate brain.
Keywords: Macaque; Amygdala; Multisensory; Audiovisual integration; Neural activity; Multichannel recording
Humanoid Intelligent Display Platform for Audiovisual Interaction and Sound Identification (cited by 3)
6
Authors: Yang Wang, Wenli Gao, Shuo Yang, Qiaolin Chen, Chao Ye, Hao Wang, Qiang Zhang, Jing Ren, Zhijun Ning, Xin Chen, Zhengzhong Shao, Jian Li, Yifan Liu, Shengjie Ling. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2023, Issue 12, pp. 82-98 (17 pages)
This study proposes a rational strategy for the design, fabrication, and system integration of the humanoid intelligent display platform (HIDP) to meet the requirements of highly humanized mechanical properties and intelligence for human-machine interfaces. The platform's sandwich structure comprises a middle light-emitting layer and surface electrodes, which consist of silicone elastomer embedded with phosphor and a silk fibroin ionoelastomer, respectively. Both materials are highly stretchable and resilient, endowing the HIDP with skin-like mechanical properties and applicability in various extreme environments and under complex mechanical stimulation. Furthermore, by establishing a numerical correlation between the amplitude change of animal sounds and the brightness variation, the HIDP realizes audiovisual interaction and successful identification of animal species with the aid of Internet of Things (IoT) and machine learning techniques. The accuracy of species identification reaches about 100% over 200 rounds of random testing. Additionally, the HIDP can recognize animal species and their corresponding frequencies by analyzing sound characteristics, displaying real-time results with accuracies of approximately 99% and 93%, respectively. In sum, this study offers a rational route to designing intelligent display devices for audiovisual interaction, which can expedite the application of smart display devices in human-machine interaction, soft robotics, wearable sound-vision systems, and medical devices for hearing-impaired patients.
Keywords: Silk fibroin; Ionoelastomer; Luminescent display; Machine learning; Audiovisual interaction
Changes of Effective Connectivity in the Alpha Band Characterize Differential Processing of Audiovisual Information in Cross-Modal Selective Attention (cited by 3)
7
Authors: Weikun Niu, Yuying Jiang, Xin Zhang, Tianzi Jiang, Yujin Zhang, Shan Yu. Neuroscience Bulletin (SCIE, CAS, CSCD), 2020, Issue 9, pp. 1009-1022 (14 pages)
Cross-modal selective attention enhances the processing of sensory inputs that are most relevant to the task at hand. Such differential processing could be mediated by a swift network reconfiguration on the macroscopic level, but this remains a poorly understood process. To tackle this issue, we used a behavioral paradigm to introduce a shift of selective attention between the visual and auditory domains, and recorded scalp electroencephalographic signals from eight healthy participants. The changes in effective connectivity caused by the cross-modal attentional shift were delineated by analyzing spectral Granger Causality (GC), a metric of frequency-specific effective connectivity. Using data-driven methods of pattern classification and feature analysis, we found that a change in the α band (12 Hz-15 Hz) of GC is a stable feature across different individuals that can be used to decode the attentional shift. Specifically, auditory attention induces more pronounced information flow in the α band, especially from the parietal-occipital areas to the temporal-parietal areas, compared to the case of visual attention, reflecting a reconfiguration of interaction in the macroscopic brain network accompanying different processing. Our results support the role of α oscillation in organizing the information flow across spatially-separated brain areas and, thereby, mediating cross-modal selective attention.
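Spectral GC, as used in the study above, extends the classic time-domain formulation, in which causality from x to y is the log ratio of residual variances between an autoregressive model of y alone and one that also includes x's past. A minimal time-domain sketch (not the spectral variant the paper uses; the model order and synthetic data are illustrative):

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Time-domain Granger causality from x to y.

    Compares the residual variance of an AR(p) model of y alone
    (restricted) against one that also includes p lags of x (full);
    GC = ln(var_restricted / var_full), which is >= 0 by construction.
    """
    n = len(y)
    target = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    res_r = target - own @ np.linalg.lstsq(own, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

# Synthetic pair: y is driven by x with a one-sample lag
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gc_xy = granger_causality(x, y)   # large: x's past predicts y
gc_yx = granger_causality(y, x)   # near zero: y's past does not predict x
print(gc_xy, gc_yx)
```

The asymmetry between gc_xy and gc_yx is what makes GC "directed": it captures which signal's past improves prediction of the other, not mere correlation.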
Keywords: Human EEG; Audiovisual selective attention; Granger Causality; Pattern classification
Do cardiopulmonary resuscitation real-time audiovisual feedback devices improve patient outcomes? A systematic review and meta-analysis (cited by 1)
8
Authors: Nitish Sood, Anish Sangari, Arnav Goyal, Christina Sun, Madison Horinek, Joseph Andy Hauger, Lane Perry. World Journal of Cardiology, 2023, Issue 10, pp. 531-541 (11 pages)
BACKGROUND: Cardiac arrest is a leading cause of mortality in America, and the incidence of cases has increased over the last several years. Cardiopulmonary resuscitation (CPR) increases survival outcomes in cases of cardiac arrest; however, healthcare workers often do not perform CPR within recommended guidelines. Real-time audiovisual feedback (RTAVF) devices improve the quality of CPR performed. This systematic review and meta-analysis aims to compare the effect of RTAVF-assisted CPR with conventional CPR and to evaluate whether the use of these devices improved outcomes in both in-hospital cardiac arrest (IHCA) and out-of-hospital cardiac arrest (OHCA) patients. AIM: To identify the effect of RTAVF-assisted CPR on patient outcomes and CPR quality in IHCA and OHCA. METHODS: We searched PubMed, SCOPUS, the Cochrane Library, and EMBASE from inception to July 27, 2020, for studies comparing patient outcomes and/or CPR quality metrics between RTAVF-assisted CPR and conventional CPR in cases of IHCA or OHCA. The primary outcomes of interest were return of spontaneous circulation (ROSC) and survival to hospital discharge (SHD), with secondary outcomes of chest compression rate and chest compression depth. The methodological quality of the included studies was assessed using the Newcastle-Ottawa scale and the Cochrane Collaboration's "risk of bias" tool. Data were analyzed using R statistical software 4.2.0. Results were statistically significant if P < 0.05. RESULTS: Thirteen studies (n = 17600) were included. Patients were on average 69 ± 17.5 years old, with 7022 (39.8%) female patients. Overall pooled ROSC in patients in this study was 37% (95% confidence interval = 23%-54%). RTAVF-assisted CPR significantly improved ROSC, both overall [risk ratio (RR) 1.17 (1.001-1.362); P = 0.048] and in cases of IHCA [RR 1.36 (1.06-1.80); P = 0.002]. There was no significant improvement in ROSC for OHCA (RR 1.04; 0.91-1.19; P = 0.47). No significant effect was seen in SHD [RR 1.04 (0.91-1.19); P = 0.47] or chest compression rate [standardized mean difference (SMD) -2.1 (-4.6-0.5); P = 0.09]. A significant improvement was seen in chest compression depth [SMD 1.6 (0.02-3.1); P = 0.047]. CONCLUSION: RTAVF-assisted CPR increases ROSC in cases of IHCA and increases chest compression depth, but has no significant effect on ROSC in cases of OHCA, on SHD, or on chest compression rate.
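The risk ratios reported above follow the standard log-method computation for a two-group comparison. A small sketch with hypothetical event counts (not the meta-analysis data):

```python
import math

def risk_ratio(a, n1, c, n2):
    """Risk ratio between two groups with a 95% CI via the log method.

    a / n1: events / total in the intervention group
    c / n2: events / total in the control group
    """
    rr = (a / n1) / (c / n2)
    # Standard error of ln(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: ROSC in 450/1000 RTAVF patients vs 400/1000 controls
rr, lo, hi = risk_ratio(450, 1000, 400, 1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

When the CI excludes 1 (as here), the ratio is significant at the 5% level, which is how the paper's bracketed intervals should be read.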
Keywords: Real-time audiovisual feedback; Cardiopulmonary resuscitation; Cardiac arrest; Return of spontaneous circulation; Survival to hospital discharge; Cardiopulmonary resuscitation quality
On the Application of Audiovisual Products in English Teaching
9
Author: Wu Qingzhu (伍青珠). 《科教导刊》, 2013, Issue 13, pp. 110, 114 (2 pages)
As a special art form, English audiovisual products are utilized more and more in English teaching. This paper attempts to analyze their application by discussing the advantages and disadvantages. After that, suggestions will be put forward on how to use this resource in English teaching.
Keywords: Audiovisual products; English teaching; Application
Alterations of Audiovisual Integration in Alzheimer’s Disease
10
Authors: Yufei Liu, Zhibin Wang, Tao Wei, Shaojiong Zhou, Yunsi Yin, Yingxin Mi, Xiaoduo Liu, Yi Tang. Neuroscience Bulletin (SCIE, CAS, CSCD), 2023, Issue 12, pp. 1859-1872 (14 pages)
Audiovisual integration is a vital information process involved in cognition and is closely correlated with aging and Alzheimer's disease (AD). In this review, we evaluated the altered audiovisual integrative behavioral symptoms in AD. We further analyzed the relationships between AD pathologies and audiovisual integration alterations bidirectionally and suggested possible mechanisms of audiovisual integration alterations underlying AD, including the imbalance between energy demand and supply, activity-dependent degeneration, disrupted brain networks, and cognitive resource overloading. Then, based on clinical characteristics, including electrophysiological and imaging data related to audiovisual integration, we emphasized the value of audiovisual integration alterations as potential biomarkers for the early diagnosis and progression of AD. We also highlighted that treatments targeting audiovisual integration contributed to widespread pathological improvements in AD animal models and cognitive improvements in AD patients. Moreover, investigation into audiovisual integration alterations in AD also provided new insights into and comprehension of sensory information processes.
Keywords: Audiovisual integration; Aging; Alzheimer's disease; Cognition; Sensory information process
Audiovisual Archiving in Lithuanian Central State Archive
11
Author: Jole Stimbiryte. Journalism and Mass Communication, 2014, Issue 2, pp. 86-100 (15 pages)
Lithuanian Central State Archive is the biggest archive within the state archival service and the only state archive where audiovisual documents are stored. There are more than 800,000 units of audiovisual documents in the archive. The main laws regulating the activity of Lithuanian Central State Archive and related to audiovisual archiving are the Law on Documents and Archives of the Lithuanian Republic, the Law on Cinema of the Lithuanian Republic, and the Law on Copyright and Related Rights of the Lithuanian Republic. There are four big collections of audiovisual documents in the Lithuanian Central State Archive: films, photo documents, sound recordings, and video recordings. The Archive's specialists have extensive experience in the physical treatment and preservation of analogue audiovisual documents. Lithuanian Central State Archive digitizes audiovisual documents, seeking a balance between long-term preservation and present-day access. From May 2010 to April 2013, Lithuanian Central State Archive implemented the project "Lithuanian Documentaries on the Internet". During the project, the Archive digitized and transferred to the Internet 1,000 titles of Lithuanian documentaries created in the period 1919-1961. Lithuanian Central State Archive wants to popularize its collections, so it participates in various international projects.
Keywords: Audiovisual archiving; Preservation of analogues; Digitization; Projects
Foreign Language Web-Based Learning by Means of Audiovisual Interactive Activities
12
Authors: Catherine Kanellopoulou, Minas Pergantis, Nikolaos Konstantinou, Nikolaos Grigorios Kanellopoulos, Andreas Giannakoulopoulos. Journal of Software Engineering and Applications, 2021, Issue 6, pp. 207-232 (26 pages)
Online learning has been on an upward trend for many years and is becoming more and more prevalent every day, consistently presenting the less privileged parts of our society with an equal opportunity at education. Unfortunately, though, it seldom takes advantage of the new technologies and capabilities offered by the modern World Wide Web. In this article, we present an interactive online platform that provides users with learning activities for students of English as a foreign language. The platform focuses on using audiovisual multimedia content and a user experience (UX) centered approach to provide learners with an enhanced learning experience that aims at improving their knowledge level while at the same time increasing their engagement and motivation to participate in learning. To achieve this, the platform uses advanced techniques, such as interactive vocabulary and pronunciation assistance, mini-games, embedded media, voice recording, and more. In addition, the platform provides educators with analytics about user engagement and performance. In this study, more than 100 young students participated in a preliminary use of the aforementioned platform and provided feedback concerning their experience. Both the platform's metrics and the user-provided feedback indicated increased engagement and a preference of the participants for interactive audiovisual multimedia-based learning activities.
Keywords: Online learning; Multimedia; Interactivity; World Wide Web; Education; English language teaching; Learning platform; Audiovisual
Hard Fight Gives Rise to Cleaner Market—An Interview with Mr. Zhang Xinjian on the Rectification of China's Audiovisual Market
13
Author: Staff Reporter Li Shengxian. China & The World Cultural Exchange, 2002, Issue 5, pp. 4-6 (3 pages)
Reporter: Deputy Director-General Mr. Zhang, I heard that the Motion Picture Association of America (MPAA) recently presented a complimentary wood board to the Ministry of Culture of China. Could you tell the inside story about it to our readers? Zhang Xinjian: On May 30, a delegation from MPAA led by its vice president paid a visit to my ministry and I had a talk with them on behalf of the Department of Cultural Market. During
Keywords: Zhang Xinjian; Rectification of China's audiovisual market
Blacklist Established in Chinese Audiovisual Market
14
China Today, 2002, Issue 10, p. 6 (1 page)
The Chinese audiovisual market is to impose a ban on audiovisual product dealers whose licenses have been revoked for violating the law. This ban will prohibit them from dealing in audiovisual products for ten years. Their names are to be included on a blacklist made known to the public.
Keywords: Blacklist; Chinese audiovisual market
Application of Audiovisual Language on the TIKTOK Platform and Analysis of Its Communication Effect
15
Author: Jingyao Chen. Advances in Social Behavior Research, 2024, Issue 7, pp. 33-40 (8 pages)
This study explores the application of audiovisual language in short videos on the TIKTOK platform and its impact on content creators and the platform. By analyzing the video content of the popular TIKTOK account "San Jin Qi Qi," the research examines the key role of audiovisual language in enhancing user experience and promoting video dissemination. The study employs a case analysis method, selecting representative videos to investigate the specific application of audiovisual elements such as color, lighting, composition, and camera movement. It also analyzes user comments to understand how these elements influence emotional resonance and interactive behavior. The results show that well-designed audiovisual language significantly enhances viewers' immersion and emotional connection, boosting the artistic value and communicative power of the videos. Additionally, it fosters user interaction and engagement, further improving the overall dissemination effect of the videos. The study concludes that the successful application of audiovisual language is a crucial factor for creators in attracting audiences and enhancing the platform's competitiveness. Future research could explore the differences in the application of audiovisual language across different platforms.
Keywords: Audiovisual language; Short video; TIKTOK platform; User experience; Video dissemination
Corrigendum regarding previously published articles
16
Data Science and Management, 2025, Issue 1, p. 116 (1 page)
The editors regret that the following statements were missing in the published version of the following articles that appeared in previous issues of Data Science and Management: 1. "Audiovisual speech recognition based on a deep convolutional neural network" (Data Science and Management, 2024, 7(1): 25-34). https://doi.org/10.1016/j.dsm.2023.10.002. Ethics statement: The authors declare the Institutional Ethics Committee confirmed that no ethical review was required for this study. The authors have taken the participants' permission and consent to participate in this study.
Keywords: Deep convolutional neural network; Data science; Participant permission; Ethical review; Audiovisual speech recognition; Ethics statement; Institutional ethics committee
Audiovisual Sexual Stimulation and RigiScan Test for the Diagnosis of Erectile Dysfunction (cited by 11)
17
Authors: Tao Wang, Li Zhuan, Zhuo Liu, Ming-Chao Li, Jun Yang, Shao-Gang Wang, Ji-Hong Liu, Qing Ling, Wei-Min Yang, Zhang-Qun Ye. Chinese Medical Journal (SCIE, CAS, CSCD), 2018, Issue 12, pp. 1465-1471 (7 pages)
Background: Currently available evaluation criteria for penile tumescence and rigidity have been fraught with controversy. In this study, we sought to establish normative Chinese evaluation criteria for penile tumescence and rigidity by utilizing audiovisual sexual stimulation and the RigiScan™ test (AVSS-RigiScan test) with the administration of a phosphodiesterase-5 inhibitor. Methods: A total of 1169 patients (aged 18-67 years) who complained of erectile dysfunction (ED) underwent the AVSS-RigiScan test with the administration of a phosphodiesterase-5 inhibitor. A total of 1078 patients whose final etiological diagnosis was accurate by means of history; endocrine, vascular, and neurological diagnosis; the International Index of Erectile Function 5 questionnaire; and the erection hardness score were included in the research. A logistic regression model and receiver operating characteristic curve analysis were performed to determine the cutoff values of the RigiScan™ data. Then, multivariable logistic analysis was applied to the selected variables. Results: A normal result is defined as one erection with basal rigidity over 60% sustained for at least 8.75 min, average event rigidity of the tip at least 43.5% and of the base at least 50.5%, average maximum rigidity of the tip at least 62.5% and of the base at least 67.5%, Δtumescence (increase of tumescence, or maximum minus minimum tumescence) of the tip at least 1.75 cm and of the base at least 1.95 cm, total tumescence time at least 29.75 min, and times of total tumescence at least once. Most importantly, basal rigidity over 60% sustained for at least 8.75 min, average event rigidity of the tip at least 43.5%, and of the base at least 50.5% would be the new normative Chinese evaluation criteria for penile tumescence and rigidity. By multivariable logistic regression analysis, six significant RigiScan™ parameters, including times of total tumescence, duration of erectile episodes over 60%, average event rigidity of the tip, Δtumescence of the tip, average event rigidity of the base, and Δtumescence of the base, contribute to the risk model of ED. In the logistic regression equation, a predicted value P < 0.303 was considered to indicate psychogenic ED. The sensitivity and specificity of the AVSS-RigiScan test with the administration of a phosphodiesterase-5 inhibitor in discriminating psychogenic from organic ED were 87.7% and 93.4%, respectively. Conclusions: This study suggests that the AVSS-RigiScan test with oral phosphodiesterase-5 inhibitors can objectively assess penile tumescence and rigidity and seems to be a better modality for differentiating psychogenic from organic ED. However, due to the limited sample size, bias cannot be totally excluded.
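The reported 87.7% sensitivity and 93.4% specificity follow the usual confusion-matrix definitions for a binary diagnostic test. A small sketch with hypothetical counts (not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts.

    Sensitivity = true positives / all actual positives
    Specificity = true negatives / all actual negatives
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a test flagging psychogenic ED as "positive"
sens, spec = sens_spec(tp=87, fn=13, tn=93, fp=7)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

The P < 0.303 cutoff in the paper is exactly the kind of threshold that trades these two quantities off; lowering it raises specificity at the cost of sensitivity, and vice versa.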
Keywords: Audiovisual sexual stimulation; RigiScan test; Erectile dysfunction; Phosphodiesterase-5 inhibitor
Audiovisual bimodal mutual compensation of Chinese
18
Authors: Zhou Zhi (周治), Du Limin (杜利民), Xu Yanju (徐彦居). Science China (Technological Sciences) (SCIE, EI, CAS), 2001, Issue 1, pp. 19-26 (8 pages)
The perception of human languages is inherently a multi-modal process, in which audio information can be compensated by visual information to improve the recognition performance. Such a phenomenon has been researched in English, German, Spanish, and so on, but in Chinese it has not yet been reported. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. The audio-only stimuli, audiovisual stimuli, and visual-only stimuli were recognized by 20 observers. The audio-only stimuli and audiovisual stimuli were both presented under 5 conditions: no noise, SNR 0 dB, -8 dB, -12 dB, and -16 dB. The experimental result is studied and the following conclusions for Chinese speech are reached. Human beings can recognize visual-only stimuli rather well. The place of articulation determines the visual distinction. In noisy environments, audio information can be remarkably compensated by visual information and as a result the recognition performance is greatly improved.
Keywords: Audiovisual bimodal speech recognition; Bimodal speech perception; Perception experiment; Audio-visual information; Mutual compensation
Suboptimal Auditory Dominance in Audiovisual Integration of Temporal Cues
19
Authors: M. Maiworm, B. Röder. Tsinghua Science and Technology (SCIE, EI, CAS), 2011, Issue 2, pp. 121-132 (12 pages)
The present study examined whether audiovisual integration of temporal stimulus features in humans can be predicted by the maximum likelihood estimation (MLE) model, which is based on the weighting of unisensory cues by their relative reliabilities. In an audiovisual temporal order judgment paradigm, the reliability of the auditory signal was manipulated by Gaussian volume envelopes, introducing varying degrees of temporal uncertainty. While statistically optimal weighting according to the MLE rule was found in half of the participants, the other half consistently overweighted the auditory signal. The results are discussed in terms of a general auditory bias in time perception, interindividual differences, and the conditions and limits of statistically optimal multisensory integration.
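The MLE rule referenced above weights each unisensory estimate by its relative reliability (inverse variance), and the fused estimate has lower variance than either cue alone. A minimal sketch with illustrative numbers:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Statistically optimal (MLE) fusion of two unisensory estimates.

    Each cue is weighted by its relative reliability (inverse variance);
    the fused variance is always below either unisensory variance.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_var = 1 / (1 / var_a + 1 / var_v)
    return fused, fused_var

# Audition judges an onset at 100 ms (variance 4); vision says 120 ms (variance 16).
# The more reliable auditory cue pulls the fused estimate toward 100 ms.
fused, fused_var = mle_combine(100, 4, 120, 16)
print(fused, fused_var)
```

The "auditory overweighting" the study reports corresponds to participants behaving as if w_a were larger than this inverse-variance rule prescribes.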
Keywords: Audiovisual integration; Statistically optimal behavior; Temporal perception