Abstract: This paper presents a real-time implementation of 4.2 kb/s CELP speech coding on a single DSP chip. An algorithm that reduces the search complexity of the adaptive codebook is suggested, and the method of converting the filter parameters into LSP parameters is discussed. The real-time implementation of this coder on a commercial development board with a single TMS320C30 is described.
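For readers unfamiliar with the baseline being optimized, the sketch below shows a plain exhaustive closed-loop adaptive-codebook (pitch) search in Python/NumPy. It is a generic illustration only: the lag range, the absence of perceptual weighting, and the function names are assumptions, not the reduced-complexity search proposed in the paper.

```python
import numpy as np

def adaptive_codebook_search(target, past_exc, lag_min=20, lag_max=143):
    """Exhaustive closed-loop adaptive-codebook (pitch) search.

    Picks the lag whose delayed excitation segment maximizes the normalized
    correlation with the target subframe; lags shorter than the subframe
    repeat the segment.  `past_exc` must hold at least `lag_max` samples.
    """
    n = len(target)

    def codebook_vector(lag):
        # Adaptive-codebook entry: the past excitation delayed by `lag`,
        # periodically repeated if the lag is shorter than the subframe.
        seg = past_exc[-lag:]
        return np.tile(seg, int(np.ceil(n / lag)))[:n]

    best_lag, best_score = lag_min, -np.inf
    for lag in range(lag_min, lag_max + 1):
        v = codebook_vector(lag)
        score = np.dot(target, v) ** 2 / (np.dot(v, v) + 1e-12)
        if score > best_score:
            best_lag, best_score = lag, score

    v = codebook_vector(best_lag)
    gain = np.dot(target, v) / (np.dot(v, v) + 1e-12)
    return best_lag, gain
```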
Funding: Supported by the National Natural Science Foundation of China (No. 60572081).
Abstract: Real-time speech communications require highly efficient compression algorithms to encode speech signals. Because the compressed speech parameters are highly sensitive to transmission errors, robust source and channel decoding and demodulation schemes are both important and of practical use. In this paper, an iterative joint source-channel decoding and demodulation algorithm is proposed for the mixed excitation linear prediction (MELP) vocoder; it exploits the residual redundancy and passes soft information throughout the receiver, and introduces a systematic global iteration process to further enhance performance. Being fully compatible with the existing transmitter structure, the proposed algorithm introduces no additional bandwidth expansion or transmission delay. Simulations show substantial improvements in error-correcting performance and synthesized speech quality over conventionally designed separate systems in delay- and bandwidth-constrained channels when the joint source-channel decoding and demodulation (JSCCM) algorithm is used.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62122069, 62071431, and 62201507.
Abstract: To address the contradiction between the explosive growth of wireless data and the limited spectrum resources, semantic communication has been emerging as a promising communication paradigm. In this paper, we thus design a speech semantic coded communication system, referred to as Deep-STS (i.e., Deep-learning based Speech To Speech), for low-bandwidth speech communication. Specifically, we first deeply compress the speech data by extracting the textual information from the speech with a conformer encoder and a connectionist temporal classification decoder at the transmitter side of the Deep-STS system. To facilitate the final recovery of the speech timbre, we also extract the short-term timbre feature of the speech signal, only for the first 2 s, with a long short-term memory network. Then, Reed-Solomon coding and a hybrid automatic repeat request protocol are applied to improve the reliability of transmitting the extracted text and timbre feature over the wireless channel. Third, we reconstruct the speech signal with a mel-spectrogram prediction network and a vocoder once the extracted text and the timbre feature are received at the Deep-STS receiver. Finally, we develop a demo system based on USRP and GNU Radio for the performance evaluation of Deep-STS. Numerical results show that the accuracy of text extraction approaches 95%, and the mel cepstral distortion between the recovered speech signal and the original one in the spectral domain is less than 10. Furthermore, the experimental results show that the proposed Deep-STS system can reduce the total delay of speech communication by 85% on average compared to G.723 coding at a transmission rate of 5.4 kbps. More importantly, the coding rate of the proposed Deep-STS system is extremely low, only 0.2 kbps for continuous speech communication. It is worth noting that Deep-STS, with its lower coding rate, can support low-zero-power speech communication, unveiling a new era in ultra-efficient coded communications.
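The mel cepstral distortion figure quoted above is a standard objective measure; a minimal NumPy sketch of how it is typically computed from time-aligned mel-cepstral frames is given below. The function name and the convention of excluding the 0th coefficient are assumptions, not details taken from the Deep-STS paper.

```python
import numpy as np

def mel_cepstral_distortion(mc_ref, mc_syn):
    """Frame-averaged mel cepstral distortion (MCD) in dB.

    mc_ref, mc_syn: (frames, dims) mel-cepstral matrices, assumed to be
    time-aligned and to exclude the 0th (energy) coefficient.  The factor
    (10 / ln 10) * sqrt(2) is the conventional MCD scaling.
    """
    diff = np.asarray(mc_ref) - np.asarray(mc_syn)
    per_frame = np.sqrt((diff ** 2).sum(axis=1))
    return (10.0 / np.log(10.0)) * np.sqrt(2.0) * per_frame.mean()
```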
Abstract: Noise feedback coding (NFC) has attracted renewed interest with the recent standardization of backward-compatible enhancements for ITU-T G.711 and G.722. It has also been revisited with the emergence of proprietary speech codecs, such as BV16, BV32, and SILK, that have structures different from CELP coding. In this article, we review NFC and describe a novel coding technique that optimally shapes coding noise in embedded pulse-code modulation (PCM) and embedded adaptive differential PCM (ADPCM). We describe how this new technique was incorporated into the recent ITU-T G.711.1, G.711 App. III, and G.722 Annex B (G.722B) speech-coding standards.
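To make the noise-shaping idea concrete, the following is a minimal scalar noise-feedback-coding loop in NumPy: the quantizer input is the sample plus past quantization errors passed through a feedback filter F(z), so the reconstruction error is spectrally shaped by 1 + F(z). The uniform quantizer, the short FIR F(z), and the function name are illustrative assumptions; the standardized G.711.1/G.722B schemes are considerably more elaborate.

```python
import numpy as np

def nfc_encode(x, f_coefs, step):
    """Minimal scalar noise-feedback-coding loop.

    Each quantizer input is the sample plus past quantization errors passed
    through the FIR feedback filter F(z), so the reconstruction error y - x
    has spectrum (1 + F(z)) Q(z) instead of the flat Q(z) of plain PCM.
    """
    f_coefs = np.asarray(f_coefs, dtype=float)
    q_hist = np.zeros(len(f_coefs))            # most recent quantization errors
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        v = xn + np.dot(f_coefs, q_hist)       # add filtered noise feedback
        y[n] = step * np.round(v / step)       # uniform scalar quantizer
        q = y[n] - v                           # instantaneous quantization error
        q_hist = np.concatenate(([q], q_hist[:-1]))
    return y
```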
Abstract: In this paper, the authors present a method to run the echo canceller as a side task of the LD-CELP codec and a circuit to embed the echo canceller into an LD-CELP codec. The possibility of implementing a system that integrates the LD-CELP codec and the echo canceller in real time on two TMS320C30 chips is discussed.
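As a rough illustration of the echo-cancellation task being embedded alongside the codec, the sketch below implements a generic NLMS acoustic echo canceller in NumPy. It is not the authors' design; the tap count, step size, and function names are assumptions.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=128, mu=0.5, eps=1e-8):
    """Generic NLMS acoustic echo canceller.

    Adapts an FIR estimate of the echo path driven by the far-end signal and
    subtracts the estimated echo from the microphone signal, returning the
    residual (echo-cancelled) signal.
    """
    w = np.zeros(taps)                          # echo-path estimate
    buf = np.zeros(taps)                        # recent far-end samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.concatenate(([far_end[n]], buf[:-1]))
        e = mic[n] - np.dot(w, buf)             # residual after echo removal
        w += mu * e * buf / (np.dot(buf, buf) + eps)
        out[n] = e
    return out
```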
Abstract: A variable-bit-rate characteristic waveform interpolation (VBR-CWI) speech codec with an average bit rate of about 1.8 kbit/s, which integrates phonetic classification into characteristic waveform (CW) decomposition, is proposed. Each input frame is classified into one of four phonetic classes. Non-speech frames are represented with a Bark-band noise model. The extracted CWs become rapidly evolving waveforms (REWs) or slowly evolving waveforms (SEWs) for unvoiced or stationary voiced frames, respectively, while mixed voiced frames use the same CW decomposition as conventional CWI. Experimental results show that the proposed codec eliminates most of the buzzy and noisy artifacts of the fixed-bit-rate characteristic waveform interpolation (FBR-CWI) speech codec, that its average bit rate can be much lower, and that its reconstructed speech quality is much better than FS1016 CELP at 4.8 kbit/s and similar to G.723.1 ACELP at 5.3 kbit/s.
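A minimal sketch of the SEW/REW split at the heart of CWI codecs is shown below: the slowly evolving waveform is obtained by low-pass filtering the aligned characteristic waveforms along the frame (evolution) axis, and the rapidly evolving waveform is the remainder. The moving-average filter, the array layout, and the function name are simplifying assumptions rather than the paper's exact decomposition.

```python
import numpy as np

def sew_rew_split(cw_surface, smooth_frames=5):
    """Split aligned characteristic waveforms into SEW and REW parts.

    cw_surface: (frames, samples_per_cw) array of pitch-aligned CWs.  The
    slowly evolving waveform (SEW) is a moving-average low-pass version of
    each phase track along the frame axis; the rapidly evolving waveform
    (REW) is whatever remains.
    """
    kernel = np.ones(smooth_frames) / smooth_frames
    sew = np.apply_along_axis(
        lambda track: np.convolve(track, kernel, mode="same"), 0, cw_surface)
    rew = cw_surface - sew
    return sew, rew
```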
Abstract: A very low bit rate algorithm for encoding speech signals at 825 bps, based on mixed harmonic and stochastic modeling of the excitation signal, is presented. The algorithm provides more robust voiced/unvoiced (V/UV) decisions, reliable pitch estimation, and excitation signal synthesis. The bit allocation schedules for each case and the analysis-by-synthesis estimation of the parameters are also described. Diagnostic Rhyme Test (DRT) results show that the performance of the proposed algorithm is comparable to that of the MELP algorithm at 2.4 kbps, with a speech distinctness of 90.25%.
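The following sketch illustrates the general idea of a mixed harmonic and stochastic excitation: a sum of sinusoids at multiples of the pitch frequency blended with white noise according to a voicing factor. Equal harmonic amplitudes, random phases, and the parameter names are assumptions for illustration; the paper's excitation synthesis is more refined.

```python
import numpy as np

def mixed_excitation(f0, voicing, n, fs=8000, rng=None):
    """Build one frame of mixed harmonic/stochastic excitation.

    The harmonic part is a sum of sinusoids at multiples of f0 up to the
    Nyquist frequency; the stochastic part is white noise; `voicing` in
    [0, 1] blends the two.
    """
    rng = rng or np.random.default_rng()
    t = np.arange(n) / fs
    n_harm = int(np.floor((fs / 2) / f0))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_harm)
    harm = sum(np.cos(2.0 * np.pi * k * f0 * t + phases[k - 1])
               for k in range(1, n_harm + 1))
    noise = rng.standard_normal(n)
    harm /= np.max(np.abs(harm)) + 1e-12        # normalize both components
    noise /= np.max(np.abs(noise)) + 1e-12
    return voicing * harm + (1.0 - voicing) * noise
```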
Abstract: Lattice vector quantization (LVQ) has been used for real-time speech and audio coding systems. Compared with conventional vector quantization, LVQ has two main advantages: it has a simple and fast encoding process, and it significantly reduces the amount of memory required. Therefore, LVQ is suitable for use in low-complexity speech and audio coding. In this paper, we describe the basic concepts of LVQ and its advantages over conventional vector quantization. We also describe some LVQ techniques that have been used in the speech and audio coding standards of international standards developing organizations (SDOs).
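The "simple and fast encoding" advantage comes from the algebraic structure of the lattice: the nearest lattice point is found by rounding rather than by searching a stored codebook. The sketch below shows the classic Conway-Sloane nearest-neighbor rule for the D_n lattice as one concrete example; the choice of D_n is an assumption, not a statement about any particular standard.

```python
import numpy as np

def quantize_Dn(x):
    """Nearest-neighbor quantization to the D_n lattice.

    D_n is the set of integer vectors whose coordinates sum to an even
    number.  Round every coordinate; if the sum is odd, re-round the single
    coordinate with the largest rounding error the other way.  No stored
    codebook and no search are needed, which is why LVQ encoding is fast.
    """
    x = np.asarray(x, dtype=float)
    f = np.round(x)
    if int(f.sum()) % 2 != 0:
        i = np.argmax(np.abs(x - f))               # worst-rounded coordinate
        f[i] += 1.0 if x[i] > f[i] else -1.0       # round it the other way
    return f
```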
Abstract: It is assumed that speech is the output of an LPC filter excited by the LPC residual. Consequently, speech can be reproduced if a signal that captures the main characteristics of the LPC residual excites the LPC filter. Based on this hypothesis, a new speech coding algorithm is proposed. The excitation of its synthesizer is the fractal interpolation of the down-sampled LPC residual, carried out with the same fractal dimension as the LPC residual. Computer simulation shows that this speech coding algorithm can provide high-quality coded speech at a bit rate of 6.4 kb/s. Essential issues of the algorithm, such as the calculation of the fractal dimension and the implementation of the fractal interpolation, are also presented.
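As one way to picture what fractal interpolation of a down-sampled residual might look like, the sketch below uses random midpoint displacement, a simple fractal interpolation scheme in which the perturbation scale shrinks with each refinement level according to a Hurst exponent H = 2 - D for the graph of a 1-D signal. This is only an illustrative stand-in under that assumption; it is not claimed to be the interpolation method used in the paper.

```python
import numpy as np

def midpoint_fractal_interpolate(coarse, levels, fractal_dim, rng=None):
    """Upsample a 1-D signal by random midpoint displacement.

    Each level doubles the sample count by inserting midpoints equal to the
    neighbor average plus a Gaussian perturbation; the perturbation scale
    shrinks by 2**(-H) per level, with H = 2 - fractal_dim for the graph of
    a 1-D signal.  The initial perturbation scale (the signal's own standard
    deviation) is an illustrative choice.
    """
    rng = rng or np.random.default_rng()
    h = 2.0 - fractal_dim
    sig = np.asarray(coarse, dtype=float)
    scale = np.std(sig)
    for _ in range(levels):
        mids = 0.5 * (sig[:-1] + sig[1:]) + rng.normal(0.0, scale, len(sig) - 1)
        out = np.empty(2 * len(sig) - 1)
        out[0::2] = sig                            # keep original samples
        out[1::2] = mids                           # insert displaced midpoints
        sig, scale = out, scale * 2.0 ** (-h)
    return sig
```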
Abstract: Shortwave (HF) communication is generally narrowband and its transmission rate is severely limited, so transmitting speech over it requires very-low-bit-rate speech coding. The Codec2 vocoder is based on linear predictive speech coding: it builds the voiced excitation from sinusoids at the fundamental frequency of the speech signal and its harmonics, and the unvoiced excitation from white noise, achieving very-low-bit-rate coding while preserving voice quality. It is an open-source algorithm of relatively low complexity and avoids the restrictions of various speech-coding patents. Accordingly, a shortwave Orthogonal Frequency Division Multiplexing (OFDM) communication system based on Codec2 speech coding is designed, which can transmit high-quality speech at an extremely low rate, and a real-time simulation system is built. Tests show that Codec2 speech can be transmitted with good voice quality over a 3 kHz-bandwidth shortwave channel, supporting long-distance shortwave voice communication.
Abstract: To make a multiple-description codec adaptive to the packet loss rate, so as to minimize the final distortion, a novel adaptive multiple-description sinusoidal coder (AMDSC) is proposed, based on a sinusoidal model and a noise model. First, the sinusoidal parameters are extracted with the sinusoidal model and sorted in decreasing order; odd-indexed and even-indexed parameters are divided into two descriptions. Second, the output vector of the noise model is split-vector quantized, and the two sub-vectors are likewise placed into the two descriptions. Finally, the number of extracted parameters and the redundancy between the two descriptions are adjusted according to the packet loss rate of the network. Analytical and experimental results show that the proposed AMDSC outperforms existing MD speech coders by taking network loss characteristics into account; it is therefore well suited to unreliable channels.
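The odd/even split and the loss-rate-dependent redundancy can be pictured with the short sketch below, which divides the importance-sorted sinusoidal parameters into two descriptions and duplicates the most important ones when the packet loss rate is high. The linear mapping from loss rate to the number of duplicated parameters and the function name are assumptions for illustration.

```python
def make_descriptions(sorted_params, loss_rate, max_redundant=8):
    """Split importance-sorted sinusoidal parameters into two descriptions.

    Even-indexed parameters go to description 1 and odd-indexed ones to
    description 2; as the packet loss rate grows, the first few (most
    important) parameters of each half are also copied into the other
    description as redundancy.
    """
    sorted_params = list(sorted_params)
    n_dup = int(round(max_redundant * loss_rate))   # more loss -> more redundancy
    d1 = sorted_params[0::2]                        # even-indexed parameters
    d2 = sorted_params[1::2]                        # odd-indexed parameters
    d1 += sorted_params[1:2 * n_dup:2]              # duplicate top odd-indexed into d1
    d2 += sorted_params[0:2 * n_dup:2]              # duplicate top even-indexed into d2
    return d1, d2
```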
Abstract: Since Pulse Code Modulation emerged in 1937, digitized speech has experienced rapid development due to its outstanding voice quality, reliability, robustness, and security in communication. How to reduce the channel width without loss of speech quality, however, remains a crucial problem in speech coding theory. A new full-duplex digital speech communication system based on the AMBE-1000(TM) vocoder and the ATMEL 89C51 microcontroller is introduced. It delivers higher voice quality than the current mobile phone system while needing only a quarter of the latter's channel width. Prospective application areas of the system include satellite communication, IP telephony, virtual meetings and, most importantly, the defence industry.
Abstract: In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation in noisy conditions or over distorted channels. It is necessary to search for more robust feature extraction methods to gain better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, namely Mel Frequency Cepstrum Coefficients (MFCC), Linear Prediction Coding Coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate Hidden Markov Model (HMM) classifier. The behavior of the proposed system is evaluated on the TIDIGIT human voice corpus, recorded from 208 different adult speakers, for both the training and testing processes. The theoretical basis for the speech processing and classifier procedures is presented, and the recognition results are reported as word recognition rates.
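As a concrete example of one of the compared front ends, the sketch below extracts MFCC observation vectors for a single utterance using the librosa library (assumed available; any MFCC implementation would serve). Delta features, cepstral mean normalization, and the LPCC/PLP/RASTA-PLP alternatives studied in the paper are omitted.

```python
import librosa  # assumed available; any MFCC implementation would do

def mfcc_features(path, n_mfcc=13):
    """Extract MFCC observation vectors for one utterance.

    Returns a (frames, n_mfcc) matrix suitable as HMM observations; delta
    features, cepstral mean normalization, and the other front ends (LPCC,
    PLP, RASTA-PLP) compared in the paper are omitted.
    """
    y, sr = librosa.load(path, sr=None)                  # keep native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                        # one row per analysis frame
```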
Abstract: This paper presents the design of a full-duplex multi-rate vocoder that implements the LPC-10, CELPC, and VSELPC algorithms in real time. A single commercially available digital signal processor IC, the TMS320C25, is used to perform the digital processing. The channel interfaces are implemented with an ASIC design that includes the timing and control logic circuits.
Funding: National Natural Science Foundation of China (No. 69672007).
Abstract: On the basis of Gersho's asymptotic theory, the isodistortion principle of vector clustering is discussed, and a competitive and selective learning (CSL) method that can avoid local optima and gives excellent results when applied to the clustering of HMM models is proposed. When combined with parallel, self-organizing hierarchical neural networks (PSHNN) to reclassify the scores output by the HMM for every form, the CSL approach noticeably improves the speech recognition rate.
Abstract: A novel cochlear implant coding strategy based on neural excitability has been developed and implemented using Matlab/Simulink. Unlike present-day coding strategies, the Excitability Controlled Coding (ECC) strategy uses a model of the excitability state of the target neural population to determine its stimulus selection, with the aim of more efficient stimulation as well as reduced channel interaction. Central to the ECC algorithm is an excitability state model, which takes into account the supposed refractory behaviour of the stimulated neural populations. The excitability state, used to weight the input signal for selecting the stimuli, is estimated and updated after the presentation of each stimulus and used iteratively in selecting the next stimulus. Additionally, ECC regulates the frequency of stimulation on a given channel as a function of the corresponding input stimulus intensity. Details of the model, the implementation, and the results of benchtop and subjective tests are presented and discussed. Compared to the Advanced Combination Encoder (ACE) strategy, ECC produces a better spectral representation of the input signal and can potentially reduce channel interactions. Pilot test results from 4 CI recipients suggest that ECC may have some advantage over ACE in complex situations such as speech in noise, possibly due to ECC's ability to present more of the input spectral content compared to ACE, which is restricted to a fixed number of maxima. The ECC strategy represents a neuro-physiological approach that could potentially improve the perception of more complex sound patterns with cochlear implants.
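A toy version of the excitability-weighted selection described above is sketched below: each channel keeps an excitability state that recovers between stimuli and drops after the channel is stimulated, and stimuli are chosen by weighting the input envelopes with that state. The recovery and refractory constants, the linear weighting, and the function name are illustrative assumptions rather than the published ECC parameters.

```python
import numpy as np

def ecc_select(envelopes, state, n_select=1, recovery=0.2, refractory_drop=1.0):
    """One frame of excitability-weighted stimulus selection.

    envelopes: per-channel input intensities for the current analysis frame.
    state: per-channel excitability in [0, 1] carried over between frames;
    it recovers toward 1 between stimuli and drops after a channel is
    stimulated, mimicking refractoriness.
    """
    weighted = envelopes * state                      # weight input by excitability
    chosen = np.argsort(weighted)[-n_select:]         # most effective channel(s)
    state = np.minimum(1.0, state + recovery)         # all channels recover a little
    state[chosen] = np.maximum(0.0, state[chosen] - refractory_drop)  # refractory dip
    return chosen, state
```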