Abstract: Federated learning is an effective distributed learning framework that protects privacy by allowing multiple edge devices to train models jointly without exchanging data. However, edge devices usually have limited computing capabilities, and limited network bandwidth is often a major bottleneck. To reduce communication and computing costs, we introduce a horizontal pruning mechanism, combine federated learning with progressive learning, and propose a progressive federated learning scheme based on model pruning. The scheme trains gradually from simple models to more complex ones and horizontally prunes the models uploaded to the server. Our approach effectively reduces computational and bidirectional communication costs while maintaining model performance. We conducted image classification experiments on several models; the results demonstrate that, compared with FedAvg, our approach saves approximately 10% of the computational cost and 48% of the communication cost.
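The abstract does not include code, so the following is a minimal sketch of how one communication round of such a scheme might look. The pruning criterion (row-wise L2 norm), the `keep_ratio` parameter, and all function names are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch: each client prunes whole rows (output neurons) of its
# locally updated weights before upload, and the server averages the uploads.
import numpy as np

def horizontal_prune(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out the rows with the smallest L2 norms (assumed criterion)."""
    norms = np.linalg.norm(weights, axis=1)
    k = max(1, int(keep_ratio * weights.shape[0]))
    keep = np.argsort(norms)[-k:]            # indices of the strongest rows
    pruned = np.zeros_like(weights)
    pruned[keep] = weights[keep]
    return pruned

def fedavg_round(global_w, client_grads, keep_ratio=0.5, lr=0.1):
    """One FedAvg-style round with horizontally pruned uploads."""
    uploads = [horizontal_prune(global_w - lr * g, keep_ratio)
               for g in client_grads]        # clients send pruned models
    return np.mean(uploads, axis=0)          # server averages the uploads

# Toy usage: an 8x4 weight matrix shared by 3 clients.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4))
grads = [rng.standard_normal((8, 4)) for _ in range(3)]
w = fedavg_round(w, grads, keep_ratio=0.5)
```

Zeroed rows need not be transmitted, which is where the bidirectional communication savings would come from; the progressive aspect (growing from simple to complex models across rounds) is omitted here for brevity.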
Funding: partially supported by the National Natural Science Foundation of China (11590770-4, U1536117), the National Key Research and Development Program of China (2016YFB0801203, 2016YFB0801200), the Key Science and Technology Project of the Xinjiang Uygur Autonomous Region (2016A03007-1), and the Pre-research Project for Equipment of General Information System (JZX2017-0994/Y306).
Abstract: It is well known that automatic speech recognition (ASR) is a resource-consuming task: it takes a sufficient amount of data to train a state-of-the-art deep neural network acoustic model. For low-resource languages where scripted speech is difficult to obtain, data sparsity is the main problem limiting the performance of a speech recognition system. In this paper, several knowledge transfer methods are investigated to overcome the data sparsity problem with the help of high-resource languages. The first is a pre-training and fine-tuning (PT/FT) method, in which the parameters of the hidden layers are initialized from a well-trained neural network. Secondly, progressive neural networks (Prognets) are investigated; thanks to the lateral connections in their architecture, Prognets are immune to the forgetting effect and superior at knowledge transfer. Finally, bottleneck features (BNF) are extracted using cross-lingual deep neural networks and serve as enhanced features to improve ASR performance. Experiments are conducted on a low-resource Vietnamese dataset. The results show that all three methods yield significant gains over the baseline system, with the Prognets acoustic model performing best. Further improvements are obtained by combining the Prognets model and bottleneck features.
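To make the lateral-connection idea concrete, here is a minimal sketch of one progressive-network layer, assuming the high-resource (source) column is frozen and its hidden activations are fed sideways into the new target-language column. The module names, dimensions, and adapter form are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a Prognets-style layer with a lateral connection.
import torch
import torch.nn as nn

class ProgressiveLayer(nn.Module):
    """Target-column layer that also receives the frozen source column's
    hidden activations through a lateral adapter."""
    def __init__(self, in_dim: int, out_dim: int, lateral_dim: int):
        super().__init__()
        self.vertical = nn.Linear(in_dim, out_dim)      # new-column weights
        self.lateral = nn.Linear(lateral_dim, out_dim)  # lateral connection

    def forward(self, h_new, h_source):
        # h_source comes from the frozen high-resource column, so training
        # the new column never disturbs it: no catastrophic forgetting.
        return torch.relu(self.vertical(h_new) + self.lateral(h_source))

# Toy usage: batch of 2, 40-dim input features, frozen-column hidden size 64.
layer = ProgressiveLayer(in_dim=40, out_dim=128, lateral_dim=64)
out = layer(torch.randn(2, 40), torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 128])
```

The PT/FT baseline differs only in initialization (copying the source network's hidden-layer weights and then fine-tuning everything), whereas the lateral path above keeps the source knowledge intact while the new column learns the target language.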