Funding: The research work was supported by the National Key Research and Development Program of China (2017YFB1002104) and the National Natural Science Foundation of China (Grant Nos. 92046003, 61976204, U1811461). Xiang Ao was also supported by the Project of Youth Innovation Promotion Association CAS and the Beijing Nova Program (Z201100006820062).
Abstract: Word embeddings act as one of the backbones of modern natural language processing (NLP). Recently, with the need to deploy NLP models on low-resource devices, there has been a surge of interest in compressing word embeddings into hash codes or binary vectors so as to reduce storage and memory consumption. Typically, existing work learns to encode an embedding into a compressed representation from which the original embedding can be reconstructed. Although these methods aim to preserve most of the information of every individual word, they often fail to retain the relations between words and thus can incur large losses on certain tasks. To this end, this paper presents Relation Reconstructive Binarization (R2B), which transforms word embeddings into binary codes that preserve the relations between words. At its heart, R2B trains an auto-encoder to generate binary codes that allow reconstructing the word-by-word relations in the original embedding space. Experiments showed that our method achieved significant improvements over previous methods on a number of tasks, along with a space saving of up to 98.4%. Specifically, our method reached even better results on word similarity evaluation than the uncompressed pre-trained embeddings, and was significantly better than previous compression methods that do not consider word relations.
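The abstract does not spell out the training objective, but the core idea lends itself to a short illustration. Below is a minimal sketch in PyTorch, not the authors' implementation: it assumes a straight-through sign estimator for binarization, pairwise cosine similarity as the word-by-word relation, an MSE loss over those relation matrices, and arbitrary hyperparameters; the decoder of the paper's auto-encoder is omitted for brevity.

# Minimal, illustrative sketch of relation-reconstructive binarization.
# NOTE: the architecture, loss, and hyperparameters here are assumptions
# made for illustration; the paper's actual R2B design may differ.
import torch
import torch.nn as nn

class BinaryEncoder(nn.Module):
    """Maps real-valued embeddings to {-1, +1} codes, using a
    straight-through estimator so gradients can pass through sign()."""
    def __init__(self, emb_dim: int, code_bits: int):
        super().__init__()
        self.proj = nn.Linear(emb_dim, code_bits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.proj(x))
        b = torch.sign(h)
        # Forward pass uses sign(h); backward pass uses the tanh gradient.
        return h + (b - h).detach()

def relation_loss(codes: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between pairwise cosine similarities computed
    from the binary codes and from the original embeddings (assumed loss)."""
    def pairwise_cos(m: torch.Tensor) -> torch.Tensor:
        m = nn.functional.normalize(m, dim=1)
        return m @ m.t()
    return nn.functional.mse_loss(pairwise_cos(codes), pairwise_cos(embeddings))

# Toy training loop with random vectors standing in for pre-trained embeddings.
emb_dim, code_bits, vocab = 300, 256, 1024
embeddings = torch.randn(vocab, emb_dim)
encoder = BinaryEncoder(emb_dim, code_bits)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):
    batch = embeddings[torch.randint(0, vocab, (128,))]
    loss = relation_loss(encoder(batch), batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

The design point the abstract emphasizes is visible in the loss: rather than reconstructing each embedding vector individually, the objective matches the similarity structure between words, which is what downstream tasks such as word similarity evaluation actually consume.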