How can I create and fit vocab.bpe file (GPT and GPT2 OpenAI models) with my own corpus text?


Question


This question is for those who are familiar with the OpenAI GPT and GPT-2 models, in particular with the encoding step (Byte-Pair Encoding). This is my problem:

I would like to know how I could create my own vocab.bpe file.

I have a Spanish corpus that I would like to use to fit my own BPE encoder. I have succeeded in creating the encoder.json with the python-bpe library, but I have no idea how to obtain the vocab.bpe file. I have reviewed the code in gpt-2/src/encoder.py but have not been able to find any hint. Any help or ideas?

Thank you so much in advance.


Answer 1:


You can easily create the same vocab.bpe with the learn_bpe script, using the following command:

python learn_bpe -o ./vocab.bpe -i dataset.txt --symbols 50000
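For reference, here is a minimal sketch of the same step from Python, assuming the learn_bpe referred to above is the one from the subword-nmt package (pip install subword-nmt); dataset.txt stands in for your Spanish corpus:

    # Assumption: learn_bpe here is subword-nmt's learn_bpe module.
    from subword_nmt import learn_bpe

    with open("dataset.txt", encoding="utf-8") as corpus, \
         open("vocab.bpe", "w", encoding="utf-8") as merges:
        # Learn 50,000 merge operations from the corpus and write them,
        # one merge pair per line, to vocab.bpe.
        learn_bpe.learn_bpe(corpus, merges, num_symbols=50000)

As far as I remember, GPT-2's own vocab.bpe has the same layout: a "#version" header line followed by one merge pair per line, which is what gpt-2/src/encoder.py parses; encoder.json then maps the resulting tokens to ids.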



Answer 2:


I haven't worked with GPT-2, but bpemb is a very good place to start for subword embeddings. According to its README:

BPEmb is a collection of pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) and trained on Wikipedia. Its intended use is as input for neural models in natural language processing.

I've used the pretrained embeddings for one of my projects along with sentencepiece, and it turned out to be very useful.
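As a rough sketch of what that can look like in practice (assuming recent versions of the bpemb and sentencepiece packages, with file names as placeholders):

    # Pretrained Spanish subword embeddings from BPEmb (downloaded on first use).
    from bpemb import BPEmb

    bpemb_es = BPEmb(lang="es", vs=50000, dim=100)     # 50k subword vocab, 100-dim vectors
    print(bpemb_es.encode("Esto es una prueba"))       # subword tokens
    print(bpemb_es.embed("Esto es una prueba").shape)  # (n_subwords, 100)

    # Alternatively, fit your own BPE model on your corpus with sentencepiece.
    import sentencepiece as spm

    spm.SentencePieceTrainer.train(
        input="dataset.txt",    # your Spanish corpus, one sentence per line
        model_prefix="es_bpe",  # writes es_bpe.model and es_bpe.vocab
        vocab_size=50000,
        model_type="bpe",
    )
    sp = spm.SentencePieceProcessor(model_file="es_bpe.model")
    print(sp.encode("Esto es una prueba", out_type=str))

Note that sentencepiece produces its own .model/.vocab files rather than a GPT-2-style vocab.bpe, so this is an alternative tokenization pipeline rather than a drop-in replacement.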



Source: https://stackoverflow.com/questions/55531061/how-can-i-create-and-fit-vocab-bpe-file-gpt-and-gpt2-openai-models-with-my-own
