How to do tokenization from a predefined vocab in TensorFlow, PyTorch, or Keras?

余生分开走 2020-12-31 11:32

I have a predefined vocab built from the 3,500 most commonly used Chinese characters. Now I want to tokenize the dataset with this vocab so that each character maps to a fixed index. Any mature
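One common approach is a plain character-to-id lookup, which works the same regardless of framework. A minimal sketch below, framework-agnostic; the toy vocab, the `UNK_ID = 0` convention, and the helper names are illustrative assumptions, not something stated in the question:

```python
UNK_ID = 0  # reserve id 0 for out-of-vocab characters (an assumed convention)

def build_char_to_id(vocab_chars):
    # ids start at 1 so that 0 stays free for the unknown token
    return {ch: i + 1 for i, ch in enumerate(vocab_chars)}

def tokenize(text, char_to_id):
    # one token per character; out-of-vocab characters map to UNK_ID
    return [char_to_id.get(ch, UNK_ID) for ch in text]

vocab = ["我", "爱", "你"]  # toy stand-in for the 3500-character vocab
char_to_id = build_char_to_id(vocab)
print(tokenize("我爱她", char_to_id))  # "她" is out of vocab -> [1, 2, 0]
```

In Keras/TensorFlow, `tf.keras.layers.StringLookup(vocabulary=vocab)` performs the same lookup with a built-in OOV bucket, and in PyTorch the dict above can feed an `nn.Embedding` directly.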
