How to implement network using Bert as a paragraph encoder in long text classification, in keras?

Submitted by ぃ、小莉子 on 2019-12-13 02:59:20

Question


I am working on a long-text classification task where documents contain more than 10,000 words. I plan to use BERT as a paragraph encoder and then feed the paragraph embeddings into a BiLSTM, one paragraph at a time. The network is as below:

Input: (batch_size, max_paragraph_len, max_tokens_per_para,embedding_size)

bert layer: (max_paragraph_len,paragraph_embedding_size)

lstm layer: ???

output layer: (batch_size,classification_size)

How can I implement this in Keras? I am using keras-bert's load_trained_model_from_checkpoint to load the BERT model:

from keras_bert import load_trained_model_from_checkpoint

bert_model = load_trained_model_from_checkpoint(
    config_path,
    model_path,
    training=False,
    use_adapter=True,
    trainable=['Encoder-{}-MultiHeadSelfAttention-Adapter'.format(i + 1) for i in range(layer_num)] +
              ['Encoder-{}-FeedForward-Adapter'.format(i + 1) for i in range(layer_num)] +
              ['Encoder-{}-MultiHeadSelfAttention-Norm'.format(i + 1) for i in range(layer_num)] +
              ['Encoder-{}-FeedForward-Norm'.format(i + 1) for i in range(layer_num)],
)

Source: https://stackoverflow.com/questions/58703885/how-to-implement-network-using-bert-as-a-paragraph-encoder-in-long-text-classifi
