TypeError: can't pickle _thread.lock objects in Seq2Seq

Frontend · unresolved · 2 answers · 376 views
Asked by 轮回少年, 2020-12-10 12:07

I'm having trouble using buckets in my TensorFlow model. When I run it with buckets = [(100, 100)], it works fine. When I run it with buckets = [(100, 10

2 Answers
  • 2020-12-10 12:40

    This solution does not work for me. Any new solution?

    These two solutions work for me:

    Change seq2seq.py under /yourpath/tensorflow/contrib/legacy_seq2seq/python/ops/:

    # encoder_cell = copy.deepcopy(cell)  # deepcopy fails on the cell's _thread.lock
    encoder_cell = core_rnn_cell.EmbeddingWrapper(
        cell,  # pass the cell directly instead of a deep copy
        embedding_classes=num_encoder_symbols,
        embedding_size=embedding_size)

    or

    for nextBatch in tqdm(batches, desc="Training"):
        _, step_loss = model.step(...)

    i.e. feed one bucket per step.
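    That "one bucket per step" loop can be sketched as follows. Model, make_batches, and the bucket list are hypothetical stand-ins for your own Seq2Seq training code; wrap the inner loop with tqdm(..., desc="Training") for the progress bar:

```python
# Hypothetical stand-ins for the real model and data pipeline.
buckets = [(100, 100)]

def make_batches(bucket, n=3):
    """Yield dummy (inputs, targets) pairs padded to one bucket's shape."""
    for _ in range(n):
        yield [0] * bucket[0], [0] * bucket[1]

class Model:
    def step(self, bucket_id, inputs, targets):
        """Stand-in for model.step: would run one training step."""
        return None, 0.0  # (gradient norm, loss) in the real API

model = Model()
losses = []
for bucket_id, bucket in enumerate(buckets):
    # Feed batches from a single bucket at each step, never mixing buckets.
    for inputs, targets in make_batches(bucket):
        _, step_loss = model.step(bucket_id, inputs, targets)
        losses.append(step_loss)
```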

  • 2020-12-10 12:43

    The problem is with recent changes in seq2seq.py. Add this to your script and it will avoid deep-copying the cells:

    # Make deepcopy a no-op for these cell classes, so seq2seq.py's internal
    # copy.deepcopy(cell) never touches the unpicklable _thread.lock inside.
    setattr(tf.contrib.rnn.GRUCell, '__deepcopy__', lambda self, _: self)
    setattr(tf.contrib.rnn.BasicLSTMCell, '__deepcopy__', lambda self, _: self)
    setattr(tf.contrib.rnn.MultiRNNCell, '__deepcopy__', lambda self, _: self)
    
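    This works because copy.deepcopy consults an object's __deepcopy__ hook before copying anything; a hook that returns self means the cell, and the unpicklable _thread.lock it holds, is never actually copied. A minimal TensorFlow-free sketch of the same trick, with a hypothetical Cell class standing in for the RNN cells:

```python
import copy
import threading

class Cell:
    """Hypothetical stand-in for an RNN cell that holds an unpicklable lock."""
    def __init__(self):
        self.lock = threading.Lock()  # deep-copying this raises TypeError

# Same patch as above: deepcopy now returns the original object untouched.
setattr(Cell, '__deepcopy__', lambda self, _: self)

cell = Cell()
assert copy.deepcopy(cell) is cell  # no copy is made, so no pickling occurs
```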