PyTorch - contiguous()

春和景丽 2020-12-22 15:40

I was going through this example of an LSTM language model on github (link). What it does in general is pretty clear to me. But I'm still struggling to understand what calling contiguous() does.

6 Answers
  •  情话喂你
    2020-12-22 16:39

    From what I understand, this is a more summarized answer:

    Non-contiguous is the term used to indicate that the memory layout of a tensor does not align with its advertised meta-data or shape information; calling .contiguous() restores that alignment by copying the data into a dense block of memory.

    In my opinion, contiguous is a confusing/misleading choice of word here, since in everyday usage it simply means that memory is not spread around in disconnected blocks (i.e. it is "contiguous/connected/continuous").

    Some operations require this contiguous property, most likely for efficiency reasons (e.g. so GPU kernels can assume a dense memory layout).
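
    To make that concrete, here is a minimal standalone sketch (my own example, not from the linked project) showing how transpose() produces a non-contiguous view and why .view() then fails until .contiguous() is called:

        import torch

        x = torch.arange(6).reshape(2, 3)   # a freshly created tensor is contiguous
        print(x.is_contiguous())            # True

        y = x.t()                           # transpose only swaps strides; no data is moved
        print(y.is_contiguous())            # False: layout no longer matches the shape

        # y.view(6) would raise a RuntimeError because .view needs contiguous memory;
        # .contiguous() copies the data into a fresh, densely laid-out block
        z = y.contiguous()
        print(z.view(6))                    # works now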

    Note that .view is another operation that can trigger this issue. Below is code I fixed by simply calling .contiguous(); rather than the typical transpose case, this is an example that comes up when an RNN is not happy with its (non-contiguous) hidden-state input:

            # normal lstm([loss, grad_prep, train_err]) = lstm(xn)
            n_learner_params = xn_lstm.size(1)
            (lstmh, lstmc) = hs[0] # previous hx from first (standard) lstm i.e. lstm_hx = (lstmh, lstmc) = hs[0]
            if lstmh.size(1) != xn_lstm.size(1): # only true when prev lstm_hx is equal to decoder/controllers hx
                # make sure that h, c from decoder/controller has the right size to go into the meta-optimizer
                expand_size = torch.Size([1,n_learner_params,self.lstm.hidden_size])
                lstmh, lstmc = lstmh.squeeze(0).expand(expand_size).contiguous(), lstmc.squeeze(0).expand(expand_size).contiguous()
            lstm_out, (lstmh, lstmc) = self.lstm(input=xn_lstm, hx=(lstmh, lstmc))
    

    Error I used to get:

    RuntimeError: rnn: hx is not contiguous
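
    For reference, here is a minimal sketch of how that error can arise: expand() (as used above) returns a non-contiguous view, and depending on the PyTorch version/backend, passing it as hx to an LSTM can raise the error. The shapes and names below are made up for illustration:

        import torch
        import torch.nn as nn

        lstm = nn.LSTM(input_size=4, hidden_size=3)      # 1 layer, default settings
        x = torch.randn(5, 2, 4)                          # (seq_len, batch, input_size)

        # expand() only changes strides, so the result is a non-contiguous view
        h0 = torch.zeros(1, 1, 3).expand(1, 2, 3)         # (num_layers, batch, hidden_size)
        c0 = torch.zeros(1, 2, 3)
        print(h0.is_contiguous())                         # False

        # lstm(x, (h0, c0)) may raise: RuntimeError: rnn: hx is not contiguous
        out, (hn, cn) = lstm(x, (h0.contiguous(), c0))    # .contiguous() copies h0 into dense memory
        print(out.shape)                                  # torch.Size([5, 2, 3])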
    
    

    Sources/Resources:

    • https://discuss.pytorch.org/t/contigious-vs-non-contigious-tensor/30107/7
    • PyTorch - contiguous()
