Pytorch tensor to numpy array

名媛妹妹 2021-02-01 01:31

I have a PyTorch tensor of size torch.Size([4, 3, 966, 1296])

I want to convert it to a numpy array.
6 answers
  •  Happy的楠姐
    2021-02-01 01:55

    While other answers explained the question perfectly, I will add some real-life examples of converting tensors to numpy arrays:

    Example: Shared storage

    A PyTorch tensor residing on the CPU shares the same storage as the numpy array na:

    import torch
    a = torch.ones((1,2))
    print(a)
    na = a.numpy()
    na[0][0]=10
    print(na)
    print(a)
    

    Output:

    tensor([[1., 1.]])
    [[10.  1.]]
    tensor([[10.,  1.]])
    

    Example: Eliminate effect of shared storage, copy numpy array first

    To avoid the effect of shared storage, we need to copy() the numpy array na to a new numpy array nac. The numpy copy() method creates new, separate storage.

    import torch
    a = torch.ones((1,2))
    print(a)
    na = a.numpy()
    nac = na.copy()
    nac[0][0]=10
    print(nac)
    print(na)
    print(a)
    

    Output:

    tensor([[1., 1.]])
    [[10.  1.]]
    [[1. 1.]]
    tensor([[1., 1.]])
    

    Now only the nac numpy array is altered by the line nac[0][0]=10; na and a remain as they were.
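    As a quick sanity check (my addition, not in the original answer), numpy's shares_memory can verify which arrays alias the tensor's storage:

```python
import numpy as np
import torch

a = torch.ones((1, 2))
na = a.numpy()    # view of the tensor's storage
nac = na.copy()   # independent copy

print(np.shares_memory(na, a.numpy()))  # True: still the same buffer
print(np.shares_memory(na, nac))        # False: copy() allocated new storage
```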

    Example: CPU tensor with requires_grad=True

    import torch
    a = torch.ones((1,2), requires_grad=True)
    print(a)
    na = a.detach().numpy()
    na[0][0]=10
    print(na)
    print(a)
    

    Output:

    tensor([[1., 1.]], requires_grad=True)
    [[10.  1.]]
    tensor([[10.,  1.]], requires_grad=True)
    

    If instead we had called:

    na = a.numpy() 
    

    this would raise RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead., because tensors with requires_grad=True are tracked by PyTorch autograd. Note that tensor.detach() is the recommended replacement for tensor.data.

    This explains why we need to detach() them first before converting using numpy().
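    The failure and the fix can be demonstrated side by side (a small sketch of my own, not in the original answer):

```python
import torch

a = torch.ones((1, 2), requires_grad=True)
try:
    a.numpy()  # not allowed: the tensor is tracked by autograd
except RuntimeError as e:
    print(e)

na = a.detach().numpy()  # works: detach() leaves the autograd graph
print(na)
```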

    Example: CUDA tensor with requires_grad=False

    import torch
    a = torch.ones((1,2), device='cuda')
    print(a)
    na = a.to('cpu').numpy()
    na[0][0]=10
    print(na)
    print(a)
    

    Output:

    tensor([[1., 1.]], device='cuda:0')
    [[10.  1.]]
    tensor([[1., 1.]], device='cuda:0')
    

    Example: CUDA tensor with requires_grad=True

    import torch
    a = torch.ones((1,2), device='cuda', requires_grad=True)
    print(a)
    na = a.detach().to('cpu').numpy()
    na[0][0]=10
    print(na)
    print(a)
    

    Output:

    tensor([[1., 1.]], device='cuda:0', requires_grad=True)
    [[10.  1.]]
    tensor([[1., 1.]], device='cuda:0', requires_grad=True)
    

    Without the detach() method, the error RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. will be raised.

    Without the .to('cpu') method, the error TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. will be raised.

    You could use cpu() instead of to('cpu'), but I prefer the newer to('cpu').
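    Putting the cases together, one pattern that handles every combination of device and requires_grad (my own summary, not part of the original answer) is:

```python
import torch

def to_numpy(t: torch.Tensor):
    # detach() drops the autograd graph (a no-op if requires_grad=False);
    # cpu() moves the tensor to host memory (a no-op if already on CPU).
    return t.detach().cpu().numpy()

a = torch.ones((1, 2), requires_grad=True)
print(to_numpy(a))  # [[1. 1.]]
```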
