Pytorch Quantization RuntimeError: Trying to create tensor with negative dimension

混江龙づ霸主 submitted on 2021-02-11 17:03:48

Question


I am trying out the PyTorch quantization module. When doing static post-training quantization I follow the procedure detailed in the documentation (a minimal code sketch of these steps is given after the list):

  1. adding QuantStub and DeQuantStub modules
  2. Fuse operations
  3. Specify the quantization config
  4. torch.quantization.prepare()
  5. Calibrate the model by running inference against a calibration dataset
  6. torch.quantization.convert()
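
The following is a minimal sketch of those six steps on a toy model, using the eager-mode torch.quantization API. The model class, layer sizes, and calibration data are illustrative, not the ones from my network; it only mirrors the structure described here (convolutions, a reshape, then fully connected layers).

import torch
import torch.nn as nn

# Toy model used only to illustrate the six steps above (sizes are made up).
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()       # step 1
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 30 * 30, 2)
        self.dequant = torch.quantization.DeQuantStub()    # step 1

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = x.reshape(x.size(0), -1)   # reshape between conv and fc, as in my network
        x = self.fc(x)
        return self.dequant(x)

model = ToyModel().eval()
torch.quantization.fuse_modules(model, [["conv", "relu"]], inplace=True)   # step 2
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")           # step 3
torch.quantization.prepare(model, inplace=True)                            # step 4
with torch.no_grad():                                                      # step 5
    for _ in range(10):
        model(torch.randn(4, 3, 32, 32))
torch.quantization.convert(model, inplace=True)                            # step 6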

However, when I calibrate the model after preparing it, the program breaks.

The error appears at the last fully connected layers. It seems that the observers introduced in the graph are trying to create a histogram with a negative dimension.

Here is the error:

    x = self.fc(x)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 552, in __call__
    hook_result = hook(self, input, result)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/quantize.py", line 74, in _observer_forward_hook
    return self.activation_post_process(output)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/observer.py", line 805, in forward
    self.bins)
  File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/observer.py", line 761, in _combine_histograms
    histogram_with_output_range = torch.zeros((Nbins * downsample_rate))
RuntimeError: Trying to create tensor with negative dimension -4398046511104: [-4398046511104]

The fully connected layers are built as

import torch.nn as nn

class LinearReLU(nn.Sequential):
    def __init__(self, in_neurons, out_neurons):
        super(LinearReLU, self).__init__(
            nn.Linear(in_neurons, out_neurons),
            nn.ReLU(inplace=False)
        )

They are stacked into fc as fc = nn.Sequential(*[LinearReLU(...), LinearReLU(...), ...]).
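
For concreteness, a sketch of how such a stack could be assembled, reusing the LinearReLU class above (the layer widths here are made up, not the real ones):

widths = [512, 256, 128, 10]   # hypothetical layer widths
fc = nn.Sequential(*[LinearReLU(n_in, n_out)
                     for n_in, n_out in zip(widths[:-1], widths[1:])])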

However, I suspect that it has something to do with the reshape between the convolutions and the fully connected layers.

x = x.reshape(-1, size)

So far I have not been able to solve this error.

Thanks in advance


Answer 1:


For anybody who has the same problem:

The solution is in this line of the PyTorch quantization documentation:

View-based operations like view(), as_strided(), expand(), flatten(), select(), python-style indexing, etc - work as on regular tensor (if quantization is not per-channel)

The problem was using reshape while doing per-channel quantization. If I take the mean of the last two channels instead, there is no problem.
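
A minimal sketch of that workaround, assuming "the last two channels" refers to the spatial dimensions of an (N, C, H, W) activation; the shapes and the fc head below are illustrative:

import torch
import torch.nn as nn

fc = nn.Linear(8, 2)                  # hypothetical classifier head
x = torch.randn(4, 8, 30, 30)         # example (N, C, H, W) activation from the conv stack
# Instead of the view-based x = x.reshape(-1, size), average over the spatial dims:
x = x.mean(dim=[2, 3])                # -> shape (4, 8)
out = fc(x)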



Source: https://stackoverflow.com/questions/60567538/pytorch-quantization-runtimeerror-trying-to-create-tensor-with-negative-dimensi
