pytorch

[Learning YOLOv3 from Scratch] 1. Parsing and summarizing the YOLOv3 cfg file

Submitted by 徘徊边缘 on 2021-02-07 06:52:32
Preface: Unlike other frameworks, Darknet does not build its network architecture by stacking code directly; instead, the architecture is generated by parsing a cfg file. The cfg format follows certain rules; although it is fairly simple, some settings require a degree of familiarity with YOLOv3 to configure correctly. Below, yolov3.cfg is used as an example. Author: pprp. First published on the GiantPandaCV WeChat account.

1. The [net] section

[net]
#Testing
#batch=1
#subdivisions=1
# For testing, set batch=1 and subdivisions=1.
#Training
batch=16
subdivisions=4
# Here "batch" does not have its usual meaning.
# During training, 16 images are loaded into memory at once, and the forward pass is completed in 4 passes of 4 images each.
# After the forward pass over all 16 images, one backward pass is performed.
width=416
height=416
channels=3
# Width, height, and number of channels of images entering the network.
# Since YOLOv3 generally downsamples by a factor of 32, width and height must be divisible by 32.
# For multi-scale training, choose multiples of 32, from a minimum of 320*320 to a maximum of 608*608.
# Larger width and height help with small objects but use more GPU memory; there is a trade-off.
momentum=0.9
# The momentum parameter affects how quickly gradient descent reaches the optimum.
decay=0.0005
# Weight-decay regularization term to prevent overfitting.
angle=0
# Data augmentation: sets the rotation angle.
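
As a rough illustration of how a Darknet-style parser might turn a cfg file into a list of blocks, here is a minimal Python sketch. This is not Darknet's actual code (which is written in C); the function name parse_cfg and the simplified handling of comments and sections are assumptions for illustration only.

```python
def parse_cfg(text):
    """Parse Darknet-style cfg text into a list of {'section': ..., 'options': ...} dicts."""
    blocks = []
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and surrounding whitespace
        if not line:
            continue
        if line.startswith('[') and line.endswith(']'):
            # A new section header like [net] or [convolutional] starts a new block.
            blocks.append({'section': line[1:-1], 'options': {}})
        else:
            # Everything else is a key=value option belonging to the current block.
            key, value = line.split('=', 1)
            blocks[-1]['options'][key.strip()] = value.strip()
    return blocks

cfg = """
[net]
#Testing
batch=16
subdivisions=4
width=416
height=416
"""
blocks = parse_cfg(cfg)
print(blocks[0]['section'], blocks[0]['options'])
```

A network builder would then walk this list of blocks and instantiate one layer per section.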

小白学PyTorch (PyTorch for Beginners) | 11: MobileNet explained, with a PyTorch implementation

Submitted by ╄→гoц情女王★ on 2021-02-07 00:18:35
Study notes shared by 机器学习炼丹术. The <<小白学PyTorch (PyTorch for Beginners)>> series so far:

PyTorch for Beginners | 10: Common PyTorch operations explained
PyTorch for Beginners | 9: The tensor data structure and storage layout
PyTorch for Beginners | 8: Hands-on with MNIST
PyTorch for Beginners | 7: The latest torchvision.transforms APIs, translated and explained
PyTorch for Beginners | 6: Building, accessing, traversing, and saving models (with code)
PyTorch for Beginners | 5: Overview of torchvision pretrained models and datasets
PyTorch for Beginners | 4: The three essentials of model building, and weight initialization
PyTorch for Beginners | 3: A brief look at Dataset and DataLoader
PyTorch for Beginners | 2: A brief look at training, validation, and test sets
PyTorch for Beginners | 1: Building a very simple network
PyTorch for Beginners | A plain-language look at dynamic vs. static graphs

Table of contents:
1 Background
2 Depthwise separable convolution
2.1 Computation cost of a standard convolution
2.2 Computation cost of a depthwise separable convolution
2.3 Network structure
3 PyTorch implementation

The original plan was to cover EfficientNet in PyTorch today, but EfficientNet depends on the SENet and MobileNet architectures. Since this series is aimed at beginners, this lesson covers MobileNet first, and the next lesson will cover SENet.
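
To make the computation-cost comparison in sections 2.1 and 2.2 concrete, here is a minimal PyTorch sketch of a depthwise separable convolution: a depthwise conv (groups equal to the input channels) followed by a 1x1 pointwise conv. The class name DepthwiseSeparableConv and the channel sizes are my own illustration, not code from the article.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (groups=in_ch) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter count: k*k*in + in*out for the separable pair,
# versus k*k*in*out for a standard convolution.
block = DepthwiseSeparableConv(32, 64)
standard = nn.Conv2d(32, 64, 3, padding=1, bias=False)
p_sep = sum(p.numel() for p in block.parameters())      # 3*3*32 + 32*64 = 2336
p_std = sum(p.numel() for p in standard.parameters())   # 3*3*32*64 = 18432
print(p_sep, p_std)
```

With 32 input and 64 output channels, the separable block uses roughly 8x fewer parameters (and proportionally fewer multiply-adds), which is the core trick behind MobileNet.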

LSTM Autoencoder problems

Submitted by 守給你的承諾、 on 2021-02-06 16:14:30
Question

TLDR: The autoencoder underfits the timeseries reconstruction and just predicts the average value.

Set-up: Here is a summary of my attempt at a sequence-to-sequence autoencoder. This image was taken from this paper: https://arxiv.org/pdf/1607.00148.pdf

Encoder: Standard LSTM layer. The input sequence is encoded in the final hidden state.

Decoder: LSTM Cell (I think!). Reconstructs the sequence one element at a time, starting with the last element x[N]. The decoder algorithm is as follows for a sequence
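
The encoder/decoder pair described above can be sketched as follows. This is a minimal illustration under my own assumptions (class name, hidden size, and hyperparameters are not the questioner's actual code): the encoder's final hidden state seeds an LSTMCell decoder that reconstructs the sequence one step at a time, starting from the last element and reversing at the end.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder_cell = nn.LSTMCell(n_features, hidden_size)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        seq_len = x.size(1)
        _, (h, c) = self.encoder(x)              # sequence encoded in the final hidden state
        h, c = h.squeeze(0), c.squeeze(0)
        x_t = x[:, -1, :]                        # start from the last element x[N]
        outputs = []
        for _ in range(seq_len):                 # reconstruct one element at a time
            h, c = self.decoder_cell(x_t, (h, c))
            x_t = self.out(h)
            outputs.append(x_t)
        # elements were produced last-to-first, so reverse back to original order
        return torch.stack(outputs[::-1], dim=1)

model = LSTMAutoencoder()
recon = model(torch.randn(8, 20, 1))
print(recon.shape)
```

Whether the decoder feeds its own prediction back in (as here) or the ground-truth previous element (teacher forcing) is one of the design choices that affects the underfitting behavior described in the question.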

How to convert a list or numpy array to a 1d torch tensor?

Submitted by 為{幸葍}努か on 2021-02-06 15:31:40
Question

I have a list (or a numpy array) of float values. I want to create a 1d torch tensor that contains all those values. I could create the tensor and run a loop to store the values, but is there any way to create a torch tensor initialized directly from a list or array? Please also suggest a pythonic way to achieve this, as I am working in PyTorch.

Answer 1: These are general operations in PyTorch and are covered in the documentation. PyTorch allows easy interfacing
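
For reference, the standard conversions look like this (this uses the usual PyTorch API, not necessarily the answer's exact code):

```python
import numpy as np
import torch

values = [1.0, 2.5, 3.0]

t1 = torch.tensor(values)                 # copies data from a Python list
arr = np.array(values, dtype=np.float32)
t2 = torch.from_numpy(arr)                # shares memory with the numpy array
t3 = torch.as_tensor(arr)                 # avoids a copy when possible

print(t1, t2, t3)
```

Note that torch.from_numpy shares storage with the source array, so mutating the array also mutates the tensor; use torch.tensor when an independent copy is wanted.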

Understanding accumulated gradients in PyTorch

Submitted by 感情迁移 on 2021-02-05 20:34:09
Question

I am trying to understand the inner workings of gradient accumulation in PyTorch. My question is somewhat related to these two: "Why do we need to call zero_grad() in PyTorch?" and "Why do we need to explicitly call zero_grad()?" Comments on the accepted answer to the second question suggest that accumulated gradients can be used when a minibatch is too large to perform a gradient update in a single forward pass and thus has to be split into multiple sub-batches. Consider the following toy example:
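
The truncated toy example can be sketched along these lines (the linear model and random data are my own illustration, not the questioner's actual code). Because .grad buffers are summed rather than overwritten between backward() calls, running backward() per sub-batch without zero_grad() accumulates the same gradients as one backward over the full batch, given a sum-reduced loss.

```python
import torch

model = torch.nn.Linear(4, 1)
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss_fn = torch.nn.MSELoss(reduction='sum')  # sum reduction makes sub-batch losses additive

# One backward pass over the full batch:
model.zero_grad()
loss_fn(model(x), y).backward()
full_grad = model.weight.grad.clone()

# Four sub-batches, accumulating gradients (no zero_grad() in between):
model.zero_grad()
for xb, yb in zip(x.split(4), y.split(4)):
    loss_fn(model(xb), yb).backward()   # .grad is summed into, not overwritten

print(torch.allclose(full_grad, model.weight.grad, atol=1e-4))  # True
```

With reduction='mean' the per-sub-batch losses would need rescaling (e.g. dividing by the number of sub-batches) for the accumulated gradient to match the full-batch one.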
