entropy

PGP: Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy

纵饮孤独 submitted on 2019-11-29 23:26:44
Setup: Ubuntu Server on a virtual machine with 6 cores and 3 GB of RAM. When I try to generate an asymmetric key pair via GPG, like this: gpg --gen-key, I get the following error: Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy! I googled a little, and this is what I gathered: I need to fire up another terminal and type cat /dev/urandom, which produces a stream of random values to increase the entropy. But watching watch cat /proc/sys/kernel/random/entropy_avail, I don't see any change, and it still
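This excerpt is a question, not an answer, but for reference: a minimal Python sketch (my own, not from the thread) that polls the same kernel counter the questioner is watching with watch cat /proc/sys/kernel/random/entropy_avail. Linux-only.

```python
import time

# The kernel's available-entropy counter, the same value the question
# inspects with `watch cat /proc/sys/kernel/random/entropy_avail`.
ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"

def entropy_avail() -> int:
    with open(ENTROPY_PATH) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    for _ in range(10):          # sample once per second for ~10 seconds
        print(entropy_avail())
        time.sleep(1)
```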

Get or calculate the entropy of an image with Ruby and ImageMagick

a 夏天 submitted on 2019-11-29 21:13:38
Question: How do I find the "entropy" of an image with imagemagick, preferably mini_magic, in Ruby? I need this as part of a larger project: finding the "interestingness" of an image so as to crop it. I found a good example in Python/Django, which gives the following pseudo-code: image = Image.open('example.png') histogram = image.histogram() # fetch a list of pixel counts, one for each pixel value in the source image # normalize, or average, the result: for each histogram as pixel histogram_recalc << pixel / histogram.size
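The excerpt stops mid-pseudo-code. As a hedged illustration of where that calculation is usually headed, here is a short Python sketch (using Pillow, which the quoted Image.open suggests; the function name image_entropy is mine): normalize the histogram into probabilities, then take the Shannon entropy -sum(p * log2 p).

```python
import math
from PIL import Image  # Pillow; the quoted pseudo-code uses Image.open

def image_entropy(path: str) -> float:
    """Shannon entropy of an image's pixel-value histogram, in bits."""
    histogram = Image.open(path).histogram()   # raw pixel counts
    total = sum(histogram)
    probs = (count / total for count in histogram if count)
    return -sum(p * math.log2(p) for p in probs)

print(image_entropy("example.png"))
```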

How to write single bits to a file in C

假如想象 submitted on 2019-11-29 16:18:41
I am programming an entropy-coding algorithm and I want to write single bits, such as an encoded character, to a file. For example, I want to write 011 to a file, but stored as characters it would take up 3 bytes instead of 3 bits. So my final question is: how can I write single bits to a file? Thanks in advance! paxdiablo: You can't write individual bits to a file; the resolution is a single byte. If you want to write bits in sequence, you have to batch them up until you have a full byte, then write that. Pseudo-code (though C-like) for that would be along the lines of: currbyte = 0
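The answer's pseudo-code is truncated at currbyte = 0. A minimal Python sketch of the batching idea it describes (the BitWriter name and the zero-padding choice are mine, not paxdiablo's): accumulate bits until a full byte is ready, write it, and zero-pad the final partial byte.

```python
class BitWriter:
    """Batch single bits into bytes, as the answer describes."""
    def __init__(self, f):
        self.f = f            # a file opened in binary write mode
        self.currbyte = 0     # bits collected so far
        self.nbits = 0        # how many bits are in currbyte

    def write_bit(self, bit: int) -> None:
        self.currbyte = (self.currbyte << 1) | (bit & 1)
        self.nbits += 1
        if self.nbits == 8:               # a full byte: flush it
            self.f.write(bytes([self.currbyte]))
            self.currbyte, self.nbits = 0, 0

    def flush(self) -> None:
        if self.nbits:                    # zero-pad the last partial byte
            self.f.write(bytes([self.currbyte << (8 - self.nbits)]))
            self.currbyte, self.nbits = 0, 0

with open("out.bin", "wb") as f:
    w = BitWriter(f)
    for b in (0, 1, 1):                   # write the 011 from the question
        w.write_bit(b)
    w.flush()
```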

CryptGenRandom Entropy

好久不见. submitted on 2019-11-29 15:07:22
Question: CryptGenRandom is a random number generator function in the CryptoAPI on Windows. How much entropy does that random number generator have? I have already searched quite a bit but couldn't find an answer. Thanks in advance. Answer 1: The exact algorithm of Windows CryptGenRandom was never published; therefore, some security experts suggest not using it at all. Some reverse engineering and cryptanalysis has been done. A published study (Cryptanalysis of the Windows Random Number Generator - Leo Dorrendorf, 2007) examined

How can I determine the statistical randomness of a binary string?

*爱你&永不变心* submitted on 2019-11-29 10:00:12
Question: How can I determine the statistical randomness of a binary string? Ergo, how can I code my own test that returns a single value corresponding to the statistical randomness, a value between 0 and 1.0 (0 being not random, 1.0 being random)? The test would need to work on binary strings of any size. When you do it with pen and paper, you might explore strings like this: 0 (arbitrary randomness, the only other choice is 1) 00 (not random, it's a repeat and matches the size) 01 (better, two
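The excerpt cuts off mid-example. One simple proxy for such a score, not from the thread itself: the Shannon entropy of the 0/1 distribution, which is already normalized to [0, 1] for binary strings. A sketch in Python (a real randomness test would also need to examine run lengths, compressibility, and so on):

```python
import math

def bit_entropy(bits: str) -> float:
    """Shannon entropy of the 0/1 distribution, normalized to [0, 1].

    1.0 means zeros and ones are balanced; 0.0 means the string is a
    single repeated symbol. This ignores ordering, so '0101...' still
    scores 1.0 -- a fuller test would also examine runs.
    """
    n = len(bits)
    p1 = bits.count("1") / n
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

print(bit_entropy("00"))    # 0.0 -- a repeat, as in the question
print(bit_entropy("01"))    # 1.0 -- balanced
```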

[AI in Practice] Quickly Mastering TensorFlow (4): Loss Functions

非 Y 不嫁゛ submitted on 2019-11-29 09:05:43
In the previous article we learned how to use TensorFlow's activation functions (see: Quickly Mastering TensorFlow (3)); today we continue with TensorFlow. This article focuses on TensorFlow's loss functions.
I. What is a loss function?
The loss function is a central concept in machine learning: it measures the difference between the model's output and the target value, and so serves as a key metric of model quality. The smaller the loss, the more robust the model.
II. How are loss functions used?
When training a model in TensorFlow, the loss function tells TensorFlow whether a prediction is good or bad relative to the target. In most cases we supply training samples and target data, and the loss function compares the predicted values with the given targets. The loss functions commonly used in TensorFlow are introduced below.
1. Loss functions for regression models
Regression models predict a continuous dependent variable. For ease of presentation, first define the predictions (an arithmetic sequence from -1 to 1) and the target (the value 0); the code is as follows: import tensorflow as tf sess=tf.Session() y_pred=tf.linspace(-1., 1., 100) y_target=tf.constant(0.) Note that when actually training a model, the predictions are the model's output values and the targets come from the training samples. (1) L1 loss (i.e., absolute-value loss)
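The excerpt ends just as the L1 loss is introduced. A minimal sketch of how that section presumably continues, in the same TF1-era style as the quoted setup code (tf.Session and friends); the exact continuation is not in the excerpt:

```python
import tensorflow as tf  # TF1-era API, matching the quoted setup

sess = tf.Session()
y_pred = tf.linspace(-1., 1., 100)   # predictions: 100 values from -1 to 1
y_target = tf.constant(0.)           # target value: 0

# L1 (absolute-value) loss: |target - prediction|, element-wise
l1_loss = tf.abs(y_target - y_pred)
print(sess.run(l1_loss)[:5])
```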

Learning the TensorFlow framework: two simple neural network examples, regression and classification

牧云@^-^@ submitted on 2019-11-29 04:36:05
I. Regression neural network
1. Network structure
Define a simple regression network: the dataset is (xi, yi) and each sample has a single feature, so x has dimension 1.
Input layer: 1 neuron.
One hidden layer with 4 neurons.
Output layer: 1 neuron.
The hidden layer's activation function is f(x) = x; the output layer's activation function is ReLU.
The structure diagram is as follows:
2. Code example
Notes on the functions used:
tf.random_normal: produces a tensor of normally distributed random values; TensorFlow offers many random-number functions, see the official documentation.
tf.zeros: produces an all-zeros tensor; tf.ones likewise produces an all-ones tensor.
tf.nn.relu: TensorFlow's implementation of the ReLU activation function.
tf.reduce_sum: summation; the axis argument controls the direction of the reduction (for a 2-D tensor, axis=0 sums down each column and axis=1 sums across each row).
tf.train.GradientDescentOptimizer(learning_rate).minimize(loss): gradient-descent optimization; learning_rate is the learning rate and minimize minimizes the loss function loss. TensorFlow offers many optimizers, see the official documentation.
Code: import tensorflow as tf import numpy as np import matplotlib.pyplot
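The code excerpt breaks off after the imports. Below is a compact sketch matching the structure described above (1 input neuron, a hidden layer of 4 neurons with identity activation, 1 ReLU output neuron), using the TF1-era functions the post lists; the toy data, variable names, and learning rate are my own assumptions:

```python
import tensorflow as tf   # TF1-era API, as in the post
import numpy as np

# toy data: y = x^2 plus noise, one feature per sample
x_data = np.linspace(-1, 1, 200, dtype=np.float32)[:, None]
y_data = x_data ** 2 + np.random.normal(0, 0.05, x_data.shape).astype(np.float32)

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

# hidden layer: 4 neurons, identity activation f(x) = x as described
W1 = tf.Variable(tf.random_normal([1, 4]))
b1 = tf.Variable(tf.zeros([1, 4]))
hidden = tf.matmul(x, W1) + b1

# output layer: 1 neuron, ReLU activation as described
W2 = tf.Variable(tf.random_normal([4, 1]))
b2 = tf.Variable(tf.zeros([1, 1]))
y_pred = tf.nn.relu(tf.matmul(hidden, W2) + b2)

loss = tf.reduce_sum(tf.square(y - y_pred))
train = tf.train.GradientDescentOptimizer(0.0005).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        sess.run(train, feed_dict={x: x_data, y: y_data})
    print(sess.run(loss, feed_dict={x: x_data, y: y_data}))
```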

Reinforcement Learning: Policy Gradient

本小妞迷上赌 submitted on 2019-11-29 02:42:50
1. What are Policy Gradients?
The basic idea of policy gradients is to output actions, or action probabilities, directly from the state. How? The simplest approach is a neural network: we feed in the current state and the network outputs the probability of taking each action in that state. So how should the network be trained to converge? When training neural networks we usually rely on backpropagation: we need an error function, and we use gradient descent to make the loss as small as possible. In reinforcement learning, however, we do not know whether an action is correct; we can only judge its relative quality from the reward. This leads to a very simple idea: if an action yields a large reward, increase its probability; if it yields a small reward, decrease its probability. From this idea we construct the following loss function: loss = -log(prob) * vt. Here log(prob) expresses how surprising the chosen action a is in state s: the smaller the probability, the larger -log(prob). And vt is the reward for taking action a in the current state s, namely the sum of the immediate reward and the discounted future rewards. This means a policy-gradient algorithm must complete a full episode before it can update its parameters, unlike value-based methods, which can update on every (s, a, r, s') transition. If a large reward (a large vt) is obtained when prob is small, then -log(prob) * vt is all the larger,
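A minimal TF1-style sketch of that loss (shapes, names, and the one-layer policy network are my assumptions; the cross-entropy call is the usual way to obtain -log(prob) of the chosen action):

```python
import tensorflow as tf  # TF1-era API, assumed for illustration

n_states, n_actions = 8, 4                              # assumed sizes
states = tf.placeholder(tf.float32, [None, n_states])   # observations
actions = tf.placeholder(tf.int32, [None])              # actions actually taken
vt = tf.placeholder(tf.float32, [None])                 # discounted returns

# A one-layer policy network producing action logits.
W = tf.Variable(tf.random_normal([n_states, n_actions]))
logits = tf.matmul(states, W)
probs = tf.nn.softmax(logits)      # action probabilities, used when sampling

# -log(prob of the chosen action): softmax cross-entropy with the taken
# action as the "label" is exactly -log(prob).
neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=actions, logits=logits)

# loss = -log(prob) * vt, averaged over the episode, as in the text
loss = tf.reduce_mean(neg_log_prob * vt)
train = tf.train.AdamOptimizer(0.01).minimize(loss)
```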

KL Divergence in TensorFlow

懵懂的女人 submitted on 2019-11-28 21:33:41
I have two tensors, prob_a and prob_b, with shape [None, 1000], and I want to compute the KL divergence from prob_a to prob_b. Is there a built-in function for this in TensorFlow? I tried using tf.contrib.distributions.kl(prob_a, prob_b), but it gives: NotImplementedError: No KL(dist_a || dist_b) registered for dist_a type Tensor and dist_b type Tensor. If there is no built-in function, what would be a good workaround? Answer: Assuming that your input tensors prob_a and prob_b are probability tensors that sum to 1 along the last axis, you could do it like this: def kl(x, y): X = tf.distributions
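The answer's code is truncated at tf.distributions. A sketch of how the Categorical-distribution approach it starts plausibly continues (assuming the TF1-era tf.distributions module):

```python
import tensorflow as tf  # TF1-era API, matching the truncated snippet

def kl(x, y):
    # Wrap each row of probabilities in a Categorical distribution,
    # then use the KL registered between two Categoricals.
    X = tf.distributions.Categorical(probs=x)
    Y = tf.distributions.Categorical(probs=y)
    return tf.distributions.kl_divergence(X, Y)

# prob_a, prob_b: [None, 1000] tensors of row-normalized probabilities
prob_a = tf.placeholder(tf.float32, [None, 1000])
prob_b = tf.placeholder(tf.float32, [None, 1000])
kl_ab = kl(prob_a, prob_b)   # shape [None]: one KL value per row
```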

Softmax, Softmax Loss, and Cross Entropy

我们两清 submitted on 2019-11-28 17:43:59
An explanation of softmax, softmax loss, and cross entropy (convolutional neural network series)
Link: https://blog.csdn.net/u014380165/article/details/77284921
The cross-entropy cost function (loss function) and the derivation of its gradient
Link: https://blog.csdn.net/jasonzzj/article/details/52017438
softmax and cross-entropy loss
Link: https://blog.csdn.net/u012494820/article/details/52797916
Source: https://www.cnblogs.com/kandid/p/11417248.html