theano

Python Data Analysis Introductory Notes, Part 1

淺唱寂寞╮ submitted on 2019-12-06 18:53:14
I. Getting to know the data analysis libraries
1. NumPy: provides array support and the corresponding efficient processing functions. It is the foundation of data analysis in Python and the most basic functional library underlying data-processing and scientific-computing libraries such as SciPy and Pandas.
2. Pandas: a powerful, flexible tool for data analysis and exploration, containing high-level data structures and tools such as Series and DataFrame.
3. Matplotlib: a set of Python packages built on NumPy; a powerful data-visualization and plotting library mainly used for drawing data charts. It offers a command library for all kinds of visualizations and a simple interface, so users can easily control figure formatting and draw many kinds of plots.
4. Scikit-Learn (sklearn): the commonly used machine-learning toolkit. It provides a complete machine-learning toolbox supporting data preprocessing, classification, regression, clustering, prediction and model analysis, and it depends on NumPy, SciPy and Matplotlib.
5. SciPy: an advanced scientific-computing library; common uses include interpolation, optimization algorithms, image processing and mathematical statistics.
6. Keras: a high-level neural-network API, written in pure Python and running on top of the TensorFlow, Theano or CNTK backends. TensorFlow, Theano and Keras are all deep-learning frameworks; TensorFlow and Theano are more flexible but harder to learn -- at their core they are automatic differentiators -- while Keras is essentially an interface on top of TensorFlow and Theano.
(A minimal sketch of how the core stack fits together is shown right after this list.)
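A minimal sketch, using only the libraries named above and made-up data, of how NumPy, pandas and matplotlib combine in a typical exploratory workflow:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# pandas DataFrames are built on top of NumPy arrays
data = pd.DataFrame({'x': np.arange(10),
                     'y': np.arange(10) ** 2})

print(data.describe())      # quick exploratory statistics

data.plot(x='x', y='y')     # plotting goes through matplotlib
plt.show()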

compilation issue when running theano

末鹿安然 submitted on 2019-12-06 15:15:39
I installed Theano on Windows 8, 64-bit. I am using the Anaconda distribution with Python 3.4. To install Theano, I diligently followed all the steps in this link (which helped on another computer with a similar configuration): http://rosinality.ncity.net/doku.php?id=python:installing_theano (English and Korean). Whenever I type 'import theano' in my IDE (PyCharm) I get a long error message, but I believe the most meaningful portion is: import theano >>>>Exception: Compilation failed (return status=1): C:\Users\xxx\AppData\Local\Theano\compiledir_Windows-8-6.2.9200- Intel64_Family_6_Model_69
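A quick diagnostic sketch (an assumption about the usual cause, not a confirmed fix): on Windows this error often means Theano cannot find a working C++ compiler, which can be checked from Python:

import theano

# An empty string here means Theano found no C++ compiler (e.g. g++ from MinGW/TDM-GCC)
print(theano.config.cxx)

# Directory where the failing modules are compiled, as seen in the error message above
print(theano.config.compiledir)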

Python Machine Learning Tutorial Series: Comparing the Deep Learning Frameworks TensorFlow, Theano, Caffe, SciKit-learn and Keras

吃可爱长大的小学妹 submitted on 2019-12-06 13:40:19
Full-Stack Engineer's Handbook (author: 栾鹏), complete Python tutorials.
Theano
Theano is the grandfather of deep-learning frameworks. It is developed in Python and is a library that excels at handling multi-dimensional arrays, much like NumPy in that respect. Combined with other deep-learning libraries, it is well suited to data exploration, and it was designed to carry out the computations of the large-scale neural-network algorithms used in deep learning. In fact, it is better understood as a compiler for mathematical expressions: you define the result you want in a symbolic language, and the framework compiles your program to run efficiently on a GPU or CPU (a tiny example of this workflow follows this excerpt). Its functionality is very similar to the later TensorFlow, so the two are often compared. Both are fairly low level, and Theano in particular feels more like a research platform than a deep-learning library: you have to do a lot of work from the ground up to create the model you need. For example, Theano has no built-in neural-network layers. And because it does not support multiple GPUs or horizontal scaling, Theano has begun to be forgotten amid the TensorFlow boom (the two target the same field).
TensorFlow
TensorFlow was open-sourced by Google, and with Google behind it, TensorFlow has always been well known in the deep-learning field. TensorFlow is an open-source software library for numerical computation using data-flow graphs. It offers both Python and C++ interfaces. TensorFlow supports distributed computation, and its flexible architecture lets you run computations on a variety of platforms, for example one or more CPUs (or GPUs) in a desktop computer
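A tiny sketch of the "define symbolically, then compile" workflow described above (standard Theano API, toy expression):

import theano
import theano.tensor as T

x = T.dscalar('x')
y = T.dscalar('y')
z = x ** 2 + y                    # a symbolic expression; nothing is computed yet

f = theano.function([x, y], z)    # Theano compiles the graph for CPU (or GPU)
print(f(3.0, 4.0))                # 13.0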

Broadcasting for subtensor created from matrix (Theano)

余生长醉 submitted on 2019-12-06 11:07:34
Question: I want to create two subtensors from a matrix, using indices to select the respective rows. One subtensor has several rows, the other just one, which should be broadcast to allow for element-wise addition. My question is: how do I indicate that I want to allow broadcasting on the specific dimension of the subtensor resulting from the given indices ( subtensorRight in the example below)? Here is an example showing what I want to do:
import theano
import numpy as np
import theano.tensor as T
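The excerpt is cut off above. As a sketch of one way this is commonly handled (assuming idx_left selects several rows and idx_right exactly one; names other than subtensorRight are hypothetical), T.addbroadcast can mark the singleton dimension of the one-row subtensor as broadcastable:

import numpy as np
import theano
import theano.tensor as T

M = T.dmatrix('M')
idx_left = T.lvector('idx_left')     # indices of several rows
idx_right = T.lvector('idx_right')   # index of exactly one row

subtensorLeft = M[idx_left]                       # shape (k, n)
subtensorRight = T.addbroadcast(M[idx_right], 0)  # shape (1, n), dim 0 now broadcastable

result = subtensorLeft + subtensorRight           # element-wise addition with broadcasting
f = theano.function([M, idx_left, idx_right], result)

M_val = np.arange(12.).reshape(4, 3)
print(f(M_val, [0, 1, 2], [3]))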

Why does the floatX flag impact whether the GPU is used in Theano?

会有一股神秘感。 submitted on 2019-12-06 10:04:39
I am testing Theano with the GPU using the script provided in the tutorial for that purpose:
# Start gpu_test.py
# From http://deeplearning.net/software/theano/tutorial/using_gpu.html#using-gpu
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters): r = f()
t1 = time.time()
print("Looping %d
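The excerpt above is truncated. As a hedged note on why floatX matters here: Theano's old GPU backend only computed in float32, so with floatX=float64 the shared variable stays on the CPU and the compiled graph contains plain Elemwise ops instead of GPU ops. A minimal check following the same tutorial pattern (the Gpu-prefix test is a heuristic):

import numpy
import theano.tensor as T
from theano import function, shared, config

x = shared(numpy.asarray(numpy.random.rand(1000), config.floatX))
f = function([], T.exp(x))

# Inspect the compiled graph: GPU-resident computation shows up as Gpu* ops
op_names = [type(node.op).__name__ for node in f.maker.fgraph.toposort()]
print(op_names)
print('Used the GPU' if any(name.startswith('Gpu') for name in op_names) else 'Used the CPU')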

How to code adagrad in python theano

一笑奈何 submitted on 2019-12-06 10:02:15
To simplify the problem, say that when a dimension (or a feature) has already been updated n times, the next time I see that feature I want to set the learning rate to 1/n. I came up with this code:
def test_adagrad():
    embedding = theano.shared(value=np.random.randn(20,10), borrow=True)
    times = theano.shared(value=np.ones((20,1)))
    lr = T.dscalar()
    index_a = T.lvector()
    hist = times[index_a]
    cost = T.sum(theano.sparse_grad(embedding[index_a]))
    gradients = T.grad(cost, embedding)
    updates = [(embedding, embedding+lr*(1.0/hist)*gradients)]
    ### Here should be some codes to update also times which are
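The snippet above is cut off. For reference, a minimal standard Adagrad sketch in Theano (toy cost and hypothetical names, not the poster's exact per-feature 1/n scheme): accumulate squared gradients per parameter element and divide the learning rate by their square root:

import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.random.randn(20, 10).astype(theano.config.floatX), name='W')
acc = theano.shared(np.zeros((20, 10), dtype=theano.config.floatX), name='acc')

x = T.matrix('x')
cost = T.sum(T.sqr(x.dot(W)))            # toy cost for illustration
grad = T.grad(cost, W)

lr = np.asarray(0.1, dtype=theano.config.floatX)
eps = np.asarray(1e-6, dtype=theano.config.floatX)

new_acc = acc + T.sqr(grad)              # running sum of squared gradients
updates = [(acc, new_acc),
           (W, W - lr * grad / T.sqrt(new_acc + eps))]

train = theano.function([x], cost, updates=updates)
print(train(np.random.randn(5, 20).astype(theano.config.floatX)))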

Cannot update a subset of a shared tensor variable after a cast

做~自己de王妃 submitted on 2019-12-06 08:25:52
I have the following code:
import theano.tensor as T
Words = theano.shared(value = U, name = 'Words')
zero_vec_tensor = T.vector()
zero_vec = np.zeros(img_w, dtype = theano.config.floatX)
set_zero = theano.function([zero_vec_tensor], updates=[(Words, T.set_subtensor(Words[0,:], zero_vec_tensor))])
This compiles fine (where U is a numpy array of dtype float64 ). To prevent future type errors I want to cast my shared tensor Words into float32 (or theano.config.floatX, which is equivalent as I have set floatX to float32 in the config file). So I add Words = T.cast(Words, dtype = theano.config
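The post is cut off above. A sketch of one common workaround (U and img_w are stood in by hypothetical values): cast the NumPy array to floatX before wrapping it in the shared variable, so the update target stays a true shared variable rather than the symbolic result of T.cast:

import numpy as np
import theano
import theano.tensor as T

U = np.random.randn(100, 300)        # hypothetical stand-in for the post's float64 U
img_w = 300

# Cast once, at creation time, instead of calling T.cast on the shared variable
Words = theano.shared(np.asarray(U, dtype=theano.config.floatX), name='Words')

zero_vec_tensor = T.vector()
set_zero = theano.function(
    [zero_vec_tensor],
    updates=[(Words, T.set_subtensor(Words[0, :], zero_vec_tensor))])

set_zero(np.zeros(img_w, dtype=theano.config.floatX))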

Getting a particular version of a branch

最后都变了- submitted on 2019-12-06 07:36:47
Question: Is there a way to download a particular version of a branch? In particular, I'd like to do a git clone of https://github.com/Theano/Theano now, and save a set of instructions on how to get the exact same version from GitHub, regardless of future commits.
Answer 1: UPDATE There is an easier way to do this on GitHub if no further changes are expected. On GitHub, you can navigate to the 'tree view' of a repository from your browser via the URL https://github.com/<repo_name>/tree/<commit_sha> Clicking
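The answer above is truncated. As a sketch of the usual command-line route for pinning an exact version (wrapped in Python here; <commit_sha> is a placeholder for the SHA you record): clone the repository, then check out that specific commit:

import subprocess

REPO = "https://github.com/Theano/Theano"
COMMIT = "<commit_sha>"   # placeholder: record it with `git rev-parse HEAD` after cloning

subprocess.run(["git", "clone", REPO, "Theano"], check=True)
subprocess.run(["git", "checkout", COMMIT], cwd="Theano", check=True)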

Update keras.json on Google Colab

你离开我真会死。 submitted on 2019-12-06 06:38:40
I tried to update keras.json on Google Colab, but it's throwing an UnsupportedOperation error. Is there any other alternative to achieve this? You're opening the file as read-only -- pass 'w' to the open call on line 9. ( docs ) Source: https://stackoverflow.com/questions/50083075/update-keras-json-on-google-colab
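A minimal sketch of the fix suggested in the answer (assuming the default config path ~/.keras/keras.json): read the config, modify it, then reopen the file with 'w' so the write is allowed:

import json
import os

path = os.path.expanduser("~/.keras/keras.json")

with open(path) as f:            # read-only is fine for loading
    cfg = json.load(f)

cfg["backend"] = "theano"        # e.g. switch the backend

with open(path, "w") as f:       # 'w' (write mode) avoids the UnsupportedOperation error
    json.dump(cfg, f, indent=4)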

Indexing tensor with index matrix in theano?

独自空忆成欢 submitted on 2019-12-06 06:17:31
I have a Theano tensor A such that A.shape = (40, 20, 5) and a Theano matrix B such that B.shape = (40, 20). Is there a one-line operation I can perform to get a matrix C, where C.shape = (40, 20) and C(i,j) = A[i, j, B[i,j]], with Theano syntax? Essentially, I want to use B as an indexing matrix; what is the most efficient/elegant way to do this using Theano? You can do the following in numpy:
import numpy as np
A = np.arange(4 * 2 * 5).reshape(4, 2, 5)
B = np.arange(4 * 2).reshape(4, 2) % 5
C = A[np.arange(A.shape[0])[:, np.newaxis], np.arange(A.shape[1]), B]
So you can do the same thing in
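The excerpt stops mid-sentence. As a sketch of how the same fancy-indexing pattern can be written in Theano (symbolic variables with the shapes described above; flattening the leading axes keeps the indexing simple):

import numpy as np
import theano
import theano.tensor as T

A = T.tensor3('A')      # shape (40, 20, 5)
B = T.lmatrix('B')      # shape (40, 20), integer indices into the last axis of A

# Flatten the first two axes, pick one element per row, then restore B's shape
A_flat = A.reshape((A.shape[0] * A.shape[1], A.shape[2]))
C = A_flat[T.arange(A_flat.shape[0]), B.flatten()].reshape(B.shape)

f = theano.function([A, B], C)
A_val = np.arange(40 * 20 * 5).reshape(40, 20, 5).astype(theano.config.floatX)
B_val = (np.arange(40 * 20).reshape(40, 20) % 5).astype('int64')
print(f(A_val, B_val).shape)    # (40, 20)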