MXNet

Why is my CPU doing matrix operations faster than the GPU?

江枫思渺然 submitted on 2021-02-10 15:38:32
Question: When I tried to verify that the GPU does matrix operations faster than the CPU, I got unexpected results. The CPU performs better than the GPU in my experiment, which confuses me. I used the CPU and the GPU to do matrix multiplication respectively. The programming environment is MXNet with cuda-10.1.

With GPU:

import mxnet as mx
from mxnet import nd
x = nd.random.normal(shape=(100000,100000), ctx=mx.gpu())
y = nd.random.normal(shape=(100000,100000), ctx=mx.gpu())
%timeit nd.dot(x,y)
50.8 µs ± 1.76 µs per
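A likely explanation (not stated in the truncated excerpt) is that MXNet executes operators asynchronously, so %timeit only measures how long it takes to enqueue nd.dot, not how long the GPU actually computes. A minimal sketch of a fairer comparison, assuming a smaller matrix size that fits in memory and using nd.waitall() to block until the work finishes:

import time
import mxnet as mx
from mxnet import nd

def bench(ctx, n=2000):
    # n=2000 is an arbitrary illustrative size; (100000, 100000) would need ~40 GB per matrix
    x = nd.random.normal(shape=(n, n), ctx=ctx)
    y = nd.random.normal(shape=(n, n), ctx=ctx)
    nd.waitall()                  # make sure the inputs are materialized before timing
    start = time.time()
    z = nd.dot(x, y)
    nd.waitall()                  # block until the asynchronous dot has actually completed
    return time.time() - start

print('cpu:', bench(mx.cpu()))
print('gpu:', bench(mx.gpu()))    # requires a GPU build of MXNet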

MXNet package installation in R

二次信任 submitted on 2021-02-07 12:50:01
Question: I run into plenty of trouble when trying to install the MXNet package in R. I am using R 3.4.0 on Windows 10, with an Intel i3 CPU, 64-bit x64-based processor. I get prompted:

install.packages("mxnet")
Warning in install.packages :
  cannot open URL 'http://www.stats.ox.ac.uk/pub/RWin/src/contrib/PACKAGES.rds': HTTP status was '404 Not Found'
Installing package into ‘C:/Users/los40/OneDrive/Documentos/R/win-library/3.4’
(as ‘lib’ is unspecified)
Warning in install.packages :
  package ‘mxnet

Day 31 of failing to install mxnet!!!

蓝咒 submitted on 2021-02-01 09:45:55
Installing the deep learning framework MXNet. Putting the result up front: I finally succeeded.

C:\ProgramData\Anaconda3>python.exe -c "import mxnet as mx; print(mx.nd.zeros((1,2), ctx=mx.gpu()) + 1)"
[[1. 1.]]
<NDArray 1x2 @gpu(0)>

Following various hints, I first checked in the NVIDIA control panel which CUDA version I needed; it showed I could install CUDA 10.2. I went to the official site, and after failing for N days I learned to search Baidu for mirrors hosted inside China.... Once CUDA 10.2 was installed (I already had Anaconda), all that was left was installing MXNet, so I followed the tutorial:

D://PortableProgram/Anaconda3/python.exe -m pip install --pre mxnet-cu102 -f https://dist.mxnet.io/python/cu102

Sure enough, it kept failing for many more days. I tried once a day and failed once a day; the internet is full of conflicting advice, and trying a new suggestion every day did wonders for my mood....

Solution: Today I finally decided to give up on the supposedly best-matched version, uninstalled CUDA 10.2, and reinstalled CUDA 10.1. Then I went back to the tutorial and installed again: D://
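For reference, a minimal verification sketch (my own addition, not part of the original post) assuming the CUDA 10.1 build of MXNet installed successfully:

import mxnet as mx

print(mx.__version__)                            # should report the cu101 build that was installed
print(mx.context.num_gpus())                     # > 0 means MXNet can see the GPU
print(mx.nd.zeros((1, 2), ctx=mx.gpu()) + 1)     # same smoke test the post uses at the top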

Save/Load MXNet model parameters using NumPy

不问归期 submitted on 2021-01-28 09:36:06
Question: How can I save the parameters of an MXNet model into a NumPy file (.npy)? After doing so, how can I load these parameters from the .npy file back into my model? Here is a minimal example that builds an MXNet model using the MXNet API.

import mxnet as mx
from mxnet import gluon
from mxnet.gluon.model_zoo import vision
import numpy as np

num_gpus = 0
ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
resnet = vision.resnet50_v2(pretrained=True, ctx=ctx)
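One way to round-trip the parameters through a single .npy file is sketched below. This is my own illustration, not code from the question; the file name and the use of a pickled dict are assumptions:

# Save: dump every parameter of the Gluon block into one pickled dict inside a .npy file
params = {name: p.data().asnumpy() for name, p in resnet.collect_params().items()}
np.save('resnet50_v2_params.npy', params, allow_pickle=True)

# Load: read the dict back and copy each array into the corresponding parameter
loaded = np.load('resnet50_v2_params.npy', allow_pickle=True).item()
for name, p in resnet.collect_params().items():
    p.set_data(mx.nd.array(loaded[name]))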

How to install “mxnet” package in R 4.0.2

ⅰ亾dé卋堺 submitted on 2021-01-27 20:54:10
Question: Good afternoon. I have recently run into a problem installing the "mxnet" package. I have tried several variants of code, but none of them actually installs the package.

1.
cran <- getOption("repos")
cran["dmlc"] <- "https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/15"
options(repos = cran)
install.packages("mxnet")
library(mxnet)

And I get an error like this:
Error: package or namespace load failed for ‘mxnet’:
 package ‘mxnet’ was installed before R 4.0.0: please

R: namespace load failed for ‘mxnet’: package ‘mxnet’ was installed before R 4.0.0: please re-install it

不羁的心 submitted on 2021-01-13 10:53:23
Question: I am trying to install the R package "mxnet". However, this package does not seem to be available on CRAN. I found similar posts on Stack Overflow where similar problems were encountered: How to install "mxnet" package in R 4.0.2. I tried to install this package three different ways, but all of them failed:

# First way:
install.packages("https://s3.ca-central-1.amazonaws.com/jeremiedb/share/mxnet/CPU/3.6/mxnet.zip", repos = NULL)
Installing package into ‘C:/Users/me/Documents/R/win-library/4.0’

MXNet (Chinese edition) study notes: implementing softmax by hand, end-of-chapter exercises

北慕城南 submitted on 2021-01-03 14:36:54
In this section we implement the softmax function directly from its mathematical definition. What problems could this cause? Answer: see this Zhihu answer, https://zhuanlan.zhihu.com/p/27223959. When the argument of exp() is too large, the computation overflows. So we can add a constant F to the argument of exp(x), with F = -max(a1, a2, ..., an), which keeps the inputs to the exponential near (at most) 0.

The cross_entropy function in this section is implemented directly from the mathematical definition of the cross-entropy loss. What problems could this implementation cause? (Hint: think about the domain of the logarithm.) Answer: the domain of the logarithm is (0, +∞); when the predicted probability gets arbitrarily close to 0, the logarithm blows up and the result can become nan.

Source: oschina. Link: https://my.oschina.net/u/3127014/blog/2875071
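A minimal sketch of both fixes in MXNet's nd API. This is my own illustration; the epsilon value and function names are assumptions, not from the original post:

from mxnet import nd

def stable_softmax(x):
    # Subtract the row-wise maximum so every exponent argument is <= 0, preventing overflow.
    shifted = x - x.max(axis=1, keepdims=True)
    exp = shifted.exp()
    return exp / exp.sum(axis=1, keepdims=True)

def safe_cross_entropy(y_hat, y, eps=1e-12):
    # Clip the picked probabilities away from 0 so log() never leaves its domain (0, +inf).
    picked = nd.pick(y_hat, y)
    return -picked.clip(eps, 1.0).log()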

Softmax classification from scratch and with mxnet

谁说胖子不能爱 submitted on 2020-12-31 04:35:18
1. Softmax implemented from scratch

from mxnet.gluon import data as gdata
from sklearn import datasets
from mxnet import nd, autograd

# load the dataset
digits = datasets.load_digits()
features, labels = nd.array(digits['data']), nd.array(digits['target'])
print(features.shape, labels.shape)
labels_onehot = nd.one_hot(labels, 10)
print(labels_onehot.shape)

(1797, 64) (1797,)
(1797, 10)

class softmaxClassifier:
    def __init__(self, inputs, outputs):
        self.inputs = inputs
        self.outputs = outputs
        self.weight = nd.random.normal(scale=0.01, shape=(inputs, outputs))
        self.bias = nd.zeros(shape=(1, outputs))
        self.weight.attach_grad()
        self.bias
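The excerpt breaks off inside __init__. For context only, here is a hypothetical sketch (my own, not the author's truncated code) of how the forward pass and a manual training step for this kind of from-scratch softmax classifier are commonly written; the function names, learning rate, and the assumption that attach_grad() has been called on both weight and bias are mine:

def softmax(z):
    # shift by the row-wise maximum for numerical stability before exponentiating
    exp = (z - z.max(axis=1, keepdims=True)).exp()
    return exp / exp.sum(axis=1, keepdims=True)

def forward(X, weight, bias):
    return softmax(nd.dot(X, weight) + bias)

def train_step(X, y_onehot, weight, bias, lr=0.1):
    with autograd.record():
        y_hat = forward(X, weight, bias)
        # cross-entropy averaged over the batch; the small epsilon keeps log() away from 0
        loss = -((y_onehot * (y_hat + 1e-12).log()).sum(axis=1)).mean()
    loss.backward()
    # manual SGD update, written with [:] so the arrays (and their attached grads) are updated in place
    weight[:] = weight - lr * weight.grad
    bias[:] = bias - lr * bias.grad
    return loss.asscalar()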

(cupy, minpy, mars, numba) Accelerating matrix operations with GPUs, parallel computing, and compiler optimization

人盡茶涼 submitted on 2020-12-19 11:19:38
Accelerating numpy matrix operations with GPUs, parallel computing, and compiler optimization (collected material)

v1 (focused on accelerating numpy operations), 2020/12/16

Summary:
GPU-based numpy acceleration: cupy and minpy
Compilation-based numpy acceleration: numba
Parallel-computing-based numpy acceleration: Mars
Both parallel and GPU: Mars

Contents: accelerating numpy matrix operations with GPUs, parallel computing, and compiler optimization (collected material); cupy; minpy and MXNet; mars; jit and numba; RAPIDS

numpy learning links:
https://numpy.net/
https://www.numpy.org.cn/
http://cs231n.stanford.edu/syllabus.html

cupy

cupy can use the GPU to accelerate Numpy. cupy documents: https://docs.cupy.dev/en/stable/

If CUDA is already installed, installing cupy only requires (make sure pip is updated to the latest version before installing):

$ pip install cupy

You can also install it the following way:

# Check which CUDA version you have installed, then download and install the matching wheel directly; in practice this is faster.
# Then run the install command:
# CUDA 8.0
pip install cupy-cuda80
# CUDA 9.0
pip
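A minimal sketch of what the cupy speed-up looks like in practice. This is my own example; the matrix size is arbitrary, and the explicit device synchronization is needed because cupy launches GPU kernels asynchronously:

import time
import numpy as np
import cupy as cp

n = 4000
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

start = time.time()
np.matmul(a_cpu, b_cpu)
print('numpy (CPU):', time.time() - start)

a_gpu = cp.asarray(a_cpu)           # copy the arrays to GPU memory
b_gpu = cp.asarray(b_cpu)
start = time.time()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Device(0).synchronize()     # wait for the asynchronous GPU kernel before stopping the timer
print('cupy (GPU):', time.time() - start)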

[Tensorflow] TensorFlow: Hello World! (1)

ぃ、小莉子 submitted on 2020-12-04 08:26:59
Wow! I'm really happy today: in 30 days, 19 articles, 2459 readers, and 5313 total reads! I also got the original-content badge today. Besides being excited, I just want to thank everyone for your support! I will keep working hard, and we will improve together! (bows)

***** ***** divider ***** *****

Before learning TensorFlow, let me first introduce a few other libraries: Caffe, CNTK, Keras, Theano, Torch, and MXNet. Broadly speaking, Caffe and CNTK define models through configuration files, while Torch, Theano, Keras, and TensorFlow define models through a programming language. Torch is based on Lua, a fairly niche language, although it now also has a Python version. The Python-based ones are Theano, TensorFlow, and Keras. Theano is the one most similar to TensorFlow; you could say TensorFlow was inspired by Theano, as both are built around the idea of tensors. Theano, however, is a low-level library developed by the LISA lab for academic purposes, whereas TensorFlow is backed by Google. Their main practical difference is that TensorFlow supports distributed computation. Some of the links below may not open; that does not mean the links are broken, it just means you need a way around the firewall. I'm sure some people won't know what to do about that, just like me when I first heard about it