google-colaboratory

Setting environment variables in Google Colab

Submitted by ╄→гoц情女王★ on 2020-06-10 03:37:27
Question: I'm trying to use the Kaggle CLI, and instead of using kaggle.json for authentication, I'm setting the credentials through environment variables:

    !pip install --upgrade kaggle
    !export KAGGLE_USERNAME=abcdefgh
    !export KAGGLE_KEY=abcdefgh
    !export -p

However, the printed list of environment variables doesn't contain the ones I set above:

    declare -x CLICOLOR="1"
    declare -x CLOUDSDK_CONFIG="/content/.config"
    declare -x COLAB_GPU="1"
    declare -x CUDA_PKG_VERSION="9-2=9.2.148-1" …
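Each `!` line in Colab spawns a fresh shell, so an `export` there never reaches the notebook's Python process or any later cell. A minimal sketch of the usual workaround, setting the variables from Python (the credential values here are placeholders):

```python
import os

# Set the Kaggle credentials in the notebook process itself; child
# processes started afterwards (e.g. `!kaggle datasets list`) inherit them.
os.environ["KAGGLE_USERNAME"] = "abcdefgh"   # placeholder value
os.environ["KAGGLE_KEY"] = "abcdefgh"        # placeholder value
```

Colab's `%env KAGGLE_USERNAME=abcdefgh` line magic achieves the same thing.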

Can't load load my dataset to train my model on Google Colab

Submitted by 做~自己de王妃 on 2020-06-09 05:23:28
Question: I am currently facing the problem of dealing with a large dataset. I cannot download the dataset directly into Google Colab due to the limited disk space Colab provides (37 GB). I have done some research, and it seems the available space depends on the GPU we get assigned; for some people the available disk space can be larger. So my question is: can I download the dataset to a server such as Google Cloud and then load it from that server? The dataset is roughly 20 GB; the reason why 37 GB is not …
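One common workaround is to host the data in remote storage (for example a Google Cloud Storage bucket) and stream it in chunks, rather than downloading the whole archive onto Colab's disk first. A hypothetical sketch of the chunked-reading part; the remote file object could come from, say, `requests.get(url, stream=True).raw`, but an in-memory stand-in is used here:

```python
import io

def iter_chunks(fileobj, chunk_size=1 << 20):
    """Yield successive chunks from a file-like object so the
    full dataset never has to sit on disk or in RAM at once."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example with an in-memory stand-in for a remote file:
data = io.BytesIO(b"x" * 3_000_000)
total = sum(len(c) for c in iter_chunks(data))
```

Each chunk can be written to disk, decompressed, or fed to preprocessing as it arrives, keeping peak disk usage near one chunk instead of the full 20 GB.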

Persisting data in Google Colaboratory

Submitted by 老子叫甜甜 on 2020-06-07 13:50:11
Question: Has anyone figured out a way to keep files persisted across sessions in Google's newly open-sourced Colaboratory? Using the sample notebooks, I'm successfully authenticating and transferring CSV files from my Google Drive instance, and I have stashed them in /tmp, my ~, and ~/datalab. Pandas can read them just fine off disk too. But once the session times out, it looks like the whole filesystem is wiped and a new VM is spun up, without the downloaded files. I guess this isn't surprising given …
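The now-standard answer is to mount Google Drive and read/write under the mount point, which survives VM recycling. A sketch; `google.colab` exists only inside Colab, hence the guard, and the helper name `persistent_path` is illustrative:

```python
import os

DRIVE_ROOT = "/content/drive/MyDrive"

def persistent_path(name):
    """Map a file name to a location that outlives the Colab VM."""
    return os.path.join(DRIVE_ROOT, name)

try:
    from google.colab import drive  # available only inside Colab
    drive.mount("/content/drive")
except ImportError:
    pass  # running outside Colab; nothing to mount
```

After mounting, e.g. `df.to_csv(persistent_path("results.csv"))` writes to Drive, and the file is still there when the next session mounts again.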

How to delete permanently from mounted Drive folder?

Submitted by 岁酱吖の on 2020-06-01 07:45:48
Question: I wrote a script to upload my models and training examples to Google Drive after every iteration, in case of crashes or anything else that stops the notebook from running. It looks something like this:

    drive_path = 'drive/My Drive/Colab Notebooks/models/'
    if path.exists(drive_path):
        shutil.rmtree(drive_path)
    shutil.copytree('models', drive_path)

Whenever I check my Google Drive, a few GB are taken up by dozens of deleted models folders in the Trash, which I have to delete manually. The only …
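Deleting under a mounted Drive folder only moves files to Drive's Trash, which is why the deleted copies pile up. One way to avoid generating trash at all is to overwrite in place instead of delete-and-recreate, using `shutil.copytree(..., dirs_exist_ok=True)` (Python 3.8+). A sketch with temporary directories standing in for the real Drive paths:

```python
import pathlib
import shutil
import tempfile

# Stand-ins for 'models' and the mounted Drive destination.
src = pathlib.Path(tempfile.mkdtemp()) / "models"
dst = pathlib.Path(tempfile.mkdtemp()) / "drive_models"
src.mkdir()
(src / "model.bin").write_text("weights-v2")

# Overwriting in place never deletes the destination folder,
# so nothing lands in Drive's Trash.
shutil.copytree(src, dst, dirs_exist_ok=True)
```

Files removed from `src` between iterations would still linger in `dst`, so this suits checkpoints that keep the same names from one iteration to the next.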

cv2_imshow() doesn't render video file in Google Colab

Submitted by 大城市里の小女人 on 2020-05-31 07:32:18
Question: I am attempting to migrate some OpenCV image analysis (using Python 3) from a local Jupyter notebook to Google Colab. My original Jupyter notebook code works fine, and the video renders fine in its own window (see a subset of the code below). This code uses cv2.imshow() to render the video. When using the same cv2.imshow() code in Colab, the video doesn't render. Based on a suggestion, I switched to using cv2_imshow() in Colab. However, this change leads to a vertical series of 470 …
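`cv2_imshow` draws each frame as a separate output image, which is why the frames stack vertically instead of animating. A common pattern is to clear the cell output before each frame so they render in place. A sketch with the display and clear functions injected; in Colab these would be `google.colab.patches.cv2_imshow` and `IPython.display.clear_output`:

```python
def play_frames(frames, display_fn, clear_fn):
    """Render frames in place: clear the previous output, then draw."""
    shown = 0
    for frame in frames:
        clear_fn(wait=True)   # wait=True reduces flicker between frames
        display_fn(frame)
        shown += 1
    return shown

# In Colab:
#   from google.colab.patches import cv2_imshow
#   from IPython.display import clear_output
#   play_frames(frame_iterator, cv2_imshow, clear_output)
```

This is not real-time playback (each frame is a full image transfer to the browser), but it turns the vertical stack of images into a single updating one.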

Python 3.5 in google colab

Submitted by 别说谁变了你拦得住时间么 on 2020-05-30 03:42:25
Question: I'm running Python code for deep learning in Google Colab. Python 3.5 is required for that code. How can I install Python 3.5 in Google Colab?

Answer 1: If you run !python3 --version, you can see Colab currently uses Python 3.6.7, which satisfies "Python version 3.5 or above". Alternatively, you can use a local runtime, which lets you use a different version of Python.

Answer 2: This worked for me: !apt-get install python3.5

Source: https://stackoverflow.com/questions/54994129/python-3-5-in-google-colab
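Installing the package alone does not make `python3` point at it; on Debian/Ubuntu-based images that switch is usually done with `update-alternatives`. A hedged sketch of the typical sequence, assuming the `python3.5` package is actually available in the image's apt sources (not guaranteed on current Colab images):

```shell
# Install Python 3.5 and register it as the python3 alternative
apt-get install -y python3.5
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 1
update-alternatives --set python3 /usr/bin/python3.5
python3 --version
```

Note that the notebook kernel itself keeps running the original interpreter; only `!python3 ...` shell invocations pick up the switched version.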

ValueError: Shapes (None, 50) and (None, 1) are incompatible in Tensorflow and Colab

Submitted by 时光总嘲笑我的痴心妄想 on 2020-05-29 10:35:08
Question: I am training a TensorFlow model with LSTMs for predictive maintenance. For each instance I create a (50, 4) matrix, where 50 is the length of the history sequence and 4 is the number of features per record; so for training I use e.g. a (55048, 50, 4) tensor and a (55048, 1) tensor as labels. When I train in Jupyter on my computer it works (very slowly, but it works), but on Colab I get this error:

    Training data shape is (55048, 50, 4)
    Labels shape is (55048, 1)
    WARNING:tensorflow:Layer …
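This error usually means the model is emitting one prediction per time step, shape (None, 50), while the labels supply one value per sequence, shape (None, 1). In Keras the typical fix is `return_sequences=False` on the last LSTM layer, so the Dense head sees only the final step. A NumPy sketch of the shape mismatch, with a reduced batch size and a hypothetical hidden size standing in for the real model:

```python
import numpy as np

batch, seq_len, n_features, units = 8, 50, 4, 16
x = np.random.rand(batch, seq_len, n_features)
y = np.random.rand(batch, 1)   # one label per sequence

# return_sequences=True keeps every time step: (batch, seq_len, units);
# a Dense(1) head on top then predicts per step -> (batch, seq_len),
# which cannot be compared against (batch, 1) labels.
per_step_preds = np.random.rand(batch, seq_len, units).mean(axis=-1)

# return_sequences=False keeps only the last step: (batch, units);
# Dense(1) on top predicts per sequence -> (batch, 1), matching y.
per_seq_preds = np.random.rand(batch, units).mean(axis=-1, keepdims=True)
```

The same code can behave differently across machines when TensorFlow versions differ and defaults or validation strictness changed between them.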

Is it possible to activate virtualenv in Google-colab? (/bin/sh: 1: source: not found)

Submitted by 荒凉一梦 on 2020-05-25 14:12:54
Question: I am trying to install Theano in Google Colab for testing. I have installed virtualenv and created an environment:

    !pip3 install virtualenv
    !virtualenv theanoEnv

But I am not able to activate the virtual environment, even when explicitly giving the location of the activate script:

    !source /content/theanoEnv/bin/activate theanoEnv

The error message is: /bin/sh: 1: source: not found. Is it even possible to do source /[SomeVirtualEnv]/bin/activate SomeVirtualEnv?

Answer 1: Short answer: I don't believe it is …
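Two things go wrong here: Colab's `!` lines run under /bin/sh (dash), which has no `source` builtin, and even a successful activation would die with its shell at the end of that single line. The practical workaround is to skip activation and call the environment's binaries directly, e.g. `!/content/theanoEnv/bin/pip install theano`. A sketch demonstrating the shell difference:

```python
import subprocess

# bash has the `source` builtin, so a `%%bash` cell or
# `!bash -c 'source ...'` avoids the "source: not found" error.
bash = subprocess.run(
    ["/bin/bash", "-c", "source /dev/null && echo ok"],
    capture_output=True, text=True,
)

# On Colab, /bin/sh is dash, which lacks `source`; this mirrors the
# original error there (it may succeed where sh is a link to bash).
sh = subprocess.run(
    ["/bin/sh", "-c", "source /dev/null"],
    capture_output=True, text=True,
)
```

Even with a working `source`, the activation only affects that one subprocess, which is why invoking the venv's `pip` and `python` by path is the reliable route.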
