google-colaboratory

Tensorflow-Keras reproducibility problem on Google Colab

我与影子孤独终老i submitted on 2020-07-22 05:59:13

Question: I have a simple piece of code to run on Google Colab (I use CPU mode):

    import numpy as np
    import pandas as pd

    ## LOAD DATASET
    datatrain = pd.read_csv("gdrive/My Drive/iris_train.csv").values
    xtrain = datatrain[:,:-1]
    ytrain = datatrain[:,-1]
    datatest = pd.read_csv("gdrive/My Drive/iris_test.csv").values
    xtest = datatest[:,:-1]
    ytest = datatest[:,-1]

    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Activation
    from tensorflow.keras.utils import to_categorical

    ## SET ALL SEED
    import os
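
For reference, the usual way to make a tf.keras run reproducible is to fix every random source before building the model. The sketch below is a generic pattern rather than the asker's code; the seed value is arbitrary and the call tf.random.set_seed assumes TF 2.x (TF 1.x used tf.set_random_seed instead):

    import os
    import random
    import numpy as np
    import tensorflow as tf

    SEED = 42  # arbitrary example value

    os.environ["PYTHONHASHSEED"] = str(SEED)  # commonly set for reproducibility
    random.seed(SEED)                         # Python's built-in RNG
    np.random.seed(SEED)                      # NumPy RNG
    tf.random.set_seed(SEED)                  # TensorFlow RNG (TF 2.x API)

Even with all seeds fixed, some ops are not bit-deterministic, so small run-to-run differences can remain.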

Using cv2.imshow() in google Colab

孤者浪人 submitted on 2020-07-20 04:22:14

Question: I am trying to conduct object detection on a video, reading it in with cap = cv2.VideoCapture("video3.mp4"); after the processing part I want to display the video with real-time object detection using:

    while True:
        ret, image_np = cap.read()
        # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
        image_np_expanded = np.expand_dims(image_np, axis=0)
        # Actual detection.
        output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
        #
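
cv2.imshow() cannot work on Colab because the runtime has no display server; Colab ships cv2_imshow in google.colab.patches as a replacement that renders frames inline. A minimal sketch (the video filename and per-frame loop are illustrative only, not the asker's detection pipeline):

    import cv2
    from google.colab.patches import cv2_imshow  # Colab's inline replacement for cv2.imshow

    cap = cv2.VideoCapture("video3.mp4")
    while True:
        ret, frame = cap.read()
        if not ret:          # stop at end of video or on a read error
            break
        # cv2.imshow() would try to open a GUI window, which the headless Colab VM cannot do;
        # cv2_imshow draws the frame into the notebook output cell instead.
        cv2_imshow(frame)
    cap.release()

Rendering every frame inline is slow, so for long videos it is common to display only every Nth frame or write the annotated frames to an output video instead.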

Read .mat file from github in Google colab

耗尽温柔 submitted on 2020-07-20 03:45:09

Question: I want to read a .mat file from a GitHub link (https://github.com/pranavn91/APS/blob/master/data.mat) and store it in a variable. I am using Python 3 and Google Colab. First method:

    !wget http://upscfever.com/upsc-fever/en/data/deeplearning2/images/data.mat -P drive/app
    f = h5py.File("drive/app/data.mat", "r")
    data = f.get('data/variable1')
    data = np.array(data)

Second method (raises FileNotFoundError, but the file URL is correct):

    url = 'https://github.com/pranavn91/APS/blob/master/data.mat'
    import scipy
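
One likely cause: a github.com/.../blob/... URL returns an HTML viewer page, not the file itself; the raw bytes are served from raw.githubusercontent.com. A sketch of how the file could be fetched and loaded (the raw URL is derived from the question's link; which variables the .mat actually contains is unknown here):

    import io
    import requests
    import scipy.io

    # github.com/.../blob/... serves an HTML page; the raw file lives here:
    raw_url = "https://raw.githubusercontent.com/pranavn91/APS/master/data.mat"

    resp = requests.get(raw_url)
    resp.raise_for_status()

    # scipy.io.loadmat handles MATLAB formats up to v7.2; a v7.3 file is HDF5-based
    # and would need h5py on a downloaded copy instead.
    mat = scipy.io.loadmat(io.BytesIO(resp.content))
    print(mat.keys())  # inspect the variable names actually stored in the file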

Google Colab: Why is CPU faster than TPU?

久未见 submitted on 2020-07-19 06:45:18

Question: I'm using a Google Colab TPU to train a simple Keras model. If I remove the distributed strategy and run the same program on the CPU, it is much faster than on the TPU. How is that possible?

    import timeit
    import os
    import tensorflow as tf
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    # Load Iris dataset
    x = load_iris().data
    y
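
The usual explanation is overhead: every TPU training step pays for host-to-TPU data transfer and XLA compilation, and on a dataset as small as Iris (150 rows) that fixed cost dwarfs the actual compute, so the CPU wins. The sketch below shows the standard Colab TPU setup of that era with a large global batch size to amortize the overhead; exact API names differ slightly across TF 2.x versions, and the model layers are illustrative only:

    import os
    import tensorflow as tf

    # Standard Colab TPU bootstrapping (COLAB_TPU_ADDR is set by the TPU runtime).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
        tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
            tf.keras.layers.Dense(3, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # A large global batch keeps all 8 TPU cores busy and amortizes per-step overhead;
    # with only 150 Iris samples there is simply not enough work for that to pay off.
    # model.fit(x_train, y_train, epochs=100, batch_size=128)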

conda environment in google colab

ぐ巨炮叔叔 submitted on 2020-07-17 09:58:25

Question: I am trying to create a conda environment in a Google Colab notebook. I successfully installed conda with the following commands:

    !wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
    !chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
    !bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local

The default Python used by the system is now Python 3.6.4 :: Anaconda, Inc. I am trying to create an environment in conda with conda env create -f environment.yml. Every package got successfully
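
A common stumbling block after this point is activation: in Colab each "!" command runs in its own subshell, so a conda activate in one cell does not carry over to later commands. A sketch of the usual workaround (the environment name "myenv" is a placeholder for whatever environment.yml defines, and the path assumes conda was installed to /usr/local as above):

    # Create the environment from the YAML file.
    !conda env create -f environment.yml

    # "conda activate myenv" in one "!" line does not persist into the next, so the
    # simplest option is to bypass activation and call the environment's interpreter
    # by its full path (conda installed to /usr/local puts envs under /usr/local/envs).
    !/usr/local/envs/myenv/bin/python -c "import sys; print(sys.executable)"
    !/usr/local/envs/myenv/bin/python my_script.py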

Get the path of the notebook on Google Colab

折月煮酒 submitted on 2020-07-15 09:36:39

Question: I'm trying to use hyperas (hyperparameter optimization for Keras) in a Google Colab notebook. I've installed hyperas successfully with !pip install hyperas, but there is a problem with the minimize function's notebook_name parameter, which is mandatory when you are running from a notebook. This parameter has to be filled with the path of the notebook, but in Colab I don't know how to get it.

Answer 1: You can copy the notebook's .ipynb file from Google Drive. Then hyperas can extract the info from it.

    # Install the
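
A sketch of the approach that answer describes: mount Drive, copy the notebook into the working directory, and point notebook_name at the copy. The notebook filename and Drive path here are placeholders, and the optim.minimize call is shown only to indicate where the parameter goes:

    from google.colab import drive
    drive.mount('/content/gdrive')

    # Copy the notebook into the working directory so hyperas can read its source.
    # "my_notebook.ipynb" and the Drive path are placeholders for the real names.
    !cp "/content/gdrive/My Drive/Colab Notebooks/my_notebook.ipynb" ./my_notebook.ipynb

    # Then pass the copied notebook's name to hyperas, e.g.:
    # best_run, best_model = optim.minimize(model=create_model, data=data,
    #                                       algo=tpe.suggest, max_evals=10,
    #                                       trials=Trials(),
    #                                       notebook_name='my_notebook')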

TF2: Compute gradients in keras callback in non-eager mode

纵然是瞬间 submitted on 2020-07-09 06:58:07

Question: TF version: 2.2.0-rc3 (in Colab). I am using the following code (from "tf.keras get computed gradient during training") in a callback to compute gradients for all parameters in a model.

    def on_train_begin(self, logs=None):
        # Functions return weights of each layer
        self.layerweights = []
        for lndx, l in enumerate(self.model.layers):
            if hasattr(l, 'kernel'):
                self.layerweights.append(l.kernel)

        input_tensors = [self.model.inputs[0],
                         self.model.sample_weights[0],
                         self.model.targets[0],
                         K.learning_phase
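
The snippet relies on TF 1.x-style attributes (model.sample_weights, model.targets, K.learning_phase) that no longer exist on TF 2.x Keras models, which is typically why such callbacks break on 2.2. A common TF 2 substitute is to recompute gradients on a reference batch with tf.GradientTape inside the callback; a hedged sketch follows (the callback class, its constructor arguments, and the logging choice are all illustrative, and the callback itself runs eagerly even when the training step is graph-compiled):

    import tensorflow as tf

    class GradientLogger(tf.keras.callbacks.Callback):
        """Recomputes gradients on a fixed reference batch at the end of each epoch."""

        def __init__(self, x_batch, y_batch, loss_fn):
            super().__init__()
            self.x_batch = x_batch      # inputs to probe gradients on
            self.y_batch = y_batch      # matching targets
            self.loss_fn = loss_fn      # e.g. tf.keras.losses.SparseCategoricalCrossentropy()

        def on_epoch_end(self, epoch, logs=None):
            with tf.GradientTape() as tape:
                preds = self.model(self.x_batch, training=True)
                loss = self.loss_fn(self.y_batch, preds)
            grads = tape.gradient(loss, self.model.trainable_weights)
            for weight, grad in zip(self.model.trainable_weights, grads):
                if grad is not None:
                    tf.print(weight.name, "mean |grad|:", tf.reduce_mean(tf.abs(grad)))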

How to downgrade to tensorflow-gpu version 1.12 in google colab

我的梦境 submitted on 2020-07-09 03:56:18

Question: I am running a GAN that is compatible only with an older version of tensorflow-gpu, so I need to downgrade tensorflow-gpu in Google Colab from 1.15 to 1.12. I tried the following commands, which were suggested in this thread:

    %tensorflow_version 1.x
    import tensorflow as tf
    print(tf.__version__)
    !nvcc --version

After the magic and the version check (which currently reports tensorflow version 1.15.2), I run the install below. After installing tensorflow==1.12.2 I restart the runtime as they
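
For reference, the pip-based downgrade usually attempted looks like the sketch below. It is not guaranteed to give a working GPU build: tensorflow-gpu 1.12 was compiled against CUDA 9.0, while the Colab image ships a newer CUDA/cuDNN, so the GPU kernels may fail to load even after a clean install:

    # Remove the preinstalled TensorFlow builds first, then pin the old GPU build.
    !pip uninstall -y tensorflow tensorflow-gpu
    !pip install tensorflow-gpu==1.12.2

    # Restart the runtime (Runtime -> Restart runtime), then verify in a new cell.
    # If Colab's CUDA does not match, the import itself may fail with a missing
    # libcublas/libcudart error rather than silently falling back to CPU.
    import tensorflow as tf
    print(tf.__version__)
    print(tf.test.is_gpu_available())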
