tensorflow

Deploy python app to Heroku “Slug Size too large”

Submitted by 杀马特。学长 韩版系。学妹 on 2020-12-29 13:20:53
Question: I'm trying to deploy a Streamlit app written in Python to Heroku. My whole directory is 4.73 MB, of which 4.68 MB is my ML model. My requirements.txt looks like this:

```
absl-py==0.9.0
altair==4.0.1
astor==0.8.1
attrs==19.3.0
backcall==0.1.0
base58==2.0.0
bleach==3.1.3
blinker==1.4
boto3==1.12.29
botocore==1.15.29
cachetools==4.0.0
certifi==2019.11.28
chardet==3.0.4
click==7.1.1
colorama==0.4.3
cycler==0.10.0
decorator==4.4.2
defusedxml==0.6.0
docutils==0.15.2
entrypoints==0.3
enum-compat==0.0.3
```
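The slug limit is usually blown by heavy dependencies rather than the project files themselves: assuming the (truncated) requirements list goes on to pin the full tensorflow wheel, that single package can push a slug past Heroku's 500 MB cap even when the app directory is under 5 MB. A hedged sketch of the usual trim, swapping in the CPU-only build (the version pin is illustrative):

```
# requirements.txt (excerpt) -- tensorflow-cpu is a much smaller PyPI package
# than the full tensorflow wheel; the exact pin shown here is illustrative
tensorflow-cpu==2.3.1
```

A .slugignore file at the repo root can also keep notebooks, tests, and raw data out of the slug.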

Purpose of using “with tf.Session()”?

Submitted by 删除回忆录丶 on 2020-12-29 12:23:53
Question: I am practicing the Keras method called concatenate, and the use of the with statement in this example got me thinking about the purpose of that statement. The example code looks like:

```python
import numpy as np
import keras.backend as K
import tensorflow as tf

t1 = K.variable(np.array([[[1, 2], [2, 3]], [[4, 4], [5, 3]]]))
t2 = K.variable(np.array([[[7, 4], [8, 4]], [[2, 10], [15, 11]]]))
d0 = K.concatenate([t1, t2], axis=-2)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
```
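The with statement here is Python's context-manager protocol applied to the TF 1.x Session object. A minimal sketch of what it buys you, using a trivial graph:

```python
# `with tf.Session() as sess:` guarantees sess.close() runs when the block
# exits, even if an exception is raised inside it, so the session's resources
# (graph handles, GPU memory) are released. The block is equivalent to:
import tensorflow as tf

a = tf.constant(2)

sess = tf.Session()
try:
    print(sess.run(a))
finally:
    sess.close()  # exactly what the `with` statement does for you
```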

Implementing custom loss function in keras with condition

Submitted by 杀马特。学长 韩版系。学妹 on 2020-12-29 10:45:44
Question: I need some help with a Keras loss function. I have been implementing a custom loss function in Keras with the TensorFlow backend. I have implemented the custom loss function in NumPy, but it would be great if it could be translated into a Keras loss function. The loss function takes a dataframe and a series of user ids. The Euclidean distance is counted as positive for rows with the same user_id and negative if the user_ids differ. The function returns the summed-up scalar distance over the dataframe.

```python
def custom_loss_numpy
```
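A hedged sketch of how the described rule could be expressed directly in Keras backend ops. This is not the asker's custom_loss_numpy; it assumes a pairwise reading of the rule, with y_true carrying the user ids as a column vector and y_pred carrying the row vectors:

```python
import keras.backend as K

def signed_distance_loss(y_true, y_pred):
    # Pairwise squared Euclidean distances between all rows of y_pred:
    # ||a - b||^2 = ||a||^2 - 2*a.b + ||b||^2
    sq_norms = K.sum(K.square(y_pred), axis=1, keepdims=True)
    d2 = sq_norms - 2.0 * K.dot(y_pred, K.transpose(y_pred)) + K.transpose(sq_norms)
    dist = K.sqrt(K.maximum(d2, K.epsilon()))
    # +1 where the two rows share a user_id, -1 where they differ.
    same = K.cast(K.equal(y_true, K.transpose(y_true)), K.floatx())
    sign = 2.0 * same - 1.0
    # Summed scalar distance over all pairs, as the question describes.
    return K.sum(sign * dist)
```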

Issue with add method in tensorflow : AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'

Submitted by 若如初见. on 2020-12-29 09:37:12
Question:

```python
import keras as K
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import set_random_seed

for hidden_neuron in hidden_neurons:
    model = Sequential()
    model.add(Dense(hidden_neuron, input_dim=61, activation='relu'))  # <- error here
```

I am getting the error at the model.add line, and I am not really sure what I am missing here.

```
Traceback (most recent call last):
  File "PycharmProjects/HW2/venv/bin/hw3q4.py", line 46, in
    model.add(Dense(hidden_neuron, input_dim=61, activation='relu'))
  File "
```
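This AttributeError usually signals a version mismatch between the standalone keras package and the installed TensorFlow (old Keras layers check against ops._TensorLike, which TF 2.x removed). A hedged sketch of the common fix, importing everything from tf.keras instead; only input_dim=61 comes from the question, the other values are illustrative:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

tf.random.set_seed(42)  # TF 2.x replacement for `from tensorflow import set_random_seed`

hidden_neurons = [8, 16, 32]  # illustrative; the excerpt does not show these values
for hidden_neuron in hidden_neurons:
    model = Sequential()
    model.add(Dense(hidden_neuron, input_dim=61, activation='relu'))
```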

Parallelizing model predictions in keras using multiprocessing for python

Submitted by 为君一笑 on 2020-12-29 07:47:43
Question: I'm trying to perform model predictions in parallel using the model.predict command provided by Keras. I am using TensorFlow 1.14.0 with Python 2.7. I have 5 model (.h5) files and would like the predict command to run in parallel. I'm using a multiprocessing pool to map the model filenames to the prediction function across multiple processes, as shown below:

```python
import matplotlib as plt
import numpy as np
import cv2
from multiprocessing import Pool

pool = Pool()
```
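A hedged sketch of a pattern that tends to work here (assumed, not taken from the question): have each worker load its own model, since a Keras/TensorFlow session created in the parent does not fork cleanly into child processes. The filenames and input shape below are hypothetical:

```python
import numpy as np
from multiprocessing import Pool

def predict_one(model_path):
    # Import and load inside the worker so each process builds its own
    # TensorFlow graph and session instead of inheriting a forked one.
    from keras.models import load_model
    model = load_model(model_path)
    x = np.zeros((1, 224, 224, 3), dtype=np.float32)  # hypothetical input
    return model.predict(x)

if __name__ == '__main__':
    model_files = ['m1.h5', 'm2.h5', 'm3.h5', 'm4.h5', 'm5.h5']  # hypothetical names
    pool = Pool(processes=5)
    results = pool.map(predict_one, model_files)
    pool.close()
    pool.join()
```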

GPU Sync Failed While using tensorflow

Submitted by 半世苍凉 on 2020-12-29 07:26:51
Question: I'm trying to run this simple code to test TensorFlow:

```python
from __future__ import print_function
import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as sess:
    print("a=2, b=3")
    print("Addition with constants: %i" % sess.run(a+b))
```

But, weirdly, I am getting a "GPU sync failed" error. Traceback:

```
runfile('D:/tf_examples-master/untitled3.py', wdir='D:/tf_examples-master')
a=2, b=3
Traceback (most recent call last):
  File "<ipython-input-5-d4753a508b93>", line 1, in <module>
    runfile('D:
```
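A "GPU sync failed" on the very first sess.run typically points at the GPU itself (memory already held by another process, or a CUDA/driver mismatch) rather than at the graph. A hedged sketch of the most common mitigation in TF 1.x, letting the session claim GPU memory on demand:

```python
import tensorflow as tf

# Grow GPU memory as needed instead of grabbing it all up front; this often
# clears the error when another process already holds most of the GPU.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

a = tf.constant(2)
b = tf.constant(3)
with tf.Session(config=config) as sess:
    print("Addition with constants: %i" % sess.run(a + b))
```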

Tensorflow: How to Pool over Depth?

Submitted by 南笙酒味 on 2020-12-29 06:25:32
Question: I have the following parameters defined for doing a max pool over the depth of the image (RGB) for compression before the dense layer and readout, and I am failing with an error that I cannot pool over depth and everything else:

```python
sunset_poolmax_1x1x3_div_2x2x3_params = \
    {'pool_function': tf.nn.max_pool,
     'ksize': [1, 1, 1, 3],
     'strides': [1, 1, 1, 3],
     'padding': 'SAME'}
```

I changed the strides to [1,1,1,3] so that depth is the only dimension reduced by the pool, but it still doesn't work. I can't get
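In TF 1.x, tf.nn.max_pool does support a depth-only window, but under documented restrictions: the depth window must evenly divide the input channels, the spatial ksize/strides must stay at 1, and 'SAME' padding is rejected for depthwise pooling. A hedged sketch with the padding switched to 'VALID' (the input shape is illustrative):

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # illustrative shape
pooled = tf.nn.max_pool(x,
                        ksize=[1, 1, 1, 3],    # window spans depth only
                        strides=[1, 1, 1, 3],  # step spans depth only
                        padding='VALID')       # 'SAME' is not allowed here

with tf.Session() as sess:
    out = sess.run(pooled, feed_dict={x: np.ones((1, 32, 32, 3), np.float32)})
    print(out.shape)  # (1, 32, 32, 1): the 3 RGB channels max-pooled into one
```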