tensorflow

tf.estimator.add_metrics fails with "Shapes (None, 12) and (None,) are incompatible"

Submitted by 为君一笑 on 2021-02-11 15:14:32
Question: I am using a DNNClassifier as my estimator and wanted to add some additional metrics to it. The code I am using is essentially the one from the tf.estimator.add_metrics documentation (https://www.tensorflow.org/api_docs/python/tf/estimator/add_metrics):

    def my_auc(labels, predictions):
        auc_metric = tf.keras.metrics.AUC(name="my_auc")
        auc_metric.update_state(y_true=labels, y_pred=predictions['logits'])
        return {'auc': auc_metric}

    hidden_layers = len(training_data.__call__().element
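The question is cut off above, but the shape mismatch itself has a common explanation: a DNNClassifier with 12 classes emits (None, 12) logits while the labels arrive as (None,) integers, so AUC.update_state() compares incompatible shapes. A minimal sketch of one possible fix, assuming integer class labels and n_classes=12 (both assumptions, since the question is truncated):

    import tensorflow as tf

    NUM_CLASSES = 12  # assumption: matches the classifier's n_classes

    def my_auc(labels, predictions):
        auc_metric = tf.keras.metrics.AUC(name="my_auc")
        # One-hot the (None,) integer labels so they match the (None, 12)
        # per-class scores; 'probabilities' is bounded in [0, 1], unlike raw
        # 'logits', which is what AUC expects as y_pred.
        labels = tf.reshape(tf.cast(labels, tf.int32), [-1])
        one_hot_labels = tf.one_hot(labels, depth=NUM_CLASSES)
        auc_metric.update_state(y_true=one_hot_labels,
                                y_pred=predictions['probabilities'])
        return {'auc': auc_metric}

    # estimator = tf.estimator.add_metrics(estimator, my_auc)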

SageMaker client create_endpoint() error: 'does not have BatchGetImage permission for image: 763104351884…/tensorflow-inference:1.15.2-gpu'

Submitted by 喜欢而已 on 2021-02-11 15:12:28
Question: I have a pre-trained TensorFlow model, and I'm trying to use the SageMaker client.create_endpoint() call to create an endpoint so that I can call the API to get predictions; the doc is here. After creating the model with client.create_model(), I have a model stored on SageMaker, and the base image I'm using is 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference:1.15.2-gpu. This is my code:

    model_name = `xxx`,
    role = `xxx`,
    model_base_image = `763104351884.dkr.ecr.us-east-1.amazonaws.com
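The error itself is an IAM one: the execution role passed to create_model() is not allowed to pull the AWS deep-learning-container image from ECR. A minimal sketch of one common fix, attaching an inline ECR-pull policy to the role with boto3 (the role and policy names here are hypothetical):

    import json
    import boto3

    iam = boto3.client("iam")

    ecr_pull_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
            ],
            "Resource": "*",
        }],
    }

    iam.put_role_policy(
        RoleName="my-sagemaker-execution-role",  # hypothetical role name
        PolicyName="AllowECRPull",               # hypothetical policy name
        PolicyDocument=json.dumps(ecr_pull_policy),
    )

Attaching the managed AmazonSageMakerFullAccess policy to the role typically also grants these ECR read actions.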

How to use tf.data.Dataset with kedro?

Submitted by 烈酒焚心 on 2021-02-11 14:58:23
Question: I am using tf.data.Dataset to prepare a streaming dataset which is used to train a tf.keras model. With kedro, is there a way to create a node and return the created tf.data.Dataset so it can be used in the next training node? The MemoryDataset will probably not work, because a tf.data.Dataset cannot be pickled (deepcopy isn't possible); see also this SO question. According to issue #91, the deep copy in MemoryDataset is done to avoid the data being modified by some other node. Can someone please elaborate a
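The question is truncated, but the usual workaround follows from that same issue: MemoryDataset deep-copies by default, and its copy_mode can be set to "assign" so the tf.data.Dataset object is passed between nodes by reference instead of being copied. A minimal sketch (the dataset name is hypothetical; the API shown is kedro's MemoryDataSet as of the 0.17-era releases):

    from kedro.io import DataCatalog, MemoryDataSet

    catalog = DataCatalog({
        # "assign" hands the object through unchanged -- no pickle, no deepcopy
        "train_tf_dataset": MemoryDataSet(copy_mode="assign"),
    })

The equivalent catalog.yml entry sets type: MemoryDataSet with copy_mode: assign. Note that with "assign" the usual protection against downstream nodes mutating the data is lost.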

PyTorch fails to allocate memory for a small tensor on CPU and GPU on a node with more than 400 GB

Submitted by 南笙酒味 on 2021-02-11 14:57:38
Question: I would like to build a torch.nn.Embedding from tensors on Databricks (the node is a p2.8xlarge) with Python 3. My code:

    import numpy as np
    import torch
    from torch import nn

    num_embedding, num_dim = 14000, 300
    embedding = nn.Embedding(num_embedding, num_dim)
    row, col = 800000, 302
    t = [[x for x in range(col)] for _ in range(row)]
    t1 = torch.tensor(t)
    print(t1.shape)  # torch.Size([800000, 302])
    t1.dtype, t1.nelement()  # torch.int64, 241600000
    type(t1), t1.device, (t1.nelement() * t1.element_size())/
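The snippet is truncated mid-calculation, but one observation applies wherever the allocation fails: the nested Python list t materialises roughly 240 million Python int objects before torch.tensor() ever runs, which costs far more memory than the resulting ~1.9 GB int64 tensor itself. A minimal sketch of building the same tensor directly in torch (assuming the comprehension was only meant to produce repeated 0..301 index rows, as it suggests):

    import torch

    row, col = 800000, 302
    # Build one 0..301 row and tile it, never touching Python lists.
    t1 = torch.arange(col, dtype=torch.int64).repeat(row, 1)
    print(t1.shape)                                 # torch.Size([800000, 302])
    print(t1.nelement() * t1.element_size() / 1e9)  # ~1.93 GB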

Error in calculation of the built-in MS-SSIM function in TensorFlow

Submitted by 耗尽温柔 on 2021-02-11 14:55:36
Question:

    t1 = tf.image.ssim_multiscale(tf.convert_to_tensor(x_test1[i]),
                                  tf.convert_to_tensor(ans1[i]),
                                  max_val=1).eval()
    file1.write("\tMs-ssim:\t" + str(t1) + "\n")
    avgs += ssim1
    avgm += t1
    print(t1)
    print(i)
    file1.write("MS-SSIM:\t" + str(avgm/100))

When the MS-SSIM is calculated, it shows the following error, although everything seems correct. There are two NumPy arrays between which the MS-SSIM comparison is done; ans1[i] and x_test1[i] are those two arrays. I have calculated the PSNR and SSIM using
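The error text is cut off, but two requirements of tf.image.ssim_multiscale trip people up regardless: the inputs must be float tensors with shape [..., H, W, C], and the default five power factors mean each image is downsampled four times, so both H and W need to be roughly filter_size * 2**4 = 176 pixels or larger. A minimal sketch under the assumption that the images are smaller than that (x_test1 and ans1 are the arrays from the question; shortening power_factors reduces the number of scales):

    import tensorflow as tf

    a = tf.convert_to_tensor(x_test1[i], dtype=tf.float32)  # [H, W, C], values in [0, 1]
    b = tf.convert_to_tensor(ans1[i], dtype=tf.float32)
    # Use only the first three scales so images down to ~44 px still work.
    t1 = tf.image.ssim_multiscale(a, b, max_val=1.0,
                                  power_factors=(0.0448, 0.2856, 0.3001))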

ValueError: ('%s is not decorated with @add_arg_scope', ('__main__', 'bottleneck'))

Submitted by ☆樱花仙子☆ on 2021-02-11 14:51:43
Question: Here is the code:

    with tf.variable_scope(scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
        end_points_collection = sc.original_name_scope + '_end_points'
        with slim.arg_scope([slim.conv2d, bottleneck, stack_blocks_dense],
                            outputs_collections=[end_points_collection]):

and the error:

    ValueError: ('%s is not decorated with @add_arg_scope', ('__main__', 'bottleneck'))
    d:\resnet\main_resnet.py(219)resnet_v2()
    -> outputs_collections=[end_points_collection])

So what is wrong with the code?

Answer 1: Forgive my stupid
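The answer is truncated, but the error message points at the standard fix: every function handed to slim.arg_scope must be decorated with @add_arg_scope, and here the offender is __main__.bottleneck. A minimal sketch, assuming TF 1.x with tf.contrib.slim (the bottleneck signature is abbreviated):

    import tensorflow.contrib.slim as slim

    @slim.add_arg_scope  # makes bottleneck usable inside slim.arg_scope([...])
    def bottleneck(inputs, depth, depth_bottleneck, stride,
                   outputs_collections=None, scope=None):
        ...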

AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Optimizer' when importing BERT

Submitted by 限于喜欢 on 2021-02-11 14:44:14
Question: I'm getting this error right at the beginning, when importing my packages. I haven't been able to find the correct remedy for the issue; any help is greatly appreciated. From what I can tell, it may be a TensorFlow issue.

    from sklearn.model_selection import train_test_split
    import pandas as pd
    import tensorflow as tf
    import tensorflow_hub as hub
    from datetime import datetime
    import bert
    from bert import run_classifier
    from bert import optimization
    from bert import tokenization

Answer 1:
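The answer body is missing above, but the error itself is a version clash: the bert package here (bert-tensorflow) is written against the TF 1.x API, where tf.train.Optimizer still exists, while tensorflow_core._api.v2.train is TF 2.x. A minimal sketch of the usual remedy, pinning a 1.x TensorFlow in the environment (the pip line is shown as a comment; the exact version is an assumption):

    # In the shell, before running the script:
    #   pip install "tensorflow==1.15" bert-tensorflow
    import tensorflow as tf
    assert tf.__version__.startswith("1."), "bert-tensorflow needs the TF 1.x API"

    import bert
    from bert import run_classifier, optimization, tokenization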

Run a Jupyter notebook in an Anaconda environment on Windows 10

Submitted by 十年热恋 on 2021-02-11 14:40:42
Question: I recently created an Anaconda env with:

    conda create -n tensorflow_env python=3.6
    conda activate tensorflow_env
    conda install -c conda-forge tensorflow

Then I installed Jupyter Notebook in tensorflow_env:

    conda install jupyter

and ran it with:

    jupyter notebook

I got a blank website. Does anyone know what's going on here? I use Windows 10, and the Jupyter notebook works fine if I don't run it within the tensorflow_env environment. But if I don't run Jupyter in that environment, I can't
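The question is cut off, but its last sentence points at the usual resolution: instead of launching Jupyter from inside tensorflow_env, register the environment as a named kernel and select it from whichever Jupyter installation works. A minimal sketch, run from inside the activated tensorflow_env (assumes ipykernel is installed there, e.g. via conda install ipykernel):

    # Equivalent to: python -m ipykernel install --user --name tensorflow_env
    from ipykernel.kernelspec import install

    install(user=True,
            kernel_name="tensorflow_env",
            display_name="Python (tensorflow_env)")
    # Restart Jupyter from the base env and pick "Python (tensorflow_env)".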

Why is the TensorFlow official CNN example stuck at 10 percent accuracy (= random prediction) on my machine?

Submitted by 岁酱吖の on 2021-02-11 14:18:26
Question: I am running the CNN example from the TensorFlow official website (https://www.tensorflow.org/tutorials/images/cnn). I have run the notebook as-is, without any modifications whatsoever, and my training accuracy is stuck at 10%. I tried to overfit by using only the first 10 (image, label) pairs, but the result is still the same: the network just does not learn. Here is my model.summary():

    Model: "sequential"
    _________________________________________________________________
    Layer (type)
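The summary is truncated, but when an unmodified tutorial notebook will not even overfit ten samples, the fault usually lies in the environment (the TF/CUDA/cuDNN combination) or in skipped preprocessing rather than in the model. A minimal sanity-check sketch, assuming CIFAR-10 as in the tutorial; if even this tiny run stays near 10%, suspect the installation:

    import tensorflow as tf

    (train_images, train_labels), _ = tf.keras.datasets.cifar10.load_data()
    train_images = train_images / 255.0  # skipping this also stalls learning

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    # A healthy setup drives 10 samples to ~100% accuracy within ~50 epochs.
    model.fit(train_images[:10], train_labels[:10], epochs=50, verbose=0)
    print(model.evaluate(train_images[:10], train_labels[:10], verbose=0))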

Keras not running in multiprocessing

Submitted by 不打扰是莪最后的温柔 on 2021-02-11 14:18:15
Question: I'm trying to run my Keras model with multiprocessing because of a GPU OOM issue. I loaded all the libraries and set up the model inside the function used for multiprocessing, as below. When I execute this code, it gets stuck at history = q.get(), which is multiprocessing.Queue.get(). And when I remove all the code related to multiprocessing.Queue(), execution finishes as soon as I start the script, which makes me suspect the code is not actually running; even a simple print() didn't show any output.
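The original code is not shown, but this hang pattern is characteristic of two multiprocessing pitfalls with Keras: CUDA/TF state created in the parent process does not survive a fork, and a child writing to a Queue can deadlock if the parent joins before reading. A minimal sketch of the usual arrangement (the toy model and data are placeholders): TF is imported only inside the child, the 'spawn' start method is used, only plain data goes back through the queue, and q.get() happens before join():

    import multiprocessing as mp

    def train(q):
        import tensorflow as tf  # import inside the child, not the parent

        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
        model.compile(optimizer='adam', loss='mse')
        history = model.fit([[0., 0., 0., 0.]], [[1.]], epochs=1, verbose=0)
        q.put(history.history)  # send back plain data, not the model object

    if __name__ == '__main__':
        ctx = mp.get_context('spawn')  # avoid inheriting CUDA state via fork
        q = ctx.Queue()
        p = ctx.Process(target=train, args=(q,))
        p.start()
        print(q.get())  # drain the queue *before* joining the child
        p.join()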