autoencoder

Can I use autoencoder for clustering?

本秂侑毒 submitted on 2019-12-04 14:01:24
Question: In the code below, they use an autoencoder for supervised clustering or classification because they have data labels. http://amunategui.github.io/anomaly-detection-h2o/ But can I use an autoencoder to cluster data if I do not have labels? Regards

Answer 1: The deep-learning autoencoder is always unsupervised learning. The "supervised" part of the article you link to is there to evaluate how well it did. The following example (taken from ch. 7 of my book, Practical Machine Learning with H2O, where I try all the H2O unsupervised algorithms on the same data set - please excuse the plug) takes 563 features, and…
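
A minimal sketch of that unsupervised workflow in H2O's Python API (the file name, layer sizes, and cluster count are illustrative assumptions, not the book's code): train the autoencoder with no labels at all, then cluster the bottleneck features.

import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
from h2o.estimators.kmeans import H2OKMeansEstimator

h2o.init()
frame = h2o.import_file("my_data.csv")  # hypothetical unlabeled data

# No labels anywhere: the autoencoder learns to reconstruct its input
ae = H2OAutoEncoderEstimator(hidden=[50, 2, 50], epochs=100)
ae.train(x=frame.names, training_frame=frame)

# Pull out the 2-unit bottleneck (layer index 1 = middle hidden layer)
codes = ae.deepfeatures(frame, 1)

# Cluster the compressed representation
km = H2OKMeansEstimator(k=3)
km.train(x=codes.names, training_frame=codes)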

Reusing layer weights in Tensorflow

谁说胖子不能爱 submitted on 2019-12-03 14:51:57
I am using tf.slim to implement an autoencoder. It's fully convolutional with the following architecture: [conv, outputs = 1] => [conv, outputs = 15] => [conv, outputs = 25] => [conv_transpose, outputs = 25] => [conv_transpose, outputs = 15] => [conv_transpose, outputs = 1]. It has to be fully convolutional and I cannot do pooling (limitations of the larger problem). I want to use tied weights, so encoder_W_3 = decoder_W_1_Transposed (so the weights of the first decoder layer are those of the last encoder layer, transposed). If I reuse weights the regular way tf.slim lets you reuse them, i…
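
The excerpt cuts off, but here is a minimal sketch of one way to tie the weights without tf.slim's reuse mechanism (kernel and input shapes are illustrative): pass the encoder kernel directly to conv2d_transpose, which already reads it with the in/out channel axes swapped.

import tensorflow as tf

# hypothetical input to the last encoder layer: 15 channels
x = tf.placeholder(tf.float32, [None, 64, 64, 15])

# encoder kernel: conv2d expects [height, width, in_channels, out_channels]
encoder_W_3 = tf.get_variable("encoder_W_3", shape=[3, 3, 15, 25])
enc3 = tf.nn.conv2d(x, encoder_W_3, strides=[1, 1, 1, 1], padding="SAME")

# tied decoder layer: conv2d_transpose interprets the same kernel as
# [height, width, out_channels, in_channels], so passing encoder_W_3
# unchanged maps the 25 channels back to 15 with the transposed weights
dec1 = tf.nn.conv2d_transpose(enc3, encoder_W_3,
                              output_shape=tf.shape(x),
                              strides=[1, 1, 1, 1],
                              padding="SAME")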

How do I correctly implement a custom activity regularizer in Keras?

北慕城南 submitted on 2019-12-03 12:49:22
I am trying to implement sparse autoencoders according to Andrew Ng's lecture notes, as shown here. It requires that a sparsity constraint be applied on an autoencoder layer by introducing a penalty term (K-L divergence). I tried to implement this using the direction provided here, after some minor changes. Here are the K-L divergence and the sparsity penalty term implemented by the SparseActivityRegularizer class, as shown below:

def kl_divergence(p, p_hat):
    return (p * K.log(p / p_hat)) + ((1 - p) * K.log((1 - p) / (1 - p_hat)))

class SparseActivityRegularizer(Regularizer):
    sparsityBeta = None
    def …
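
A minimal sketch of how such a regularizer can be completed against the modern Keras Regularizer API, reusing the question's own KL term (the target sparsity p and weight beta are illustrative defaults; activations are assumed to be sigmoid outputs so p_hat stays in (0, 1)):

from keras import backend as K
from keras.regularizers import Regularizer

def kl_divergence(p, p_hat):
    # element-wise KL divergence between target sparsity p and mean activation p_hat
    return p * K.log(p / p_hat) + (1 - p) * K.log((1 - p) / (1 - p_hat))

class SparseActivityRegularizer(Regularizer):
    def __init__(self, p=0.05, beta=3.0):
        self.p = p        # target average activation per hidden unit
        self.beta = beta  # weight of the sparsity penalty

    def __call__(self, activations):
        # mean activation of each hidden unit over the batch
        p_hat = K.mean(activations, axis=0)
        return self.beta * K.sum(kl_divergence(self.p, p_hat))

    def get_config(self):
        return {"p": self.p, "beta": self.beta}

It would then be attached to the encoding layer, e.g. Dense(100, activation='sigmoid', activity_regularizer=SparseActivityRegularizer(0.05, 3.0)).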

ValueError: Error when checking target: expected model_2 to have shape (None, 252, 252, 1) but got array with shape (300, 128, 128, 3)

Anonymous (unverified) submitted on 2019-12-03 08:33:39
Question: Hi, I am building an image classifier for one-class classification, in which I've used an autoencoder. While running this model I am getting this error from the line autoencoder_model.fit: ValueError: Error when checking target: expected model_2 to have shape (None, 252, 252, 1) but got array with shape (300, 128, 128, 3).

num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,), dtype='int64')
labels[0:376] = 0
names = ['cats']
input_shape = img_data[0].shape
X_train, X_test = train_test_split(img…
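
The error says the model's output shape (None, 252, 252, 1) does not match the training images (300, 128, 128, 3). A minimal sketch of a convolutional autoencoder whose output shape matches a 128x128x3 input (layer widths are illustrative, not the asker's code):

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

inp = Input(shape=(128, 128, 3))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)   # 128x128x16
x = MaxPooling2D((2, 2), padding='same')(x)                      # 64x64x16
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)     # 64x64x16
x = UpSampling2D((2, 2))(x)                                      # 128x128x16
out = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x) # 128x128x3

autoencoder_model = Model(inp, out)
autoencoder_model.compile(optimizer='adam', loss='mse')

Note that padding='same' keeps the spatial dimensions aligned, the last layer restores the 3 input channels, and the fit target must be the images themselves: autoencoder_model.fit(X_train, X_train, ...).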

LSTM Autoencoder

Anonymous (unverified) submitted on 2019-12-03 02:16:02
Question: I'm trying to build an LSTM autoencoder with the goal of getting a fixed-size vector from a sequence, which represents the sequence as well as possible. This autoencoder consists of two parts:

LSTM Encoder: takes a sequence and returns an output vector (return_sequences = False)
LSTM Decoder: takes an output vector and returns a sequence (return_sequences = True)

So, in the end, the encoder is a many-to-one LSTM and the decoder is a one-to-many LSTM. Image source: Andrej Karpathy. At a high level the coding looks like this (similar as…
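
The excerpt cuts off, but a minimal sketch of that many-to-one / one-to-many pairing in Keras looks like this (timesteps and dimensions are illustrative, not the asker's values):

from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps, input_dim, latent_dim = 20, 8, 32

inputs = Input(shape=(timesteps, input_dim))
# many-to-one: compress the whole sequence into one fixed-size vector
encoded = LSTM(latent_dim, return_sequences=False)(inputs)
# one-to-many: repeat the vector so the decoder LSTM can unroll it
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)  # reuse this part to get the fixed-size vector
autoencoder.compile(optimizer='adam', loss='mse')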

Deep Belief Networks vs Convolutional Neural Networks

China☆狼群 submitted on 2019-12-03 01:58:34
Question: I am new to the field of neural networks and I would like to know the difference between Deep Belief Networks and Convolutional Networks. Also, is there a Deep Convolutional Network which is the combination of Deep Belief and Convolutional Neural Nets? This is what I have gathered so far; please correct me if I am wrong. For an image classification problem, Deep Belief Networks have many layers, each of which is trained using a greedy layer-wise strategy. For example, if my image size is 50…

Layer conv2d_3 was called with an input that isn't a symbolic tensor

Anonymous (unverified) submitted on 2019-12-03 01:27:01
Question: Hi, I am building an image classifier for one-class classification, in which I've used an autoencoder. While running this model I am getting this error: ValueError: Layer conv2d_3 was called with an input that isn't a symbolic tensor. Received type: . Full input: [(128, 128, 3)]. All inputs to the layer should be tensors.

num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,), dtype='int64')
labels[0:376] = 0
names = ['cat']
Y = np_utils.to_categorical(labels, num_class)
input_shape = img_data[0…
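
This error usually means a layer was called on a plain shape tuple rather than a Keras tensor, which is consistent with the [(128, 128, 3)] in the message. A minimal sketch of the distinction (layer sizes are illustrative):

from keras.layers import Input, Conv2D

input_shape = (128, 128, 3)

# Wrong: calling the layer on the tuple itself raises this exact error
# x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_shape)

# Right: build an Input tensor from the shape, then call layers on it
inp = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)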

Save and load keras autoencoder

Anonymous (unverified) submitted on 2019-12-03 01:26:01
Question: Look at this strange load/save model situation. I saved a variational autoencoder model and its encoder and decoder:

autoencoder.save("autoencoder_save", overwrite=True)
encoder.save("encoder_save", overwrite=True)
decoder.save("decoder_save", overwrite=True)

After that I loaded all of it from the disk:

autoencoder_disk = load_model("autoencoder_save", custom_objects={'KLDivergenceLayer': KLDivergenceLayer, 'nll': nll})
encoder_disk = load_model("encoder_save", custom_objects={'KLDivergenceLayer': KLDivergenceLayer, 'nll': nll})
decoder_disk = …
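
The excerpt cuts off before the actual problem, but the pattern it shows is the important part: every custom layer or loss baked into a saved Keras model must be supplied again at load time. A minimal sketch of that round trip, with stand-in definitions for the asker's KLDivergenceLayer and nll (both bodies here are assumptions, not the original code):

from keras.models import load_model
from keras.layers import Layer
from keras import backend as K

def nll(y_true, y_pred):
    # stand-in reconstruction loss (negative log-likelihood)
    return K.sum(K.binary_crossentropy(y_true, y_pred), axis=-1)

class KLDivergenceLayer(Layer):
    # stand-in identity layer that adds the VAE's KL term as a side-effect loss
    def call(self, inputs):
        mu, log_var = inputs
        kl = -0.5 * K.sum(1 + log_var - K.square(mu) - K.exp(log_var), axis=-1)
        self.add_loss(K.mean(kl), inputs=inputs)
        return inputs

custom = {'KLDivergenceLayer': KLDivergenceLayer, 'nll': nll}
autoencoder_disk = load_model("autoencoder_save", custom_objects=custom)
encoder_disk = load_model("encoder_save", custom_objects=custom)
decoder_disk = load_model("decoder_save", custom_objects=custom)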

An autoencoder implementation in TensorFlow

Anonymous (unverified) submitted on 2019-12-03 00:19:01
from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)

# Visualize decoder setting
# Parameters
learning_rate = 0.01
training_epochs = 5
batch_size = 256
display_step = 1
examples_to_show = 10

# Network Parameters
n_input = 784  # MNIST data input (img shape: 28*28)

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, n_input])

# hidden layer…
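
The post ends at the hidden-layer definition. A minimal sketch of how this classic MNIST autoencoder graph is typically completed (the hidden sizes and optimizer choice are assumptions, not the original post's code):

n_hidden_1, n_hidden_2 = 256, 128  # illustrative encoder widths

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}

def encoder(x):
    # two sigmoid layers compress 784 pixels down to n_hidden_2 units
    l1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1']))
    return tf.nn.sigmoid(tf.add(tf.matmul(l1, weights['encoder_h2']), biases['encoder_b2']))

def decoder(z):
    # mirror of the encoder, expanding back to 784 pixels
    l1 = tf.nn.sigmoid(tf.add(tf.matmul(z, weights['decoder_h1']), biases['decoder_b1']))
    return tf.nn.sigmoid(tf.add(tf.matmul(l1, weights['decoder_h2']), biases['decoder_b2']))

y_pred = decoder(encoder(X))
cost = tf.reduce_mean(tf.pow(X - y_pred, 2))  # reconstruction MSE against the input
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)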