neural-network

Understanding weird YOLO convolutional layer output size

和自甴很熟 submitted on 2021-01-05 09:15:47
Question: I am trying to understand how Darknet works, and I was looking at the yolov3-tiny configuration file, specifically layer number 13 (line 107):
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
The kernel size is 1x1, the stride is 1, and the padding is 1 too. When I load the network with darknet, it indicates that the output width and height are the same as the input: 13 conv 256 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 256 However, shouldn't the width
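A minimal sketch of the output-size arithmetic behind that printed line. The interpretation of pad is an assumption: Darknet's config parser is commonly described as treating pad=1 as a flag meaning "pad by size // 2" (which is 0 for a 1x1 kernel), not as a literal padding of 1.

# Sketch: standard convolution output-size formula, with Darknet's assumed pad semantics.
def conv_out(n, size, stride, padding):
    return (n + 2 * padding - size) // stride + 1

size, stride = 1, 1
padding = size // 2                           # pad=1 interpreted as size // 2 = 0 for a 1x1 kernel
print(conv_out(13, size, stride, padding))    # 13, matching the printed 13 x 13 x 256
print(conv_out(13, size, stride, 1))          # 15, what a literal padding of 1 would give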

How can I create an instance of a multi-layer perceptron network to use in a bagging classifier?

只愿长相守 submitted on 2021-01-01 06:44:21
Question: I am trying to create an instance of a multi-layer perceptron network to use in a bagging classifier, but I don't understand how to put them together. Here is my code: My task is: 1 - To apply a bagging classifier (with or without replacement) with eight base classifiers created at the previous step. It would be really great if you could show me how to implement this in my algorithm. I did my search but I couldn't find a way to do that. Answer 1: To train your BaggingClassifier : from sklearn.datasets import load
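A minimal sketch (not the answerer's full code) of wrapping an MLPClassifier in a BaggingClassifier, matching the task of eight base classifiers with or without replacement. The dataset and hyperparameters are illustrative assumptions.

# Sketch: MLPClassifier as the base estimator of a BaggingClassifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
bag = BaggingClassifier(base_estimator=base,   # named estimator= in scikit-learn >= 1.2
                        n_estimators=8,        # eight base classifiers, as in the task
                        bootstrap=True,        # with replacement; set False for without
                        random_state=0)
bag.fit(X_train, y_train)
print(bag.score(X_test, y_test))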

Ideas on quadrangle/rectangle detection using convolutional neural networks

孤者浪人 submitted on 2020-12-29 13:22:34
Question: I've been trying to do quadrangle detection and localization for weeks. My goal is to have a robust way of getting the 4 points of a quadrangle (rectangle), so I can apply a projective transform to an image and then attach it to the source image. I have tried the classic OpenCV contour method, and also used a Hough transform to find lines and then calculate intersections, but those two methods are unusable when applied to real-life images. So I turned to CNNs for help, but currently I haven't found anyone trying to
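For reference, a minimal sketch of the "classic OpenCV contour method" the question mentions, using findContours plus approxPolyDP to keep only convex 4-point polygons. The file name, edge thresholds, and area cutoff are illustrative assumptions, and the return signature assumes OpenCV 4.x.

# Sketch: contour-based quadrangle detection.
import cv2

img = cv2.imread("scene.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
quads = []
for c in contours:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 1000:
        quads.append(approx.reshape(4, 2))

# The four corners of a detected quad could then feed cv2.getPerspectiveTransform
# for the projective warp the question describes.
print(len(quads), "candidate quadrangles found")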

What is the default kernel_initializer in Keras?

蹲街弑〆低调 submitted on 2020-12-28 04:27:45
Question: The user manual lists the different kernel_initializer options here: https://keras.io/initializers/ The main purpose is to initialize the weight matrix in the neural network. Does anyone know what the default initializer is? The documentation doesn't say. Answer 1: Usually, it's glorot_uniform by default. Different layer types might have different default kernel_initializer . When in doubt, just look in the source code. For example, for the Dense layer: class Dense(Layer): ... def __init__(self,
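A minimal sketch following the answer's point that Dense defaults to glorot_uniform: inspect the default on a layer instance, then set an initializer explicitly instead of relying on the default. The layer size and the he_normal choice are illustrative.

# Sketch: checking and overriding the kernel_initializer of a Dense layer.
import tensorflow as tf

layer = tf.keras.layers.Dense(32)
print(layer.kernel_initializer)        # a GlorotUniform initializer instance by default

layer = tf.keras.layers.Dense(32, kernel_initializer="he_normal")
print(layer.kernel_initializer)        # now HeNormal, set explicitly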

Python scikit learn MLPClassifier “hidden_layer_sizes”

你说的曾经没有我的故事 submitted on 2020-12-27 08:07:33
Question: I am lost in the scikit-learn 0.18 user manual (http://scikit-learn.org/dev/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier): hidden_layer_sizes : tuple, length = n_layers - 2, default (100,) The ith element represents the number of neurons in the ith hidden layer. If I am looking for only 1 hidden layer and 7 hidden units in my model, should I put it like this? Thanks! hidden_layer_sizes=(7, 1) Answer 1: hidden_layer_sizes=(7,) if you want only 1
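A minimal sketch following the answer: a single hidden layer with 7 units is hidden_layer_sizes=(7,), not (7, 1), which would create two hidden layers. The toy dataset is an illustrative assumption.

# Sketch: one hidden layer of 7 units; coefs_ shapes confirm the architecture.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(7,), max_iter=1000, random_state=0)
clf.fit(X, y)
print([w.shape for w in clf.coefs_])   # [(4, 7), (7, 3)] -> exactly one hidden layer of 7 units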

Create an LSTM layer with Attention in Keras for multi-label text classification neural network

笑着哭i submitted on 2020-12-23 06:49:49
Question: Greetings, dear members of the community. I am creating a neural network to predict a multi-label y. Specifically, the neural network takes 5 inputs (list of actors, plot summary, movie features, movie reviews, title) and tries to predict the sequence of movie genres. In the neural network I use an Embedding layer and Global Max Pooling layers. However, I recently discovered recurrent layers with attention, which are a very interesting topic these days in machine translation. So, I
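A minimal sketch (not the asker's actual five-input model) of one text branch with an LSTM followed by a simple additive attention pooling, ending in a sigmoid layer for multi-label genre prediction. The vocabulary size, sequence length, and number of genres are illustrative assumptions.

# Sketch: LSTM + attention pooling for a multi-label text classifier in Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, seq_len, n_genres = 20000, 200, 18

tokens = layers.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, 128)(tokens)
h = layers.LSTM(64, return_sequences=True)(x)            # (batch, seq_len, 64)

# attention pooling: score each timestep, softmax over time, weighted sum of states
scores = layers.Dense(1, activation="tanh")(h)           # (batch, seq_len, 1)
weights = layers.Softmax(axis=1)(scores)                 # attention weights over the time axis
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

out = layers.Dense(n_genres, activation="sigmoid")(context)   # sigmoid for multi-label output
model = Model(tokens, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()

The sigmoid output with binary cross-entropy (rather than softmax) is what makes this multi-label: each genre is predicted independently, so several can be active at once.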