deep-learning

Fine-tune BERT for specific domain (unsupervised)

Submitted by 孤人 on 2021-01-20 08:39:56
Question: I want to fine-tune BERT on texts that are related to a specific domain (in my case, engineering). The training should be unsupervised, since I don't have any labels. Is this possible? Answer 1: What you in fact want to do is continue pre-training BERT on text from your specific domain. In this case you keep training the model as a masked language model, but on your domain-specific data. You can use the run_mlm.py script from Hugging Face's Transformers. Source:
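
A minimal sketch of such continued MLM pre-training, written against the Hugging Face Transformers/Datasets APIs rather than the run_mlm.py script itself; the checkpoint name, the file domain_corpus.txt, and all hyperparameters are illustrative assumptions, not values from the question:

```python
# Continue pre-training BERT as a masked language model on domain text.
# Sketch only: "domain_corpus.txt" and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One document per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# The collator randomly masks tokens, which provides the MLM training labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-engineering", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```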

Accuracy from model evaluation not equal to sklearn classification_report accuracy

Submitted by £可爱£侵袭症+ on 2021-01-19 08:10:24
Question: I'm using sklearn's classification_report to report test statistics. The accuracy given by this method is 42%, while model evaluation gives 93% accuracy. Which one is the real accuracy, and what is the reason for this difference? Model evaluation: results = model.evaluate(test_ds.values, test_lb.values) print(results) Output: 7397/7397 [==============================] - 0s 28us/sample - loss: 0.2309 - acc: 0.9305 Classification report: import numpy as np from sklearn.metrics import
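
The snippet is truncated before the answer, but a frequent cause of this kind of gap is passing raw predicted probabilities (or mismatched label encodings) to classification_report instead of class labels. A minimal consistency check, assuming a softmax classifier and integer labels in test_lb as in the question (both assumptions, since the asker's data isn't shown):

```python
# Sketch: make sklearn's accuracy comparable to Keras' model.evaluate().
# Assumes `model` outputs class probabilities and `test_lb` holds integer labels.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

probs = model.predict(test_ds.values)      # shape: (n_samples, n_classes)
y_pred = np.argmax(probs, axis=1)          # convert probabilities to class ids
y_true = test_lb.values.ravel()            # integer labels, not one-hot / probabilities

print(accuracy_score(y_true, y_pred))      # should now match evaluate()'s acc
print(classification_report(y_true, y_pred))
```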

Keras custom data generator giving dimension errors with multi-input and multi-output (functional API model)

Submitted by 夙愿已清 on 2021-01-18 04:53:32
Question: I have written a generator function with Keras. Before returning X, y from __getitem__ I have double-checked the shapes of the X's and Y's and they are all right, but the generator is giving dimension-mismatch errors and warnings. (Colab code to reproduce: https://colab.research.google.com/drive/1bSJm44MMDCWDU8IrG2GXKBvXNHCuY70G?usp=sharing) My training and validation generators are pretty much the same as: class ValidGenerator(Sequence): def __init__(self, df, batch_size=64): self.batch_size = batch_size
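
The posted class is cut off after __init__, but for a functional-API model with several inputs and outputs, __getitem__ generally has to return the X's and Y's as lists (or dicts keyed by layer name) whose order matches model.inputs and model.outputs. A minimal sketch with made-up shapes and names, not the asker's actual pipeline:

```python
# Sketch of a Sequence for a two-input / two-output functional-API model.
# All names and shapes here are illustrative placeholders.
import numpy as np
from tensorflow.keras.utils import Sequence

class MultiIOGenerator(Sequence):
    def __init__(self, n_samples=1024, batch_size=64):
        self.n_samples = n_samples
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(self.n_samples / self.batch_size))

    def __getitem__(self, idx):
        b = min(self.batch_size, self.n_samples - idx * self.batch_size)
        x_img = np.random.rand(b, 64, 64, 3).astype("float32")   # first input
        x_meta = np.random.rand(b, 10).astype("float32")         # second input
        y_a = np.random.randint(0, 2, size=(b, 1))               # first output
        y_b = np.random.rand(b, 4).astype("float32")             # second output
        # The lists must line up with model.inputs and model.outputs
        # (alternatively, return dicts keyed by the layer names).
        return [x_img, x_meta], [y_a, y_b]
```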

Custom attention layer in Keras

Submitted by 非 Y 不嫁゛ on 2021-01-13 09:49:51
Question: I want to create a custom attention layer that, for the input at any time step, returns the weighted mean of the inputs over all time steps. For example, an input tensor of shape [32, 100, 2048] should go into the layer and come out with the same shape [32, 100, 2048]. I wrote the layer as follows: import tensorflow as tf from keras.layers import Layer, Dense # or from tensorflow.keras.layers import Layer, Dense class Attention(Layer): def __init__(self, units_att): self.units_att = units_att self.W
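
The posted layer is truncated right after self.W, so here is a self-contained sketch of one plausible reading of the description: additive attention scores over the time axis, a softmax, and the resulting weighted mean broadcast back to every time step so the output shape matches the input shape. This is an illustrative sketch, not the asker's or an accepted implementation:

```python
# Sketch: attention-weighted mean over the time axis, broadcast back to
# every time step so that (batch, T, D) in gives (batch, T, D) out.
import tensorflow as tf
from tensorflow.keras.layers import Layer, Dense

class AttentionMean(Layer):
    def __init__(self, units_att, **kwargs):
        super().__init__(**kwargs)
        self.score_proj = Dense(units_att, activation="tanh")
        self.score_out = Dense(1)

    def call(self, x):                                   # x: (batch, T, D)
        scores = self.score_out(self.score_proj(x))      # (batch, T, 1)
        weights = tf.nn.softmax(scores, axis=1)          # attention over time
        context = tf.reduce_sum(weights * x, axis=1, keepdims=True)  # (batch, 1, D)
        return tf.broadcast_to(context, tf.shape(x))     # (batch, T, D)

# Quick shape check:
# y = AttentionMean(units_att=128)(tf.random.normal([32, 100, 2048]))
# y.shape -> (32, 100, 2048)
```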

Keras failed to load SavedModel: TypeError 'module' object is not callable

Submitted by 試著忘記壹切 on 2021-01-07 01:19:50
Question: I trained an SSD MobileNet v2 network using the TensorFlow Object Detection API with TensorFlow 2 and then converted the trained model into a SavedModel. Now I need to convert the SavedModel to a frozen graph so that the model is compatible with external libraries like OpenCV. I use this example for conversion, and I cannot even load the Keras model: from keras.models import load_model model = load_model("training/model/saved_model") Calling load_model() produces an exception: Exception
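
The exception text is cut off here, but a SavedModel exported by the Object Detection API is a generic TensorFlow SavedModel rather than a Keras model, so keras.models.load_model is usually the wrong entry point. A hedged sketch of loading it with the generic SavedModel API instead (the path comes from the question; the uint8 input and the dummy image size are assumptions about a typical detection export):

```python
# Sketch: load an Object Detection API export as a plain TF SavedModel,
# not as a Keras model (the export is not a Keras object).
import tensorflow as tf

detect_fn = tf.saved_model.load("training/model/saved_model")

# Detection exports typically take a batched uint8 image tensor.
dummy_image = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)
detections = detect_fn(dummy_image)
print(list(detections.keys()))  # e.g. detection_boxes, detection_scores, ...
```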
