gensim

word2vec - what is best? add, concatenate or average word vectors?

空扰寡人 · submitted 2020-05-25 06:41:22
Question: I am working on a recurrent language model. To learn word embeddings that can be used to initialize my language model, I am using gensim's word2vec model. After training, the word2vec model holds two vectors for each word in the vocabulary: the word embedding (the rows of the input/hidden matrix) and the context embedding (the columns of the hidden/output matrix). As outlined in this post, there are at least three common ways to combine these two embedding vectors: summing the context and word vector for each …
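The three combination strategies can be sketched with plain NumPy arrays standing in for the two trained matrices (all numbers below are invented for illustration, not taken from a real model):

```python
import numpy as np

# Hypothetical trained vectors for one word:
word_vec = np.array([0.2, -0.5, 0.1, 0.7])     # a row of the input/hidden matrix
context_vec = np.array([0.4, 0.1, -0.3, 0.2])  # a column of the hidden/output matrix

summed = word_vec + context_vec                 # same dimensionality as the originals
averaged = (word_vec + context_vec) / 2.0       # a rescaled sum, same dimensionality
concatenated = np.concatenate([word_vec, context_vec])  # doubles the dimensionality

print(summed.shape, averaged.shape, concatenated.shape)  # (4,) (4,) (8,)
```

Note that summing and averaging differ only by a constant factor, so they give identical cosine similarities; concatenation is the only option that changes the embedding size your language model must accept.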

Glove6b50d parsing: could not convert string to float: '-'

左心房为你撑大大i · submitted 2020-05-17 06:04:23
Question: I am trying to parse the Glove6b50d data from Kaggle via Google Colab, then run it through the word2vec process (apologies for the huge URL - it's the fastest link I've found). However, I'm hitting a bug where '-' tokens are not parsed correctly, resulting in the above error. I have attempted to handle this in a few ways. I've also looked into the load_word2vec_format method itself and tried to ignore errors, but it doesn't seem to make a difference. I've tried a map method on line two …
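A defensive way to parse the raw GloVe text format by hand: the file has no word2vec-style header, and a few tokens contain embedded spaces, so it is safest to take the numeric fields from the right. This is a sketch with made-up 4-dimensional lines (the real file is 50-dimensional):

```python
def parse_glove_line(line, dim=50):
    """Split one GloVe line into (token, vector), taking the last `dim`
    fields as floats so tokens containing spaces don't break parsing."""
    fields = line.rstrip().split(' ')
    if len(fields) < dim + 1:
        return None  # malformed / truncated line
    try:
        vector = [float(x) for x in fields[-dim:]]
    except ValueError:
        return None  # a numeric field is garbled; skip the line
    token = ' '.join(fields[:-dim])
    return token, vector

# Illustrative 4-dimensional "GloVe" lines:
lines = ["the 0.1 0.2 0.3 0.4",
         "- 0.5 -0.6 0.7 0.8",      # a bare '-' token parses fine this way
         ". . . 0.1 0.2 0.3 0.4"]   # a token with embedded spaces
parsed = [parse_glove_line(line, dim=4) for line in lines]
```

An alternative is to convert the file first with gensim's glove2word2vec script and then call load_word2vec_format on the result, but hand-parsing as above makes the failure mode visible.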

No module named 'gensim' but already installed it

你。 · submitted 2020-05-09 16:57:06
Question: I'm getting this error. I ran this script in a Jupyter notebook in the base (root) environment. The log said that the gensim library has been installed, and I ran the command !pip install gensim before importing it, but it still cannot be imported; the error says ModuleNotFoundError: No module named 'gensim'. The code:

    !pip install gensim
    import gensim
    from gensim.models import KeyedVectors
    model = KeyedVectors.load('model_fasttext2.vec')
    model.vector_size

…
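The usual cause is that !pip installs into a different Python environment than the one the notebook kernel actually runs. A common workaround, sketched below, is to pin the install to the kernel's own interpreter:

```python
import sys

# Build the install command against the exact interpreter the kernel uses,
# so the package lands where `import gensim` will look for it.
install_cmd = f"{sys.executable} -m pip install gensim"
print(install_cmd)

# In a notebook cell this would be run as:
#   !{sys.executable} -m pip install gensim
```

After installing this way, restart the kernel before importing again so the new package is picked up.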

Loading Gensim FastText Model with Callbacks Fails

半腔热情 · submitted 2020-04-30 11:21:09
Question: After creating a FastText model using Gensim, I want to load it, but I am running into errors seemingly related to callbacks. The code used to create the model is:

    TRAIN_EPOCHS = 30
    WINDOW = 5
    MIN_COUNT = 50
    DIMS = 256
    vocab_model = gensim.models.FastText(sentences=model_input, size=DIMS,
                                         window=WINDOW, iter=TRAIN_EPOCHS,
                                         workers=6, min_count=MIN_COUNT,
                                         callbacks=[EpochSaver("./ftchkpts/")])
    vocab_model.save('ft_256_min_50_model_30eps')

and the callback EpochSaver is defined as:

    from gensim.models …
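Gensim serializes models with pickle, so a callback holding unpicklable state (an open file, a logger, a lambda) can break save or load. One common pattern, sketched here with stand-in objects rather than gensim itself, is to detach the callbacks before saving and re-create them after loading:

```python
import pickle

class EpochSaver:
    """Stand-in for a gensim CallbackAny2Vec subclass; the lambda makes it
    unpicklable, like callbacks that hold open files or loggers."""
    def __init__(self, path):
        self.path = path
        self.on_epoch = lambda model: None  # lambdas cannot be pickled

# Stand-in for the trained model's state:
model = {"weights": [0.1, 0.2], "callbacks": [EpochSaver("./ftchkpts/")]}

# Detach the callbacks before saving, then re-attach them in memory.
callbacks = model.pop("callbacks")
blob = pickle.dumps(model)        # succeeds now that the callbacks are gone
model["callbacks"] = callbacks

restored = pickle.loads(blob)
restored["callbacks"] = [EpochSaver("./ftchkpts/")]  # rebuild on load
```

With a real gensim model the analogous moves would be clearing the model's callbacks attribute (e.g. setting it to an empty tuple) before save() and reassigning fresh callbacks after load(); exact attribute handling may vary between gensim versions.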

Fast semantic retrieval with LSH combined with word vectors

主宰稳场 · submitted 2020-04-29 13:46:24
""" test """ import os import gensim import pickle import time import numpy as np DIR_PATH = os.path.dirname(os.path.abspath( __file__ )) HASHTABLES = os.path.join(DIR_PATH, ' resource ' , ' hashtables.pkl ' ) WORD2VEC = os.path.join(DIR_PATH, ' resource ' , ' sgns.weibo.word ' ) RESOURCES = os.path.join(DIR_PATH, ' resource ' , ' resources.pkl ' ) class MyClass(object): def __init__ (self, Table_num=5, Hashcode_fun=5 ): self.hashtables = HASHTABLES self.word2vec = WORD2VEC self.resources = RESOURCES self.table_num = Table_num self.Hashcode_fun = Hashcode_fun def load_traindata(self): model =

word2vec in natural language processing

独自空忆成欢 · submitted 2020-04-28 21:49:13
Before word2vec appeared, natural language processing usually turned words into one-hot encoded word vectors. This approach is simple and easy to understand, but the data is extremely sparse and the dimensionality very high, which easily leads to the curse of dimensionality, especially in deep learning. Moreover, in such word vectors any two words are isolated from each other, leaving a semantic gap (the relationship between words cannot be expressed). The distributional representation proposed by Hinton solves these main drawbacks of one-hot encoding: it closes the semantic gap, since the relationship between words can be expressed by computing the distance between their vectors, and the word vectors are dense. word2vec is a tool for training this kind of distributional word vector.

1. The CBOW and Skip-Gram models

The word2vec model is essentially a simplified neural network and comes in two training variants: the CBOW model and the Skip-gram model. Their structure diagrams are as follows (note: this shows only the model structure, not the neural network architecture; diagram omitted in this excerpt). The CBOW model predicts the center word W(t) from the words around it; the Skip-gram model predicts the surrounding words from the center word W(t).

1) The first layer of the CBOW model is the input layer; the input values are the one-hot encodings of the surrounding words. The hidden layer only applies a weighted sum to the inputs, with no activation function performing a nonlinear transformation, and the dimensionality of the output matches that of the input. 2) …
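The CBOW forward pass described above can be sketched in a few lines of NumPy. This mirrors the structure only (toy sizes, random weights), not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
V, N = 6, 3                       # vocabulary size, embedding dimension
W_in = rng.normal(size=(V, N))    # input->hidden weights (the word embeddings)
W_out = rng.normal(size=(N, V))   # hidden->output weights (the context embeddings)

context_ids = [1, 2, 4, 5]        # one-hot indices of the surrounding words
# Hidden layer: a plain average of the context embeddings, with no
# activation function, exactly as described above.
h = W_in[context_ids].mean(axis=0)

scores = h @ W_out                               # one score per vocabulary word
probs = np.exp(scores) / np.exp(scores).sum()    # softmax over the vocabulary
predicted_center = int(probs.argmax())           # the model's guess for W(t)
```

Skip-gram reverses the roles: the single center word's embedding is the hidden vector, and the softmax is evaluated once per surrounding position.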

Personalized recall algorithms in practice (5): item2vec

霸气de小男生 · submitted 2020-04-28 21:48:02
item2vec turns a user's behavior sequence into a "sentence" made of items and, mimicking the way word2vec trains word embeddings, trains item embeddings. The basic idea is to map the original high-dimensional sparse representation (one-hot) into a low-dimensional dense vector space, so that a low-dimensional vector can represent the item (e.g., a movie), and the similarity between two items can then be measured by computing the similarity between their two low-dimensional vectors. An embedding represents an object, whether a word, a product, a movie, or anything else, with a low-dimensional vector; the key property of such a vector is that vectors close in distance correspond to objects with similar meaning. Like word2vec, item2vec comes in two flavors: the CBOW and skip-gram models. CBOW uses a bag-of-words model: the training input is the word vectors of the context words surrounding a target word, and the output is the word vector of that target word. Skip-gram reverses CBOW's approach: the input is the vector of one specific word, and the output is the vectors of that word's context words.

Main pipeline:
1. Extract user behavior sequences from the logs.
2. Treat the sequences as a corpus and train word2vec on them to obtain item embeddings.
3. Derive item similarity relations to use for recommendation.

In the code, we use the gensim library directly. In gensim, the word2vec-related APIs are all in the package gensim.models.word2vec, and the algorithm parameters live in the class gensim.models.word2vec …
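Step 2 of the pipeline can be illustrated without gensim by generating the skip-gram (center, context) training pairs that word2vec would consume from one behavior sequence. The item IDs and window size below are made up:

```python
def skipgram_pairs(sequence, window=2):
    """Generate (center_item, context_item) pairs from one behavior
    sequence: the pairs a skip-gram word2vec model trains on."""
    pairs = []
    for i, center in enumerate(sequence):
        lo, hi = max(0, i - window), min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, sequence[j]))
    return pairs

# One user's viewing history as item IDs:
session = ["m101", "m205", "m042", "m205", "m333"]
pairs = skipgram_pairs(session, window=1)
```

These are the same pairs a call along the lines of gensim.models.word2vec.Word2Vec(sentences=sessions, sg=1, window=1) would build internally from the full corpus of sequences.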