I used this official example code to train a NER model from scratch using my own training samples.
When I predict using this model on new text, I want to get the probability of each predicted entity.
Sorry, I do not have a better answer. I can only confirm that the 'beam' solution does provide some 'probabilities', though in my case I get far too many entities with prob=1.0, even in cases where I can only shake my head and blame too little training data.
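For what it's worth, the 'probability' the beam approach yields is just a marginal: the summed weight of all beam parses that contain a given entity. Here is a minimal pure-Python sketch of that aggregation (the beam data is hypothetical and hand-written; real code would get parses of this shape from spaCy's beam parsing rather than a literal list):

```python
from collections import defaultdict

def entity_marginals(beam_parses):
    """Aggregate beam parses into per-entity marginal probabilities.

    beam_parses: list of (score, entities) pairs, where score is the
    normalized probability of one whole parse and entities is a list
    of (start, end, label) tuples.
    """
    marginals = defaultdict(float)
    for score, entities in beam_parses:
        for start, end, label in entities:
            # An entity's marginal is the total weight of parses containing it.
            marginals[(start, end, label)] += score
    return dict(marginals)

# Hypothetical beam output: three parses with normalized scores.
beam = [
    (0.70, [(0, 2, "MY_ENTITY"), (5, 7, "MY_ENTITY")]),
    (0.25, [(0, 2, "MY_ENTITY")]),
    (0.05, []),
]
print(entity_marginals(beam))
# (0, 2, ...) occurs in parses worth 0.70 + 0.25; (5, 7, ...) only in the 0.70 parse.
```

This also explains the prob=1.0 flood: if every parse that survives the (narrow) beam contains the entity, its marginal sums to exactly 1.0, which happens easily when the model is overconfident from too little training data.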
I find it quite strange that spaCy reports an 'entity' without any confidence attached to it. I would assume there is some threshold that decides WHEN spaCy reports an entity and when it does NOT (perhaps I missed it). In my case, I see a confidence of 0.6 reported as 'this is an entity' while an entity with confidence 0.001 is NOT reported.
In my use-case, the confidence is essential. For a given text, spaCy (and, for example, Google ML) report multiple instances of 'MY_ENTITY'. My code has to decide which ones can be trusted and which are false positives. I have yet to see whether the 'probability' returned by the above code has any practical value.
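Until the scores prove meaningful, the pragmatic option is to keep only entities whose marginal clears a cutoff. A sketch of that filtering step (the 0.5 threshold is an arbitrary assumption, not a spaCy default; it should be tuned on held-out data):

```python
def filter_entities(scored_entities, threshold=0.5):
    """Keep only entities whose confidence clears the threshold.

    scored_entities: dict mapping (start, end, label) -> probability.
    threshold: arbitrary cutoff chosen here for illustration.
    """
    return {ent: p for ent, p in scored_entities.items() if p >= threshold}

# Hypothetical per-entity marginals, mirroring the 0.6 vs 0.001 case above.
scored = {
    (0, 2, "MY_ENTITY"): 0.95,
    (5, 7, "MY_ENTITY"): 0.6,
    (9, 10, "MY_ENTITY"): 0.001,
}
print(filter_entities(scored))
```

A fixed threshold will not fix an overconfident model, but it at least makes the accept/reject decision explicit and tunable instead of hidden inside the library.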