The text data is organized as a vector with 20,000 elements, like [2, 1, 0, 0, 5, ..., 0]; the i-th element indicates the frequency of the i-th word in a text.
The ground truth is a vector of 0/1 values over the labels, like [1, 1, 0, 0, 1], where a 1 marks a label that applies to the text.
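As a quick sketch of that data layout (the token indices below are made up for illustration), a bag-of-words count vector can be built like this:

```python
import numpy as np

vocab_size = 20000
# Hypothetical text, already converted to word indices in the vocabulary:
word_ids = [0, 0, 1, 4, 4, 4, 4, 4]

x = np.zeros(vocab_size, dtype=np.int64)
np.add.at(x, word_ids, 1)  # x[i] = frequency of word i in the text
# x begins with [2, 1, 0, 0, 5, ...], matching the format described above
```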
You have to use a variant of the cross-entropy function in order to support multilabel classification. With fewer than about a thousand outputs you can use sigmoid_cross_entropy_with_logits directly; in your case, with 4,000 outputs, you may also want to consider candidate sampling, since it is faster than computing the full loss.
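For intuition, here is a small NumPy sketch of the per-label loss that tf.nn.sigmoid_cross_entropy_with_logits computes (TensorFlow uses the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)); the logits and labels below are made up):

```python
import numpy as np

def sigmoid_xent(logits, labels):
    """Numerically stable sigmoid cross entropy, one loss value per label,
    mirroring tf.nn.sigmoid_cross_entropy_with_logits."""
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([2.0, -1.0, 0.0])   # raw network outputs, one per label
labels = np.array([1.0, 0.0, 1.0])    # multi-hot ground truth
loss = sigmoid_xent(logits, labels)   # independent loss per label
total = loss.sum()                    # typically summed or averaged for training
```

Because each label gets its own independent sigmoid, any number of labels can be positive at once, which is exactly what multilabel classification needs.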
How to compute accuracy using TensorFlow.
This depends on your problem and what you want to achieve. If you don't want to miss any object in an image, then even when the classifier gets everything right except one object, you should count the whole image as an error. Alternatively, you can count each missed or misclassified object as one error. The latter view is, I believe, what sigmoid_cross_entropy_with_logits supports.
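Both accuracy notions can be computed in a few lines; here is a NumPy sketch with made-up predictions and labels (in TensorFlow the same comparisons can be done with tf.equal and tf.reduce_mean):

```python
import numpy as np

# Hypothetical results for 3 examples x 5 labels
y_true = np.array([[1, 1, 0, 0, 1],
                   [0, 1, 0, 1, 0],
                   [1, 0, 0, 0, 0]])
y_pred = np.array([[1, 1, 0, 0, 1],
                   [0, 1, 1, 1, 0],   # one extra label predicted here
                   [1, 0, 0, 0, 0]])

# Strict view: an example counts as correct only if every label matches
exact_match = (y_pred == y_true).all(axis=1).mean()

# Lenient view: every label counts independently
per_label = (y_pred == y_true).mean()
```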
How to set a threshold which judges whether a label is positive or negative. For instance, if the output is [0.80, 0.43, 0.21, 0.01, 0.32] and the ground truth is [1, 1, 0, 0, 1], the labels with scores over 0.25 should be judged as positive.
A threshold is one way to go; you have to decide which value to use. But that is something of a hack, not true multilabel classification. For that you need the functions I mentioned above.
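If you do go with thresholding, it is a one-liner on the example from the question (0.25 is the asker's threshold; in practice you would tune it on a validation set):

```python
import numpy as np

scores = np.array([0.80, 0.43, 0.21, 0.01, 0.32])  # model outputs from the question
truth = np.array([1, 1, 0, 0, 1])                   # ground truth from the question
threshold = 0.25

pred = (scores > threshold).astype(int)  # labels scoring above 0.25 become positive
```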