I am new to TensorFlow and machine learning. I am having trouble writing TensorFlow code that does text classification similar to what I tried using the scikit-learn libraries.
If you want to achieve competitive scores, I'd rather use an embedder. Natural language is very high-dimensional, and nowadays there are a lot of pretrained architectures. So you simply encode your text into a latent space and then train your model on those features. It's also much easier to apply resampling techniques once you have a numerical feature vector.
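The encode-then-train workflow can be sketched like this. As a stand-in for a pretrained embedder, this minimal example uses scikit-learn's `TfidfVectorizer` (with LASER or BERT you'd swap in their embedding call); the point is only that the classifier on top sees nothing but numerical feature vectors. The toy corpus and labels are made up for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus; 1 = positive, 0 = negative (illustrative data only).
texts = ["great movie, loved it", "terrible plot, boring",
         "fantastic acting", "waste of time"]
labels = np.array([1, 0, 1, 0])

# Step 1: encode text into a numerical feature space.
# A pretrained embedder's output would replace these TF-IDF vectors.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Step 2: train any classifier (or a TensorFlow model) on those features.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["fantastic acting"])))
```

Because step 2 only ever sees vectors, you can also slot in oversampling or undersampling between the two steps without touching the text itself.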
Myself, I mostly use the LASER embedder from Facebook. Read more about it here. There's an unofficial PyPI package which works just fine. Additionally, your model will work on dozens of languages out of the box, which is quite nice.
There's also BERT from Google, but the pretrained model is rather bare, so you have to fine-tune it on your task first.