I am trying to add a Disallow rule to a robots.txt file to block access to a single page.
The website URLs work as follows:
You can't (per se). The fragment identifier is never sent to the server, and search engines generally won't run your JavaScript anyway, so they will ignore it. You can only disallow the URLs that are actually requested from the server, which arrive without fragment identifiers.
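For illustration, with a made-up path `/private-page`: a rule can only ever match the part of the URL the client actually requests, because everything from the `#` onward is stripped before the request leaves the browser.

```
User-agent: *
# Matches every request for /private-page, whatever fragment the link carried,
# because the crawler only ever asks the server for /private-page:
Disallow: /private-page
```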
Google maps hashbang URLs onto different URIs (the `_escaped_fragment_` form used by its AJAX crawling scheme). You can work out what those are (and you should know already, because that is the whole point of using hashbangs) and put them in robots.txt.
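As a sketch, assuming the page is reachable at `http://example.com/#!/private-page` (your real URL layout may differ): under the AJAX crawling scheme, Googlebot would request it as `/?_escaped_fragment_=/private-page`, and that is the form you would list:

```
User-agent: Googlebot
# The _escaped_fragment_ URL is what Googlebot actually fetches for #!/private-page:
Disallow: /?_escaped_fragment_=/private-page
```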
Hashbangs, however, are problematic at best, so I'd scrap them in favour of the History API, which lets you use sane URIs.
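A rough sketch of what that looks like (the `loadContent` helper and the `<main>` element are assumptions, not part of any particular framework):

```ts
// Minimal History API navigation sketch; loadContent() is a hypothetical renderer.
async function loadContent(path: string): Promise<void> {
  // Placeholder: a real app would fetch data or a template and render the view.
  const response = await fetch(path);            // the server must answer this URL too
  document.querySelector('main')!.innerHTML = await response.text();
}

document.addEventListener('click', (event) => {
  if (!(event.target instanceof Element)) return;
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return; // only intercept same-origin links

  event.preventDefault();
  history.pushState(null, '', link.pathname);    // address bar shows a real URI, no #!
  void loadContent(link.pathname);
});

// Keep the back and forward buttons working.
window.addEventListener('popstate', () => {
  void loadContent(location.pathname);
});
```

The point for robots.txt is that every view then has a real URI like `/private-page` that the crawler requests directly, so an ordinary `Disallow` rule can match it.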