Robots.txt deny, for a #! URL

刺人心  2020-12-21 15:01

I am trying to add a deny rule to a robots.txt file, to deny access to a single page.

The website URLs work as follows:

  • http://example.com/#!/homepage
2 Answers
  •  我在风中等你
    2020-12-21 15:41

    You can't (per se). Search engines generally won't run JavaScript, so they will ignore the fragment identifier anyway. You can only deny URLs that are actually requested from the server, and the fragment is stripped by the browser before the request is made, so those URLs never include it.

    Google does map hash bangs onto different URIs, though. You can work out what those are (and you should have done so already, since that is the whole point of using hash bangs) and list them in robots.txt.
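
    As a concrete sketch: under Google's AJAX crawling scheme (now deprecated, but the mechanism this answer refers to), a hash-bang URL such as http://example.com/#!/homepage was requested by the crawler as http://example.com/?_escaped_fragment_=/homepage. Assuming that mapping still applies to your pages, the matching robots.txt rule would look like:

        User-agent: *
        Disallow: /?_escaped_fragment_=/homepage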

    Hash bangs, however, are problematic at best, so I'd scrap them in favour of the history API, which lets you use sane URIs.
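
    A minimal sketch of that alternative, assuming the server can also serve /homepage directly; renderHomepage and the a.home-link selector are hypothetical, not from the original site:

        // Hypothetical client-side renderer (assumption for illustration).
        function renderHomepage(): void {
          document.body.innerHTML = '<h1>Homepage</h1>';
        }

        // Navigate to a real path instead of #!/homepage.
        document.querySelector('a.home-link')?.addEventListener('click', (event) => {
          event.preventDefault();
          history.pushState({ page: 'homepage' }, '', '/homepage');
          renderHomepage();
        });

        // Re-render when the user navigates back/forward.
        window.addEventListener('popstate', (event) => {
          if (event.state?.page === 'homepage') renderHomepage();
        });

    With real paths like /homepage, the crawler actually requests the URL you want to block, so a plain Disallow: /homepage rule in robots.txt works directly.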
