robots.txt allow root only, disallow everything else?


According to the Backus-Naur Form (BNF) parsing definitions in Google's robots.txt documentation, the order of the Allow and Disallow directives doesn't matter, so changing the order won't help you.

Instead, you should use the $ operator to mark the end of your path.

Test this robots.txt; I'm confident it will work for you (I've also verified it in Google Search Console):

user-agent: *
Allow: /$
Disallow: /

This will allow http://www.example.com and http://www.example.com/ to be crawled, but block everything else.

Note that the Allow directive satisfies your particular use case, but if you have index.html or default.php, those URLs will not be crawled (see the sketch below).
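To make that concrete, here is a minimal sketch (not Google's actual implementation) of how a crawler following Google-style longest-match semantics with * and $ support would evaluate these two rules; the sample paths are assumptions for illustration:

import re

# Rules from the robots.txt above, in (kind, path pattern) form.
RULES = [
    ("allow", "/$"),
    ("disallow", "/"),
]

def pattern_to_regex(pattern):
    # Translate a robots.txt path pattern into an anchored regex:
    # '*' matches any run of characters, a trailing '$' pins the end.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

def is_allowed(path):
    # Collect every rule whose pattern matches, then let the longest
    # pattern win; Allow beats Disallow when the lengths tie.
    matches = [(len(p), kind) for kind, p in RULES
               if pattern_to_regex(p).match(path)]
    if not matches:
        return True  # no rule matches -> crawling is allowed
    matches.sort(key=lambda m: (m[0], m[1] == "allow"), reverse=True)
    return matches[0][1] == "allow"

for path in ["/", "/index.html", "/about", "/blog/post-1"]:
    print(path, "->", "allowed" if is_allowed(path) else "blocked")

Running this prints that only / is allowed while /index.html, /about, and deeper paths are blocked, which is consistent with the note above about index.html.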

Side note: I'm only really familiar with Googlebot and bingbot behaviors. If there are other engines you are targeting, they may or may not have specific rules about how the directives are ordered. So if you want to be "extra" sure, you can always swap the positions of the Allow and Disallow directives; I only set them this way to debunk some of the comments.

When you look at the Google robots.txt specification, you can see that:

Google, Bing, Yahoo, and Ask support a limited form of "wildcards" for path values. These are:

  1. * designates 0 or more instances of any valid character
  2. $ designates the end of the URL

see https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt?hl=en#example-path-matches
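As a rough illustration of those two wildcards, each pattern can be read as an anchored regular expression; the sample patterns and paths below are assumptions written in the style of that example-path-matches table, not quotes from it:

import re

# Hedged mapping of robots.txt wildcard patterns to equivalent regexes.
# '*' becomes '.*' (0 or more of any character); a trailing '$' keeps
# its usual meaning of "end of the URL path".
examples = {
    "/fish*":  r"^/fish.*",
    "/*.php$": r"^/.*\.php$",
}

paths = ["/fish.html", "/fishheads/yummy.html",
         "/filename.php", "/filename.php?parameters"]

for pattern, regex in examples.items():
    for path in paths:
        verdict = "matches" if re.match(regex, path) else "does not match"
        print(pattern, verdict, path)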

Then, as eywu said, the solution is:

user-agent: *
Allow: /$
Disallow: /