Ban robots from website [closed]

Submitted 2019-12-03 15:35:59
Sharky

based on these

https://www.projecthoneypot.org/ip_46.229.164.98
https://www.projecthoneypot.org/ip_46.229.164.100
https://www.projecthoneypot.org/ip_46.229.164.101

it looks like the bot is http://www.semrush.com/bot.html

If that is indeed the robot, their page says:

To remove our bot from crawling your site simply insert the following lines to your
"robots.txt" file:

User-agent: SemrushBot
Disallow: /

Of course, that does not guarantee the bot will obey the rules. You can block it in several ways; .htaccess is one, just as you did.

You can also use this little trick: deny ANY IP address whose User-Agent string matches "SemrushBot":

Options +FollowSymlinks  
RewriteEngine On  
RewriteBase /  
SetEnvIfNoCase User-Agent "^SemrushBot" bad_user
SetEnvIfNoCase User-Agent "^WhateverElseBadUserAgentHere" bad_user
Deny from env=bad_user

This way, any other IPs the bot may use are blocked as well.
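Note that `Deny from env=` is Apache 2.2 syntax (provided by mod_access_compat on newer versions). On Apache 2.4, assuming mod_setenvif and mod_authz_core are enabled, the equivalent would be something like:

```apache
SetEnvIfNoCase User-Agent "^SemrushBot" bad_user

<RequireAll>
    # Allow everyone, except requests flagged as bad_user above
    Require all granted
    Require not env bad_user
</RequireAll>
```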

See more on blocking by User-Agent string here: https://stackoverflow.com/a/7372572/953684

I should add that if your site is brought down by a spider, it usually means you have a badly written script or a very weak server.

edit:

this line

SetEnvIfNoCase User-Agent "^SemrushBot" bad_user

matches when the User-Agent begins with the string SemrushBot (the caret ^ means "begins with"). If you want to match SemrushBot ANYWHERE in the User-Agent string, simply remove the caret so it becomes:

SetEnvIfNoCase User-Agent "SemrushBot" bad_user

The above matches when the User-Agent contains the string SemrushBot anywhere (and yes, there is no need for .*, since SetEnvIfNoCase performs an unanchored regex search).
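The difference between the anchored and unanchored patterns can be demonstrated with ordinary regular expressions, since SetEnvIfNoCase uses regex matching. A minimal sketch (the User-Agent strings below are made up for illustration):

```python
import re

# Two hypothetical User-Agent strings: one starting with the bot name,
# one containing it later in the string.
agents = [
    "SemrushBot/7.0; +http://www.semrush.com/bot.html",
    "Mozilla/5.0 (compatible; SemrushBot/7.0)",
]

# Anchored: matches only when the string *begins* with "SemrushBot".
anchored = re.compile(r"^SemrushBot", re.IGNORECASE)
# Unanchored: matches "SemrushBot" anywhere, no .* needed.
unanchored = re.compile(r"SemrushBot", re.IGNORECASE)

for ua in agents:
    print(bool(anchored.search(ua)), bool(unanchored.search(ua)))
# First agent matches both patterns; the second only the unanchored one.
```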

You are doing the right thing, BUT

you have to write that code in the .htaccess file, not in the robots.txt file.

To deny a particular search engine from crawling your site, the code should look like this:

User-agent: Googlebot
Disallow: /

This will disallow Google's crawler (Googlebot) from crawling your site.
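To extend this to every well-behaved crawler at once, the standard robots.txt wildcard rule can be used:

```
User-agent: *
Disallow: /
```

As with SemrushBot above, robots.txt is purely advisory; misbehaving bots must still be blocked at the server level.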

I would prefer the .htaccess method, by the way.
