How to stop Google from indexing my GitHub repository

Submitted by 喜夏-厌秋 on 2019-12-17 17:25:35

Question


I use GitHub to store the text of one of my web sites, but the problem is that Google indexes the text on GitHub as well, so the same text shows up both on my site and on GitHub. E.g., in this search the top hit is my site and the second hit is the GitHub repository.

I don't mind if people see the sources, but I don't want Google to index them (and maybe penalize me for duplicate content). Is there any way, besides making the repository private, to tell Google to stop indexing it?

What happens in the case of GitHub Pages? Those are sites whose source lives in a GitHub repository. Do they have the same duplication problem?

Take this search: the topmost hit leads to the Marpa site, but I don't see the source repository listed in the search results. How?


Answer 1:


GitHub's https://github.com/robots.txt file allows indexing of the blobs in the 'master' branch but disallows all other branches. So if you have no 'master' branch, Google is not supposed to index your pages.
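To check the current rules yourself, you can fetch the file directly. A minimal sketch using curl and grep; note that the exact contents of GitHub's robots.txt may have changed since this answer was written, so the grep may come back empty:

# fetch GitHub's robots.txt and show any rules mentioning 'master'
curl -s https://github.com/robots.txt | grep -i master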

How to remove the 'master' branch:

In your clone, create a new branch - let's call it 'main' - and push it to GitHub:

# create the new branch locally and switch to it
git checkout -b main
# push it to GitHub and set it as the tracking branch
git push -u origin main

On GitHub, change the default branch to 'main' (see the Settings section of your repository, or https://github.com/blog/421-pick-your-default-branch).
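If you prefer to stay on the command line, the GitHub CLI (which postdates the original answer) can change the default branch as well; a sketch, assuming gh is installed and authenticated, run from inside the clone:

# set 'main' as the repository's default branch
gh repo edit --default-branch main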

Then remove the master branch from your clone and from GitHub:

# delete the local master branch (switch to main first; you cannot delete the branch you are on)
git branch -d master
# delete the remote master branch by pushing an empty ref to it
git push origin :master
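Pushing the empty ref before the colon is the classic way to delete a remote branch; on any reasonably recent git, the more explicit equivalent is:

git push origin --delete master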

Get other people who might have already forked your repository to do the same.

Alternatively, if you'd like to financially support GitHub, you can make the repository private: https://help.github.com/articles/making-a-public-repository-private




Answer 2:


If you want to stick to the master branch, there seems to be no way around using a private repo (and upgrading your GitHub account) or using another service that offers private repos for free, such as Bitbucket.




Answer 3:


Simple answer: make your repo private.

https://help.github.com/articles/making-a-public-repository-private




Answer 4:


Short answer: yes, you can, with robots.txt.

If you want to prevent Googlebot from crawling content on your site, you have a number of options, including using robots.txt to block access to files and directories on your server.

You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything in your site, you don't need a robots.txt file (not even an empty one).
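Note that this only works on a server you control, not on github.com itself. As an example, here is a minimal sketch that writes such a file at the web root; the /sources/ path is purely hypothetical:

# create a robots.txt that blocks Googlebot from a hypothetical /sources/ directory
cat > robots.txt <<'EOF'
User-agent: Googlebot
Disallow: /sources/
EOF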

While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.

Sources:

http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93708
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449



Source: https://stackoverflow.com/questions/15844905/how-to-stop-google-indexing-my-github-repository
