seo

How to remove a URL's trailing slash in a Rails app? (from an SEO point of view)

Submitted by China☆狼群 on 2019-12-22 07:47:12

Question: To avoid duplicate content, I don't want the pages of my site to be accessible under several URLs (with or without a trailing slash). Currently, the URLs catalog/product/1 and catalog/product/1/ lead to the same page. My goal is for the second URL to redirect to the first (a 301 redirect, of course). No page of my site should be accessible with a trailing slash, except my home page / obviously. What is the best way to do this? Using .htaccess or routes.rb? How would you do
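The question weighs routes.rb against .htaccess; a third common option is Rack middleware, which works for every route at once. Below is a minimal, framework-agnostic sketch of the redirect logic (the names `TrailingSlashRedirect` and `canonical_path` are illustrative, not Rails API; in a Rails app this would be registered via the middleware stack):

```ruby
# Strip trailing slashes from any path except the home page "/".
def canonical_path(path)
  return path if path == "/"     # the home page keeps its slash
  path.sub(%r{/+\z}, "")         # drop one or more trailing slashes
end

# Rack middleware that issues a 301 redirect to the canonical path,
# preserving the query string; all other requests pass through.
class TrailingSlashRedirect
  def initialize(app)
    @app = app
  end

  def call(env)
    path   = env["PATH_INFO"]
    target = canonical_path(path)
    return @app.call(env) if target == path

    query    = env["QUERY_STRING"].to_s
    location = query.empty? ? target : "#{target}?#{query}"
    [301, { "Location" => location }, []]
  end
end
```

An .htaccess RewriteRule can achieve the same at the web-server level, before the request ever reaches Rails.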

Is it good idea to use URL names with special characters? [closed]

Submitted by 走远了吗. on 2019-12-22 06:56:46

Question: Closed. This question is opinion-based and is not currently accepting answers. Closed 5 years ago. Is it good for SEO to have URLs (page names) with non-English characters, such as Chinese names, in them? Answer 1: From an SEO perspective, GENERAL URL RULES: all URLs on a web property must follow these rules (listed in priority): 1) unique (1 URL == 1 resource), 2) permanent
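One concrete factor in this trade-off: non-ASCII path segments do not travel as-is. User agents percent-encode them as UTF-8 bytes (RFC 3986), so the readable form in the address bar is not what crawlers, logs, and shared links see. A quick illustration in Ruby:

```ruby
require "uri"

# "中文" is percent-encoded as its UTF-8 bytes when placed in a URL,
# so the human-readable form and the on-the-wire form differ.
segment = URI.encode_www_form_component("中文")
puts segment  # "%E4%B8%AD%E6%96%87"
```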

robots.txt allow all except few sub-directories

Submitted by 我与影子孤独终老i on 2019-12-22 05:53:45

Question: I want my site to be indexed by search engines, except for a few sub-directories. These are my robots.txt settings. In the root directory: User-agent: * Allow: / In each sub-directory to be excluded, a separate robots.txt: User-agent: * Disallow: / Is this the correct way, or will the root-directory rule override the sub-directory rule? Answer 1: No, this is wrong. You can't have a robots.txt in a sub-directory. Your robots.txt must be placed in the document root of your host. If you want to
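The answer's point can be illustrated with a single robots.txt at the document root; everything not disallowed is crawlable by default, so no Allow: / line is needed (the directory names below are placeholders):

```
User-agent: *
Disallow: /private/
Disallow: /drafts/
```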

Link rel=“canonical”: Should different user guide versions use the same canonical url? [closed]

Submitted by 北城余情 on 2019-12-22 05:21:47

Question: Closed. This question is opinion-based and is not currently accepting answers. Closed last year. Should two different versions of a user guide use different canonical URLs? Documentation version 1.1.0.Final: <link rel="canonical" href="http://docs.foo.org/1.1.0.Final/index.html"> Documentation version 1.2.0.Final: <link rel="canonical" href="http://docs.foo.org/1.2.0
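A sketch of the two usual patterns, using the question's URLs (the /latest/ alias in option B is hypothetical, not something the question defines):

```html
<!-- Option A: each version is its own canonical; all versions
     can be indexed side by side -->
<link rel="canonical" href="http://docs.foo.org/1.1.0.Final/index.html">

<!-- Option B: every version points at one stable preferred URL,
     e.g. a hypothetical /latest/ alias, so only it is indexed -->
<link rel="canonical" href="http://docs.foo.org/latest/index.html">
```

Which is better depends on whether readers should be able to land on old versions from search results.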

Can Search Engines Read CSS?

Submitted by 我们两清 on 2019-12-22 04:57:07

Question: I used a tag to indicate the importance of a sentence. However, it disrupted the consistency of the page style, so I changed the styling back with CSS. The result is that to visitors the page looks the same, but to search engines (SEs) it is obviously different, and that is the kind of thing SEs dislike. So my question is: can SEs read CSS, and use it to judge the whole page? If so, is my behavior acceptable to SEs? Thank you in advance! Answer 1: Yes, at least search engines try to determine whether you violate
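What the question describes, sketched (the tag and class name are assumptions, since the original post's tag was stripped): semantic emphasis markup whose default styling is neutralized in CSS, so visitors see plain text while the HTML still signals importance to crawlers:

```html
<p>This is <strong class="quiet">the key sentence</strong> of the page.</p>
<style>
  /* visually identical to the surrounding text,
     but semantically still a <strong> element */
  .quiet { font-weight: normal; }
</style>
```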

Can I use the “Host” directive in robots.txt?

Submitted by Deadly on 2019-12-22 04:37:09

Question: While searching for specific information on robots.txt, I stumbled upon a Yandex help page on the topic. It suggests that I could use the Host directive to tell crawlers my preferred mirror domain: User-Agent: * Disallow: /dir/ Host: www.myhost.com The Wikipedia article also states that Google understands the Host directive, but there wasn't much (i.e., no) information. At robotstxt.org, I didn't find anything on Host (or on Crawl-delay, as stated on Wikipedia). Is it encouraged to use the
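The Yandex form from the question, for reference. Note that Host is a non-standard, Yandex-specific directive; crawlers that don't recognize it simply ignore the line, and Yandex has since deprecated it in favor of 301 redirects to the preferred mirror:

```
User-Agent: *
Disallow: /dir/
Host: www.myhost.com
```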

Adding the CANONICAL tag to my page for SEO through code behind?

Submitted by 只谈情不闲聊 on 2019-12-22 04:24:24

Question: I am using ASP.NET with master pages, so I can't just place this link directly in the pages that reference my MasterPage: <link rel="canonical" href="http://www.erate.co.za/" /> I need to add this link in the Page_Load of each of my pages instead. How would I do this in code? I am using VB.NET, but C# will also point me in the right direction. This is how I did it for my DESCRIPTION tag in my code-behind: Dim tag As HtmlMeta = New HtmlMeta() tag.Name = "description" tag.Content = "Find or rate
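Following the same pattern as the HtmlMeta example, a code-behind sketch using HtmlLink from System.Web.UI.HtmlControls (verify the exact API against your ASP.NET version; this is an illustration, not a drop-in answer):

```vb
' Build the canonical <link> and add it to the page's <head>.
Dim canonical As New HtmlLink()
canonical.Href = "http://www.erate.co.za/"
canonical.Attributes("rel") = "canonical"
Page.Header.Controls.Add(canonical)
```

This requires the master page's <head> to be marked runat="server" so that Page.Header is available.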

When does Googlebot execute javascript?

Submitted by 主宰稳场 on 2019-12-22 03:37:11

Question: I have a few single-page web apps on multiple domains that rely heavily on JavaScript/Ajax to fetch and show content. Based on logs and search results, I can tell that Googlebot runs JavaScript on some of the domains but not on others. On some it indexes everything that is only available via JS; on others it doesn't even seem to run JS at all. Can anybody tell me how Googlebot decides what JS to run, and whether I can do anything to get it to run JS on my other domains? PS: I know that normally I
