SEO

robots.txt: how to disallow subfolders of a dynamic folder

我的梦境 submitted on 2019-12-06 12:27:53
I have URLs like these:

    /products/:product_id/deals/new
    /products/:product_id/deals/index

I'd like to disallow the "deals" folder in my robots.txt file. [Edit] I'd like to disallow this folder for the Google, Yahoo and Bing bots. Does anyone know whether these bots support the wildcard character, and so would support the following rule?

    Disallow: /products/*/deals

Also... do you have any really good tutorial on robots.txt rules? I didn't manage to find a "really" good one, so I could use one... And one last question: is robots.txt the best way to handle this, or am I better off using the "noindex" meta tag? Thanks
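For what it's worth, Googlebot, Bingbot and Yahoo's Slurp all understand the `*` wildcard in `Disallow` lines, even though it is a non-standard extension of the original robots.txt protocol. A sketch of per-bot rules for the pattern in the question (one group per crawler, since a bot uses only the most specific group that matches it):

```text
User-agent: Googlebot
Disallow: /products/*/deals

User-agent: Bingbot
Disallow: /products/*/deals

User-agent: Slurp
Disallow: /products/*/deals
```

Note that robots.txt only prevents crawling, not indexing; if the goal is to keep these pages out of search results entirely, a "noindex" meta tag (which requires the page to remain crawlable) is the more reliable tool.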

SEO: why a website's pages are not being indexed

不羁岁月 submitted on 2019-12-06 12:05:44
1. New sites get indexed slowly.
2. Low article quality: articles that are hard to read, messily laid out, or scraped from other websites are unlikely to be indexed.
3. The site is currently under a ranking penalty.
4. Spiders are not visiting the site [site configuration]: check whether the site blocks spider crawling [robots], whether any backlinks have been built, and review the server logs.
5. Many pages used to be indexed but recently none are: ruling out a penalty, the main cause is usually too few backlinks to support indexing.

Handling case 4: the site's keyword layout is fine, the content quality is fine, the content is updated on a regular schedule, and backlinks are continuously being posted on high-authority platforms, so why does the Baidu spider still not index the pages?

a. Check whether the site blocks Baidu spider crawling:

1. Inspect the site's robots.txt file:

    User-agent: *
    Disallow: /
    (blocks crawling by all search-engine spiders)

    User-agent: Baiduspider
    Disallow: /
    (blocks crawling by the Baidu spider)

    Fix:

    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-content/
    (replace "Disallow: /" with the specific directories to block)
    Allow: /
    (allow everything else)

2. In the page source code
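You can verify what a given robots.txt actually blocks for a particular spider with Python's standard-library `urllib.robotparser`; a minimal sketch using the corrected rules from above (the paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# The corrected robots.txt: block only the admin directories, allow the rest.
rules = """User-agent: *
Disallow: /wp-admin/
Disallow: /wp-content/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Baiduspider falls under the "*" group: ordinary pages are crawlable,
# the blocked directories are not.
print(parser.can_fetch("Baiduspider", "/post/123"))    # True
print(parser.can_fetch("Baiduspider", "/wp-admin/x"))  # False
```

Running this against the broken variant (`Disallow: /` under `User-agent: *`) would show `False` for every path, which is exactly the symptom described in case 4.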

Google docs ImportXML called from script

末鹿安然 submitted on 2019-12-06 11:56:05
I am using ImportXML in a Google Docs sheet to acquire data from the Sistrix API. It works fine, but I ran into the limit of 50 ImportXML commands per sheet. So I use a script that writes the ImportXML command as a temporary formula into one cell, reads the resulting value back, and copies it to the destination cell. That way you can run as many ImportXML queries as you need, since they only ever occupy one temporary cell in the sheet. The problem is that the ImportXML query SOMETIMES takes very long or returns N/A. Is it possible that my script sometimes doesn't wait for the
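The usual workaround for this is to poll the temporary cell and retry while it still shows a loading or error state, giving up after a bounded number of attempts; in Apps Script the pause between reads is `Utilities.sleep`. A sketch of just the retry logic as a plain function (the cell reader is passed in, and the sentinel strings and names are illustrative assumptions, not the Sheets API's guaranteed values):

```javascript
// Poll a cell-reading function until it returns a real value.
// readCell: () => string  (e.g. a wrapper around range.getValue() in Apps Script)
// maxTries: how many reads to attempt before giving up.
// Returns the resolved value, or null if it never resolved.
function waitForImportXml(readCell, maxTries) {
  for (let i = 0; i < maxTries; i++) {
    const value = readCell();
    // While ImportXML is unresolved the cell typically shows "", "Loading..." or "#N/A"
    if (value !== "" && value !== "Loading..." && value !== "#N/A") {
      return value;
    }
    // In Apps Script you would call Utilities.sleep(500) here before retrying
  }
  return null;
}
```

Copying the destination value only when `waitForImportXml` returns non-null avoids writing `#N/A` into the sheet when the query is simply slow.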

Error: Page contains property “query-input” which is not part of the schema

為{幸葍}努か submitted on 2019-12-06 11:40:43
I get this error from the Google Rich Snippets testing tool: Error: Page contains property "query-input" which is not part of the schema. But where did I make a mistake?

HTML:

    <div id="dkAjaxSearch">
        <input id="ajaxSearch" type="text" value="" name="search_term" itemprop="query-input">
        Press Enter to search
    </div>

JSON-LD:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "WebSite",
      "url": "https://domain.com/",
      "potentialAction": {
        "@type": "SearchAction",
        "target": "http://domain.com/search/{search_term_string}",
        "query-input": "required name=search_term
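The likely culprit is the microdata attribute, not the JSON-LD: `query-input` is only defined inside a `SearchAction`, so attaching it as an `itemprop` directly on the `<input>` element (outside any `itemscope` of type SearchAction) is what the tool flags. The usual fix is to drop the `itemprop` from the HTML and describe the search box entirely in JSON-LD. A sketch of the commonly documented shape (the domain is the question's placeholder; note that the name after `name=` must match the `{…}` placeholder in `target`):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://domain.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "http://domain.com/search/{search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```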

Remove hash (#) from URL in JHipster (both Java and Angular 6)

孤者浪人 submitted on 2019-12-06 11:27:38
Question: I am using JHipster (Spring Boot + Angular 6), but I'm having trouble because of the hash (#) in the URL; it is affecting SEO. I tried setting useHash: false in app-routing-module.ts, but then the API stops working when I run the project via npm start. I think I have to change a configuration somewhere in the Java files to remove the # from the URL. Here is my WebConfigurer code:

    @Configuration
    public class WebConfigurer implements ServletContextInitializer, WebServerFactoryCustomizer<WebServerFactory> {
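With `useHash: false`, Angular switches to the HTML5 `PathLocationStrategy`, so the server must return index.html for every non-API path, otherwise deep links and hard refreshes 404. Newer JHipster versions ship a `ClientForwardController` for exactly this; a hedged sketch of that pattern (Spring MVC, shown in isolation without the surrounding imports and security config):

```java
// Forwards any unmapped route whose last segment contains no dot
// (i.e. not a static asset like main.js, and not handled by /api/**)
// back to the client index.html, so Angular's router resolves the path.
@Controller
public class ClientForwardController {

    @GetMapping(value = "/**/{path:[^\\.]*}")
    public String forward() {
        return "forward:/";
    }
}
```

If the API breaks only under `npm start`, it is also worth checking the webpack dev-server proxy configuration, since in dev mode requests to /api are proxied to the Java backend rather than served from the same origin.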

Make SEO-friendly URLs (avoid IDs) in Zend Framework

不羁的心 submitted on 2019-12-06 11:15:37
Question: I have a URL like this: http://quickstart.local/public/category1/product2 where the numbers in category1/product2 are ids, and the categories and products are fetched from the database (note that each id is unique). I want SEO-friendly URLs like Zend Framework produces, for example: http://stackoverflow.com/questions/621380/seo-url-structure How can I convert my URL to the new form? Is there any way?

Answer 1: You could use ZF's Zend_Controller_Router_Route. For example, to make similar url to those
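Independently of the router class used, the usual pattern is to store a unique slug column next to the id and route on the slug instead of the number. The slug itself is just a normalized form of the title; a framework-agnostic sketch (shown in Python for illustration, the PHP equivalent uses the same regex replacement):

```python
import re

def slugify(title):
    """Lowercase the title, replace runs of non-alphanumerics with
    hyphens, and trim leading/trailing hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("SEO URL Structure"))  # seo-url-structure
```

The route then matches the slug segment and the controller looks the record up by slug (which must be unique, just like the id in the question).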

Search engine optimization for the KML file type

烈酒焚心 submitted on 2019-12-06 10:47:45
I have a website that generates KML files. A URI like this: /tokml?gid=2846 generates a file like this: Mt. Dana Summit Trail.kml using

    Header('Content-Disposition: inline; filename="Mt. Dana Summit Trail.kml"');

in a PHP script running on an Apache HTTP server. But a Google search on filetype:kml does not return any results from my website. I could cache all the KML files and build URIs like this: /kml/Mt. Dana Summit Trail.kml But are there any other solutions? In my experience, Google usually indexes URLs with an identifier in the query string quite well, so it seems strange that nothing
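One alternative to caching files on disk is to make the generator URLs discoverable and correctly typed: serve the KML MIME type (`Content-Type: application/vnd.google-earth.kml+xml`) alongside the Content-Disposition header, and list the /tokml URLs in an XML sitemap so the crawler finds them even if no page links to them. A minimal sitemap sketch (example.com is a placeholder for the real host):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/tokml?gid=2846</loc>
  </url>
</urlset>
```

Whether Google will then surface the URLs under filetype:kml is not guaranteed; filetype matching appears to rely on the served Content-Type and the file contents rather than on a .kml extension in the URL, so the MIME type is the part most worth checking first.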

Determine I.P. Address of Referring Site

丶灬走出姿态 submitted on 2019-12-06 09:44:32
Question: I am currently working on a marketing module that keeps track of the sites that bring traffic to our site. Is there a way to get the domain or IP address of the referring site using PHP? I believe HTTP_REFERER does not always show up in the $_SERVER global. Thanks in advance.

Answer 1: The HTTP_REFERER header has to be sent by the client's browser, so you cannot rely on it being sent. Scenarios in which it does not get sent include:
The user enters the address by hand
The user opens a link in one of the
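When the header is present, the lookup itself is straightforward: extract the hostname from the referrer URL, then resolve it to an IP. A sketch in Python for illustration (in PHP the same two steps are `parse_url($_SERVER['HTTP_REFERER'], PHP_URL_HOST)` followed by `gethostbyname`); the example URL is hypothetical:

```python
import socket
from urllib.parse import urlparse

def referrer_host(referrer):
    """Extract the hostname from a Referer URL; returns None if the
    header was absent or empty."""
    if not referrer:
        return None
    return urlparse(referrer).hostname

host = referrer_host("https://example.com/some/page")
try:
    ip = socket.gethostbyname(host)  # network DNS lookup; may fail offline
except OSError:
    ip = None
```

Note that the resolved IP is the referring site's current address, not the visitor's, and since the header is client-supplied it can also be spoofed; treat it as a hint, not a trusted value.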

Pretty Paths in Rails

本小妞迷上赌 submitted on 2019-12-06 09:35:43
Question: I have a category model and I'm routing it using the default scaffolding of resources :categories. I'm wondering if there's a way to change the paths from /category/:id to /category/:name. I added:

    match "/categories/:name" => "categories#show"

above the resources line in routes.rb, and changed the show action in the controller to do:

    @category = Category.find_by_name(params[:name])

It works, but the 'magic paths' such as link_to some_category still use the :id format. Is there a way to do
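The magic paths go through the model's `to_param` method: Rails URL helpers call `to_param` on the record when building the :id segment, so overriding it makes `link_to some_category` emit the name instead of the numeric id. A minimal sketch outside ActiveRecord (plain Ruby; in the app this override would live in app/models/category.rb):

```ruby
# URL helpers (link_to, category_path, ...) call #to_param on the record
# to fill the :id segment of the generated path.
class Category
  attr_reader :name

  def initialize(name)
    @name = name
  end

  # Emit a slugged name instead of the numeric id.
  def to_param
    name.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  end
end
```

With this in place the explicit match route becomes unnecessary: the standard resources route works, and the controller simply looks up with `Category.find_by_name(params[:id])` (the name must stay unique and URL-safe for this to be reliable).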

Can pages that make heavy use of AJAX also be search engine friendly?

我是研究僧i submitted on 2019-12-06 09:29:01
Question: I guess what I mean is: if I make a site that uses AJAX to load some content that I also want search engines to find, and I make the page work sans JavaScript (say, when JavaScript isn't present, a link goes to site.com?c=somecontent rather than calling a function with $("#content").load("somecontent.html");), will the search engine follow the non-JavaScript link and be able to index the site well? I suppose this would work if JavaScript-enabled browsers who followed a search engine link to
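What the question describes is the standard progressive-enhancement pattern: the anchor carries a real, crawlable href, and JavaScript intercepts the click to load the fragment in place. A sketch using the question's URLs (the class name and container id are illustrative):

```html
<a href="site.com?c=somecontent" class="ajax-link">Some content</a>
<div id="content"></div>

<script>
// Crawlers and no-JS browsers follow the href; JS-enabled browsers
// intercept the click and load the fragment into #content instead.
document.querySelectorAll("a.ajax-link").forEach(function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();
    // jQuery equivalent from the question: $("#content").load("somecontent.html");
    fetch("somecontent.html")
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById("content").innerHTML = html;
      });
  });
});
</script>
```

Since the crawler never runs the click handler, it indexes site.com?c=somecontent as an ordinary page, which is exactly the behavior the question is after.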