Wikipedia

QGIS: How to import SVG or raster images into Quantum GIS?

断了今生、忘了曾经 submitted on 2019-12-09 05:54:51
Question: These vector or raster files are plain files without geocoordinates. They use a lat/long projection; I want to import them into QGIS, scale them up or down, move them into the right place, and turn them into reusable, georeferenced shp or raster layers. Edit: I'm from the Wikipedia Graphic Lab > Map workshop, and we want to work more with GIS. We literally have hundreds of maps to migrate to GIS technologies.... File:Chinese_plain_5c._BC-en.svg File:Vignobles_basse_loire.svg Answer 1: Partial Solution: load
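The answer above is truncated; for the raster case, one common route (a sketch, not necessarily the answerer's method) is to attach ground control points with GDAL's Python bindings and warp the image into a georeferenced GeoTIFF that QGIS loads like any other layer. Filenames and coordinates here are hypothetical, and an SVG would need rasterizing (or tracing into a shapefile) first.

# Georeference a plain raster map with GDAL; each GCP maps a pixel/line
# position in the image to a lon/lat coordinate (all values hypothetical).
from osgeo import gdal

gcps = [
    gdal.GCP(100.0, 40.0, 0, 50, 30),    # lon, lat, z, pixel, line
    gdal.GCP(120.0, 40.0, 0, 950, 30),
    gdal.GCP(100.0, 30.0, 0, 50, 730),
]

# Attach the control points, then warp into a georeferenced GeoTIFF.
gdal.Translate("map_gcps.tif", "map.png", outputSRS="EPSG:4326", GCPs=gcps)
gdal.Warp("map_georef.tif", "map_gcps.tif", dstSRS="EPSG:4326")

QGIS's built-in Georeferencer plugin does the same job interactively, which may be more practical when the maps are handled by many different contributors.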

Why can't I fetch wikipedia pages with LWP::Simple?

南笙酒味 submitted on 2019-12-09 02:44:39
Question: I'm trying to fetch Wikipedia pages using LWP::Simple, but they're not coming back. This code: #!/usr/bin/perl use strict; use LWP::Simple; print get("http://en.wikipedia.org/wiki/Stack_overflow"); doesn't print anything. But if I use some other webpage, say http://www.google.com , it works fine. Is there some other name that I should be using to refer to Wikipedia pages? What could be going on here? Answer 1: Apparently Wikipedia blocks LWP::Simple requests: http://www.perlmonks.org/?node_id
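The block is on the default User-Agent string that LWP::Simple sends, so the usual fix is to send a descriptive one (in Perl, switch to LWP::UserAgent and call its agent() method). A sketch of the same idea in Python; the User-Agent string is made up:

# Wikipedia rejects clients that announce a default library User-Agent.
import requests

headers = {"User-Agent": "MyWikiFetcher/1.0 (contact: me@example.com)"}
resp = requests.get("https://en.wikipedia.org/wiki/Stack_overflow",
                    headers=headers, timeout=10)
print(resp.status_code)   # 200 once a descriptive User-Agent is sent
print(resp.text[:200])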

Wikipedia as part of my iOS app

为君一笑 submitted on 2019-12-08 10:04:26
Question: I would like to download information from Wikipedia into my iOS app. First, I created a simple RSS reader, but I can't download data from the wiki. Now I think I should write a parser for the wiki. What do you think about this? Any ideas? Thanks, Tomek Answer 1: In my opinion, parsing a website is never a good idea. Even the smallest change in the design of the website can break your application and make it unusable. I'd try to get to your data in an alternative way. ;-) Sandro Meier Answer 2: Scraping a
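The "alternative way" generally means the MediaWiki API rather than scraping rendered HTML, since the JSON stays stable when the site design changes. A sketch of fetching a page summary (in the app itself this would be URLSession in Swift; the article title is just an example):

# Fetch a structured plain-text summary from the Wikipedia REST API.
import requests

title = "Stack_Overflow"   # example article
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
data = requests.get(url, headers={"User-Agent": "demo/1.0"}, timeout=10).json()
print(data["extract"])     # intro text, ready to display in an app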

How to form SPARQL queries that refer to multiple resources

▼魔方 西西 submitted on 2019-12-08 09:27:09
Question: My question is a follow-up to my first question about SPARQL here. My SPARQL query results for Mountain objects are here. From those results I picked a certain object resource. Now I want to get the values of the "is dbpedia-owl:highestPlace of" records for this chosen Mountain object. That is, the names of the mountain ranges for which this mountain is the highest place. This is, as I figure, complex. Not only because I do not know the required syntax, but also because I get two objects here. One of them is Mont
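The query itself is a reverse-property pattern: the chosen mountain appears as the object of the triple rather than the subject. A sketch via Python's SPARQLWrapper against the DBpedia endpoint, using Mont Blanc as a stand-in for the resource truncated above (dbo: is the current name of the dbpedia-owl: prefix):

# Ask DBpedia which mountain ranges have this mountain as their highest place.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?range WHERE { ?range dbo:highestPlace dbr:Mont_Blanc }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["range"]["value"])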

How to get data from the MediaWiki API using AngularJS?

僤鯓⒐⒋嵵緔 submitted on 2019-12-08 08:16:28
Question: When I try to access the Wikipedia API using AngularJS $http.get(), a CORS issue occurs. Here is my code: $http.get('http://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=json&exintro=&titles=India') .success(function(data){ console.log('data' +data); }); And this is the error message: XMLHttpRequest cannot load https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=json&exintro=&titles=India. No 'Access-Control-Allow-Origin' header is present on the requested resource.
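The MediaWiki API supports anonymous CORS: append origin=* to the query string (in the $http.get() URL above) and the response carries the Access-Control-Allow-Origin header the browser is demanding. A sketch verifying that behaviour from Python:

# With origin=* the API emits the CORS header; without it, it does not.
import requests

params = {
    "action": "query", "prop": "extracts", "format": "json",
    "exintro": "", "titles": "India",
    "origin": "*",   # the fix
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params,
                    headers={"User-Agent": "demo/1.0"}, timeout=10)
print(resp.headers.get("Access-Control-Allow-Origin"))   # expect '*'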

Concurrent Python Wikipedia Package Requests

浪尽此生 submitted on 2019-12-08 07:02:43
Question: I am making a Python application which uses the Python wikipedia package to retrieve the body text of 3 different Wikipedia pages. However, I am noticing very slow performance when retrieving the articles one at a time. Is there a method I can use to retrieve the body text of 3 Wikipedia pages in parallel? Answer 1: If you want the 'raw' page you can use any Python scraping library such as twisted/scrapy. But if you are looking for the parsed wiki format, you should use pywikibot
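The answer is cut off above; independent of which library parses the pages, the slow part is network I/O, so a thread pool around the wikipedia package parallelizes the three fetches directly. A sketch (article titles are examples):

# Fetch the body text of several articles in parallel; the work is
# network-bound, so threads help despite the GIL.
from concurrent.futures import ThreadPoolExecutor
import wikipedia

titles = ["Python (programming language)", "Perl", "PHP"]

def fetch_body(title):
    return wikipedia.page(title).content   # full plain-text body

with ThreadPoolExecutor(max_workers=len(titles)) as pool:
    bodies = dict(zip(titles, pool.map(fetch_body, titles)))

for title, body in bodies.items():
    print(title, len(body))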

PHP + Wikipedia: Get content from the first paragraph in a Wikipedia article?

只谈情不闲聊 submitted on 2019-12-07 19:36:49
Question: I’m trying to use Wikipedia’s API (api.php) to get the content of a Wikipedia article given a link (like: http://en.wikipedia.org/wiki/Stackoverflow). What I want is the first paragraph (which in the example of the Stack Overflow wiki article is: Stack Overflow is a website part of the Stack Exchange network[2][3] featuring questions and answers on a wide range of topics in computer programming.[4][5][6] ). I’m going to do some data manipulation with it. I’ve tried with the
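The question is cut off before its attempts; the usual route is prop=extracts with exintro and explaintext, which returns only the plain-text lead section. A sketch in Python rather than the question's PHP (the paragraph split is an assumption about the extract's formatting):

# Get the plain-text lead of an article and keep its first paragraph.
import requests

params = {
    "action": "query", "prop": "extracts", "format": "json",
    "exintro": 1, "explaintext": 1, "titles": "Stack Overflow",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params,
                    headers={"User-Agent": "demo/1.0"}, timeout=10).json()
page = next(iter(resp["query"]["pages"].values()))
first_paragraph = page["extract"].split("\n")[0].strip()
print(first_paragraph)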

Getting Wikipedia IDs in MQL

随声附和 submitted on 2019-12-06 15:31:44
Freebase WEX dumps contain a wpid column corresponding to the page_id from the source MediaWiki database in the freebase_wpid table. This table provides a mapping between Wikipedia numeric article/redirect IDs and Freebase GUIDs (Global Unique IDs). Using GUIDs as foreign keys is deprecated in favour of MIDs for lots of good reasons, but that doesn't change the fact that GUIDs are still used at a system level, so I'm going to call mid an accessor from here on. The mid accessor is flexible in MQL: one can query using "mid": null or "mid": [] depending on whether one needs the current mid or every
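A minimal illustration of the two query shapes, written as Python dicts mirroring MQL's JSON (the topic is hypothetical; None and [] correspond to MQL's null and []):

# "mid": null returns the topic's single current mid;
# "mid": [] returns every mid that has been merged into the topic.
query_current_mid = {
    "type": "/location/mountain",
    "name": "Mont Blanc",   # hypothetical topic
    "mid": None,
}
query_all_mids = {
    "type": "/location/mountain",
    "name": "Mont Blanc",
    "mid": [],
}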

How to get the result of a complex Wikipedia template?

こ雲淡風輕ζ submitted on 2019-12-06 13:02:51
This question is a bit hard to follow, but I will do my best to explain it. First, let me present an example page: http://en.wikipedia.org/wiki/African_bush_elephant That's a Wikipedia page, a species page in particular, since it has the 'taxobox' on the right. I'm trying to parse the attributes in that taxobox using PHP. There are two ways in Wikipedia to create such a taxobox: manually, or by using the special "auto taxobox" template. I can parse the manual one: I use Wikipedia's API to return the page's content in JSON format, then I use some regular expressions to get those properties
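With the automatic taxobox, the attribute values are no longer present in the page's raw wikitext, so regexes over the source can't see them; asking the API for the rendered page instead (action=parse) returns HTML with every template already expanded. A sketch in Python rather than the question's PHP:

# Fetch the fully rendered page so the auto taxobox is expanded into HTML.
import requests

params = {"action": "parse", "page": "African_bush_elephant",
          "prop": "text", "format": "json"}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params,
                    headers={"User-Agent": "demo/1.0"}, timeout=10).json()
html = resp["parse"]["text"]["*"]   # rendered HTML, templates expanded
print(html[:300])

The taxobox rows can then be pulled out of the HTML with an HTML parser instead of regexes over wikitext.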

“Partial match” table (aka “failure function”) in KMP (on wikipedia)

倾然丶 夕夏残阳落幕 submitted on 2019-12-06 12:11:33
I'm reading the KMP algorithm on Wikipedia. There is one line of code in the "Description of pseudocode for the table-building algorithm" section that confuses me: let cnd ← T[cnd] It has the comment: (second case: it doesn't, but we can fall back). I know we can fall back, but why T[cnd]? Is there a reason? It really confuses me. Here is the complete pseudocode for the table-building algorithm: algorithm kmp_table: input: an array of characters, W (the word to be analyzed) an array of integers, T (the table to be filled) output: nothing (but during operation, it populates the table)
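As for why cnd ← T[cnd]: T[cnd] is the length of the longest proper prefix of W that is also a suffix of W[0..cnd-1], so after a mismatch it is the longest shorter candidate that could still be extended; any length between T[cnd] and cnd is provably impossible, which is exactly why the fallback is safe. The pseudocode excerpt above is truncated; a direct Python transcription of Wikipedia's table-building algorithm:

def kmp_table(w):
    # Build the KMP partial-match ("failure") table T for word w.
    t = [0] * len(w)
    t[0] = -1
    cnd = 0   # length of the candidate prefix matched so far
    pos = 2   # next table entry to fill
    while pos < len(w):
        if w[pos - 1] == w[cnd]:   # first case: the candidate prefix extends
            cnd += 1
            t[pos] = cnd
            pos += 1
        elif cnd > 0:              # second case: it doesn't, but we can fall back
            cnd = t[cnd]           # t[cnd] = longest shorter prefix still viable
        else:                      # third case: no candidate left
            t[pos] = 0
            pos += 1
    return t

print(kmp_table("ABCDABD"))   # [-1, 0, 0, 0, 0, 1, 2]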