mediawiki

PHP: connecting to the MediaWiki API and retrieving data

断了今生、忘了曾经 submitted on 2019-11-30 07:39:08
Question: I noticed there was a question somewhat similar to mine, only in C#. Let me explain: I'm very new to the whole web-services implementation, so I'm having some difficulty understanding (especially because of the vague MediaWiki API manual). I want to retrieve an entire page as a string in PHP (an XML file) and then process it in PHP (I'm pretty sure there are more sophisticated ways to parse XML files, but whatever): the Wikipedia Main Page. I tried doing $fp = fopen($url,'r');
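A minimal sketch of that idea (assuming allow_url_fopen is enabled; the title and User-Agent string below are placeholders, not from the question): fetch the page's wikitext from api.php as XML and read it with SimpleXML.

```php
<?php
// Sketch, not the asker's code: fetch one page's wikitext from the MediaWiki
// API as XML and read it with SimpleXML. Wikimedia servers expect a
// descriptive User-Agent header.
$title = 'Main Page';                                   // placeholder title
$url = 'https://en.wikipedia.org/w/api.php?action=query'
     . '&prop=revisions&rvprop=content&format=xml'
     . '&titles=' . urlencode($title);

$context = stream_context_create([
    'http' => ['user_agent' => 'MyWikiBot/0.1 (example contact)'],
]);
$xmlString = file_get_contents($url, false, $context);  // raw XML response
if ($xmlString === false) {
    die('Request failed');
}

$xml = simplexml_load_string($xmlString);
// The wikitext sits in <rev> inside <pages><page><revisions>.
foreach ($xml->query->pages->page as $page) {
    echo (string) $page->revisions->rev;
}
```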

How to do an accent- and case-insensitive search in the MediaWiki database?

陌路散爱 submitted on 2019-11-30 05:28:46
Question: Let's pretend I have these page titles in my wiki (MediaWiki 1.19.4): SOMETHIng, Sómethìng, SomêthÏng, SÒmetHínG. If a user searches for something, I want all four pages returned as the result. At the moment the only thing I could think of is this query (MySQL Percona 5.5.30-30.2): SELECT page_title FROM page WHERE page_title LIKE '%something%' COLLATE utf8_general_ci which only returns SOMETHIng. I must be on the right path, because if I search sóméthíng OR SÓMÉTHÍNG, I get SOMETHIng as
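A sketch of one possible fix (an assumption, not taken from the thread): MediaWiki stores page_title as a binary column, so applying a collation to it directly does not fold accents; converting the value to utf8 first lets an accent- and case-insensitive collation take effect.

```sql
-- Sketch, not a verified answer: convert the binary page_title to utf8 before
-- comparing, so utf8_general_ci folds both case and accents.
-- Note: the CONVERT() defeats any index on page_title, so this can be slow.
SELECT page_title
FROM page
WHERE CONVERT(page_title USING utf8) COLLATE utf8_general_ci LIKE '%something%';
```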

How to get plain text out of Wikipedia

梦想的初衷 submitted on 2019-11-29 21:02:15
I've been searching for about 2 months now to find a script that gets only the description section of a Wikipedia page. (It's for a bot I'm building, not for IRC.) That is, when I say /wiki bla bla bla it will go to the Wikipedia page for bla bla bla, get the following, and return it to the chatroom: "Bla Bla Bla" is the name of a song made by Gigi D'Agostino. He described this song as "a piece I wrote thinking of all the people who talk and talk without saying anything". The prominent but nonsensical vocal samples are taken from UK band Stretch's song "Why Did You Do It". Here is the closest I've found
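A sketch of one way to do that (an assumed approach, not the script the asker was looking for): Wikipedia's TextExtracts API can return just the plain-text lead section of an article.

```php
<?php
// Sketch: ask the TextExtracts API for only the plain-text lead section
// (exintro) of an article. The title below is a placeholder.
$title = 'Bla Bla Bla (song)';
$url = 'https://en.wikipedia.org/w/api.php?action=query'
     . '&prop=extracts&exintro=1&explaintext=1&redirects=1'
     . '&format=json&titles=' . urlencode($title);

$json = json_decode(file_get_contents($url), true);
foreach ($json['query']['pages'] as $page) {
    // 'extract' holds the lead section as plain text.
    echo isset($page['extract']) ? $page['extract'] : '';
}
```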

InnoDB: Attempted to open a previously opened tablespace

旧街凉风 submitted on 2019-11-29 19:37:15
I have been working on a problem for a few days now. Our local MediaWiki page, which sits on our box account, destroyed itself and we've been working to get it back online. Using XAMPP Control Panel v3.2.1, the errors were numerous, so we decided to update XAMPP (v3.2.2) and move the 'htdocs' and 'mysql/data' folders over to the new installation. First error: 9:50:21 AM [mysql] Attempting to start MySQL app... 9:50:22 AM [mysql] Status change detected: running 9:50:22 AM [mysql] Status change detected: stopped 9:50:22 AM [mysql] Error: MySQL shutdown unexpectedly. 9:50:22 AM [mysql] This may be due to a
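The excerpt cuts off before the full InnoDB message, so the following is only a commonly suggested diagnostic step, not a fix taken from this thread: let the damaged InnoDB instance start in forced-recovery mode long enough to dump the databases, by adding one line to XAMPP's mysql/bin/my.ini and removing it again once the dump/restore is done.

```ini
; Assumed workaround, not from the post: start InnoDB in forced-recovery mode
; so the server comes up long enough to take a mysqldump, then remove this line.
[mysqld]
innodb_force_recovery = 1
```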

Get all Wikipedia Infobox Templates and all Pages using them

纵然是瞬间 submitted on 2019-11-29 18:19:00
Question: Given a Wikipedia page like Wikipedia: Stack Overflow, there are often infoboxes (mostly at the top right of the page). DBPedia lists all these attributes as RDF triples. You can see the example at DBPedia: Stack Overflow. There you see the property dbpprop:wikiPageUsesTemplate with the value dbpedia:Template:Infobox_website, which is interesting. I want to know which Wikipedia pages use this template. How can I do that and list all pages which use the Infobox
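One way to get that list (a sketch, not taken from the question) is the MediaWiki API's list=embeddedin query, which returns the pages that transclude a given template and paginates with a continue token.

```php
<?php
// Sketch: list every article that transcludes Template:Infobox website,
// following the API's continuation parameters until the list is exhausted.
$base = 'https://en.wikipedia.org/w/api.php?action=query&list=embeddedin'
      . '&eititle=' . urlencode('Template:Infobox website')
      . '&einamespace=0&eilimit=500&format=json';

$continue = '';
do {
    $data = json_decode(file_get_contents($base . $continue), true);
    foreach ($data['query']['embeddedin'] as $page) {
        echo $page['title'], "\n";                 // one transcluding page per line
    }
    // The 'continue' block is present while more results remain.
    $continue = isset($data['continue'])
        ? '&' . http_build_query($data['continue'])
        : null;
} while ($continue);
```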

MediaWiki extension: add JavaScript in the header

*爱你&永不变心* submitted on 2019-11-29 16:34:45
Hi, my problem is that I can't load a JavaScript file on my special page extension. I tried it with addScript and some other methods, but the only thing that happened was that the JavaScript was cancelled because of the MediaWiki software's no-js handling. In my extension's folder there is a new.js file which I want to load only on my special page. Here is some code (most of it from the special-pages example). MyExtension.php <?php if (!defined('MEDIAWIKI')) { echo <<<EOT To install my extension, put the following line in LocalSettings.php: require_once( "$IP/extensions/MyExtension/MyExtension.php
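A sketch of the ResourceLoader approach (assuming MediaWiki 1.18 or later; the module name, class name, and paths are placeholders based on the question): register new.js as a module in the extension's setup file and queue it only from the special page, so it is served by ResourceLoader instead of being dropped.

```php
<?php
// Sketch with placeholder names, not the asker's code.

// In MyExtension.php: register new.js as a ResourceLoader module.
$wgResourceModules['ext.myExtension'] = array(
    'scripts'       => 'new.js',
    'localBasePath' => dirname( __FILE__ ),
    'remoteExtPath' => 'MyExtension',
);

// In the SpecialPage subclass: queue the module only for this page.
class SpecialMyExtension extends SpecialPage {
    public function execute( $par ) {
        $this->setHeaders();
        // ResourceLoader serves new.js in the page output.
        $this->getOutput()->addModules( 'ext.myExtension' );
    }
}
```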

Exporting and importing images in MediaWiki

自闭症网瘾萝莉.ら submitted on 2019-11-29 13:26:46
Question: How do I export and import images from and into a MediaWiki? Answer 1: There is no automatic way to export images the way you export pages; you have to right-click on them and choose "Save image". To get the history of the image page, use the Special:Export page. To import images, use the Special:Upload page on your wiki. If you have lots of them, you can use the Import Images script. Note: you generally have to be in the sysop group to upload images. Answer 2: Terminal solutions MediaWiki administrator,
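The "Import Images script" mentioned in the first answer is a MediaWiki maintenance script run from the command line; a minimal invocation might look like this (the directory path is a placeholder, and shell access to the wiki server is assumed).

```bash
# Bulk-import every image file found in the given folder into the wiki.
php maintenance/importImages.php /path/to/images
```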

Android Wikipedia API game

大兔子大兔子 submitted on 2019-11-29 13:11:18
Hi, I have to make an app with the following requirement: When the user opens the app, it displays the text from a random Wikipedia page. (You're free to use any logic for grabbing text from a random wiki page, preferably using REST APIs.) The game requires a minimum of 10 lines of text on the screen. However, we want to show complete paragraphs of text to make it easier to understand the content displayed. Use the least number of paragraphs required to cross the 10-sentence limit. I am able to get text from a random wiki page, but many times the text is less than 10 sentences, and to ensure a minimum of 10
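One possible approach, shown here as a PHP sketch only for brevity since the REST call itself is language-agnostic and works from any Android HTTP client (the sentence-counting regex is a naive assumption): request random plain-text extracts and keep appending whole paragraphs until at least 10 sentences have accumulated.

```php
<?php
// Sketch: fetch random-article extracts and keep whole paragraphs until the
// collected text reaches at least 10 sentences (naive count on . ! ?).
function fetchRandomExtract() {
    $url = 'https://en.wikipedia.org/w/api.php?action=query&generator=random'
         . '&grnnamespace=0&prop=extracts&explaintext=1&format=json';
    $data = json_decode(file_get_contents($url), true);
    $page = reset($data['query']['pages']);     // single random page
    return isset($page['extract']) ? $page['extract'] : '';
}

$paragraphs = array();
$sentences  = 0;
while ($sentences < 10) {
    foreach (explode("\n", fetchRandomExtract()) as $para) {
        if (trim($para) === '') {
            continue;
        }
        $paragraphs[] = $para;                   // only complete paragraphs
        $sentences += preg_match_all('/[.!?]+(\s|$)/', $para);
        if ($sentences >= 10) {
            break;
        }
    }
}
echo implode("\n\n", $paragraphs);
```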

How to export text from all pages of a MediaWiki?

徘徊边缘 submitted on 2019-11-29 12:15:31
Question: I have a MediaWiki running which represents a dictionary of German terms and their translation to a local dialect. Each page holds one term, its translation and a number of additional infos. Now, for a printable version of the dictionary, I need a full export of all terms and their translations. Since this is an extract of a page's content, I guess I need a complete export of all pages in their newest version in a parsable format, e.g. XML or CSV. Has anyone done that, or can you point me to a tool
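If you have shell access to the server, one way to get such an export (a sketch under that assumption; the output filename is a placeholder) is MediaWiki's dumpBackup.php maintenance script, which writes every page's latest revision as XML that can then be parsed for the term/translation pairs.

```bash
# Dump only the newest revision of every page as an XML file.
php maintenance/dumpBackup.php --current > all_pages.xml
```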

Parse a JSON string from Wikimedia using jQuery

社会主义新天地 submitted on 2019-11-29 12:13:02
I'm trying to get the infobox from wiki pages. For this I'm using the wiki API. The following is the URL from which I'm getting JSON data: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles= "+first+"&rvsection=0 where first is a variable containing the Wikipedia article title. I'm finding it extremely complex to parse this data to make meaningful HTML out of it. I was using the $.each function initially, but the nesting is so deep that I had to use it 6-7 times to get to the actual data I want. I think there must be a better alternative.
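A sketch of one way to reach the wikitext with a single lookup instead of nested $.each loops (the article title is a placeholder; callback=? makes jQuery use JSONP, which the cross-domain request needs):

```javascript
// Sketch, not the asker's code: fetch the section-0 wikitext as JSONP and
// index straight into the response instead of looping with $.each.
var first = 'Stack Overflow';   // placeholder article title
$.getJSON(
  'https://en.wikipedia.org/w/api.php?callback=?',
  {
    action: 'query',
    prop: 'revisions',
    rvprop: 'content',
    rvsection: 0,
    format: 'json',
    titles: first
  },
  function (data) {
    // data.query.pages is keyed by page id, so take the first (only) entry.
    var pages = data.query.pages;
    var pageId = Object.keys(pages)[0];
    var wikitext = pages[pageId].revisions[0]['*'];   // raw section-0 wikitext
    console.log(wikitext);
  }
);
```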