mediawiki

How to add a link in MediaWiki VisualEditor Toolbar?

Submitted by 删除回忆录丶 on 2019-11-27 17:32:53
Question: I'm trying to insert a custom link to a special page in the VisualEditor toolbar (see the image below). I googled a lot but without success. Could someone point me in the right direction?

Answer 1: My answer is based on the following resources: the MediaWiki core JS docs (ooui-js) and the VisualEditor JS docs (plus reading the code of both repositories used for VE, mediawiki/extension/VisualEditor and VisualEditor). As far as I know, there is no documented way of adding a tool to the toolbar in VE.

Parsing a Wikipedia dump

Submitted by ℡╲_俬逩灬. on 2019-11-27 14:02:42
For example, using this Wikipedia dump: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=lebron%20james&rvprop=content&redirects=true&format=xmlfm — is there an existing library for Python that I can use to create an array with the mapping of subjects and values? For example: {height_ft, 6}, {nationality, American}

It looks like you really want to be able to parse MediaWiki markup. There is a Python library designed for this purpose called mwlib. You can use Python's built-in XML packages to extract the page content from the API's response, then pass that content into mwlib's…
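As a sketch of the second half of that answer, here is how the raw wikitext might be pulled out of the API's XML response with Python's built-in `xml.etree` before handing it to a markup parser such as mwlib. The sample response below is a fabricated stand-in for illustration; the query parameters mirror the URL in the question.

```python
import urllib.parse
import xml.etree.ElementTree as ET

def build_revisions_url(title, base="https://en.wikipedia.org/w/api.php"):
    """Build the same revisions query as the URL in the question."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "content",
        "redirects": "true",
        "format": "xml",
    }
    return base + "?" + urllib.parse.urlencode(params)

def extract_wikitext(xml_text):
    """Pull the raw wikitext out of the API's XML response (<rev> element)."""
    root = ET.fromstring(xml_text)
    rev = root.find(".//rev")
    return rev.text if rev is not None else None

# Fabricated sample response, shaped like the API's XML output.
sample = """<api><query><pages><page title="LeBron James">
<revisions><rev>{{Infobox basketball biography
| height_ft = 6 | nationality = American }}</rev></revisions>
</page></pages></query></api>"""

print(extract_wikitext(sample))
```

The extracted string is wiki markup, not structured data; a template parser still has to turn the `| height_ft = 6` fields into a mapping.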

Reliably detecting PhantomJS-based spam bots

Submitted by ∥☆過路亽.° on 2019-11-27 11:35:46
Is there any way to consistently detect PhantomJS/CasperJS? I've been dealing with a spate of malicious spambots built with it and have been able to mostly block them based on certain behaviours, but I'm curious whether there's a rock-solid way to know if CasperJS is in use, as dealing with constant adaptations gets slightly annoying. I don't believe in using captchas: they are a negative user experience, and reCAPTCHA has never worked to block spam on my MediaWiki installations. As our site has no user registrations (anonymous discussion board), we'd need a captcha entry for every post. We…
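The excerpt is truncated, but the behaviour-based blocking it describes can be sketched server-side. A minimal heuristic in Python, with the caveat baked into the question: stock PhantomJS advertises itself in its User-Agent string and (unlike mainstream browsers) typically omits some common request headers, yet an adapted bot can spoof all of this, so treat it as one weak signal, never a rock-solid test. The header names and thresholds here are illustrative assumptions.

```python
def looks_like_phantomjs(headers):
    """Weak heuristic: flag requests whose headers resemble stock PhantomJS.
    Both signals are trivially spoofable by an adapted bot."""
    ua = headers.get("User-Agent", "")
    # Unmodified PhantomJS includes its own name in the default UA string.
    if "PhantomJS" in ua:
        return True
    # Real browsers virtually always send Accept-Language; headless tools
    # often do not unless explicitly configured (assumption, not a guarantee).
    if "Accept-Language" not in headers:
        return True
    return False

print(looks_like_phantomjs({"User-Agent": "Mozilla/5.0 PhantomJS/2.1.1"}))
```

In practice this kind of check is only useful as one input to scoring, alongside the behavioural signals the asker already uses.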

Get Text Content from mediawiki page via API

Submitted by 我的梦境 on 2019-11-27 11:01:14
I'm quite new to MediaWiki, and now I have a bit of a problem. I have the title of some wiki page, and I want to get just the text of that page using api.php, but all I have found in the API is a way to obtain the wiki content of the page (with wiki markup). I used this HTTP request: /api.php?action=query&prop=revisions&rvlimit=1&rvprop=content&format=xml&titles=test But I need only the textual content, without the wiki markup. Is that possible with the MediaWiki API?

I don't think it is possible using the API to get just the text. What has worked for me was to request the HTML page…
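The answer is truncated, but one common route — assuming the wiki has the TextExtracts extension installed, which Wikipedia does but a private MediaWiki may not — is `prop=extracts` with `explaintext`, which returns plain text with the markup stripped. A minimal sketch that builds the query and parses a canned JSON response offline:

```python
import json
import urllib.parse

def build_extract_url(title, base="https://en.wikipedia.org/w/api.php"):
    """Query the TextExtracts extension; explaintext strips markup to plain text."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": "1",
        "format": "json",
        "titles": title,
    }
    return base + "?" + urllib.parse.urlencode(params)

def extract_plaintext(response_json):
    """Read the 'extract' field out of the API's JSON response."""
    pages = json.loads(response_json)["query"]["pages"]
    # "pages" is keyed by page id; take the first (only) entry.
    page = next(iter(pages.values()))
    return page.get("extract", "")

# Fabricated sample response for illustration.
sample = json.dumps(
    {"query": {"pages": {"123": {"title": "test", "extract": "Plain text only."}}}}
)
print(extract_plaintext(sample))
```

If TextExtracts is unavailable, the fallback is what the answer starts to describe: fetch the rendered HTML (e.g. via `action=parse`) and strip the tags yourself.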

WordPress MediaWiki integration

Submitted by 柔情痞子 on 2019-11-27 11:00:41
Question: On the other end of the spectrum, I would be happy if I could install a wiki and share the login credentials between WordPress and the wiki. I hacked MediaWiki a while ago to share logins with another site (in Classic ASP) via session cookies, and it was a pain to do and even worse to maintain. Ideally, I would like to find a plug-in or someone who knows a more elegant solution.

Answer 1: The tutorial "WordPress, bbPress & MediaWiki" should get you on the right track to integrating MediaWiki into…

JS: Failed to execute 'getComputedStyle' on 'Window': parameter is not of type 'Element'

Submitted by 十年热恋 on 2019-11-27 03:15:49
Question: In short, I am trying to understand the meaning of this TypeError: Failed to execute 'getComputedStyle' on 'Window': parameter 1 is not of type 'Element'. The error appears while launching MediaWiki's VisualEditor, as can be seen here: http://www.wiki.org.il/index.php?title=new-page&veaction=edit The error prevents creating new pages or editing the wiki anonymously. However, with a different skin the error disappears: http://www.wiki.org.il/index.php/Main_Page?useskin=vector

How to use wikipedia api if it exists? [closed]

Submitted by 爷，独闯天下 on 2019-11-27 02:36:51
I'm trying to find out if there's a Wikipedia API (I think it is related to MediaWiki?). If so, how would I tell Wikipedia to give me an article about the New York Yankees, for example? What would the REST URL be for this example? All the docs on this subject seem fairly complicated.

You really, really need to spend some time reading the documentation, as this took me only a moment of looking and clicking on the link to figure out. :/ But out of sympathy I'll provide a link that maybe you can learn to use: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=New…
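For illustration, the truncated URL above can be assembled programmatically so the multi-word title is encoded correctly. The parameter set mirrors the answer's query; the title "New York Yankees" is taken from the question, and adding `format=json` is an assumption for easier consumption:

```python
import urllib.parse

# Parameters mirroring the answer's query URL.
params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "format": "json",
    "titles": "New York Yankees",  # spaces are percent/plus-encoded for us
}
url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
print(url)
```

Fetching that URL returns the article's raw wikitext inside a JSON envelope; parsing the markup itself is a separate step (see the mwlib discussion above).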

How to get the Infobox data from Wikipedia?

Submitted by 江枫思渺然 on 2019-11-27 01:44:59
If I have the URL to a page, how would I obtain the Infobox information on the right using MediaWiki web services?

Maybe a little late, but I wanted the same thing and didn't see any easy solutions here. However (as Bryan points out), it turns out not to be too difficult to use the MediaWiki API with this library: https://github.com/siznax/wptools

Usage:

>>> import wptools
>>> so = wptools.page('Stack Overflow').get_parse()
>>> so.infobox
{'alexa': '{{DecreasePositive}}', 'author': '[[Joel Spolsky]] and [[Jeff Atwood]]', 'caption': 'Screenshot of Stack Overflow as of February 2015', 'commercial': …
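The wptools call above pulls in an external package. For completeness, here is a rough stdlib-only sketch of extracting flat `key = value` fields from infobox wikitext once you have it (for example via the revisions query shown earlier). The regex and the sample template are illustrative assumptions; real infoboxes nest templates and span values across lines, which this deliberately does not handle.

```python
import re

def parse_infobox_fields(wikitext):
    """Very rough field extraction from an {{Infobox ...}} template.
    Handles only flat '| key = value' lines; nested templates need a real
    parser such as wptools or mwparserfromhell."""
    fields = {}
    for m in re.finditer(r"^\s*\|\s*([\w_]+)\s*=\s*(.*?)\s*$", wikitext, re.M):
        fields[m.group(1)] = m.group(2)
    return fields

# Fabricated miniature infobox for illustration.
sample = """{{Infobox website
| name = Stack Overflow
| author = [[Joel Spolsky]] and [[Jeff Atwood]]
}}"""

print(parse_infobox_fields(sample))
```

Note the values come back as raw markup (`[[Joel Spolsky]]`), exactly as in the wptools output above; turning wiki links into plain text is yet another step.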

Installing MediaWiki

Submitted by 喜欢而已 on 2019-11-26 21:08:28
MediaWiki has a large number of extensions that round out its functionality. Below are installation notes for some practical MediaWiki extensions; the list will continue to be updated. For more extensions, see the official extensions catalog.

1. Disable the firewall and SELinux to simplify testing:

[root@mediawiki ~]# systemctl status firewalld
[root@mediawiki ~]# systemctl stop firewalld
[root@mediawiki ~]# systemctl disable firewalld
[root@mediawiki ~]# cat /etc/sysconfig/selinux
# disabled - No SELinux policy is loaded.
SELINUX=disabled

2. Install httpd and MariaDB from a yum repository:

[root@mediawiki ~]# vim /etc/yum.repos.d/Mariadb.repo
[mariadb]
name = MariaDB
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.2/centos7-amd64
gpgkey = https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
