Elixir/HTML dump scraper

A new and wondrous data source has become available to Wikimedia researchers and hobbyists: semantic HTML dumps of articles for most wikis. Previously, only the bare wikitext was available for download, and it was notoriously difficult to make sense of. With the HTML dumps, standard tooling can extract many kinds of structure and information—and the original wikitext is still present as annotations.


At my day job working for Wikimedia Germany's [[metawikimedia:WMDE_Technical_Wishes|Technical Wishes]] team, we found a motivation to parse these dumps: it's the only reliable way to tally how reference footnotes are used. I'll go into some detail about why other data sources wouldn't be sufficient, because it showcases the challenges of wikitext and the relative simplicity and dependability of HTML dumps.
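To make the tallying idea concrete, here is a minimal sketch in Python using only the standard library's <code>html.parser</code>. It assumes Parsoid-style HTML, where rendered <code>&lt;ref&gt;</code> footnotes are annotated with <code>typeof="mw:Extension/ref"</code>; the sample markup below is illustrative, not taken from a real dump, and the project's actual scraper is written in Elixir.

```python
# Count reference footnotes in a Parsoid-style HTML fragment.
# Assumption: rendered <ref> tags carry typeof="mw:Extension/ref".
from html.parser import HTMLParser

class RefCounter(HTMLParser):
    """Tallies elements whose typeof attribute marks a reference."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        typeof = dict(attrs).get("typeof", "")
        if "mw:Extension/ref" in typeof:
            self.count += 1

# Illustrative sample, shaped like Parsoid output:
sample = (
    '<p>Claim one.<sup typeof="mw:Extension/ref" class="mw-ref">[1]</sup> '
    'Claim two.<sup typeof="mw:Extension/ref" class="mw-ref">[2]</sup></p>'
)
counter = RefCounter()
counter.feed(sample)
print(counter.count)  # → 2
```

Because the footnotes are ordinary DOM nodes rather than raw wikitext, a streaming parse like this avoids re-implementing template expansion or wikitext grammar just to find the references.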
[[File:Git format.png|left|frameless|64x64px]]
{{Project|status=(in progress)|url=https://gitlab.com/wmde/technical-wishes/scrape-wiki-html-dump}}

Revision as of 10:18, 25 March 2023
