Elixir/HTML dump scraper

A new and wondrous data source has become available to Wikimedia researchers and hobbyists: semantic HTML dumps of articles for most wikis. Previously, only the bare wikitext was available for download, and it was notoriously difficult to make sense of. With the HTML dumps, standard tooling can extract many kinds of structure and information, and the wikitext is still present as annotations.

At my day job working on Technical Wishes for Wikimedia Germany, we found a motivation to parse these dumps: it's the only reliable way to tally reference footnotes. I'll go into some detail about why other data sources wouldn't be sufficient, because it showcases the challenges of wikitext and the relative simplicity and dependability of HTML dumps.



What are references?

References are the little footnotes all over Wikipedia articles.[example-footnote 1] Citations ground the writing in sources, which are especially important on Wikipedia because of the rule against so-called "original research". Factual claims are supposed to be paraphrased from existing secondary sources.

  1. This is a footnote body.

A raw reference looks like <ref>This footnote.</ref>. Most references are fancier, and many rely on reusable structures called templates. Here's a short example of a template that produces a footnote: {{sfn|Hacker|Grimwood|2011|p=290}}.

Challenges of wikitext

If "{{sfn}}" were the only template then we could just search for "ref" tags and "sfn" templates in wikitext. However, a conservative search for reference-producing templates shows over 12,000 on English Wikipedia alone, and not counting those that differ on every other wiki and language.

Only the fully-rendered HTML article shows the final footnotes.
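
By contrast, counting footnotes in the rendered HTML is just a CSS selector over a parsed document. Below is a minimal sketch using the Floki library; the "sup.reference" selector and the sample markup are assumptions about what the dump HTML looks like, but the point stands: every reference, no matter which template produced it, ends up as an ordinary, findable element.

  # Counting rendered footnote markers with Floki (https://hexdocs.pm/floki).
  # The "sup.reference" selector and the sample HTML are assumptions about
  # the dump markup; adjust the selector to whatever the dumps actually emit.
  Mix.install([{:floki, "~> 0.34"}])

  html = """
  <p>A claim.<sup class="reference">[1]</sup>
  Another claim.<sup class="reference">[2]</sup></p>
  <ol class="references">
    <li id="cite_note-1">This is a footnote body.</li>
    <li id="cite_note-2">Another footnote body.</li>
  </ol>
  """

  {:ok, document} = Floki.parse_document(html)
  footnote_markers = Floki.find(document, "sup.reference")

  IO.puts("footnotes: #{length(footnote_markers)}")
  # => footnotes: 2

Run as a standalone script this prints the count for the sample markup; against a dump, the same selector would run over each article's HTML body instead.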