Contribute/High Valyrian
Things to do
Tasks on the High Valyrian section of the wiki that could use a hand are:
- adding pages for recently publicized (i.e. "new") words (see Category:High Valyrian lemmas for examples),
- adding new words to the dictionary page and the English-High Valyrian dictionary page,
- adding new senses of words to existing word pages and to the dictionary,
- adding pages for inflected forms of new words (see below, and see Category:High Valyrian non-lemma forms for examples),
- adding the dialogue from episodes of House of the Dragon to the dialogue pages,
- adding examples from Duolingo, the dialogue, and other official sources to word pages (see the template documentation for guidelines; a general guiding principle for now is to only add examples to lemmas, i.e. citation forms),
- downloading audio from DJP's work folder, editing it (i.e. removing the slow High Valyrian and English parts), uploading it to the wiki, and then adding links to the audio to the dialogue pages, as well as to examples and pronunciation sections in word entries (see Category:High Valyrian terms with audio links for examples),
- adding words to the appropriate Rhyme page and/or creating new ones,
- adding topic/set categories to existing pages using {{c|hval|...}}, and
- proofreading existing pages and correcting any errors you find.
Adding pages for inflected word forms
By User:Juelos
There are two main components I use for adding pages for inflected forms: several spreadsheets, all based on the same basic principle, to generate the pages, and Pywikibot, a Python tool for interacting with MediaWiki, to add them to the wiki. Here are the spreadsheets. Each tab corresponds to a paradigm or a subtype of a paradigm. If you don't already have a file made for a given paradigm, you must modify the number of cells and/or their content to match it. Each cell corresponds to a wiki page for an inflected word form, and each cell, i.e. wiki page, must begin with {{-start-}} and end with {{-stop-}} in order for it to work with Pywikibot later. For each citation form of a word, at least one principal part (the stem) must be provided, and sometimes more if the word is irregular in some way. How you get these is up to you, but it's easiest if you simply have a list you can copy from.
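To make the structure concrete, here is a minimal Python sketch of what such a spreadsheet computes for one paradigm. The stem, the endings, the lemma, and the entry layout are all invented placeholders, not real High Valyrian morphology; the real ones must be taken from the spreadsheets and from existing entries on the wiki.

```python
# Minimal sketch of the spreadsheet logic for one hypothetical paradigm.
# The stem, endings and entry layout below are placeholders.
STEM = "vond"
ENDINGS = {
    "o": "genitive singular",
    "i": "dative singular",
    "os": "nominative plural",
}

pages = []
for ending, grammar in ENDINGS.items():
    form = STEM + ending
    # Every page is wrapped in {{-start-}}/{{-stop-}} so Pywikibot's
    # pagefromfile script can split the file into individual pages; by
    # default it reads the page title from the first '''bolded''' text.
    pages.append(
        "{{-start-}}\n"
        f"'''{form}'''\n"
        "==High Valyrian==\n"
        "===Noun===\n"
        f"# {grammar} of [[vonda]]\n"
        "{{-stop-}}"
    )

with open("file_with_all_generated_pages.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(pages))
```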
You then copy the whole range of cells into a text document, or just one column if each row has been concatenated into one cell. I use Notepad++ to edit the text document, mainly for its regex (regular expression) search-and-replace features and for its ability to highlight and copy specific parts of the document, both of which I use a lot in the following steps. Once you have the pages for the inflected forms you wish to add, you must remove the tab characters and quotation marks that are an artifact of Excel. This is simply done with a regex search for [\t"], replacing it with nothing.
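If you prefer to script this cleanup instead of doing it in Notepad++, the same replacement is a one-liner in Python; the file name below is a placeholder.

```python
import re

# Remove the tab characters and quotation marks that Excel adds when a
# cell range is pasted as text -- the same replacement as searching for
# [\t"] and replacing with nothing in Notepad++.
PATH = "file_with_all_generated_pages.txt"  # placeholder file name

with open(PATH, encoding="utf-8") as f:
    text = f.read()

with open(PATH, "w", encoding="utf-8") as f:
    f.write(re.sub(r'[\t"]', "", text))
```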
The next step is to check which of your generated pages already exist on the wiki. Due to the limits of Pywikibot, this check can only be run against one category at a time. I have yet to come up with a solution for this, so some manual checking may be needed, especially for short words. Note that your file of generated pages should not contain two or more pages with identical names, since then only one (probably the last) will be added. This should not be a problem unless you have very similar words in the same paradigm; if you do, add the pages of inflected forms for such words in separate sessions/Pywikibot commands to avoid it. The category you will most likely want to check against is "High Valyrian terms with IPA pronunciation", or else "High Valyrian lemmas" or "High Valyrian non-lemma forms".
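The duplicate-name problem is easy to check for mechanically. Here is a small Python sketch that counts page names in the generated file; it assumes the default pagefromfile convention that the title is the first '''bolded''' text after each {{-start-}} marker, and the file name is again a placeholder.

```python
import re
from collections import Counter

# Warn about page names that occur more than once in the generated file.
with open("file_with_all_generated_pages.txt", encoding="utf-8") as f:
    titles = re.findall(r"\{\{-start-\}\}\s*'''(.*?)'''", f.read())

for title, count in Counter(titles).most_common():
    if count > 1:
        print(f"duplicate page name: {title} ({count} times)")
```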
For the check against existing pages, highlight the page names you generated, copy them and only them into a separate file, and save that file. You will also need Pywikibot installed, with your password or a bot password (safer) for your account saved in a login file (you can google Pywikibot tutorials; there are very many, and they are very detailed). Then, in the command prompt, cd to the folder where you keep your files and run Pywikibot, for example with cd pywikibot, followed by pwb.py login.
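For reference, the login file mentioned above is Pywikibot's user-config.py (plus, if you use a bot password, a user-password.py next to it). A minimal sketch follows; the family and account names are placeholders, and for a third-party wiki you would first generate a family file with pwb.py generate_family_file.

```python
# user-config.py -- minimal sketch; family and account names are
# placeholders. Pywikibot reads this file itself, so it does not need
# to be runnable on its own.
family = 'valyrianwiki'                    # from generate_family_file
mylang = 'en'
usernames['valyrianwiki']['en'] = 'MyBot'  # your (bot) account name
password_file = 'user-password.py'         # safer than a plain password
# user-password.py then holds a line like:
# ('MyBot', BotPassword('botname', 'secret'))
```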
Then you run the command that does the checking and generates the intersection of the pages you've generated and the pages already on the wiki. The command I use is pwb.py listpages -format:3 -intersect -cat:"High Valyrian terms with IPA pronunciation" -file:file_with_generated_page_names.txt. This will give you a list of pages that already exist. I then paste these into another spreadsheet to build a search term for the first file with all the pages, which I use to highlight those pages and copy them into a different text document. Then you run a second replace on the newlines (in the attached file). This will give you commands to use with Pywikibot in the command prompt, which append the part of each generated page after the pronunciation section to the corresponding existing page.
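The attached file with that second replace isn't reproduced here, but the end result is one Pywikibot command per existing page. As a rough illustration only: whether the original commands use the add_text script in exactly this form is an assumption, and SECTION_TEXT stands for the per-page text after the pronunciation section, which differs from page to page.

```python
# Rough sketch: emit one Pywikibot command per already-existing page.
# "existing_pages.txt" and SECTION_TEXT are placeholders.
with open("existing_pages.txt", encoding="utf-8") as f:
    for name in (line.strip() for line in f):
        if name:
            print(f'pwb.py add_text -page:"{name}" '
                  f'-text:"SECTION_TEXT" -summary:"Added inflected form"')
```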
You'll have to remove the final \n, which is an artifact of the process, and replace it with an actual new line. When you paste these commands into the command prompt, they will execute immediately, adding the text to the existing pages, so make sure they are correct.
Once these partial pages have been appended to the existing pages (they will only look right if the existing page is a High Valyrian term), you can do the last step, which is actually the easiest: adding the rest of the newly generated pages to the wiki. In the command prompt, enter pwb.py pagefromfile -showdiff -notitle -summary:"Created page" -file:file_with_all_generated_pages.txt. The pages that already exist, which you dealt with in the previous step, are not a problem; they will simply not be added. Let the command run, and your pages will be added.
Then repeat the process for the next batch of inflected form pages. If you are sure there are no identical page names among them, you can speed up the process by adding inflected form pages for words in several paradigms at once.