Freeware for cleaning and manipulation of text lists

These are all Windows PC freeware, with graphical user interfaces, tested and working on Windows 8.1.x. They may be useful for those who occasionally have to sort and clean and combine lists in text form, and who do not have access to paid tools such as the sophisticated TextPipe Pro or the Sobolsoft utilities, or to advanced training in Excel and regex commands.



The relatively simple:

1. Text Cleanup 2.0.

It “fixes” text automatically when you copy-paste it, according to various cleaning options you can save presets for. Its main use is to unwrap a chunk of text that has hard line-breaks, when copied to the clipboard. Or to place a new blank line between each line. This vital software only recently went freeware.

Can be used in combination with the free Clipboard Magic which keeps copies of all Clipboard text, and then allows you to “Copy all clips to clipboard”.
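For anyone curious about what the core ‘unwrap’ operation amounts to, here is a minimal Python sketch of the idea (the file names are assumptions, and the joining rule is deliberately simple; Text Cleanup offers far more options):

    # unwrap.py -- a rough sketch of the 'unwrap hard line-breaks' operation.
    # Assumption: a blank line marks a real paragraph break; everything else
    # within a paragraph is re-joined with single spaces.
    with open("wrapped.txt", encoding="utf-8") as f:
        text = f.read()

    paragraphs = [" ".join(p.split()) for p in text.split("\n\n") if p.strip()]

    with open("unwrapped.txt", "w", encoding="utf-8") as f:
        f.write("\n\n".join(paragraphs))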


2. List Numberer.

This can do what Notepad++ can’t yet do, and does easily what Excel only seems to manage with complex, fiddly formulas and macros. Most useful for dealing with repeated blocks of data in a list (e.g. labelling them 1234, 1234, 1234), to enable mass deletion of certain lines in a text editor.
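The repeated-block labelling it does can be sketched in a few lines of Python, for those who want to see the underlying idea (the file names and block size are assumptions):

    # cycle_number.py -- label each line 1,2,3,4, 1,2,3,4, ... so that, say,
    # 'all lines labelled line3' can later be deleted in a text editor.
    BLOCK = 4  # lines per repeating data block

    with open("list.txt", encoding="utf-8") as f:
        lines = [l.rstrip("\n") for l in f]

    with open("numbered.txt", "w", encoding="utf-8") as f:
        for i, line in enumerate(lines):
            f.write(f"line{i % BLOCK + 1}. {line}\n")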


3. Text Magician 1.3.

Various operations including append text to the start and end of each line, delete multi-line blocks between X and Y, and more. (If you have ‘.DLL missing’ problems, either go find the required .DLL file, or use Version 1.0 which does not have that problem).


4. Duplicated Finder from AKS-Labs.

Easily find and extract the duplicates from a single list. Useful for checking for the presence of a few duplicate URLs in a long list of uniques, for instance. (See also the free Duplicate Master addin for Excel).
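The check itself is simple enough to script, should you ever need it outside a GUI tool; a minimal Python sketch, assuming one URL per line in a file:

    # find_dupes.py -- report any entries that appear more than once in a list.
    from collections import Counter

    with open("urls.txt", encoding="utf-8") as f:
        counts = Counter(line.strip() for line in f if line.strip())

    for entry, n in counts.items():
        if n > 1:
            print(f"{entry}  (appears {n} times)")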


5. Excel example sheet: compare two lists and extract non-duplicates.

My free ready-made .XLS sheet for Excel, with the formula in place. The second list is a jumbled-up variant of the first, with some new additions in it. These additions are extracted and placed alongside. (Excel is not free, admittedly, but my guess is you could probably get the same formula working in Calc, LibreOffice’s Excel equivalent).
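If you would rather avoid a spreadsheet altogether, a rough Python equivalent of the compare-and-extract step might look like this (the file names are assumptions; the .XLS sheet itself does the job with a formula, not a script):

    # new_entries.py -- print items present in the second list but not the first.
    with open("list_old.txt", encoding="utf-8") as f:
        old = {line.strip() for line in f if line.strip()}

    with open("list_new.txt", encoding="utf-8") as f:
        for line in f:
            item = line.strip()
            if item and item not in old:
                print(item)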



The potentially quite complex:

1. Notepad++.

The code programmer’s text editor. Column numbering (though it can’t do what List Numberer — see above — can do); sophisticated Regex (though the more sophisticated it gets, the more difficult it is to remember and to get working); Remove blank lines (provided you can remember the menu sequence within its complex UI); and much more. Intensive research is often needed to learn how to do a particular bit of sophisticated text manipulation, and it’s also easy to overlook its most powerful features, such as per-line list bookmarking. The devs have recently fumbled a move to a different plugin structure, and thus you may need to run the latest 64-bit version alongside an older 32-bit version in order to run PythonScript and older plugins such as Multiline Search/Replace (appears under ‘Tool Bucket’), Column Sorting and Line Filter 2.


2. WildGem 1.3.

A tool for building and testing ‘regular expression’ or ‘regex’ commands. Find and replace with these commands, and see the resulting changes (if any) in real time. This software can hide some of the more common ‘regex’ snippets behind more user-friendly visual icons. Useful for instantly testing ‘regex’ command formulas you find, to see if they work, without having to wrestle with Notepad++. This is portable Windows software. In order to save your UI layout preferences, it must be run in Administrator mode.


3. CSVed.

A CSV file editor, an alternative you may prefer to behemoth software such as Excel. Move lists and sections around, split lists, add to lines. Appears to lack the ability to do column numbering for lists (for which see List Numberer, above).


4. Openview’s Index Generator 7.0.

Dedicated to creating a back-of-the-book index for a book. This one is more about the creation of a list than the cleaning of it, admittedly, but it has various filtering options along the way. The curious lack of a ‘capitalised words only’ filter makes it far less useful than its paid competitor. Asks for a donation on exit. (Note that Softpedia’s review states you “upload a document to the program”, and this wording may mislead the casual reader into thinking this is partly online cloud-linked software. It isn’t; it’s standalone Windows software).


5. There’s a free Selected HTML page-content to Markdown addon for Chrome-based browsers, and also a Markdown to BBCode converter.

May be useful if either Markdown or BBCode is easier to work with, when sorting and cleaning list-shaped content grabbed from a Web page. The latter is a self-contained JavaScript-based Web page and can work offline: just save the page locally and re-open it.


JURN pagination links fixed

In the last week or so Google has made some slight changes to the default styling templates for CSEs, resulting in the numbered pagination links at the foot of the search results becoming very small and grey. This has now been fixed on JURN, and your per-page links to more search results should now look like this. They should be far more easily selectable now, especially for touch-screen users…

My thanks to Amit Agarwal of India, for the elegant snippet of commented CSS for the .gsc-cursor-page element. If you have the same problem with your own CSE, this snippet goes in the style header of your page. Colours are controlled elsewhere, in the ‘Look & Feel’ | Customise | Refinement section of your CSE admin dashboard.

Changes may not show up until you and your users refresh your main page a few times, due to Web browser caching.

GRAFT has also had the same fix applied.

Tutorial: simple web-scraping with freeware

This tutorial and workflow is for those who want to do fairly light and basic web-scraping, of text with live links, using freeware.

You wish to copy blocks of text from the Web to your Windows clipboard, while also retaining hyperlinks as they occurred in the original text. This is an example of what currently happens…

On the Web page (where part of the sentence is a live hyperlink):
This is the text to be copied.

What usually gets copied to the clipboard:
This is the text to be copied.

Instead, it would be useful to have something like this copied to the clipboard (the link address here is only illustrative):

This is the <a href="https://www.example.com/page.html">text to be copied</a>.

Why is this needed?

Possibly your target text is a large set of linked search-results, tables-of-contents from journals, or similar dynamic content which contains HTML-coded Web links among the plain text. Your browser’s ‘View Source’ option for the Web page shows you only HTML code that is essentially impenetrable spaghetti — while this code can be copied to the clipboard, it is effectively unusable.

Some possible tools to do this:

I liked the idea and introductory videos of the WebHarvy ($99) Web browser. Basically this is a Chrome browser, but completely geared up for easy data extraction from data-driven Web pages. It also assumes that the average SEO worker needs things kept relatively simple and fast. It’s desktop software with (apparently) no cloud or subscription shackle, but it is somewhat expensive if used only for the small and rare tasks of the sort bloggers and Web cataloguers might want to do. Possibly it would be even more expensive if you needed to regularly buy blocks of proxies to use with it.

At the other end of the spectrum is the free little Copycat browser addon, but I just could not get it to work reliably in Opera, Vivaldi or Chrome. Sometimes it works, sometimes it only gets a few links, sometimes it fails completely. But if all you need is to occasionally capture text with five or six links in it, then you might want to take a look. Copycat has the very useful ability to force absolute URL paths in the copied links.

I could find no Windows-native ‘clipboard extender’ that can do this, although Word can paste live ‘blue underlined’ links from the clipboard — so it should be technically possible to code a hypothetical ‘LinkPad’ that does the same, but then converts to plain text with HTML-coded links.

My selected free tool:

I eventually found something similar to Copycat, but it works. It’s Yoo’s free Copy Markup Markdown. This is a 2019 Chrome browser addon (also works in Opera and presumably in other Chrome-based browsers). I find it can reliably and instantly capture 100 search results to the clipboard in either HTML or Markdown with URLs in place. You may want to tick “allow access to search engine results”, if you plan to run it on major engines etc. Update: it can also just copy the entire page to the clipboard in markdown, no selection needed!


Cleaning the results:

Unlike the Copycat addon, it seems the ‘Copy Markup Markdown’ addon can’t force absolute URL paths. Thus, the first thing to check on your clipboard is the link format. If it’s ../data/entry0001.html then you need to add back the absolute Web address. Any text editor like Notepad or Notepad++ can do this. In practice, this problem should only happen on a few sites.
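If you do hit relative paths, the fix can also be scripted; a minimal Python sketch using urljoin, where the base address, file names and the assumption of Markdown-style links are all placeholders for your own situation:

    # absolutise.py -- turn ../data/entry0001.html style link targets into full URLs.
    import re
    from urllib.parse import urljoin

    BASE = "https://www.example.com/results/"  # assumption: the page you copied from

    with open("clipboard.md", encoding="utf-8") as f:
        text = f.read()

    # Rewrite the target of every Markdown link that is not already absolute.
    fixed = re.sub(r"\]\((?!https?://)([^)]+)\)",
                   lambda m: "](" + urljoin(BASE, m.group(1)) + ")",
                   text)

    with open("clipboard_fixed.md", "w", encoding="utf-8") as f:
        f.write(fixed)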

You then need to filter the clipboard text, to retain only the lines you want. For instance…

Each data block looks like:

    Unwanted header text.
    This is [the hyperlinked] article title.
    Author name.
    [The hyperlinked] journal title, and date.
    Some extra unwanted text.
    Snippet.
    Oooh, look… social-media buttons! [Link] [Link] [Link] [Link]
    Even more unwanted text!

You want this snipped and cleaned to:

    Author name.
    [The hyperlinked] article title.
    [The hyperlinked] journal title, and date.

Notepad++ can do this cleaning, with a set of very complex ‘regex’ queries. But I just couldn’t get even a single one of these to work in any way, either in Replace, Search or Mark mode, with various Search Modes either enabled or disabled. The only one that worked was a really simple one — .*spam.* — which, when used in Replace | Replace All, removed all lines containing the knockout keyword. Possibly this simple ‘regex’ could be extended to include more than one keyword.
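As a sketch of that ‘more than one keyword’ idea, the simple regex can be extended with an alternation such as .*(spam|eggs|ham).*, or the same knock-out can be scripted; a minimal Python equivalent, with placeholder keywords and file names:

    # knockout.py -- drop every line containing any of several knock-out keywords.
    import re

    KEYWORDS = ["Unwanted header", "social-media", "Even more unwanted"]  # placeholders
    pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS))

    with open("clipboard.txt", encoding="utf-8") as f:
        kept = [line for line in f if not pattern.search(line)]

    with open("cleaned.txt", "w", encoding="utf-8") as f:
        f.writelines(kept)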

The fallback, for mere mortals who are not Regex Gods, is a Notepad++ plugin and a script. This takes the opposite approach — marking only the lines you want to copy out, rather than deleting lines. The script is Scott Sumner’s new PythonScript BookmarkHitLineWithLinesBeforeAndAfter.py script. (my backup screenshot). This script hides certain useful but complex ‘regex’ commands, presenting them as a simple user-friendly panel.

This script does work… but not in the current version of Notepad++. Unfortunately the new Notepad++ developers have recently changed the plugin folder structure around, for no great reason that I can see, and in a way that breaks a whole host of former plugins, or makes attempted installs of these confusing and frustrating when they fail to show up. The easiest thing to do is bypass all that tedious confusion and those fiddly workarounds, and simply install the old 32-bit Notepad++ v5.9 alongside your shiny new 64-bit version. On install of the older 32-bit version, be sure to check ‘Do not use the Appdata folder’ for plugins. Then install the Notepad++ Python Script 1.0.6 32-bit plugin (which works with 5.9, tested), so that you can run scripts in v5.9. Then install Scott Sumner’s BookmarkHitLineWithLinesBeforeAndAfter.py script, to C:\Program Files (x86)\Notepad++\plugins\PythonScript\scripts.

OK, that technical workaround diversion was all very tedious… but now that you have a working and useful version of Notepad++ installed and set up, line filtering is a simple process.

First, in Notepad++…

    Search | Find | ‘Mark’ tab | Tick ‘Bookmark line’.

This adds a temporary placeholder mark alongside lines in the list that contain keyword X…

In the case of clipboard text with HTML links, you might want to bookmark lines of text containing ‘href’. Or lines containing ‘Journal:’ or ‘Author:’. Marking these lines can be done cumulatively until you have all your needed lines bookmarked, ready to be auto-extracted into a new list.

Ah… but what if you also need to bookmark lines above and below the hyperlinks? Lines which are unique and have nothing to ‘grab onto’ in terms of keywords? Such as an article’s author name which has no author: marker? You almost certainly have captured such lines in the copy process, and the easiest way to mark them is with Scott Sumner’s new PythonScript (linked above). This has the advantage that you can also specify a set number of lines above/below the search hit which also need to be marked. Once installed, Scott’s script is found under Plugins | Python Script | Scripts, and works very simply, like any other dialogue. Using it we can mark one line above href, and two lines below…

Once you have all your desired lines bookmarked, which should only take a minute, you can then extract these lines natively in Notepad++ via…

    Search | Bookmark | ‘Copy Bookmarked lines’ (to the Clipboard).

This whole process can potentially be encapsulated in a macro, if you’re going to be doing it a lot. Perhaps not necessarily with Notepad++’s own macros, which have problems with recording plugins, but perhaps with JitBit or a similar automator. The above has the great advantage that you don’t have to enter or see any regex commands. It all sounds fiendishly complicated, but once everything’s installed and running it’s a relatively simple and fast process.
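For anyone who would rather script the whole mark-and-extract step outside Notepad++, here is a minimal Python sketch of the same idea: keep every line matching a keyword, plus a set number of lines before and after each hit. The keyword, counts and file names are assumptions:

    # keep_context.py -- a rough stand-in for the bookmark-and-copy workflow.
    KEYWORD = "href"          # assumption: a marker present in the wanted lines
    N_BEFORE, N_AFTER = 1, 2  # extra lines to keep around each hit

    with open("clipboard.txt", encoding="utf-8") as f:
        lines = f.readlines()

    keep = set()
    for i, line in enumerate(lines):
        if KEYWORD in line:
            for j in range(i - N_BEFORE, i + N_AFTER + 1):
                if 0 <= j < len(lines):
                    keep.add(j)

    with open("extracted.txt", "w", encoding="utf-8") as f:
        f.writelines(lines[i] for i in sorted(keep))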


Re-order and delete lines in data-blocks, in a list?

Scott Sumner’s script can’t skip a line and then mark a slightly later line. Thus the general capture process has likely grabbed some extra lines within the blocks, which you now want to delete. But there may be no keyword in them to grab onto. For instance…

    [The hyperlinked] article title
    Random author name
    Gibbery wibble
    Random journal title, random date

The Gibbery wibble line in each data block needs to be deleted, and yet each instance of Gibbery wibble has different wording. In this case you need either: the freeware List Numberer (quick) to add extra data to enable you to then delete only certain lines; or my recent tutorial on how to use Excel to delete every nth line in a list of data-blocks (slower). The advantage of using Excel is that you can also use this method to re-sort lines within blocks in a long list (a script sketch of the same idea follows the example below), for instance to:

    Random author name
    [The hyperlinked] article title
    Random journal title, random date
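As promised above, here is a minimal Python sketch of that block-wise re-sort, dropping the third line of each four-line block and moving the author line to the top. It assumes the list has already been cleaned into uniform blocks of four with no blank separator lines, and the file names are placeholders:

    # reorder_blocks.py -- drop line 3 of each 4-line block and re-order the rest.
    BLOCK = 4

    with open("list.txt", encoding="utf-8") as f:
        lines = [l.rstrip("\n") for l in f if l.strip()]

    out = []
    for i in range(0, len(lines) - BLOCK + 1, BLOCK):
        title, author, _unwanted, journal = lines[i:i + BLOCK]
        out += [author, title, journal, ""]   # blank line between blocks

    with open("reordered.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(out))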


Alternatives?:

Microsoft Word can, of course, happily retain embedded Web links when copy-pasting from the Web (hyperlinks are underlined in blue, and still clickable, a process familiar to many). But who wants to wrestle with that behemoth and then save to and comb through Microsoft’s bloated HTML output, just to copy a block of text while retaining its embedded links?

Notepad++ will allow you to ‘paste special’ | ‘paste HTML content’, it’s true. But even one simple link gets wrapped in 25 lines of billowing code, and there appears to be no way to tame this. Doing the same with a set of search engine results just gives you a solid wall of impenetrable gibberish.

There are also various ‘HTML table to CSV / Excel’ browser addons, but they require the data to be in an old-school table form on the Web page, which search results and similar dynamic content may not be.

There are plenty of plain ‘link grabber’ addons (LinkClump is probably the best, though slightly tricky to configure for the first time), but all they can grab is the link(s) and title. Not the link + surrounding plain-text lines of content.

There were a couple of ‘XPath-based’ extractors (extract page parts based on HTML classes and tags), but in practice I found it’s almost impossible to grab and align page elements within highly complex code, even with the help of ‘picker’ assistants. I also found an addon that would run regex on pages, Regex Scraper. But for repeating data it’s probably easier to take it to per-line Markdown and then run a regex macro on it in Notepad++.

The free ‘data scraper’ SEO addons all look very dodgy to me, and I didn’t trust a single one of them (there are about ten likely-looking ones for Chrome), even when they didn’t try to grab a huge amount of access rights. I also prefer a solution that will still go on working on a desktop PC long after such companies vanish. Using a simple browser addon, Notepad++ and Excel fits that bill. If I had the cash and the regular need, I would look at the $99 WebHarvy (there’s a 14-day free trial). The only problem there seems to be that it would need to be run with proxies, whereas the above solution doesn’t as the content is simply grabbed via copy-paste.

How to remove every nth line in a list

The list in this example is made up of repeating four-line blocks of text.

The situation:

You have a long and mostly cleaned text list that looks like this…

Random article title
Random author name
Gibbery wibble
Random journal title

Random article title
Random author name
Gibbery wibble
Random journal title

Random article title
Random author name
Gibbery wibble
Random journal title

… and you of course wish to delete all the unwanted Gibbery wibble lines. All the Gibbery wibble text is different. Indeed, there’s no keyword or repeating element in each four-line data block for a search-replace operation to grab onto. The only repeating element is the blank line that separates each data block of four lines.

So far as I can see, after very extensive searching, there’s as yet no way to deal with this in Notepad++, even with plugins and Python scripts.

The slower solution:

The more flexible but longer solution is Excel. However, the latest version of Notepad++ (not the older, 32-bit version) will let you quickly take the first and vital step. You first delete the blank lines with…

Edit | Line Operations | Remove Empty Lines

It’s far easier to delete blanks in a long list in Notepad++, rather than wrestling with complex ten-step workflows in Excel, just to do such a simple thing.

Then you copy-paste the list into a new Excel sheet. You then add these two Excel macros and run the first. Both run fine in Excel 2007. The first splits the column into chunks of 4 (if you have three lines per block, change all the 4s in the macro to 3s; if six lines, change them to 6s, and so on). Each chunk is placed into a new column on the same sheet.

You can then delete the offending Gibbery wibble row, which will run uniformly across the spreadsheet. In this example, it all runs across row 3.

The second macro is then run and this recombines all the columns back into a long list, and places the recombined list onto a new sheet.

The free ASAP Utilities for Excel can then ‘chunk’ this list back into blocks of four, enabling you to add a blank line between each block. Optionally, you can add the HTML tag for a horizontal rule.

The same core method can be used to re-sort the lines in each block, or to add numbering to each line as: | 1. 2. 3. 4. | 1. 2. 3. 4. | These operations are something Notepad++ can’t yet do.
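For those comfortable with a few lines of script, the same split / delete / recombine idea can also be sketched in Python rather than Excel (a rough stand-in for the two macros, not the macros themselves; the file names are assumptions):

    # nth_line.py -- drop the DROPth line of every BLOCK-line chunk of a list.
    BLOCK = 4   # lines per data block (after blank lines are removed)
    DROP = 3    # 1-based position of the unwanted line within each block

    with open("list_no_blanks.txt", encoding="utf-8") as f:
        lines = [l.rstrip("\n") for l in f]

    cleaned = [line for i, line in enumerate(lines) if i % BLOCK != DROP - 1]

    with open("list_cleaned.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(cleaned))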

The quicker solution:

If you need a quicker option, and don’t need to re-sort the lines in each data block in Excel, then try the Windows freeware List Numberer. As you can see below, once you’ve used this utility to run a simple operation, then a regex search back in Notepad++ (.*line3.* — used in Replace | Replace All) will clear all the unwanted lines.

How to save a multipage Web-book, or full set of journal articles, to a single PDF file.


Situation: You sometimes encounter full books online, split up into perhaps 300 or more separate HTML Web pages, each containing a bit of text from the book. You wish to re-combine this chopped-up book into a single offline PDF or ebook file, with the bits assembled in the correct order. You might want to do the same with a large journal issue. You need some Windows freeware to solve this, and don’t wish to use cloud/upload services.

Solution: A free Chrome browser plugin, and a Windows freeware utility.

Test book: Minsky’s The Society of Mind from Aurellem, with nearly 300 HTML pages. These are all linked from the main front page, which shows a linked table-of-contents.

Clicking heck!


Step 1. Install Browsec’s Link Klipper – Extract all links browser addon (for Chrome-based browsers, inc. Opera) or similar. Run it on your target Web page. Open the resulting plain-text list of extracted URL links, re-order these as needed, and then copy the list of the links you want to the Windows clipboard.

One problem you may encounter here is that the filenames may be obfuscated, as perhaps jj8er4477-j.html rather than Chapter-1.html. But it seems that Link Klipper follows the URLs in their in-page HTML sequence, and thus presents them in the list in the same order. Linkclump is a good alternative browser addon, for those who need precise and manual control of the URL capture, though it is probably a bit fiddly for the first-time user to get working in that manner.

Note that Link Klipper is meant for the SEO crowd, so it can also do Regex and can save to .CSV for sophisticated link-sorting with Microsoft Office Excel.

Step 2. The genuine Windows freeware Weeny Free HTML to PDF Converter 2.0 can then accept Link Klipper’s simple URL list. Just paste it in…

Weeny is very simple to get running and will then go fetch and save each URL in order, outputting a clean PDF for each (as if it had been saved from a good Web browser). There’s no option to select repeating parts of each page to omit, it saves all-or-nothing. It can’t process embedded videos and similar interactive/multimedia elements.

During the saving process Weeny may appear to freeze, showing ‘Not Responding’, if fed hundreds of HTML pages. However, an inspection of the output folder will show that PDFs continue to be converted and dropped into it one-by-one. Thus, even if Weeny seems to choke and crash on 300+ files, it hasn’t done so. Just let it run until it completes.

If the Link Klipper URL list was in the correct sequence, then a sort ‘By Date’ of the resulting PDF files should place the book parts in their correct order, even if the filenames were obfuscated.

We could have downloaded the pages as HTML, but in practice it’s not viable to then join them up. Inevitably, there’s some broken HTML tag somewhere in the combined file, and that causes problems in the text which start to cascade down. PDF is the more robust format.

Step 3. OK, so that’s fairly quickly and easily done. But, oh joy… you’ll now have nearly 300 PDF files, all very nice-looking… but separate! Weeny is sweet software, but not very powerful and thus it doesn’t also join the PDFs together.

If you have the full paid Adobe Acrobat (not the Reader) then you can combine these PDFs very easily (or ‘bind’ them, in Adobe-speak). Acrobat also offers the great benefit of file re-ordering by dragging.

You’re done, and the whole process should have taken ten minutes at most. If the font is not ideal for lengthy reading, the free Calibre can convert the PDF to .RTF or .DOCX for Word, HTML, and various eBook formats.


“But I don’t have the full Adobe Acrobat”:

For those who need freeware for this last step of combining the PDFs, you need to find one that offers a similar ‘re-ordering by dragging’ to Adobe Acrobat. Such freeware is not at all easy to find. Most such Windows utilities are old and use very clunky up/down buttons for re-ordering. That’s not so useful if you have a file numbered 298 that needs to be moved up to become file 1 — you’re only going to want to do that by dragging, not by clicking a button 297 times. Why might you need to re-order? Because with a big book, you almost certainly got the file order a little wrong when glancing down and editing the initial URL list in Link Klipper.

Eventually, I found the right sort of free software to do the job. DocuFreezer 3.0 is free for non-commercial use, only adding a non-obtrusive watermark “Created by free version of Docufreezer”. It’s robust and good-looking 2019 software, and it needs Microsoft .NET Framework 4.0 or higher to run (which many Windows users already have).

DocuFreezer can re-sort the imported PDF files ‘By Date’, or by some slightly fiddly dragging (a feature which seems unique among such freeware). It can even OCR the resulting PDF. You just need to remember to tell it to combine and save as a single PDF, and to do the OCR…

It’s reasonably fast, if you don’t OCR. Removing the watermark, by getting the paid Commercial version, costs $50. Even so, Docufreezer’s free version is no problem if all you want is a personal offline PDF of an ebook for reading — the watermark is quite discreetly placed on the side edge of each page in plain black lettering…

You can also see here that the embedded video, from the original HTML page, was elegantly worked around by Weeny while retaining the page’s images and font styling.


To .CBZ format:

Theoretically one could use this process to then get .JPG files, to compile offline versions of webcomics like Stand Still. Stay Silent., and other primarily visual sequential content. If you have the full Adobe Acrobat then it’s easy to save out the PDFs as big page-image .JPGs in sequence, bundle them into a .ZIP, rename the .ZIP to a .CBZ file… and you’re done.

Though you may then encounter a problem in the layout. Unlike mostly-text books, webcomics and other visuals may not fit well on a single portrait-oriented PDF page, without running over. In other words, if you need to scroll down the Web page to see the whole image, then your final PDF page-flow may not be ideal.

In practice, most PDF-to-JPG freeware utilities are not viable in this workflow. I found only a few, and they either contain third-party ‘toolbars’, just don’t install on modern Windows, or produce watermarked JPGs. They would also need to offer file re-sorting by dragging, robust batch processing, and a file mask to rename the .JPG files sequentially — because it’s important for a CBZ to have its page filenames properly sequential (0001.jpg, 0002.jpg). I’d welcome hearing of such freeware, but I don’t think it currently exists.
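If you do manage to export sequential page images by some route, the final rename-and-bundle step is at least easy to script. A minimal Python sketch, with the folder and output names as assumptions (a .CBZ is simply a renamed .ZIP):

    # make_cbz.py -- pack a folder of page images into a .CBZ with sequential names.
    import os
    import zipfile

    SRC = "pages"     # assumption: a folder of exported .JPG pages
    OUT = "book.cbz"

    # Sort however your export ordered them (here: by modification time).
    files = sorted((f for f in os.listdir(SRC) if f.lower().endswith(".jpg")),
                   key=lambda f: os.path.getmtime(os.path.join(SRC, f)))

    with zipfile.ZipFile(OUT, "w", zipfile.ZIP_STORED) as cbz:
        for i, name in enumerate(files, start=1):
            # Zero-padded names keep comic readers paging in the right order.
            cbz.write(os.path.join(SRC, name), arcname=f"{i:04d}.jpg")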

The better option then is simply to read the material online. Or, if you really need it offline, use a free open-source website ripper such as HTTrack Website Copier to make a mirror of the website, set to only save the .JPGs to your PC. This assumes that the website doesn’t have traffic-surge control or anti-ripper measures in place. But you should really be supporting the comic maker and buying their paid Kindle ebook editions.


“Ooh, does the workflow work on open access journal TOCs?”:

Yes, indeed it does. Not all open/free journals also offer PDF versions of their articles, especially those in a more magazine-like, trade journal, or blog-like format. In such a case, one can run the above quick workflow on the issue’s TOC page, thus quickly providing yourself with a portable offline single-PDF of an issue of your favourite journal, to read in the garden or at the beach. You can then run it through the free Calibre to get it to various ebook formats such as .MOBI (Kindle ereader) and .ePUB.

For a journal issue where PDF links are already present beside the HTML article links, but there are a great many PDFs, the browser addon Linkclump is your best option to grab them all.

Clicking heck, that’s a lot of PDFs in an issue! And there’s no single-volume PDF.

You can set up LinkClump to select / open / download all the PDF links (this works even with repository and OJS redirects which use /cgi/), to grab the PDFs for joining with DocuFreezer or some other free desktop PDF joiner. This method is a lot easier than fiddling around with a bulk downloader browser-addon, and picking out the PDFs from a long jumbled list of files. Or you can have LinkClump grab a list of the HTML article URLs for processing to a PDF book, with the above Klipper – Weeny – DocuFreezer workflow.


“My super-mega-combo PDF is too big”:

If your resulting PDF is too large to Send to Kindle (Amazon has a 50Mb per-file transfer limit, and many people also have very slow uplinks), then there are a couple of PDF shrinkers worth having, from the freeware but rather clunky Free PDF Compressor to the slick and easy $20 PDF Compressor V3 (I like and use the latter a lot).

The world’s remote coral reefs, mapped

Newly published, a “High-resolution habitat and bathymetry maps for 65,000 sq. km of Earth’s remotest coral reefs”. It’s a new interactive world-map of such coral reefs, where the data appears to be Attribution open access…

“the Khaled bin Sultan Living Oceans Foundation embarked on a 10-yr survey of a broad selection of Earth’s remotest reef sites — the Global Reef Expedition. [producing a] meter-resolution seafloor habitat and bathymetry maps developed from DigitalGlobe satellite imagery and calibrated by field observations.”

“We are particularly grateful to our long-standing partnership with Dr. Sam Purkis’ remote sensing lab at the NOVA Southeastern University Oceanographic Center. From the satellite acquisition process, to ground-truthing field work, to creating the habitat maps and bathymetry products, Dr. Purkis’ lab is world-class. Additionally, this magnificent web application was created by an outstanding project management team from Geographic Information Services, Inc (GISi). GIS, Inc. was an absolute pleasure to work with on this exciting project.”

Here’s my example zoom of the interactive map, down into the Red Sea…

The red dots in the top screenshot are the reefs, and the red dots in the last screenshot indicate the project’s video locations.

Added to JURN

Critical Multilingualism Studies

Journal of Universal Language

Arv : Nordic Yearbook of Folklore. It’s now supposed to go OA six months after publication (“Open access: Articles printed in ARV will also be available six months after their publication”). But it looks to me like no-one told the publisher’s webmaster, or delivered the post-2016 PDFs. The PDFs are there up to 2016, but not the links to them. So here are the direct links to the PDFs, so as to keep them alive on Google (and thus JURN)…

2018 – appears to be missing.

2017 – appears to be missing.

Arv : Nordic Yearbook of Folklore, 2016

Arv : Nordic Yearbook of Folklore, 2015

Arv : Nordic Yearbook of Folklore, 2014

Arv : Nordic Yearbook of Folklore, 2013 [and a lone Mirror]


Plant Phenomics