There doesn’t seem to be a portable PDF version, just a tablet-tastic Web site with no RSS feed. I don’t mind the lack of a PDF, but if they want to be in the newsfeeds of influencers then surely someone needs to plug in the RSS module.
Idyll is a new “friendly markup language — and an associated toolchain — that can be used to create dynamic, text-driven web pages.” Interactive diagrams in academic papers, that sort of thing…
Iceland has digitised the historical collections for all of the nation’s newspapers, newsletters and small magazines, and popped a unified search box on them. My search for a random set of likely keywords suggests the content is all in Icelandic.
A simpler slideshow-like alternative would be TimelineJS, fairly easily workable via a Google Spreadsheet template rather than WordPress. The free service imports the completed Google Spreadsheet and automatically outputs an elegant, simple side-scrolling timeline. Note however that the developers say that… “We recommend not having more than 20 slides [timeline points] for a reader to click through”, and that the Web page embedding code for “TimelineJS does not work with WordPress.com sites”.
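If you do want to embed a finished timeline on a self-hosted site, the usual route is an iframe pointing at KnightLab’s hosted embed page, keyed to your published spreadsheet. A minimal sketch in Python — note the cdn.knightlab.com URL pattern and its parameters are assumptions based on KnightLab’s published examples, so check their documentation before relying on them:

```python
# Sketch: build a TimelineJS embed iframe from a published Google Spreadsheet ID.
# The cdn.knightlab.com embed URL and its parameters are assumptions taken from
# KnightLab's published example embeds -- verify against their docs.
from urllib.parse import urlencode

def timeline_embed(spreadsheet_id: str, height: int = 650) -> str:
    """Return an <iframe> snippet embedding a TimelineJS timeline."""
    params = urlencode({"source": spreadsheet_id, "lang": "en"})
    src = f"https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?{params}"
    return (f'<iframe src="{src}" width="100%" height="{height}" '
            'frameborder="0"></iframe>')

# "SHEET_ID" stands in for the key of your published Google Spreadsheet.
print(timeline_embed("SHEET_ID"))
```

The spreadsheet has to be published to the web (File → Publish) before the embed page can read it.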
For small tablet-tastic timelines plus templates, see the $12 Responsive Timeline by Toghrool on CodeCanyon, and his Responsive Timeline WordPress version. Made in 2017, it looks good for making a short timeline that will be seen by tablet-centric clients beyond the world of education.
Omeka also has the Neatline plugin, which might be worth a look if you’re working with maps and images and time.
If you can pay a monthly fee, I see there’s also now a nice-looking commercial timeline service called Tiki Toki.
Coalition (science and technology for the study of cultural assets, often with a special focus on the interaction of microorganisms and rock art)
Hypothes.is lets visitors annotate your Web pages, via a pop-out sidebar filled with a Twitter-like stream of visitor comments/links.
It’s the perennial idea of re-inventing the classic footer comments box as a uniform annotation layer, something that has been tried many times over the past 20 years. Google ran such a tool for three years before closing it down. Such services tend to end up as dank wastelands filled with Viagra ads, troll spoor and link-rot.
But this time might be different. There are a couple of somewhat workable-looking early W3C standards (more are on the way), new options for moderation and closed group working, and an impressive range of publishers and universities are now planning to discuss how social annotation might proceed for scholarship…
“Our goal is that within three years, annotation can be deployed across much of scholarship.”
The ‘can’, rather than ‘will’, is probably there because the big publishers such as Elsevier are noticeably absent from the list of Hypothes.is academic supporters. I can’t see them liking the idea that an open commenting system is being laid over/into their content. The sidebar’s content seems to be outside the control of the page owner, so I could theoretically pitch up at an Elsevier $66 article paywall and say “there’s a free PDF of this article over at Site XYZ…”
So how does it work, at present? Imagine that someone took a Web page’s comments section from the bottom of the page, and instead put it into a standalone and uniform sidebar. Someone adding a comment also has the option to highlight a bit of text on the page, automatically hyperlinking their comment to it. Other visitors see the comments and the highlighted text. Obviously various Twitter-ish and Wiki-ish features could be added, but that’s the basic functionality.
A pop-out sidebar means that Hypothes.is can work with PDFs, and the Hypothes.is roadmap suggests that annotation of data / images / videos / ePubs could be on the way soon. So it seems Hypothes.is needs fixed browser-displayed content, located on a URL that’s never going to break — a natural fit with things like PDFs in repositories and digital libraries. But even in that relatively limited arena, who will do all the hand annotation, moderation, and link-rot checking and repair needed to keep such a service usable across a billion or more pages and documents? I somehow doubt that overworked and underpaid repository staff will be skipping through the library stacks with joy, at being told they must also become the herders of social media cats and the tamers of trolls.
“[In a sample of] 3.5 million scholarly articles published between 1997 and 2012 [there is an] alarming link rot ratio for all three corpora: 13% of arXiv, 22% of Elsevier, and 14% of PMC articles published in 2012 suffer from link rot. These numbers only increase for older articles, for example, for articles published in 2005 the corresponding numbers are 18%, 41%, and 36%.”
Aeon magazine has a very nice new Save to Instapaper drop-down on its articles, which might usefully be copied by ejournals offering articles in HTML.
Compare the blissful ease of doing this with the impossibility of saving Digital Arts magazine’s new 2015 creative trends survey article to Instapaper. Impossible because of the database-driven URL structure, which loathsomely uses ? in the URL to spawn a new page hanging off a static URL. I ended up having to copy-paste to a .txt file and then use Amazon’s Send to Kindle desktop software. I guess Digital Arts are assuming their younger clear-eyed readers are going straight to the page on an iPad, rather than needing the ‘old-eyes friendly’ font-scaling on a Kindle.
No mention of this Aeon button on the Instapaper official blog in 2014, so I’m guessing it’s custom to the editors, probably wrapped up in a WordPress widget or similar. But their little button looks like it can be fairly easily implemented in a DIV in your HTML code…
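As a sketch of what that DIV might contain: Instapaper’s publisher tools have a simple save URL that takes the page address and title as query parameters. The www.instapaper.com/hello2 endpoint and its url/title parameters below are an assumption drawn from those publisher tools, so verify against Instapaper’s current documentation before shipping anything:

```python
# Sketch of a 'Save to Instapaper' link like Aeon's, generated server-side.
# The www.instapaper.com/hello2 endpoint and its url/title parameters are an
# assumption based on Instapaper's publisher tools -- verify before relying on it.
from urllib.parse import urlencode

def instapaper_button(page_url: str, title: str) -> str:
    """Return a small <div>-wrapped 'Save to Instapaper' link for a page."""
    qs = urlencode({"url": page_url, "title": title})
    return (f'<div class="instapaper-save">'
            f'<a href="https://www.instapaper.com/hello2?{qs}">'
            'Save to Instapaper</a></div>')

print(instapaper_button("https://example.com/article", "An Article"))
```

In a WordPress theme the same thing would be a one-line template snippet, passing the permalink and post title into the query string.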
The academic world recently learned that bots can write automated gibberish and — with a little help from their fleshy minions — can have it published in mainstream peer-reviewed scientific publications. But are we prepared for what follows from the moment when bots can reliably produce writing that makes real sense and which is useful and timely enough for use in major newspapers? It’s happening already. The finances of newspapers are such that a wave of robo-journalism seems inevitable, once we have a few more advances in semantics and automated basic fact-checking. Given the current dismal state of newspaper science reporting, such new-fangled robo-news may even be slightly better than what we have now.
It follows that journal editors and publishers may soon need to add a new clause to their author guidelines, such as: “articles must be fully written by humans”. Not for fear of gibberish faux-papers, but rather because bots will be able to add sensible summaries and otherwise usefully aid in the writing of a research paper. Or we may need to develop an agreed form of simple presentation to flag up:
[bot]this section of the text was written by bots[/bot] and to embed links to the bot’s sources.
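A hypothetical sketch of how such a flagging convention might be rendered on the web — the [bot] marker syntax, the optional src attribute and the CSS class are all inventions here, purely for illustration:

```python
# Hypothetical sketch: render proposed [bot]...[/bot] markers as HTML spans,
# with an optional link to the bot's sources. The marker syntax, the src
# attribute and the 'bot-written' class are inventions for illustration only.
import re

BOT_TAG = re.compile(
    r'\[bot(?:\s+src="(?P<src>[^"]*)")?\](?P<body>.*?)\[/bot\]', re.S)

def render_bot_markers(text: str) -> str:
    """Replace [bot]...[/bot] markers with styled spans plus a source link."""
    def repl(m: re.Match) -> str:
        out = f'<span class="bot-written">{m.group("body")}</span>'
        if m.group("src"):
            out += f' <a href="{m.group("src")}">[bot source]</a>'
        return out
    return BOT_TAG.sub(repl, text)

print(render_bot_markers(
    '[bot src="https://example.org/data"]Summary here.[/bot]'))
```

A visible class plus a machine-readable source link would let both readers and indexing tools tell bot-written passages apart from the human prose around them.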
Incidentally, I’ve also often thought that the humourous LOLcat language would form a pleasing basis for identifying messages-sent-to-humans by objects embedded in The Internet of Things, clearly marking their simpler forms of communications to us as being: ‘not kreated by th humanz’. LOLcat translation systems are already available.
The Academic Book of The Future, another in the seemingly never-ending train of £500k two-year AHRC research projects, launches at The British Library on 10th February 2014. Grants of up to £450,000 are on offer.