Spain has legally mandated financial compensation to content owners for the online use of even snippets of their content. This is an “inalienable” right and applies to every content producer, which appears to effectively void Creative Commons licenses and ‘fair use’ in Spain. Even if you want to give something away free as Creative Commons, the law won’t allow it: you will always retain the “inalienable” right to suddenly demand payment for a CC-licensed work in Spain, at any time you choose. The law even forbids linking to content without payment, for anything beyond a hyperlink + minimal anchor text. Given the Spanish-speaking world’s outstanding lead in publishing open access academic journals, this seems a rather perverse position for Spain to take.
8.5% of Wikipedia articles have been written or edited by a bot.
Scholar Ninja, new from Jure Triglav…
I’ve started building a distributed search engine for scholarly literature. … What makes Scholar Ninja unique is that all of its functions (indexing, searching, and distributed server) are contained within a browser extension. [and thus hardened against censorship] “What?”, I can hear you say, “How can that be? Since when can a browser be a server?” Since 3 years ago, when the almighty WebRTC was born. … [Scholar Ninja] is completely contained within a browser extension: install it from the Chrome Web Store. … beware that this is alpha software and may break completely.
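For anyone puzzling over how a mere browser tab can answer other people’s queries, here is a minimal loopback sketch of a WebRTC data channel, the mechanism Scholar Ninja builds on. This is not Scholar Ninja’s actual code: the peer roles and query strings are invented, and both “peers” live in one page so the offer/answer signalling can be done locally (a real swarm would relay those blobs via a side channel).

```ts
// Minimal WebRTC loopback sketch (not Scholar Ninja's actual code):
// one peer "serves" answers over a data channel, the other queries it.
// Run inside an async function on an ordinary web page.
const server = new RTCPeerConnection();
const client = new RTCPeerConnection();

// The "server" peer answers whatever arrives on the channel.
server.ondatachannel = ({ channel }) => {
  channel.onmessage = (e) => channel.send(`results for: ${e.data}`);
};

// Both peers are in this page, so ICE candidates pass straight across.
server.onicecandidate = (e) => e.candidate && client.addIceCandidate(e.candidate);
client.onicecandidate = (e) => e.candidate && server.addIceCandidate(e.candidate);

const channel = client.createDataChannel("search");
channel.onopen = () => channel.send("open access");
channel.onmessage = (e) => console.log(e.data);

// Standard offer/answer handshake, done in-page for the demo; a real
// deployment would ship these description blobs via any signalling channel.
const offer = await client.createOffer();
await client.setLocalDescription(offer);
await server.setRemoteDescription(offer);
const answer = await server.createAnswer();
await server.setLocalDescription(answer);
await client.setRemoteDescription(answer);
```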
Trooclick, a new attempt at an auto-fisker for news facts, as a browser plug-in. Silly name, and still in invitation-only alpha. But it’s an interesting indication that it might be possible to make it work, with a little human curation along the way.
The July 2014 issue of Cites & Insights swings the bell-ropes at the Beall list and the DOAJ, and listens for interesting overlaps and more — with an aim of making…
“the clear case that publishers on Beall’s list are not typical of OA [open access] as a whole or of DOAJ”
Gothic networking day for postgraduates and academics at Manchester (UK), 12th July 2014. Including an afternoon of sessions on publishing academic journals in Gothic Studies.
Why we need both discoverability and long Plain English summaries (as well as short abstracts) for open academic work… “The solutions to all our problems may be buried in PDFs that nobody reads”. Admittedly, we are talking about World Bank reports, but in the ‘send a Congressman to sleep’ stakes I guess those can go head-to-head with many other academic papers.
Fluff up your resume with an internship at Nesta in London…
The Court of Justice of the European Union (CJEU) declared today that UK and European Internet users are not acting illegally when simply browsing copyrighted material online.
The equivalent of the USA’s Supreme Court established that users engaged in “Temporary acts of reproduction … which are transient or incidental” (Article 5.1 of the EU Copyright Directive) — such as files automatically copied to a Web browser’s temporary cache and displayed on screen — must not be considered to be making illegal copies. This ruling now applies throughout the UK and Europe.
Earlier this year the same court ruled that hyperlinking to public content is not illegal, and this new ruling seems like the other side of that coin.
GeoDeepDive is software that helps…
geo-scientists extract data that is buried in the text, tables, and figures of journal articles and web sites […] As of today, GeoDeepDive has processed over 36K research papers and 134K web pages
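To make the “buried data” idea concrete, here is a toy sketch of the simplest kind of extraction such a system automates: pulling unit-tagged measurements out of running text. It is nothing like GeoDeepDive’s real pipeline (which handles OCR, tables, and figures); the sample sentence and unit list are invented.

```ts
// Toy sketch of text-mining measurements from a paper's prose.
// Not GeoDeepDive's pipeline; sample sentence and units are invented.
const text =
  "The ash layer gave ages of 65.5 Ma and 66.0 Ma, with iridium at 350 ppm.";

// Match number + unit pairs, e.g. "65.5 Ma" or "350 ppm".
const measurement = /(\d+(?:\.\d+)?)\s*(Ma|ka|ppm)\b/g;

for (const [, value, unit] of text.matchAll(measurement)) {
  console.log(`${value} ${unit}`); // prints: 65.5 Ma / 66.0 Ma / 350 ppm
}
```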
FoRESEE: Future Search Engines 2014, a one day workshop in Germany, 22nd September 2014.
David Prosser at Jisc blogs on the need for action on discoverability…
… 40% of researchers kicked off their project with a trawl through the Internet for material, while only 2% preferred to make a visit to a physical library space. [yet] nearly half of all items within digitised collections are not discoverable via major search engines by their name or title [and, even worse] digitised collections become harder and harder to find over time, for a variety of complex reasons.
“Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind.”
I’d say that 27m is probably a large underestimate, given that the two engines used for the study (Google Scholar and Microsoft Academic Search) are proven to be poor at indexing open repositories and open access journals. Given a few hours of work I could probably winkle out from JURN a list of 100 “big” URLs, which together would put JURN at 25m (primarily in English) — before even starting to tally all the other URLs.
Beall has a new list, Hijacked journals: counterfeit websites that mimic or clone legitimate journals.
Open source, open access comics? The great bard of Northampton is on the job, with a little help from NESTA’s Digital R&D Fund…
“Alan Moore said in a statement: … we are assembling teams of the most cutting-edge creators in the industry and then allowing them input into the technical processes in order to create a new capacity for telling comic book stories. It will then be made freely available to all of the exciting emergent talent that is no doubt out there, just waiting to be given access to the technical toolkit that will enable them to create the comics of the future.”
Google, being evil: ceases all RSS feeds from YouTube.
A Thomson ISI / Web of Science study, “Do Open Access journals have impact?”, is reported in Nature, dated 26th May 2014. The study’s authors concluded that…
“Open Access journals [a selection of 190 titles, “core scientific publications”] can have similar impact to other journals, and prospective authors should not fear publishing in these journals merely because of their access model.”
RSS feed search, by keyword. Tip: paste in a site’s URL, then cut the query back to just the main word in the URL; it will usually find the site’s RSS feed. Or just search site:yoursite.com and it will find all the feeds from that site. Incredibly useful. For the curious, the standard convention such an engine most likely crawls for is RSS/Atom “autodiscovery”, sketched below.
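A hedged sketch of that autodiscovery convention: pages declare their feeds in <link rel="alternate"> tags, which a feed search engine can harvest. This is my illustration, not the engine’s actual method; the target URL is a placeholder, and the code assumes a browser context (fetch, DOMParser) inside an async function.

```ts
// Sketch of standard RSS/Atom feed autodiscovery: read a page's
// <link rel="alternate"> declarations. Placeholder URL; run inside
// an async function in a browser (needs fetch and DOMParser).
const res = await fetch("https://example.com/");
const html = await res.text();
const doc = new DOMParser().parseFromString(html, "text/html");

const feeds = doc.querySelectorAll(
  'link[rel="alternate"][type="application/rss+xml"], ' +
    'link[rel="alternate"][type="application/atom+xml"]'
);
feeds.forEach((link) => console.log(link.getAttribute("href")));
```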
Why having the data can sometimes be handy: the Financial Times has fisked the Piketty data on Europe…
“The FT [Financial Times] found mistakes and unexplained entries in his spreadsheets, similar to those which last year undermined the work on public debt and growth of Carmen Reinhart and Kenneth Rogoff. … For example, once the FT cleaned up and simplified the data, the European numbers do not show any tendency towards rising wealth inequality after 1970. An independent specialist in measuring inequality shared the FT’s concerns.” – Financial Times.