NASA has kindly placed over 8,000 Moon landing photos on Flickr, taken by NASA astronauts during the Apollo missions. Hi-res too, at over 4,000px. The pictures are in the Project Apollo Archive albums.
Internet Monitor Dashboard from Harvard… “explore, create, customize, and share dashboards of data visualizations about multiple facets of the Internet.” Looks like a sort of citizen-accessible global cyber-attack early-warning system, with attacks broadly defined as everything from Wikipedia edits and news reports through to the number and scale of direct network attacks.
One more reason to uninstall Flash…
“Most programmes on [the BBC TV and Radio online] iPlayer ‘should’ be available in HTML5 from today.”
Interesting new paper at PLOS One, “The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching”.
Test searches were drawn from review papers…
“…chosen as they covered a diverse range of topics in environmental management and conservation, and included interdisciplinary elements relevant to public health, social sciences and molecular biology.”
… and compared alongside Web of Science results…
Surprisingly, we found relatively little overlap between Google Scholar and Web of Science (10–67% of WoS results were returned using searches in Google Scholar using title searches).
Unsurprisingly, Google Scholar wasn’t found to be the one-stop shop many assume it to be…
“… some important evidence was not identified at all by Google Scholar … [so it] should not be used as a standalone resource in evidence-gathering exercises such as systematic [literature] reviews.”
Interesting finding also that…
“‘Peak’ grey literature content (i.e. the point at which the volume of grey literature per page of search results was at its highest, and where the bulk of grey literature is found) occurred [in Google Scholar] on average at page 80 (± 15 SD) for full-text results … page 35 (± 25 SD) for title [search] results.”
So this suggests that one might usefully flick through to result 700 (of 1,000) and work a few hundred results onward from there, if seeking grey literature with a very well-formed topic search? By well-formed I mean the sort of sophisticated literature-review style of search-term chaining used in this study, for example…
“oil palm” AND tropic* AND (diversity OR richness OR abundance OR similarity OR composition OR community OR deforestation OR “land use change” OR fragmentation OR “habitat loss” OR connectivity OR “functional diversity” OR ecosystem OR displacement)
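The page-to-result arithmetic behind that "start around result 700" suggestion is easy to sketch. Assuming Google Scholar's standard ten results per page (my assumption, not something the paper states), the window covering the peak page plus or minus one standard deviation works out roughly as follows:

```python
PER_PAGE = 10  # assumed: Google Scholar's standard results-per-page

def peak_result_window(mean_page, sd_pages, per_page=PER_PAGE):
    """Return (first, last) result numbers spanning mean page +/- 1 SD."""
    first = (mean_page - sd_pages - 1) * per_page + 1
    last = (mean_page + sd_pages) * per_page
    return first, last

# Figures from the study: page 80 (SD 15) full-text, page 35 (SD 25) title-only.
print(peak_result_window(80, 15))  # (641, 950)
print(peak_result_window(35, 25))  # (91, 600)
```

So for full-text searches the bulk of the grey literature should sit somewhere between results 641 and 950, which squares with starting at about result 700.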
It appears that the researchers only auto-extracted “citation records” from the search results, and then classified these into broad categories based on those records alone. There appears to have been no checking of link validity, nor any downloading and scrutiny of PDFs. So there are no measurements of how many of Google Scholar’s links work, or lead to free non-paywalled fulltext articles.
Lastly, I noted…
“Google Scholar has a low threshold for repetitive activity that triggers an automated block to a user’s IP address (in our experience the export of approximately 180 citations or 180 individual searches). Thankfully this can be readily circumvented with the use of IP-mirroring software such as Hola (https://hola.org/).”
Has it leaked? is a rather nice specialist search tool for free content, from Sweden. Focussed on forthcoming arty music albums, it basically saves fans the task of tracking down the tracks, snippets, “making of…” material etc. that the official marketeers ‘leak’ for free in advance of an album, or during its release window. It’s not a pirate site, though, and firmly states: “No download links are allowed!”.
I’d say there’s room in the market for something similar for all quality non-fiction books, perhaps in partnership with a book-summary service like Blinkist, and with user-configurable topic filters.
Why would such a site be needed? Here’s an instance of the limited way in which current mega-services group versions or offer previews. Look up the new Matt Ridley book The Evolution of Everything: How New Ideas Emerge on Amazon UK, and only two options appear for the audiobook: free with an Audible direct-debit subscription, or a £30 pre-order with a wait until November for delivery. Even then the audiobook pages are not linked from the print book page, so someone landing on the print page via a Web search would have no clue there was even an audiobook version. There is no mention at all on Amazon UK that it’s actually available now for £13 on the Audible UK site, or that a free 13-minute extract from the audiobook’s introduction is available via the publisher on SoundCloud. Only deep searching surfaced that free extract.
The above suggests that two mega-services (Amazon and Audible) and a mega-publisher (Harper) can’t even co-ordinate promo material and version offers for a major book in the globally important UK market. So I’d say there’s a lot of scope for savvy curators to do it for them, also adding author podcast links, newspaper book review links etc.
University Press Redux is the first UK conference on the state and future of the UK university presses. 16th and 17th March 2016 in Liverpool.
The Adblock Browser has launched for mobile devices (Android and iOS). DuckDuckGo is its default search engine.
This may be handy for some people: how to remove your fulltext PDF from ResearchGate while leaving the record standing. Finding your way to the delete function doesn’t seem very intuitive…
A fine short blog post by Manu Saunders on the historical ecology data latent in art history.
Tree of Life, a rough first-try at merging the available data on the relationships of the 2.3 million known and named species on Earth…
“According to a survey of more than 7,500 phylogenetic research papers published between 2000 and 2012, only one out of six studies came with a digital, downloadable format of the data. … Many of the evolutionary trees that have been published are only available as PDFs and other image files that can’t be entered into a database or merged with other trees.”
Have you noticed the rise of UTM tracking tokens in URLs? An increasing amount of extra text is being appended to links, usually meant to tell marketeers how a link was found. At its simplest it might look something like…
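For illustration (the address and token values here are hypothetical, not taken from any real campaign), a tagged link carries extra utm_* query parameters, and a few lines of Python are enough to strip them out, much as the URL-cleaning browser add-ons mentioned below do automatically:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_utm(url):
    """Remove utm_* tracking parameters, leaving the rest of the URL intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

# A hypothetical tagged link, at its simplest:
tagged = ("https://example.com/article?id=42"
          "&utm_source=newsletter&utm_medium=email&utm_campaign=oct15")
print(strip_utm(tagged))  # https://example.com/article?id=42
```

The utm_source / utm_medium / utm_campaign names are the standard Google Analytics campaign parameters; anything after them is pure tracking freight and can be deleted without changing the page the link reaches.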
Anyone not web-savvy who then shares the URL also unwittingly reveals to the world how they found that URL, unless the tracking is cloaked as gibberish numbers.
Anyway, urlHosted has spotted the potential of this misuse of URLs to enable a new server-less communication method…
“urlHosted is an experimental web app that misuses the part after the “#” of a URL to store and read data. … This means this app neither stores nor sends any of your data to any server. … [then] Whenever you visit the site [that has] payload data in the URL, the [urlHosted browser] app renders that data as a [text] article.”
One would still have to pass a clickable link somehow, so I’m not sure how useful this would actually be to anyone in its current form. I guess at its most clandestine urlHosted might work something like this: Bill places a time-limited message-URL in an old post on his blog, then casually refers to the title of that post (without linking to it, or even mentioning his blog) in an email to Susan. Bill and Susan both know that this mention means she should check his blog within the next 12 hours, find the post, and click a URL there that has been temporarily altered to contain a message. urlHosted elegantly renders the message on a page for Susan. 12 hours later Bill switches the URL back to normal. Since old blog posts are only rarely re-indexed by search services, and receive little traffic, there’s only a slim chance the message will be exposed to public view. Simple ROT-13 on the message would make discovery even less likely. But it’s probably much easier for Bill and Susan just to use Snapchat.
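The underlying trick is at least sound: the part of a URL after the “#” (the fragment) is never sent to the server, so a page’s script can read a payload out of it entirely client-side. A minimal sketch of the idea in Python, with a made-up encoding (urlHosted’s actual payload format may well differ):

```python
import base64
from urllib.parse import urlsplit

def pack(base_url, message):
    """Tuck a message into the URL fragment; fragments never reach the server."""
    payload = base64.urlsafe_b64encode(message.encode("utf-8")).decode("ascii")
    return f"{base_url}#{payload}"

def unpack(url):
    """Read the message back out of the fragment, client-side."""
    fragment = urlsplit(url).fragment
    return base64.urlsafe_b64decode(fragment.encode("ascii")).decode("utf-8")

link = pack("https://example.com/reader", "Meet at noon.")
print(unpack(link))  # Meet at noon.
```

In the real app the unpack step would run in JavaScript on page load, rendering the recovered text as an article; the host serving the page never sees anything after the “#”.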
Update: There’s a handy Greasemonkey script for Firefox users that simply auto-strips such gunk from the URL when the Web page loads in the browser. Those in need of a standalone add-on for Firefox might look at Au Revoir ATM or PureURL.
A 2015 study of the “Indexing of Mapping Science Journals”, including cartographic history journals.
* Found that 47 such titles were published as free / open access, but that only eight of those were in the DOAJ.
* Of the 47 free / open access titles, 12 were represented in Google Scholar by 10 or fewer articles.
* Scopus indexed 18 of the 47 free / open access titles.
So it’s “back to school” time. What fab free MOOCs are available and starting this September / October?
“… an experienced academic librarian will share his strategies for getting students engaged in the art of library research.”
“Go behind the scenes at Harvard’s libraries to discover how readers in the first information age interacted with their books.”
“… the wide variety of resources available on the TED website [TED Talks etc] and how to use them in the classroom.”
An unusual new OA journal is the Journal of Brief Ideas, containing “citable ideas in fewer than 200 words” which are published under a CC-BY license. Interesting idea, but the title is not going into JURN just yet. The journal is in beta, for one thing. I also suspect that a clear focus on rational evidence-based discourse may be difficult to maintain, once wider audiences find it and realise it’s a free-for-all platform. The curation, such as it is, seems too light-touch to allow the journal’s reputation to survive a surge of articulate loons.
However, I’ve often thought that a normal journal might usefully have such short pieces — perhaps tight summary surveys of each of the field’s knowledge gaps (“what we know that we don’t know”). Perhaps such an article series might run alongside a series of imaginative ‘brief ideas’ articles on how those knowledge gaps might be filled. A third series might briefly outline the field’s as-yet unexplored interdisciplinary potentials.
The world will soon have a new open access journal on bitcoin, cryptocurrencies, ledgernomics and blockchain applications. Ledger will be Open Access, but is not yet in JURN since the first issue won’t be published until 2016.
The Biological Journal of the Linnean Society‘s July 2015 issue (Vol 115 Part 3) is devoted to biological recording, celebrating and documenting 50 years of the UK’s Biological Records Centre as a pioneer of citizen science. The issue is currently free.
Government decrees closure of all humanities degrees. No, it’s not another crazed Putin pronouncement from Russia. It’s sober Japan…
“Many social sciences and humanities faculties in Japan are to close after universities were ordered to “serve areas that better meet society’s needs”. Of the 60 national universities which offer courses in these disciplines, 26 have confirmed they will either close or scale back their relevant faculties at the behest of Japan’s government. … 17 national universities will stop recruiting students to humanities…”
“…[the move in Japan is] linked to a low birth rate and falling numbers of students, which has led to many institutions running at less than 50 per cent of capacity.”
And the situation is only likely to get worse. In population terms Japan is headed back to where it was in 1955…
Source: IPSS (National Institute of Population and Social Security Research) via IEEE.
So it seems that the same blanket closure of humanities departments may soon be forced on other nations in steep demographic decline, such as Russia and most of post-socialist Eastern Europe. Possibly even southern Italy.
Which is one reason why I’ve been so pleased to see the remarkable baby boom here in the UK over the past five years, one which shows no sign of stopping any time soon. There seem to be babies and toddlers everywhere you look; the supermarkets now usually dedicate two double-sided aisles to romper-suits, nappies, toddler clothes and baby food; midwives are run off their feet; and infant-school reception classes are so full the kids are almost falling out of the windows. I’d take a bet that these kids will be remaking and reinventing British youth culture circa 2023-28, and then surging into the universities circa 2026-35.
A handy benchmarking tool for OA in the UK…
“CIAO is a benchmarking tool for assessing institutional readiness for Open Access (OA) compliance … produced as part of the JISC OA Pathfinder…”
Looks good, but omits the utterly vital element of ‘Public, Peer and Government Discovery’. I’d suggest adding an extra strip with the following wording/steps…
ENVISIONING: We do not know what proportion of our OA repository contents can be found via public search-engines, or the quality of the search results that link to our repository.
DISCOVERING: We are considering the most effective steps to improve our repository coverage in public search-engines, and are taking advantage of guides and free consultancy work offered by staff at major search engines such as Google. We will rank the priority of these steps by both their likely impact on discoverability and ease of implementation.
DESIGNING & PILOTING: We have committed funds to implement and test at least ten commonly recommended methods that will increase our repository’s coverage in the public search-engines. Graduate interns have been recruited to aid the repository staff during this period.
ROLLING OUT: The planned measures have been turned on or implemented. Systems and staff are in place, and best practice workflows have been clearly documented and disseminated. Search engine indexing of our repository content is being tested to gather reliable metrics on: increased indexing coverage; time to index new content; and search result quality. We are also internally monitoring visitor traffic and open/dwell rates.
EMBEDDING: We are examining further measures to boost the quality of the public search results for our repository content, such as ensuring that the document title is used in the result’s Web link. We are considering acquiring funds to undertake certain large-scale measures once deemed too expensive to implement, such as retrospectively re-working the university-branded cover-pages applied to our PDFs. Senior staff have recognized that Web traffic to our OA repository represents a valuable branding, outreach and recruitment opportunity, rather than a drain on resources or general-use webspace for the university.
“OpenURL has become, in a sense, the glue that holds the infrastructure of traditional library research together, connecting citations and full text. … [We found that] One-click (OpenURL) resolution was noticeably poorer [than Summon], with about 60% of requests leading directly to the correct fulltext item. More alarming, we found that, of full-text requests linked through an OpenURL, a large portion — 20% — fail.”
So… 40% of fulltext requests don’t lead directly to the right item, and 20% fail altogether. If the outright failures sit inside that 40%, that’s still a 40% failure rate; if they are counted separately, it’s more like 60%. Alarming either way.
The Guardian‘s ‘Anonymous Academic’ runs some numbers today on overly expensive academic hardbacks, the sort that gather dust on the shelves of university libraries…
“Seventy-five books [per editor, per year], £80 each, selling on average 300 copies. That’s £1.8m. And he’s just one of their commissioning editors.”
The Guardian‘s academic was told that “friends [can] act as reviewers” for his book proposal. And that the author and his proposal-reviewer “friends” might also add the book to class reading lists, and thus ease it toward becoming a library purchase. Left unsaid, at least in the publisher’s initial phone pitch, is the implication that “friends” might also write book reviews of the title after publication.
These are the sort of books for which there will never be a cheap paperback, just the choice of a very nice £60-£80 case-bound hardback or an ebook that’s only slightly cheaper than the paper edition. By my rough calculation the profit per £75 book is around £12,000, even on only 300 sales. To reach that figure I assume each book proposal is swiftly handed off after approval to a home-working freelance, who might be paid £4,500 per book to get it into a publishable state. I also assume a £20 manufacturing and shipping cost per copy, since in my limited experience as a reviewer and shelf-browser such books tend to be print-on-demand from Lightning Source (look at the tiny small-print at the very back of the book). Every ebook edition sold, however, would mean about £17 of extra profit, assuming some of that £17 isn’t passed along as a discount to the library’s purchasing clerk.
If a telesales lead-generator and initial author handler is given a target of drumming up 75 new book titles per year, as The Guardian‘s article suggests, in the expectation that he only delivers 50, then he’s potentially generating £600,000 profit per year for someone. One suspects his own salary amounts to far less than that.
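The rough sums above check out. All the figures are my guesses as stated in the post (freelance fee, per-copy cost, sales volume), not the publisher’s actual numbers:

```python
# All figures as assumed above, not publisher data.
price = 75            # GBP per hardback
copies = 300          # average sales per title
freelance_fee = 4500  # GBP to get a manuscript into publishable state
unit_cost = 20        # GBP print-on-demand manufacturing and shipping per copy

revenue = price * copies                      # 22,500 per title
profit_per_title = revenue - freelance_fee - unit_cost * copies
print(profit_per_title)                       # 12000

titles_delivered = 50                         # of the 75 commissioned
print(profit_per_title * titles_delivered)    # 600000
```

So about £12,000 profit per title, and £600,000 a year from one lead-generator’s 50 delivered titles.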
At that sales/profit ratio might the academic world need to guard against a de facto ‘guaranteed book purchasing’ ring? Perhaps one loosely spread across the world’s libraries and differently configured/staggered for each book title?