A new review at the Scholarly Kitchen of Cabell’s paywalled and regularly updated blacklist of predatory journals: “Cabell’s Predatory Journal Blacklist: An Updated Review”.
SciRide Finder is a newly launched search tool for Medline/PubMed, but it limits the search to just those “statements, numbers and protocols” which cite other publications. A fine idea, though the core concept may initially be a little difficult for humanities scholars to fathom. You can see what they’re talking about in this visual example…
SciRide Finder appears to have crashed under the initial surge of traffic, but is “under maintenance” and should be up again soon.
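If the concept is still unclear when the site returns, here’s a toy sketch in Python of the underlying idea (my own illustration, not SciRide’s actual method): keep only those sentences that carry an inline citation marker, which is roughly the subset of full text that SciRide indexes.

```python
import re

# Rough patterns for inline citation markers: numeric like "[12]" or
# author-year like "(Smith et al., 2017)". Illustrative only; real citation
# detection in full-text XML is considerably more involved.
CITATION_MARKER = re.compile(
    r"\[\d+(?:\s*[,-]\s*\d+)*\]"                   # [12], [3-5], [1, 2]
    r"|\([A-Z][A-Za-z-]+(?: et al\.)?,? \d{4}\)"   # (Smith et al., 2017)
)

def cited_statements(text):
    """Return only those sentences that contain an inline citation marker."""
    sentences = re.split(r"(?<=[.!?])\s+", text)   # naive sentence split
    return [s for s in sentences if CITATION_MARKER.search(s)]

sample = ("The protein was purified as described previously [12]. "
          "We then measured binding affinity. "
          "Our constants agree with earlier reports (Smith et al., 2017).")
print(cited_statements(sample))
# -> keeps the first and third sentences; the uncited middle one is dropped
```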
“Building a mission critical research ecosystem for Russia” (Feb 2019). The glossy report appears to be an attempt to sell Web of Science to Russia, and states…
The Web of Science platform is the first and only comprehensive, publisher-neutral discovery resource for trusted, peer-reviewed Open Access content.
A new February 2019 paper from the German Centre for Higher Education Research and Science Studies tests existing methods for auto-detection of OA papers in Web of Science (WOS) and Scopus. The conclusions are about what you might expect: automated detection is easier said than done, even with such well-behaved services, and even then the coverage is only partial.
As part of the study, a research assistant also valiantly checked links by hand, and found that OA full-text links were broken at a rate of around 17%…
“a further manual check found about 17% of OA publications are not accessible … 17.57% in WOS and 16.74% in Scopus.”
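For anyone wanting to run a similar spot-check themselves, here’s a minimal sketch of how a list of exported OA full-text links might be tested automatically. The URLs are placeholders, and note that an HTTP 200 response is no guarantee the full text is really there (soft-404 landing pages abound), which is presumably why the study’s check had to be done by hand.

```python
import requests

def is_accessible(url, timeout=10):
    """True if an OA full-text link resolves to an HTTP 200 response."""
    try:
        r = requests.head(url, allow_redirects=True, timeout=timeout)
        if r.status_code in (403, 405):   # some servers reject HEAD requests
            r = requests.get(url, stream=True, timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False

# Placeholder URLs, standing in for links exported from WOS or Scopus.
links = ["https://example.org/article1.pdf",
         "https://example.org/article2.pdf"]
broken = [u for u in links if not is_accessible(u)]
print(f"{len(broken)} of {len(links)} links broken "
      f"({len(broken) / len(links):.0%})")
```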
The new OOIR List currently holds 849 journals, drawn from Web of Science’s SSCI journals in the social sciences. 119 of the titles on the OOIR List are flagged as Open Access, though a good number of these are greyed-out and not tracked (because their publishers don’t also submit metadata to CrossRef).
Evidently Web of Science only covers 119 such OA titles, which means its OA coverage in this area has hardly budged since 2015, when Web of Science was showing only 116 OA titles in the social sciences.
Within that very limited range, what OOIR is doing with its titles seems interesting: an aggregated ‘latest’ / ‘trending’ / ‘active journals’ dashboard. It’s neatly presented, and there are also per-journal metrics over on the Statistics tab.
Apparently the service is focussed on recent papers, and “OOIR does not link to papers published before Nov 2018”. A previous RSS-feed based version covering politics and diplomacy was titled the Observatory of International Relations (OIR), but this has now been shut down in favour of OOIR.
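Incidentally, building a basic ‘latest papers’ feed of the kind OOIR offers is straightforward for any journal that does deposit metadata with CrossRef. A minimal sketch against the public Crossref REST API; the ISSN is just an example.

```python
import requests

def latest_articles(issn, rows=5):
    """Fetch the most recently published works for a journal from Crossref."""
    url = f"https://api.crossref.org/journals/{issn}/works"
    params = {"sort": "published", "order": "desc", "rows": rows}
    r = requests.get(url, params=params, timeout=10)
    r.raise_for_status()
    return r.json()["message"]["items"]

# Example ISSN; any journal that deposits metadata with Crossref will work.
for item in latest_articles("0003-0554"):
    title = item.get("title") or ["(untitled)"]
    print(item["DOI"], "-", title[0])
```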
I guess the question now is: would it be possible to build something similar but bigger and slightly shinier, providing a public tracking dashboard for all such material, for those interested in timely new research on politics, diplomacy and related matters? Zak Kallenborn has some ideas on that in his recent article “Academic Paywalls Harm National Security”.
New in DHQ: Digital Humanities Quarterly, “Researcher as Bricoleur: Contextualizing humanists’ digital workflows”. A small-scale observational study from 2016, building on a larger ‘Digital Scholarly Workflow’ study. The body is made up of case studies and commentary. Here’s the tale of a search by a historian for “1916” “November” “War Council”:
Audrey, a professor of history, searched for literature on an event that took place in 1916, and for which she had only partial information. Audrey’s search starts with her personal collection of notes written in Word and stored on the internal hard drive. She uses a Word search function that queries the folder for a supposed event name, but this search yields no result. Audrey then switches to her browser and the online search. She logs on to the Penn State library and enters a search phrase composed of three descriptors into the discovery search interface, LionSearch. This attempt does not yield any results either.
“Okay, no problem, I’m going to go to some of my favorite databases,” Audrey says optimistically, and, using the same search phrase, she continues her search in the Historical Abstracts database. “All right, I need another field. It happened in Rome,” she comments still optimistically, and expands her search with one more field, which reads “Rome.” Still nothing. “Seriously?!,” Audrey exclaims with annoyance. “All right, let me just do ‘war council,’ something more specific,” she says with reasserted optimism, and changes her search phrase accordingly. Failure again. “Really?!,” Audrey laments in shock. “I would have thought it was more important.” Audrey then reaches to her bookshelf and grabs a book. She reads through a few pages, trying to find any additional information that could help her search. Nothing. But Audrey is not ready to give up yet.
She returns to her library search and adds “November” as one more search field, trying to make her query as precise as possible. No results. Still, Audrey does not give up, and, instead of adding one more search term, she decides to change her search phrase. She creates a new search phrase, again composed of three descriptors as the possible event name. “Nope. All right, strange,” Audrey says quietly, confident that any further search would be pointless. “You would think someone must have written an article about this. It was the time that the different allies got together and hammered out a strategy…,” she continues murmuring, but discontinues her library search.
Instead, Audrey decides to try her luck with Google Search. She enters the search phrase and the Wikipedia entry pops up right away. “See, that’s the thing,” Audrey comments. “One would love to use more scholarly resources, but I just typed [the search phrase] and it’s up there [on Wikipedia]! Sadly, Historical Abstracts was not of too much use; the most useful one was still Wikipedia,” this historian concludes.
The problem here appears to be that the Supreme War Council of the three allies was created in November of 1917, not 1916. Only by switching the search terms from 1916 to 1917 does the Wikipedia page mentioned appear, so one has to suspect that there was some finessing of the search before hitting Google Search.
Another new prodding of Google Scholar, this time from the latest First Monday: “Testing Google Scholar bibliographic data: Estimating error rates for Google Scholar citation parsing”…
While data quality is good for journal articles and conference proceedings, books and edited collections are often wrongly described or have incomplete data. We identify a particular problem with material from online repositories [where there appears to be] considerable inhomogeneity in the implementation of data standards [and] a mismatch between repository software and the harvesting protocols employed by Google Scholar.
One of Scholar’s other problems is that it includes Google Books results. While 30% of the time its Google Books inclusions can be useful, there is no way to exclude Books results. One might want to do so because Scholar still can’t seem to distinguish a proper book from a robot-produced shovelware ebook that assembles public-domain content. Scholar has no ‘edition authority’ stating that the Joshi-edited and annotated Penguin Classics edition of H.P. Lovecraft’s “Dexter Ward” is the gold standard, with a text fully corrected of the many textual errors, omissions and editing mistakes of previous decades, unlike the public-domain shovelware ebooks that flood Amazon and (often) Google Books.
A basic undergraduate-level search for Lovecraft “Dexter Ward”, for instance, demonstrates the problem on the first page: Joshi is nowhere to be seen, and the searcher is hammered by links to shovelware ebooks (or worse), often with citation counts that suggest they are legitimate.
Michael Gusenbauer, “Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases”, Scientometrics, November 2018.
The findings provide first-time size estimates of ProQuest and EBSCOHost and indicate that Google Scholar’s size might have been underestimated so far by more than 50%. By our estimation Google Scholar, with 389 million records, is currently the most comprehensive academic search engine.
Though there is a later proviso that there are likely to be many duplicates and near-duplicates, with such tools reporting…
the number of all indexed records on a database, not the number of unique records indexed. This means duplicates, incorrect links, or incorrectly indexed records are all included in the size metrics provided by ASEBDs.
As you can see, the article coins the ugly and unreadable “ASEBDs” for “academic search engines and bibliographic databases”. MASTs might be more mellifluous — Massive Academic Search Tools.
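To illustrate the duplicates problem in the proviso above, here’s a toy sketch of how a raw record count can overstate the number of unique records: normalise the titles and count distinct keys. The records are made up, and real deduplication (DOIs, fuzzy matching, author checks) is far harder than this.

```python
import re

def title_key(title):
    """Crude normalisation: lowercase, strip punctuation, collapse spaces."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

# Made-up records showing how one article can appear as several entries.
records = [
    "Grades of Openness: Open and Closed Articles in Norway",
    "Grades of openness - open and closed articles in Norway",
    "GRADES OF OPENNESS. OPEN AND CLOSED ARTICLES IN NORWAY.",
    "Testing Google Scholar bibliographic data",
]
unique = {title_key(t) for t in records}
print(f"{len(records)} records, {len(unique)} unique titles")
# -> 4 records, 2 unique titles
```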
“Grades of Openness. Open and Closed Articles in Norway” (August 2018)…
Based on the total scholarly article output of Norway, we investigated the coverage and degree of openness according to three bibliographic services: 1) Google Scholar, 2) oaDOI by Impact Story [now called Impactstory], and 3) 1findr [formerly oaFindr]. According to Google Scholar, we find that more than 70% of all Norwegian articles are openly available. However, degrees are profoundly lower according to oaDOI and 1findr, respectively 31% and 52%.
open shares vary considerably by discipline, with … the Humanities at the lower end
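For anyone wanting to replicate the oaDOI leg of such a study, oaDOI now lives on as the free Unpaywall API, which asks only for an email address. A minimal sketch; the DOI is just an example.

```python
import requests

def oa_status(doi, email):
    """Ask Unpaywall (formerly oaDOI) whether a DOI is openly available."""
    url = f"https://api.unpaywall.org/v2/{doi}"
    r = requests.get(url, params={"email": email}, timeout=10)
    r.raise_for_status()
    data = r.json()
    return data["is_oa"], data.get("oa_status")

# Example DOI; substitute DOIs from a national output list to tally OA shares.
print(oa_status("10.1371/journal.pone.0020124", "you@example.org"))
# -> e.g. (True, 'gold') for a PLOS ONE article
```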