
News from JURN

~ search tool for open access content

Category Archives: Economics of Open Access

Digestable scholarship

06 Sunday Feb 2022

Posted by David Haden in Academic search, Economics of Open Access

≈ Leave a comment

A good point, made in a new short OA opinion-piece by Martin Eve…

How can the humanities parrot the oft-repeated liberal humanist line that they exist to produce an educated citizenry capable of participating critically in democracy, when most humanities work remains unreadable by most people? … When justifying oneself, pointing at a scholarly book that costs £60 is not the same as pointing to an article that can be read for free.

Assuming that ‘democracy’ and ‘educated citizenry’ are not soon to raise a sneer rather than a cheer from those holding the cattle-prods, an obvious solution might be some form of free digest of the text that one wishes to read or consult. Often a good digest might suffice to quench the curiosity of independent scholars outside salaried academia. It would be far more than an abstract or a paragraph in an editor’s introduction, better than an Amazon Kindle-like “the first 10% is free” extract, and yet shorter than a Blinkist full-book summary (these being carefully crafted by a human, last I heard).

Could a summarising AI do the task? It can with newspaper articles and (I believe) with structured things such as financial reports. It might need some help with denser and more specialised material from the arts and humanities. Such an AI-bot might be aided by picking up on a dozen simple ‘structure tags’, added by the author alongside the text as they wrote it. And in that task the author might themselves be assisted by a tagging AI. There might also be a semantics back-end at work. A local history chapter on well-dressing ceremonies and associated folklore in the English Peak District might then not flummox the AI too much. But a convoluted chapter on Elvish linguistics and arcane medieval star-lore detectable in Tolkien’s The Silmarillion might still cause it to dissolve into a hissing molten puddle.

What about the idea of digests written by young humans in need of cash, of which the planet is very soon to have a rather large abundance? The AI digester-bot might then assist a well-trained young human, by providing a ‘first pass’ digest.

An informed and curious citizen might then be given 52 credits a year to access such a digesting service. Each credit would pay for the cogent summary of a book chapter or article. The specialisation and reading-difficulty of a text would be assessed initially by the AI, with some texts deemed to merit the expenditure of more than one credit. The digestion of a dense book which evaluates the Norse linguistics allegedly to be found in Sir Gawain & The Green Knight might even need a crowd-funding consortium or similar, pooling their collective credits. But that seems unlikely.

My idea loses ground again when one considers it might work poorly with summaries-of-summaries. For instance, I want the weighty “The Year’s Work in Tolkien” summary overview in the latest Tolkien Studies journal. But the issue is prohibitively expensive in print, and is locked away from the public on Project Muse. As an impoverished independent scholar I need to read every word of it to keep up with the field, and a summary will not suffice, no matter how good it is.

So… none of this is really ideal, though it would certainly create a welcome market in somewhere like Bangladesh for AI-assisted ‘academic summarisers’ servicing a ’52 credits a year’ system.

One other vague notion also arises. Google Books already exists and provides a partial ‘Look Inside’ solution for many who need to take a peep into an expensive £50-£120 academic collection or (far less often available) an obscure monograph. Could that existing service be expanded? Even if only through gritted teeth, as most librarians seem to despise Google Books. How about a mandate that says Google Books gets to show 100% of a volume produced with or originating from a public university employee, but only ten years after publication? Could that work in terms of the current economics of such things? I don’t know enough about publishing’s current ‘profits over time’ aspects there, as I haven’t been following the monographs debate. But such a book would still be effectively locked down (not OA in any meaningful sense), while still being readable by the global public. It would be a sort of automatic ‘Knowledge Unlatched’, running globally alongside the existing copyright systems and (because universal) not subject to political skew in terms of the books selected. It might also be retrospective.

The COAR of the issue

29 Friday Jan 2021

Posted by David Haden in Academic search, Economics of Open Access, Open Access publishing, Spotted in the news

≈ Leave a comment

A useful new analysis today from COAR, “Don’t believe the hype: repositories are critical for ensuring equity, inclusion and sustainability in the transition to open access”. Recent…

publishers’ comments portray gold open access as the only ‘legitimate’ route for open access, and attempt to diminish the repository (or green) route.

According to the author, some publishers are even implying that repositories have no aggregators, or are not present in Google Search or in specialist search-engines such as Scholar and GRAFT. Laughably, they apparently suggest that poor over-worked researchers will instead…

need to search through individual repositories to find the articles.

The publishers are also said to be trying to stop all but a sub-set of elite repositories from being used for data deposit, via…

proposing to define the repository selection criteria for where their authors’ should deposit research data. These criteria, which are very narrowly conceived, threaten to exclude thousands of national and institutional repositories as options for deposit.

Again, this sounds like it is designed to make researchers feel it’s more convenient to publish their article + data via a big publisher.

Report: Equitable access to research in a changing world

28 Monday Sep 2020

Posted by David Haden in Academic search, Economics of Open Access, Official and think-tank reports, Spotted in the news

≈ Leave a comment

Released in June 2020: a new consultancy report titled “Equitable access to research in a changing world: Research4Life Landscape and Situation Analysis”. This surveys the pressures on the Research4Life aid programmes. Established 20 years ago, Research4Life gives developing countries “free or low-cost” online access to journals and books from some 175 publishers. Along with other aid initiatives, this means that African universities often have better free access to journal databases than do some academics in advanced nations. The new report makes no recommendations, but a key point to note is that…

… some of the most relevant and influential research undertaken in low-and-middle income countries happens outside academia: in specialised research institutes, think tanks, or government-backed research agencies. In some countries, research agencies and institutes conduct research in national priority areas and have direct access to and influence on decision-makers” [yet] “these non-governmental organisations have in the past been excluded from open access debates, and may be unable to take advantage of initiatives such as Research4Life.

It could be useful to quantify that “may”, through further research. Do developing nations find roundabout ways to include their research agencies in Research4Life, such as giving off-campus agency researchers special log-ins to access the national university system? Or are such arrangements rather moot, in the age of open-access and Sci-hub? If not, would there be a real benefit if Research4Life were to be extended to bona fide government research agencies and suitable NGOs? How much would such an expansion actually cost, and what could the returns be in such nations?

Well endowed

18 Tuesday Jul 2017

Posted by David Haden in Economics of Open Access

≈ Leave a comment

“Financing Open Access” at Cultural Anthropology gives a figure on production costs. Apparently it costs them $50,000 a year to run a polished fees-free open access journal in the form of Cultural Anthropology. I’m not sure if that $50k figure is a notional “if people were actually paid” or a more grounded “people are actually paid, now”. A Google site: search of /culanth.org/articles/ suggests around 1,000 articles and short notes since 1986, and a current production rate of approx. six articles per quarter, plus another half-dozen short notes. One of the best long-term options, according to the article, seems to be…

“Establishing an endowment. The experts with whom FoCA has consulted have unanimously advised that an endowment is, by far, the best way to stabilize Cultural Anthropology’s financial situation in the long term.”

At a guess, a $1.5m endowment donation would presumably perform at around 5%, or $75,000 per year: enough to cover the $50k annual running cost, plus a $25k buffer for re-investment / management fees / mismanagement insurance. $1.5m is ambitious, but a slow crowdfunder + some chunky legacies in wills might do it. Once it’s in, it’s there forever.
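
For what it’s worth, here is that arithmetic as a quick Python sketch (the 5% return and the buffer split are only assumptions, not figures from the article):

endowment = 1_500_000
annual_return = 0.05                 # assumed long-term rate of return
income = endowment * annual_return   # $75,000 per year
running_cost = 50_000                # Cultural Anthropology's stated annual cost
buffer = income - running_cost       # $25,000 left over each year
print(f"income ${income:,.0f}/yr, buffer ${buffer:,.0f}/yr after ${running_cost:,} costs")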

Lulu.com launches academic service suite – Glasstree

01 Thursday Dec 2016

Posted by David Haden in Economics of Open Access, Open Access publishing, Spotted in the news

≈ Leave a comment

The leaders in affordable print-on-demand, Lulu.com, have just launched a book publishing service for academics. Glasstree offers the…

“tools and services needed by academic authors, and will leverage technology, such as print-on-demand, to distribute their works more cost-effectively. [aims to boost the] commercial academic publishing market, such as accelerating time to market, more transparent pricing, and reversing the revenue model to allow academics and scholars to realize 70% of the profit from sales of their work. Among Glasstree’s advertised services: support for open access, including the deposit of works in institutional repositories; Tools for bibliometric tracking, so academic authors can monitor Impact Factors, and other relevant measurements; More control over licensing options, through a partnership with Creative Commons; and access to traditional peer review.”

Note that…

“Glasstree is currently in a limited free trial period until 31st December 2016. During this time, authors can publish as many titles as desired, free of charge, receiving a range of complimentary services.”

Somehow I doubt that includes the related Glassleaf services where book production… “Packages start at as low as $2,625”. Ouch.

The Glasstree signup doesn’t port over your existing Lulu details, and thus presumably can’t port your academic book files over from Lulu either. Looks like it’s a wholly separate system.


Digital Monograph Costing Tool

14 Monday Nov 2016

Posted by David Haden in Economics of Open Access, Spotted in the news

≈ Leave a comment

A new Digital Monograph Costing Tool from American University Presses.

Africa is coming online

28 Tuesday Jun 2016

Posted by David Haden in Economics of Open Access, Spotted in the news

≈ Leave a comment

While African research universities usually have better commercial journal database access than their counterparts in the West (thanks to aid deals), what of public access to African-focused research? Great to hear an African voice on this, as Africa starts to buckle up for growth and international access. Chukwuemeka Fred Agbata Jnr. of Nigeria says that there is an…

“overwhelming call for the accessibility of African research [about Africa, but that this] has stretched traditional archiving methods.”

With a substantial increase in population and wealth now happening on the continent, he asks if there is now an opportunity…

“for archiving and digitising African-focused research [in order to] make African research accessible on a global scale.”

Let’s hope so. Although the author also suggests a commercial option, seemingly more in terms of access to contemporary and commercial data…

“monetising the whole process through a subscription model for online hosting of knowledge resources – books, research papers, journals, dissertations, and reports to investors, product and policy developers. [With African researchers getting] “a revenue share for each download”.

That might work for useful locally-created data: one might get the article or a substantial data summary for free, anywhere in the world, but if you’re outside Africa you’d buy the data download direct from the researcher, and in affluent nations your university’s ethics code would require you to do so. Though I’m not sure a commercial pay-per-download model would suit things like folklore, the arts, oral history and natural history, which might be better funded by a big pan-African consortium of nations, philanthropists and donors, and thus kept freely available.

OAPEN-UK final report

28 Thursday Jan 2016

Posted by David Haden in Economics of Open Access, Official and think-tank reports, Spotted in the news

≈ Leave a comment

Now out, OAPEN-UK’s final report on open access monographs: A five-year study into open access monograph publishing in the humanities and social sciences.


“Many libraries will […] be providing links to the open access copies of monographs through their discovery systems, but librarians are not always aware of this. A minority are also reluctant to include open access content within their catalogues.”

“30% of respondents currently identify open access monographs for inclusion within their library collections – 49% do not, while 21% were unsure.” — Librarian survey for the report.

Unsure about including OA at all, or unsure if anyone on staff was identifying OA items?

“There are also large numbers of researchers – especially early career and retired academics – who do extremely valuable research which deserves publication but who work outside academic institutions. Changing publishing culture in a way that affected these researchers negatively would damage the overall discipline.”

“SFX Miscellaneous Free Ejournals Target”

19 Wednesday Aug 2015

Posted by David Haden in Academic search, Economics of Open Access, Spotted in the news

≈ 1 Comment

“SFX Miscellaneous Free Ejournals Target: Usage Survey Among the SFX Community”, Serials Review (2015), 41(2), pp. 58-68.

SFX is an OpenURL link resolver product for university libraries, focussed on the output of traditional publishers — of which 16-20% is apparently so dodgy in terms of quality that it breaks the system. Yet, rather amazingly, it appears that much of this 16-20% is still allowed to get to the point-of-use.

The article briefly surveys recent findings on how SFX copes with open access articles, and the rest of the paper then gives the results of a survey of librarians who integrate a specific ‘free’ section of SFX with their library discovery tools. It appears that scholars looking for free open full-text via SFX can expect well over 20% of such links to be dead…

… one category [of failure] (incorrect parse params) alone leads to 20% false positives (dead links) for MFE [the largest ‘free’ target in SFX]. Besides incorrect parse params, there are numerous other reasons for the occurrence of false positives (dead links), such as resolver translation error, inaccurate embargo data, provider target URL translation error, incomplete provider content, wrong coverage dates, indexed-only titles mistakenly considered as fulltext titles, and other reasons listed in the literature review section.

So that might mean… perhaps 40% of links to open access full-text are dead? Or even more, like… 60%? The article doesn’t hazard a guess.

The DOAJ ‘targets’ are apparently not much better…

“It’s an irony that I find discovery services generally have much poorer coverage of Open Access than Google Scholar. … Most discovery services have indexed DOAJ (Directory of Open Access Journals), but many libraries experience such a bad linking experience they just turn it off” — Aaron Tay, July 2015.

I’m pleased to say that JURN should have close to zero dead links on standalone journals, due to the way it is set up. JURN may lead to a few fleeting “server maintenance” / “timeout” errors here and there, but if the journal’s base URL for articles moves then its articles effectively get auto-removed from JURN’s results. But they get found again within a year at most, through an effective two-pronged method.

AWOL releases cleaned A-Z URL list.

18 Tuesday Aug 2015

Posted by David Haden in Economics of Open Access, My general observations, Spotted in the news

≈ Leave a comment

AWOL has a fascinating post today. It’s on the attempts to identify which AWOL linked resources have already been ingested into major long-term Web archives, and which haven’t. As part of that experiment Charles and his helpmate Ryan have offered their readers a nice big cleaned A-Z list of the “52,020 unique URLs” linked from AWOL, which is very good of them. I might clip these URLs back and de-duplicate, and then do a side-by-side sheet with JURN’s own indexing URLs and thus see what’s missing from JURN. Very little in terms of post-1945 journal articles, I suspect, though there may be some I’ve missed.
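
A minimal Python sketch of that clip-and-deduplicate step, assuming the A-Z list is saved as a plain text file with one URL per line and that “clipping back” means reducing each URL to its scheme and host (the file names here are hypothetical):

from urllib.parse import urlparse

# Clip each AWOL URL back to its base (scheme + host) and de-duplicate,
# ready for a side-by-side comparison with JURN's own indexing URLs.
with open("awol_urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

bases = sorted({f"{urlparse(u).scheme}://{urlparse(u).netloc}" for u in urls})

with open("awol_bases_deduped.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(bases) + "\n")

print(f"{len(urls)} URLs clipped to {len(bases)} unique base URLs")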

Of course a JURN Search already runs across the AWOL pages, as well as a great many of the post-war full-text originals (via Google). But if I were an Ancient History scholar I might now be tempted to get together with others to crowdfund a mass download of AWOL’s full-text, so that I could search across the full-text locally and minutely, without having to rely on Google etc. I reckon the entire set of AWOL full-text would fit on a 1.5Tb external drive and would cost around $10,000 to harvest by hand/eye. Why would that be needed? I’m assuming that many long-term Web archives are ‘dark’ or that license complications mean no single archive can ingest the entirety of what AWOL points to.

My calculations for the $10k figure start with the fact that a little over 10,000 of AWOL’s 52,020 URLs are straight-to-PDF links, and so very easily downloaded by a harvesting bot. Assuming an average of 5Mb per PDF, that means about 50Gb of disk storage space for those PDFs.

If one then assumes that perhaps 10,000 of the URLs do not lead to articles (but rather to things such as huge datasets, or sites showing scans of original source manuscripts and old books in zoomable and frame-nested viewers, all difficult to extract and archive), then that might leave 32,000 URLs that are most likely links to either journal TOC pages or individual articles.

Let’s assume that each of the 32,000 TOC page URLs leads to an average of 16 articles and reviews (though some 2,000 may be home-page links sitting above links to issue TOCs). So 32,000 URLs x 16 = 512,000 articles of some kind, in PDF or HTML, weighing 1.5Mb each on average. That’s 768Gb in total. In that case one might easily store all the AWOL-discovered full-text on an $80 1.5Tb external disk, and have space to spare for the desktop indexing software’s own index, which would be fairly big. That is a product that I might find very useful, if I were an Ancient History student, specialist, or independent scholar without access to university databases.
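
For anyone wanting to check the back-of-envelope figures, here is the storage estimate as a short Python sketch, using only the guessed numbers above:

# Back-of-envelope storage estimate, built only on the assumptions stated above.
pdf_links = 10_000          # straight-to-PDF links among the 52,020 AWOL URLs
avg_pdf_mb = 5              # assumed average PDF size, in MB
toc_urls = 32_000           # URLs assumed to lead to TOC pages or articles
articles_per_toc = 16       # assumed average articles/reviews per TOC page
avg_article_mb = 1.5        # assumed average article size, in MB

pdf_gb = pdf_links * avg_pdf_mb / 1000            # about 50 GB
articles = toc_urls * articles_per_toc            # 512,000 articles
article_gb = articles * avg_article_mb / 1000     # about 768 GB

print(f"{articles:,} articles, roughly {pdf_gb + article_gb:.0f} GB in all")  # fits a 1.5 TB drive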

But how to harvest those 512,000 articles? The brute force way would be to parcel up the 32,000 URLs into parcels of 150 each, giving roughly 214 parcels. If one were paying 20 cents per URL to Indian freelancers, to go in and spend a minute or two grabbing whatever articles are hanging off each of those 150 page URLs, plus the page itself, then that would cost $30 per parcel. Let’s say $40, with a small quality bonus. Let’s say it takes four hours to do the 150 URLs and not miss anything. So that’s $10 U.S. an hour — pretty good for an Indian freelancer with broadband; I don’t think anyone would be exploited on that deal. So the whole 32,000-URL set would cost roughly $8,600 to harvest by hand and eye, which seems well within the range of a small crowdfunding campaign.
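
And the labour-cost side of the estimate, again only a sketch built on the assumed rates and times:

# Back-of-envelope labour cost for the hand-harvest, using the assumed figures above.
toc_urls = 32_000
parcel_size = 150
rate_per_url = 0.20          # USD paid per page URL
parcel_fee = 40              # rounded up per parcel, with a small quality bonus
hours_per_parcel = 4

parcels = -(-toc_urls // parcel_size)        # ceiling division: 214 parcels
base_fee = parcel_size * rate_per_url        # $30 per parcel before the bonus
hourly_rate = parcel_fee / hours_per_parcel  # $10 per hour
total_cost = parcels * parcel_fee            # about $8,560 for the whole set

print(f"{parcels} parcels at ${parcel_fee} each: about ${total_cost:,} total, ${hourly_rate:.0f}/hour")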

Of course, it might be that the articles could be wholly or partly harvested by bot. But I suspect that a simple “page + anything it links to” harvest would bring in a lot of chaff alongside the articles, given the very varied and non-standard nature of what AWOL links to. Perhaps that wouldn’t matter in practice, when keyword searching across the entire harvest. Or one might be able to use a more intelligent bot, one using Google Scholar-like article-detection algorithms.
