News from JURN

~ search tool for open access content

Category Archives: JURN tips and tricks

SMS as voicemail on a home phone – how to translate the number being read out

27 Friday Jan 2023

Posted by David Haden in JURN tips and tricks


Another handy tip for wrangling with online life.

Situation: An online service requires verification of your new phone number. It’s a home phone, not a mobile, but they still send you an SMS message. This is delivered, great. But… your phone service has turned it into a spoken voicemail. The user only hears the vital verification number being read aloud thus…

Two hundred and ninety-five thousand, two hundred and forty-two

Problem: This is puzzling to many users, especially older people. Even if they can write it all down in time, what are they meant to input into the confirmation-box on the website? 20095000242? 295000242? 295,000,242?

Solution: None of the above. What the read-aloud number above actually translates to, in digits, is this…

295242

So write it down ‘as spoken’ first, then translate it back to (most likely) a six-digit number.

The same trick should also work if the service later uses this phone number for its ongoing two-factor verification messages.
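If you want to double-check your back-translation, one crude trick is to go the other way: turn candidate digit-strings back into words and see which matches what the voicemail said. Here is a minimal sketch using Python and the third-party num2words package (an assumption on my part, and the exact wording it produces may differ slightly from your phone service’s)…

# Turn candidate verification codes back into spoken-style words,
# so they can be compared against what the voicemail read out.
# Requires: pip install num2words
from num2words import num2words

for candidate in (295242, 295000242):
    print(candidate, "->", num2words(candidate))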

How to block the mouseover pop-ups on Archive.org search results

19 Thursday Jan 2023

Posted by David Haden in JURN tips and tricks


How to block the mouseover pop-ups on individual Archive.org search results, in the annoying new flickering / flashing search interface…

1. Go to the top bar of your Web browser | click on the uBlock Origin extension icon | Click on its cogwheel icon.

2. In the uBlock Origin Dashboard | go to “My Filters”.

3. In the My Filters list, add the new line…

archive.org##tile-hover-pane

… and save. Reload the results page. The item ‘preview’ popup panels will have all been blocked. You can still right-click on any result tile, and launch a new tab showing the main page for that result.

The above is for a user browsing in the Grid view of search results.

The above fix at least removes one of the main annoyances of the regressive new UI.
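For reference, lines in “My Filters” follow uBlock Origin’s cosmetic-filter pattern: a domain, then ##, then a CSS selector for the element to hide. Once you have identified a selector with the browser’s element inspector, other unwanted page elements can be hidden the same way. A made-up example, purely to show the pattern (the domain and selector here are hypothetical, not real Archive.org elements)…

example.com##.annoying-hover-panel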

Delete small pure silences in an audio recording

06 Friday Jan 2023

Posted by David Haden in JURN tips and tricks


How to find and delete small pure silences in an audio recording?

These silences are known in the audio recording trade as “dropouts” or “RF hits”, commonly caused by tiny failures in radio microphone transmissions. But they can also be caused by having to record on a desktop PC from a huge video that’s streaming down to someone with a relatively poor Internet connection. The video playback stutters and stalls a bit. Each stall results in a perfectly silent pause in the recording.

So let’s assume you’ve either captured a field audio recording using a flaky RF mic, or have captured the audio going through your desktop sound card by using something like Total Recorder. Either way you find there are silent skips, and now you need to delete these tiny bits of silence. All 250 of them. Automatically.

You would expect the powerful audio repair suite iZotope RX 7 to do this for you in a few clicks. But rather surprisingly it has no such function. You instead need a PhD in using its complex ‘Ambience Match’ and ‘Spectral Repair’ modules. There must be an easier way for non-professionals.

There is. The quick, easy, automatic and free solution is actually (you guessed it) good old Windows desktop freeware. Here’s the workflow…

1. In this case the freeware really is a dinosaur, or rather the Wavosaur. Download and run. Admire the groovy retro 1995-style icon with the dinosaur face. Actually it’s not that old, and the Wavosaur’s current version is July 2020.

2. Load your .WAV file into the mighty mammal-munching maws of the Wavosaur. Then go: Tools | Silence Remove | Custom. It’s that simple.

3. Set “-90” (dB) as the threshold, so that only real pure silence is found, not just lecture-room ‘ambience’. Set “0.25” so that silence is only deleted if it lasts longer than 0.25 seconds. Click “OK”. (For those who prefer scripting, a rough equivalent is sketched at the end of this post.)

4. Wavosaur will stomp through the .WAV, find and delete the silences, and close up the resulting gaps. There is no notification that this has been done, but it has. When you go to check, you won’t be able to find those former “flat bits” in the audio signal. The “ambient room noise” heard in the speaker’s natural pauses should still be there, though.

That’s because those pauses had a tiny bit of noise in them, lifting them above the -90 dB threshold needed for deletion.

5. Now you can save and then load the cleaned .WAV into Ocenaudio (also freeware, and a great replacement for Audacity) and from there quickly save out to an .MP3 file.

If you’re going to do this a lot, note that Wavosaur can do MP3 export, but it first needs lame_enc.dll installed correctly.
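For those who prefer scripting, the same idea (delete pure silences below roughly -90 dB that last longer than 0.25 seconds, and close up the gaps) can also be sketched in Python with the third-party pydub library. This is only a rough equivalent of the Wavosaur method, and assumes pydub (plus FFmpeg) is installed; the filenames are placeholders…

# Rough scripted equivalent: strip silences longer than 0.25 s that fall
# below -90 dBFS, then concatenate the remaining audio with no gaps.
# Requires: pip install pydub (and FFmpeg on the system path)
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_wav("recording.wav")

chunks = split_on_silence(
    audio,
    min_silence_len=250,   # milliseconds; anything shorter is kept
    silence_thresh=-90,    # dBFS; only near-pure silence is removed
    keep_silence=0,        # close the resulting gaps completely
)

cleaned = sum(chunks, AudioSegment.empty())
cleaned.export("recording-cleaned.wav", format="wav")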

A peep at Content Marketing

19 Monday Dec 2022

Posted by David Haden in JURN tips and tricks


I had cause to take another peep at SEO, this time specifically at the role of Content Marketer. A role completely unknown to me until now. A week ago, if you had asked “what does a Content Marketer do?”, I could only have guessed. Yet it’s basically what I’ve been doing for decades (albeit mostly without pay).

Some maxims and tips on this topic, from my jotted notes…

Your written content should not be self-serving, even if that will please your boss. You should try to serve your intended audience first, and educate your boss about the need to place the audience first. Quality is important, and needs to be consistent. Don’t outsource your vital customer-facing writing to someone in Whereizitagain who says they’ll do it for $10 an article.

Rather surprising was the advice never to overlook or dismiss old-school methods. Handwritten letters and targeted, informative direct mail “can still work wonders”, if sent from a trusted source, done correctly, and especially to a receptive older audience.

A writing Content Marketeer doesn’t just churn out SEO-focused robo-articles. Which was news to me. Such a writer could be undertaking: how-to tutorials / quarterly industry reports / buyer guides and reviews / news about new releases, with a touch of analysis to add value / long-form interviews / white papers / contributions to annual reports / and making online microsites and infographics.

One might also be polishing, SEO-buffing and revising/re-formatting older media content. If that hadn’t already been done by the previous post-holder. But link-rot is ever present, and old Web links will always need to be regularly checked.

One would ensure that all of the above are sprinkled with calls-to-action and (if needed) explicit “buy now” calls, as well as SEO-derived keywords. Web links would be added at the most relevant points, and these must be coloured so as to be clearly visible as links.

Monitor the competition. Note their topics, phrases and infer any new audience profiles they might be trying to address.

Monitor the commentators and contrarians. Comment on their posts, though be careful and don’t spam or aggravate.

Occasionally coin meaningful new phrases and even words and tags (e.g. ai-gen as an easy tag to identify AI-generated content). Also consider local keywords, or timely ones relating to the trade-journal or hobby-magazine lead articles / news that your audience will be reading that month. You are of course also reading these journals regularly, and ideally doing so before everyone else.

Avoid writing “Top 10”, “7 best” etc article headlines. The savvy have long since learned that such headlines lead to untrustworthy articles, most of which appear to have been written by robots.

Monitor both customers and potential customers. Get a feel for things like their literacy level and the length they like to read. Do they have the ability to ‘skim and skip’, or do they just back off when faced with a long text? Can you break up a slab of text with nicer typography and spacing, break-out boxes and pull-quotes? Pictures should be unique and tailored to both the content and the audience, ideally, not just hastily-grabbed stock. The best writer is also ideally a crack picture-researcher and accomplished picture-processor.

Build a mental map of ‘where the audience is’ in the seasonal buying cycle (e.g. they may be saving up their PayPal for Black Friday, or flat broke after a big family spend at Christmas). Also develop a map of the favoured places visited by the decision makers in the audience, and when they find time to engage/post in such places.

Explore the potential for infusing what may be a rather staid corporate “brand tone” with touches of good-natured humour and subtle “insider joke” nods to the audience. Also explore the potential for weaving developing personal stories across your output. These stories should be genuine and as close to the grassroots as possible.

Pitch ideas to the legacy media via press releases, and perhaps also pitch directly if you have a relationship with a journalist or editor.

Actively pitch ideas to bloggers, audio podcasters and especially YouTube influencers. Make sure the pitch is individual and tailored to them and their audience. An inbound Web link from a blog is (apparently) worthless for Google Search SEO ranking, but that doesn’t mean it’s not valuable.

Make sure your content is easily shareable. But remember that not everyone does Facebook and Twitter, and in that case they can’t even see your posts on those closed services.

Don’t overlook the need for good writing in “thank you” messages, and even in receipts. Avoid flowery language and gushing cliches. Keep it simple, and add a nice straightforward coupon-code for a discount on their next purchase.

In spare moments, inventory the firm’s back catalogue of media, if that has not already been done. Is anything still useful, but gathering dust?


That’s the gist of what I noted during my reading and listening on the topic. Please comment and let me know if there’s anything I’ve missed.

Microsoft Teams – no-nag

17 Saturday Dec 2022

Posted by David Haden in JURN tips and tricks, Spotted in the news


A new UserScript, Microsoft Teams – Use Web App Instead. Stops the Desktop version of Teams from nagging you, if you’ve already launched (and want to use) the almost-identical Web browser version of Teams in Edge or Chrome.

How to archive a recalcitrant forum

11 Sunday Dec 2022

Posted by David Haden in JURN tips and tricks


Task: To download and safely archive a useful but very recalcitrant user-forum, one that may be at risk of going offline.

Roadblocks:

1) The forum archives can only be accessed by drop-downs that require you to input precise from-to dates. Harvesters / bots cannot get past such barriers, and cannot reach the forum’s ‘deep history’ of per-post threads.

2) Even if you had the individual URL of each and every forum thread, only a proper Web browser can get and archive each forum thread URL. Automated harvesters / bots / capture utilities are quickly blocked by the forum’s server.

3) AutoIt or the newer AutoHotkey might offer a solution on Windows, by calling Internet Explorer to load the URLs and then save each as a file. But my intensive searches found only arcane code fragments, and one code function. Nothing complete or even part-way complete.

The following solution thus requires a bit of manual work, though not too much. It is for a relatively small forum or sub-forum of technical coding advice (in this case, Python for 3D software), without a great weight of posted images. In this case there are 16 master pages of links to some 500 actual forum posts, each with user replies appended. Each post displays as a single scrolling page and is not paginated.

Solution:

1. Find the earliest forum thread date, then manually work through the date drop-downs to generate a per-year page showing the links to that year’s forum threads. Save it, along with any continuation pages there may be for that year. Work through the years and, on a long-standing forum or sub-forum, you may end up with some 15-20 saved HTML pages. It should not take more than a few minutes.

2. Extract a big list of all the links in these locally saved HTML pages. I used Sobolsoft’s ‘Extract Links from Multiple HTML Pages’ Windows utility to do this, but there are other bulk link extractors.

3. Save the extracted one-per-line links list to a .TXT file, copy-paste that list into Excel and sort it A-Z. From this sorted list, extract just the links that point to the forum threads. They should have a uniform path and pattern, allowing them to be easily identified and extracted. Save the new list to a further .TXT file. (For those who prefer to script it, a rough equivalent of steps 2-3 is sketched at the end of this post.)

4. Use the free Chrome-based Web browser extension DownThemAll! to load the new list .TXT (Web browser | start DownThemAll! | right-click anywhere | ‘Import from file’). You may also want to set DownThemAll! to only download one forum thread at a time (Web browser | start DownThemAll! | Cog icon in DownThemAll!’s lower right | Network | Concurrent downloads: 1).

Have DownThemAll! do the downloads. Regrettably there is no way to have DownThemAll! save the pages from the browser as .MHT (.MHTML) or .PDF files; it can only save in the same format the target URLs point to.

5. Because you’re using your normal Web browser and only downloading one page/post at a time, use of DownThemAll! should not trigger any traffic blocking from the targeted forum.

Great, so you have the forum threads downloaded as .HTML files. Of course, there’s a problem. The .HTML pages saved locally do not also save the images. When you load one of these HTML forum pages locally, the Web browser still loads the post’s images from the online forum server. That works for now, but we need a more permanent, self-contained local file.

6. The only solution I found for the next bit is the Pale Moon browser (very worthy, based on Firefox) and its free MozArchiver add-on. This add-on appears to be unique, in terms of being happy to save all open tabs (rather than just one). It saves each open tab as a portable .MHT file with embedded images. You will have to be brave though, and load 50-80 tabs at a time by drag-dropping the .html files onto Pale Moon. With my RAM and workstation, I find Pale Moon has no problem with 80 at a time. After drag-drop, pause to let the tabs all load. Then “save all tabs” to .MHTML files, which is quickly done.

It’s thus relatively easy to use this method to work through 500 or so locally-saved forum post-pages, provided they are not too image-heavy.

Then when done with each batch in Pale Moon, right-click on the left-most tab and “close all tabs to the right”. Repeat until finished.

That’s it. A slightly tedious workflow, but your recalcitrant and harvester-phobic user forum is now safely archived as portable .MHT files, one per forum thread. Good local indexing/search software (DocFetcher, DTSearch etc) should have no problem indexing local .MHT files, ready for you to do keyword searches across the local archive.
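For those who prefer to script steps 2-3 rather than use Sobolsoft and Excel, here is a minimal Python sketch. It assumes the saved per-year index pages sit in a saved-index-pages folder, and that the forum’s thread URLs contain a recognisable marker such as ‘viewtopic’ (both assumptions, to be adjusted to match your own forum)…

# Pull all links out of the locally saved HTML index pages, keep only
# those matching the forum's thread-URL pattern, and write a sorted,
# de-duplicated one-per-line list ready for DownThemAll!.
import glob
import re

# 'viewtopic' is a placeholder pattern - change it to whatever marks
# a thread URL on your particular forum.
thread_pattern = re.compile(r'href="(https?://[^"]*viewtopic[^"]*)"')
links = set()

for page in glob.glob("saved-index-pages/*.html"):
    with open(page, encoding="utf-8", errors="ignore") as f:
        links.update(thread_pattern.findall(f.read()))

with open("thread-links.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(sorted(links)))

print("Wrote", len(links), "thread links to thread-links.txt")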


If you ever need to convert the .MHT (.MHTML) files back, the Windows freeware MHTML Converter 1.1 will do that and has batch processing.

A newb takes a peep at SEO

05 Monday Dec 2022

Posted by David Haden in JURN tips and tricks


I had cause to read up on SEO. Yes, I — an utter newb at such things. Such an interest is rather unusual for me, since I live in an ad-free world that’s stuffed with ad-blockers and filters. Haven’t seen a TV ad for decades. At most I glance for a microsecond at a bus-shelter ad while out walking. Or flick past a PDF magazine ad. That’s it. Ads don’t exist for me, online or offline. Likewise, I have all the cruft blocked on Google Search, so I see just the search results. And there I also have Google Hit Hider, instantly removing a huge list of blocked sites from results.

Anyway, here are notes on what I learned about SEO as a total newb.

The focus of the SEO world appears to be almost entirely on Google Search, seemingly for the AdWords capabilities but also for the rich analytics Google provides. It also helps that Google owns a third of all online trackers. If any serious attention is paid to other search tools, I saw little sign of it. Most Google searchers are said to still use one, two or three words, perhaps adding a -knockout word. Some will use “inverted commas” to form a phrase. Few know how to go beyond that and use modifiers such as after: or site: or similar.

Broadly, it’s said that 40% of traffic comes from desktop PCs, and 60% from mobile. Desktop users look at and stay on a Web site longer. I wanted to know how that breaks down into navigational vs. informational searching, but such a breakdown soon turns into fairly granular and commercially valuable information, especially when it’s relevant to a market niche (pizza delivery and finding your local Ferrari dealership appear to be big in the SEO world). But I would imagine that desktop users make the most informational and deep-informational searches, once hordes of casual “write my essay for me” student searches are discounted.

I wasn’t aware (being smartphone-averse, as well as ad-averse) that these days mobile users often use voice for search, rather than typing. Either way, search words are often mis-spelled or badly chosen. Variants are often used, which the marketing dept. might not even be aware of without some heavy SEO keyword research.

Searches from buyers can further break down into general solution-solving buying vs. direct brand buying vs. trial users (e.g. ‘try before you buy’, tool-hire and ‘borrow from a mate’, or even ‘watch a hands-on YouTube demo’). Informational searches can turn into buying decisions much later, often after some form of personal consultation (e.g. boss, kids, spouse, fellow forum users). Marketeers may not be aware of all of a user’s requirements simply by taking a quick look at search terms. Behind a search, the buying user may have a mental tick-list of many factors to be considered in due course. Many will also go on to buy offline, after online research. And that’s not just about “seeing the item” before you buy. Couriers can still be a big barrier to home delivery, though ‘click and collect’ and Amazon lockers have somewhat solved that problem.

Surprisingly, it’s said that being No. 1 in Google Search results or AdWords is not a good position to be in. 3rd or 4th might be better, or so UK research claims.

The usual factors need an initial check before one plunges into SEO. Site speed and individual page-load speed. Also A/B testing of layouts, colours, usability, and which hot-spots people tend to click on when at a vital “buy now” page. Wheel your known buyers into the labs, feed them nibbles, and have them clickety-click for a few hours. Watch what they do.

Before SEO you would also need to give the site a health-check for major problems. Try to get rid of any links involving hijacked domains now in the hands of spammers; fix content not accessible to bots (which can include content accessible only by clicking from a drop-down menu, something many academic journals are guilty of); and ditch anything that pops up or slides in and gets in the way of a purchase decision, along with those vile “sign up to our mailing-list” whole-page blockers. More generally, consider if you just have a horrid website that simply causes people to back away and go elsewhere. Even on a great site, the most important landing page should be both perfect and helpful. It may not be, at present.

Anything effectively dead should be fixed: dead “404” links; “sorry” pages with no re-directs; missing pages that might appear to insult the visitor (“you seem to be lost, idiot”); and, in this category, bad date signals (e.g. old pages dated “1999”).

You also need to thoroughly fix the point-of-purchase. For instance, if you don’t accept PayPal and guarantee to send by regular Royal Mail second-class postage in a jiffy-bag, Grandma ain’t buying from you. She doesn’t do credit cards and couriers. Nor do her school-age grand-kids. Much the same is true of higher-level purchasers — don’t get in their way, or let delivery problems stop the purchase.

With the website patched up, it’s said that you then go on to make a general topics map that draws on the expertise in the business, and that this should be aware of seasonality (e.g. spring, summer beach holidays, Halloween, Thanksgiving, Christmas etc). This is done before you get into forming a “keywords plan” using an Excel spreadsheet. The spreadsheet builder is aware that the list of a firm’s specific product-names may not map onto searcher terms, and will also need to factor in national variant spellings, neologisms, and any slang used among sub-groups of buyers.

Then starts the process of building the “keywords plan” spreadsheet, on which there is abundant detailed DIY information available.

Many SEO people appear to find the competitor websites, which have already gone through the SEO process, to be a good way to harvest keywords. But the competitors probably did the same, and there’s said to be a risk of an echo-chamber developing in which vital factors can be missed.

Internally, it’s said that you should talk to the sales people after the marketing dept. Then you can use the face-to-face sales people to try to filter out the marketing-speak that normal people don’t use. Other useful sources are the firm’s on-site search box and discussion forums (official or unofficial). Try also to get up-to-date on customer expectations, and what their range of current problems are. Many will be trying to find a solution to a problem, rather than your specific product.

Once you have your keywords spreadsheet filled, you then rank the words and phrases, classify them by searcher intent, break them down into sub-sets and clusters, colour-code and so on. At this point some wizardry and pixie-dust is sprinkled as you plug in data from SEO sites that specialise in selling information. Such as how many hits per month your word is likely to get, its reputed value in your market niche and so on. If a word or phrase is high difficulty / high cost, many will try to break it down into a cluster of related sub-words that will be cheaper to tackle.

Less relevant keywords should apparently not be discarded, as your content-writer can buff them by writing articles based around these words or phrases. SEO content writing for a genuine audience (rather than the Googlebot) is an art in itself, and article titles and pictures / infographics are said to be especially important. If you already have older semi-defunct articles that look and feel old, either re-write them or give them a ‘noindex’ flag so they’re no longer seen on Google.

Less relevant words may also lead to sales way down the line, perhaps months or years later. Sometimes words come back into vogue, or are suddenly ‘lit up’ by a news story. So hide them rather than delete them from the spreadsheet “plan”.

Then you take your finished “keywords plan” and use it to overhaul the website content and write new content. Making sure it’s accessible to all, and refraining from anything Google doesn’t like. Such as “keyword stuffing” or “clickbait articles”. Every quarter you update the plan and seek out new keywords or phrases, and tweak the content as much as you can while adding fresh articles. At the same time you’re also monitoring what works or doesn’t, building inbound Web links from top sites, and trying to get genuine real people to talk about your stuff and link to it.

That seems to be it, at least according to my various notes. SEO gurus, please feel free to point out where I’ve misunderstood or overlooked something.

How to remove an erroneously added Excel hyperlink

01 Thursday Dec 2022

Posted by David Haden in JURN tips and tricks


How to remove an erroneously added hyperlink, from just one cell in Excel 2007.

Problem: Sometimes you paste and a hyperlink is formed, sometimes not. It seems a bit arbitrary, regardless of what you have set in your AutoFormat settings. Once a live hyperlink appears in a cell, there is no right-click | “Remove Hyperlink” in Excel 2007. Only the ability to add a hyperlink.

No “remove hyperlink” on right-click.

Solution in Excel 2007:

1. Select just the hyperlinked cell.

2. Top bar | Home | go along to the far end of the ribbon, to where “Sort & Filter” and “Find & Select” sit. Next to these is “Clear”, and there you select “Clear Formats”. That should do it.

You now have plain text in your cell, and it’s no longer a live hyperlink.

Later versions of MS Office Excel also added a “Clear Hyperlinks” option here, at the foot of the “Clear” selection options. But here we’re assuming you’re stuck with good olde 2007.

3. Save.
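If you ever face the same problem in bulk (hundreds of pasted hyperlinks scattered across a workbook), a scripted route may be quicker than clicking. Here is a minimal sketch using Python and the third-party openpyxl library; the filenames are placeholders, it only handles .xlsx files, and it writes a new copy rather than touching the original…

# Strip every hyperlink from every sheet of a workbook, keeping the
# visible cell text. Requires: pip install openpyxl
import openpyxl

wb = openpyxl.load_workbook("workbook.xlsx")
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if cell.hyperlink is not None:
                cell.hyperlink = None   # drop the link, keep the text
wb.save("workbook-no-links.xlsx")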

URLlister 0.4

27 Sunday Nov 2022

Posted by David Haden in JURN tips and tricks, Spotted in the news


URLlister 0.4.0 (March 2021) is Windows freeware. Manually click through a long list of URLs, as if with a TV remote-control. Each new click passes the next URL on the list into your Web browser, and loads it in a new tab.

No more manual copy-paste needed, for slow and careful ‘eyeball’ manual checking of a list of URLs.

More details on GitHub.

It may not be ideal to have lots of tabs opening and accumulating over time, and you may prefer not to have to manually close them. In which case the Tab Wrangler extension for Chrome-based Web browsers can handle that. It “automatically closes idle tabs after a designated time”.


See also the Chrome browser extension Load URLs At Interval, which moves through the URL list at a set timed interval. The URLs are loaded into the same tab, rather than a new tab opening for each.
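If you’d rather not install anything, a rough scripted equivalent is easy enough. Here is a minimal Python sketch, assuming a one-URL-per-line urls.txt list and whatever your system’s default Web browser is…

# Open the next URL from a list in a new browser tab each time Enter
# is pressed - a rough stand-in for URLlister's remote-control clicking.
import webbrowser

with open("urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

for n, url in enumerate(urls, 1):
    input("[%d/%d] Press Enter to open: %s" % (n, len(urls), url))
    webbrowser.open_new_tab(url)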

Word macro to increment the numbers in a back-of-the-book index

12 Saturday Nov 2022

Posted by David Haden in JURN tips and tricks


Situation: You have a completed back-of-the-book index for a book, perhaps created with PDF Index Generator. You are then required to add a further page to the book that you thought was finished. Such a change will ‘throw off’ most of the numbers in the index, but not all of them: only the page-numbers after the point where the new page was added. Do you need to remake the index? No.

Solution: You use the following Microsoft Word macro to do the required ‘intelligent corrections’ of the changed numbers in the Index.

Use: Copy-paste the kaput Index to a new Word document of the same page size, with retained formatting. Edit the macro’s three variable numbers for your needs and load it. Run it. Copy back the results to the book. Check the new index aligns with the book.

Change the numbers indicated. The first number is the increment, the second is the page number above which incrementing is done, and the third is an upper backstop.


' WORD MACRO - Increment the page numbers in a back-of-the-book index by x, but only if above a certain number.
Sub Demo()
Application.ScreenUpdating = False
' Change the following 1 to the increment you need (i.e. the number of pages added).
Const i As Long = 1
With ActiveDocument.Range
  With .Find
    .ClearFormatting
    .Text = "<[0-9]{2,3}>"
    .Replacement.Text = ""
    .Forward = True
    .Wrap = wdFindStop
    .MatchWildcards = True
    .Execute
  End With
  Do While .Find.Found
    ' Change 77 to the page number at or below which numbers must stay the same.
    If CLng(.Text) > 77 Then
      ' 2000 is the upper backstop - change this only if you have a monster book.
      If CLng(.Text) < 2000 Then .Text = CLng(.Text) + i
    End If
    .Collapse wdCollapseEnd
    .Find.Execute
  Loop
End With
Application.ScreenUpdating = True
End Sub

Change the numbers to suit your situation.

As set up above, the macro increments by 1 any page-number it finds in the index, but only if it is greater than 77 and less than 2000. That suits a book in which one new page has been added after page 77: an indexed reference to what was page 78 must now read 79, and so on. Numbers are recognised individually, even in the hyphenated form, e.g. 99-100 or similar.

Your index is assumed to have no dates or street numbers used as index terms, e.g. “1066 A.D., Battle of Hastings, p. 356”, or “221B Baker Street, p. 73”.

The newly added page(s) will of course need their own new entries (unless they are simply illustrations), which must be indexed and manually added to the revised index.

So far as I can tell, there’s no way to do this with a plain find-and-replace regex, since the replacement needs arithmetic.
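That said, scripting languages which allow a replacement function alongside a regex can manage the arithmetic. Here is a minimal Python sketch of the same idea, assuming the index has been saved as plain text in index.txt (the filename, threshold and increment are all placeholders to be adjusted)…

# Bump every whole 2-3 digit number above a threshold by a fixed
# increment, mirroring the Word wildcard pattern <[0-9]{2,3}>.
import re

INCREMENT = 1      # number of pages added
THRESHOLD = 77     # page numbers above this get incremented
BACKSTOP = 2000    # ignore anything this large

def bump(match):
    n = int(match.group(0))
    if THRESHOLD < n < BACKSTOP:
        return str(n + INCREMENT)
    return match.group(0)

with open("index.txt", encoding="utf-8") as f:
    text = f.read()

fixed = re.sub(r"\b\d{2,3}\b", bump, text)

with open("index-fixed.txt", "w", encoding="utf-8") as f:
    f.write(fixed)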
