Over 2000 academics have signed an online declaration against using the academic publisher Elsevier.
Google Web Fonts is a new Google service that offers a snippet of code to style your website with a font. The font streams in over the Web, so your website’s text looks the same to all visitors, although, judging by my experience of using a similar system with WordPress.com, it will slow down page loading. An especially nice choice for historians to experiment with might be the Old Standard TT font…
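The snippet itself is only a couple of lines. A minimal sketch of the kind of code the service hands out (the exact URL Google generates for you may differ):

```html
<!-- Load Old Standard TT from Google's font servers -->
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css?family=Old+Standard+TT">
<style>
  /* Apply it to body text; the serif fallback covers any visitor
     whose browser fails to fetch the remote font */
  body { font-family: 'Old Standard TT', serif; }
</style>
```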
[ Hat-tip: Beautiful Web Type ]
Summly is an interesting phone app that passed me by in the Christmas rush. It claims to use advanced algorithms to usefully summarise Web texts. Apparently it works best with journalistic press articles, although it is still easily confused by dates. The 16-year-old British inventor reportedly has backing from Hong Kong billionaire investor Li Ka Shing.
“scholarship’s three main filters for importance are failing … new forms [now] reflect and transmit [additional forms of] scholarly impact: that dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero — where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks — now, we can listen in. The local genomics dataset has moved to an online repository — now, we can track it. This diverse group of activities forms a composite trace of impact far richer than any available before. We call the elements of this trace altmetrics.”
While some of these claims may be true of science and medicine, the research suggests the humanities are rather lacking in engagement with new technologies and blogs / social media — beyond standard Web use, sharing Powerpoint slides, and using services like Google Docs.
Hurrah! I found a viable way to automatically, reliably, and fairly simply grab a CSV of Google Search results, complete with the URL, the title (anchor) text, and even the sample snippet. This is, of course, only intended for academic use — to speedily build useful lists of subject-specific links.
1. Download the free MozBar addon for Firefox. It’s SEO stuff for webmasters, but it’s free and it works. Note that the CSV export feature is only present in the Firefox toolbar, not in the Google Chrome version.
2. Temporarily turn off any Firefox addons you might have for modifying the appearance of Google Search results, such as GoogleMonkeyR.
3. Go to Google Search, go to Search Settings, and turn off Google Instant if you have it enabled (Google only offers up to 100 results per page when Instant is off). Set the number of results to 100. Save. Now do a test search.
No SERP Control Panel showing up? Click on the new SEOMoz toolbar (it’s sitting up near the top of your browser), click on the grey cogs, and select Google…
The SERP Control Panel overlay should now appear to the right of the search results. Note that you may also need to repeat this step for each new search or page, in order to get the data queued up correctly for a fresh CSV export, if you have Google Instant turned off.
4. On the SERP control panel, click on “Export to CSV”…
Note that we can also do this with Bing and Yahoo, and perhaps others if you can make profiles for them. Possibly it might even work with Google Scholar?
5. Open the resulting CSV file with Excel…
You even get the description/snippet from the search results, although prefaced with some junk — simply delete everything in front of the keyword “Undo” in the relevant column, by using Sobolsoft’s Excel Remove (Delete, Replace) Text, Spaces & Characters From Cells Addin for Excel…
Also delete the columns with the SEO junk in them. You now have three clean columns: URL, title, and snippet. Use a formula to convert these to pretty linked HTML in a fourth column, and/or paste them into a mega-file of subject-specific results for further weeding and sorting.
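If Excel isn’t to hand, the snippet clean-up and the link-building formula can be sketched in a few lines of Python. The column names here are assumptions, as MozBar’s real CSV headers may differ:

```python
import csv
import html

def serp_rows_to_html(rows):
    """Turn SERP rows (dicts with 'url', 'title' and 'snippet' keys)
    into a block of HTML links, first stripping MozBar's junk prefix,
    which ends in the word 'Undo', from each snippet."""
    links = []
    for row in rows:
        snippet = row["snippet"]
        # Delete everything up to and including the "Undo" marker
        if "Undo" in snippet:
            snippet = snippet.split("Undo", 1)[1].strip()
        links.append('<a href="%s">%s</a>: %s' % (
            html.escape(row["url"], quote=True),
            html.escape(row["title"]),
            html.escape(snippet)))
    return "<br>\n".join(links)

# Typical use, on MozBar's exported file:
# with open("serps.csv", newline="", encoding="utf-8") as f:
#     print(serp_rows_to_html(list(csv.DictReader(f))))
```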
None of the above is as robust or simple as the currently-broken Google Extract Data and Text, and it’s to be hoped that Sobolsoft fixes this software soon for Windows 7 + IE9.
The Public Domain Review has a new online leaflet, A Guide to Finding Interesting Public Domain Works Online.
The Web’s biggest pirate galleon has just announced a new search category: “Physibles”, a fancy name for digital 3D objects…
“Data objects that are able (and feasible) to become physical. We believe that things like three dimensional printers, scanners and such are just the first step. We believe that in the nearby future you will print your spare sparts for your vehicles. You will download your sneakers within 20 years.”
Google 3D Warehouse has of course been quietly doing something very similar for some years now. All their models are free (inc. commercial use) too, but legit. They even give you awesome free software, Google SketchUp, to manipulate and alter the objects.
Oxford’s Dynamic Collections is a forthcoming WordPress plugin that seems to still be in private beta, but which sounds interesting. Basically, it harvests OER [Open Educational Resource] records into WordPress from across a range of sources, but filters them by keyword(s). The results, presumably with a bit of hand tweaking, are quickly-built subject lists of such resources.
My guess would be that one could probably do something similar with repository record feeds: use Excel to sort simple CSV records by the presence of keyword(s), then export only the relevant records as CSV, then load these into Omeka.
A nice new footnotes plugin for WordPress. It uses simple square brackets, which must have a number at the start of them. It accepts HTML links inside the brackets. I’d love to see this plugin come as standard with the free WordPress.com -hosted blogs…
To get the smaller font size on the footnotes, paste the CSS into your theme’s stylesheet, probably at the foot of the font section (that worked for me). The plugin doesn’t add this CSS automatically.
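A minimal sketch of the sort of rule involved (the ol.footnotes selector is a guess at the plugin’s markup; check your rendered page source for the class it actually uses):

```css
/* Shrink the footnote list relative to the body text */
ol.footnotes {
    font-size: 0.8em;
    line-height: 1.4;
}
```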
A recent survey by Civic Ventures concluded that six million of the USA’s 1960s baby boomers seriously intend, upon retirement, to use their experience to develop new non-profit organisations. And these may not look like the creaky old non-profits that we’ve known until now. They’re likely to be seriously Internet-enabled, and reasonably well funded from private sources.

So, here’s a question. Some of the effort will be local (saving stray kitty cats, developing local theatres, creating new woodlands, etc.) but how could some of it be directed toward open-access scholarly content? Could structured national programmes be developed to stimulate and guide useful scholarly initiatives by retirees, perhaps based on alumni associations and running alongside things like tax breaks and the promotion of legacies left to help fund open-access journals and archive digitisation? And how about this: your local university gives free library and journal access to any retiree who starts a suitable non-profit, and then invites them all to a free annual TED-like networking event just for them?
Nice one. Mobile phone firm Orange has struck a deal with Wikipedia to make the encyclopaedia available free of data charges to 70 million users across the Middle East and Africa. Various national launches of the service will happen during 2012. The deal is non-exclusive, so Wikipedia can sign similar deals with other phone service providers.
A report on JISC Discovery 2012 (11th Jan 2012).
Omeka: a complete WordPress-like digital collections management system, for academics. It’s free, from the Center for History and New Media at George Mason University. It’s easy to install and use, and has themes, and plugins, and media support, just like WordPress.
* Allow users to add a comment and rating to any record. Also add social media buttons.
* Add Library of Congress Subject Headings to your records
* Have your collection records be readable for Zotero users
A search modifier I’d like to see in Google Search…
infirstpage: (similar to the existing intitle: but it would return a result only if the keyword or phrase occurs in the first 360 words of a document)
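In the absence of such a modifier, the same filter is easy to approximate yourself once you have a document’s text in hand. A rough sketch of the idea, using the 360-word window suggested above:

```python
def in_first_page(text, phrase, window=360):
    """Return True if the phrase occurs within the first `window`
    words of a document's text (case-insensitive), mimicking the
    proposed infirstpage: modifier with its 360-word window."""
    head = " ".join(text.split()[:window]).lower()
    return phrase.lower() in head
```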
David Shotton proposes The Five Stars of Online Journal Articles…
“I propose five factors — peer review, open access, enriched content, available datasets and machine-readable metadata — as the Five Stars of Online Journal Articles.”
From a search perspective, I might suggest we need to add another star for “Googlyness”, when all the following factors are present…
* search-engine friendliness (i.e.: make sure the article title shows up as the clickable link in search results, not something like “43w94.taryyt.indd”)
* RSS feeds for linked tables-of-contents
* embedding of the journal title and home URL in each individual PDF or HTML article page (so they can be easily tracked back, after they get casually downloaded to a hard-drive)
JURN has joined Wikipedia, the Internet Archive, O’Reilly, Make magazine, and others in blacking out the service, for 24 hours.
Google’s new guide on how to add name authority (that shows up in Google Search results) to your online articles or blog posts. Thankfully, it doesn’t seem to rely on you signing up with Google’s Facebook-challenger Google+.
Wikipedia is to shut down for 24 hours (5am GMT Wednesday to 5am GMT Thursday), in protest against the new copyright legislation being pushed by big publishers in the USA — SOPA (the Stop Online Piracy Act) and PIPA (the Protect Intellectual Property Act). Also very worrying is the Research Works Act.