Tuesday, December 12, 2006

Edu Exchange - lessons learned from the banks


At the Edu Exchange conference, one of the keynote speeches was given by Gerard Hartsink of ABN AMRO, who is heavily involved in the standardisation of monetary traffic between banks. A great idea, though somewhat flawed in execution: he clearly gave his default talk, which had way too many details on the intricacies of the problems encountered [1]. The talk would've been better if he had focused more on the general problems, rather than, for instance, showing all the consortium members in a long series of logo-filled slides.

Still, the final slide with the lessons learned from the troubles of standardisation had some nice ones for our field.

Put the customer in the centre of the chain, not the supplier.

A good reminder - especially since the customers are usually not involved in the work at all, even though they are the ones we're working for.

Don't start a standardisation process without a common vision.

The Dutch banks learned it the hard way, sinking more than 200 million EUR into incompatible 'chip wallet' systems.

Business managers, not the experts on standards, need to be leading.

Interesting tidbit: One of the directors of Ahold, Ab Heijn, played an instrumental role in the international barcode association.

Do not hand over the control over standards to consultants or service providers.

Not only is the standard itself at risk; less involvement of the partners also means less commitment.

All easier said than done, and all cliches, of course. But still true.

[1] As it turns out, in the EU most countries have developed their own systems for paying with debit cards, and even where the same technical standard was used, as in Belgium and Germany, differing content standards make cooperation even harder. An extra layer on top of the national standards is needed to make it possible; the architecture of which is pretty nightmarish.

Friday, November 10, 2006

thoughts on Daring Fireball's review of Stikkit

John Gruber's review of Stikkit is worth a read, for a number of reasons. First of all, it's thorough on interface design, which sounds familiar to us information professionals, right? Or at least, it should. Anyway, Daring Fireball has established itself as a place to watch for excellent interface critiques, with an eye for those details that make all the difference for the overall experience. For instance, the section AJAX vs. PERMALINKS nails the danger that over-using AJAX techniques repeats the mistake of frame-based navigation from the web 1.0 days. Gruber does this by describing exactly what's wrong with location bars that never change.

Apart from that, this review is interesting because Stikkit is. They're more than yet another Backpack or del.icio.us clone. They're trying to do something new: mixing different types of information and letting the system work out what you're trying to say. This works with simple conventions in the text, rather than knobs and buttons. For instance, type 'at' and a time, and the note is automagically put in the calendar. If it works, it means that Google Generation users are still willing to learn a vocabulary - going against the holy grail of simplicity.
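As a thought experiment, the 'at' convention can be sketched in a few lines of code. To be clear: the pattern and the classify_note() function below are my own illustration of the principle, not Stikkit's actual parser.

```python
import re

# Hypothetical sketch of a Stikkit-style text convention: the note itself
# carries the structure, with no forms or buttons involved. This regex and
# classifier are invented for illustration only.
TIME_PATTERN = re.compile(r"\bat\s+(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\b", re.IGNORECASE)

def classify_note(text):
    """Guess what kind of item a free-text note is."""
    match = TIME_PATTERN.search(text)
    if match:
        hour = int(match.group(1))
        minute = int(match.group(2) or 0)
        if (match.group(3) or "").lower() == "pm" and hour < 12:
            hour += 12
        return ("calendar", f"{hour:02d}:{minute:02d}")
    return ("plain_note", None)

print(classify_note("dentist at 3pm"))   # ('calendar', '15:00')
print(classify_note("buy more coffee"))  # ('plain_note', None)
```

The charm, and the risk, is exactly what Gruber describes: such conventions are invisible, so the user has to discover where the magic ends.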

This is a big if. Quoting from Gruber's concluding thoughts:
I remain unconvinced that it’s a good idea in the first place. Stikkit strikes me as a very good implementation of a flawed premise. The main problem is that with an utter lack of UI-enforced structure, it’s hard to get a sense of what the rules are.
When you present features as being magical — “just type a date and it knows it’s an event” — it’s confusing and irritating when the magic runs out.
Translating this to the challenge of re-thinking the academic information flow: if this magic works in Stikkit, it will be a must for the Scholarly Workbench. I know several researchers who swear by Backpack. It's a great collaboration tool. I wouldn't be surprised if they move over.

The magic needs to work reliably; it needs to be discovered gradually and with ease; in short, it needs to Just Work®. Time to spend some r&d on this.

Wednesday, October 18, 2006

Famous last words

I ended my last post saying that I was preparing for the Primo presentation... famous last words. It's been more than two weeks now.

I've got a long draft on Primo, waiting to be published, originally titled "an offer you can't refuse?". That should give an idea about my thoughts. But I don't want to get dooced.

There's a lot to be said, if only about the vision behind Primo. And the presentation was not shrouded in NDAs. So I'll have to look at it more carefully, considering the internal upheaval, but I will walk the walk and post that post. Mmmmkay?

Thursday, September 28, 2006

Too old for that

A team of students is investigating the use of social bookmarking in academic education as a placement project [1]. I had coffee with two of them yesterday. Of course, the conversation turned to the social web, and came to the MySpace phenomenon (which in Holland means hyves.nl).

Student #1: "Well, I used to be on hyves... but I'm too old for that now."

Student #2: "Yeah, now you've got a girlfriend."

Mind you, these guys are barely twenty.

The technology gap is no longer one wide crack between generations. We're dealing with technology craquelée! [2] Not that this makes it easier to bridge it, but at least the burden's distributed more evenly.

Back to work - preparing for a presentation of Ex Libris' Primo tomorrow. Eager!

[1] the results of which will be published in the open, of course.
[2] technology craquelée, it sounds like the perfect name for a blog.

Monday, September 18, 2006

boasting a boost

Lists are lists, that is all they should be. A model only as good as the imagination of the creator.

With that out of the way, let me boast for a moment! Webometrics ranks my uni as the number 21 within Europe on its commitment to open access (July count). That's not bad, and we're the highest in the Netherlands.

There is so much work to do in changing the research workflow, it's hard to imagine how we're ever going to get there. So this is a nice little boost, meanwhile. Onwards!

Thursday, September 14, 2006

The perceived value of recommendations

On the TidBITS Talk list, a discussion was started on the quality of recommender systems. A rather laughable email suggestion from Amazon prompted the original poster to ask why these systems are still mediocre at best.

It reminded me of the session on the Techlens project at the CNI fall 2005 task force conference. There were some interesting observations on when recommenders work, and when they don't (most in the Q&A, so not covered in the abstract).

There are two problems for such systems: the quality of the underlying data, and the problem of the desired neighbourhood. I'll start with the latter.

How widely do you, as a user, want the recommendations to vary? When you are new to a subject, you want the defining standard works - a narrow view. As you get more versed in the subject, you no longer want those predictable results, as you will already be familiar with them. Without surprise, there is no value. Different users want different results.

Research on user expectations showed that users were most content with a recommender service if it gave five suggestions (in an unobtrusive interface), as long as one or two out of these five were 'interesting'. Keep in mind though that this was research on users in a strictly defined research field, which can't be translated directly to other fields; but it gives an indication, and at least it is real, non-anecdotal data.

How does this translate to amazon? Like the original poster, I get the occasional amazon suggestion by email, most of which I delete instantly. Only rarely were they actually interesting. As a result, I find them annoying or amusing, depending on the actual suggestion - and they irritate me almost as much as spam.

However, when I browse amazon, the recommendations are much less obtrusive, so I glance at them when I want, and then I sometimes do find something interesting in there. And I find myself agreeing with the outcome of the techlens research: my amazon miss:hit ratio is 25:1, and I would like more hits, but it needn't be 1:1.

Now the data. The suggestions depend on the quality of the data. The ACM techlens used citations to see which objects were linked. That provides high-quality information on the links between objects.

Amazon however has to rely on more primitive metadata, such as the author, and refines this with buying and browsing patterns. It is actually surprisingly good at this, but as with all 'social sites' (of which amazon arguably is the granddaddy) this needs a critical mass to become reliable. In the dustier corners of the inventory, you get oddball results.

(nothing new here BTW - until recently, in our rare books department, the quality or even availability of indices of specialised collections depended totally on the personal interest of the specialist...)

A good recommender system will always give you some surprising suggestions. It may not always be the surprise you wanted, but if it were predictable, it would be of no value at all! So by definition, there is a high miss-to-hit ratio. The key is that the system must be unobtrusive enough that the misses can be ignored.


PS: in the long run, this will all change, when systems will be able to parse the actual objects and build relationships based on the content. There is a lot of research in this area, largely a spin-off of 'Homeland Security' projects. But it is still years away.

Wednesday, September 06, 2006

SFX integrated in search results

Yes, another TICER-inspired post! FastSearch's Bjørn Olstad made a few interesting remarks in his talk. One of those I heartily agree with: search results should be rich enough that the user won't have to open each result to see if it is actually what he was looking for.

This, now, is where OpenURL resolvers such as SFX can shine. As currently implemented, SFX is Yet Another Button, a separate action - and thus a burden to the user. I don't have the statistics handy for UvA-Linker (the name we gave our SFX - by the way, why does everybody have to give SFX its own name? It makes spreading the word to users very awkward. But I digress). I know we had to upgrade our server hardware, so it is used; however, I am positive it is still not used as much as it could be, and that's a bloody shame, for it has so much potential.

And then, if the button is clicked, the services offered are pretty boring. Useful, sometimes; but not inspiring. Why offer links to google and amazon searches? They are a dime a dozen, you get those with your cereal. And a current student has probably googled already before even looking at the library's search service.

It could be so much better! SFX should work as a web service instead, one that can be integrated wherever objects are displayed. Imagine a search results page (for instance in Metalib or an OPAC) where the results are enriched with SFX services! A little AJAX magic will do to insert a few lines. A direct link to fulltext. The first lines of the abstract, with a little button to display the remainder directly inline (again with AJAX). If available, the cover image from amazon or a dozen other sources. Cited-by links. Citation ranking. For chemical articles, graphical molecule displays. And who knows what other services the future holds.

Not just two clicks and a long wait away (not to mention all those new windows, ugh!) - instantly. It shouldn't be that hard for the current generation of OpenURL resolvers to add a web service interface (provided they *cough* have a simple and straight architecture on the publishing side).
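To make the idea concrete, here is a minimal sketch of a resolver-as-web-service. Everything in it is invented for illustration: the resolve() function, the example.org endpoints, and the returned shape are my own assumptions, not any actual SFX or OpenURL API.

```python
import json

def resolve(openurl_params):
    """Pretend resolver: map an OpenURL context to a dict of services.

    A real resolver would consult its knowledge base; this canned version
    only shows the machine-readable shape a results page could consume,
    instead of a human-only menu page."""
    services = {}
    if openurl_params.get("doi"):
        # Hypothetical fulltext link, keyed on the DOI from the OpenURL.
        services["fulltext"] = f"https://resolver.example.org/fulltext?doi={openurl_params['doi']}"
    if openurl_params.get("isbn"):
        # Hypothetical cover image service, keyed on the ISBN.
        services["cover_image"] = f"https://covers.example.org/{openurl_params['isbn']}.jpg"
    return services

# The results page would fetch this per record and inject the links inline:
print(json.dumps(resolve({"doi": "10.1000/xyz123"})))
```

The point of the sketch: once the services come back as data rather than as a rendered menu, any interface - Metalib, an OPAC, whatever comes next - can weave them directly into its result lines with a bit of AJAX.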

I'm very curious what the SFX community thinks of this idea. The SFX conference is actually happening this week in Stockholm, so I'm a little late for that. When our SFX people are back from Sweden though, I'll see if I can convince them...

PS: I'm aware I am mixing the terms OpenURL Resolver and SFX liberally in this post, which strictly speaking is not correct. I must confess that I don't know what other players there are on this market besides Ex Libris. Note to self: check this out.

Monday, September 04, 2006

Thinking about the future research workflow - A TICER 06 post

The final talk of day 1 of TICER 2006 was by Herbert van de Sompel (abstract powerpoint). In short, it was an elaboration of the article Rethinking Scholarly Publication (D-Lib, vol 10, no 9) and the april 2006 meeting on Augmenting interoperability across scholarly repositories.

It's always a pleasure to hear Herbert talk. He's an inspiring presenter, which unfortunately is not too common amongst conference speakers. When he speaks, he makes clear what is hard to grasp in his writings, which have an information density that mere mortals struggle to follow. When I read the D-Lib article a while ago, I did not fully grasp the depth and vision; with this talk, the penny dropped.

What really made this a great finish to a day that was pretty good already, was that this talk was about thinking further into the future. The other talks were about the next step, current dilemmas, what to do this year, maybe the next: using instant messaging and blogs to communicate with clients, or improving your OPAC's search results. All fine and dandy, and I certainly got some ideas; but the scope of this talk was a much wider vision, one which might take a decade to come to fruition.

We've made the first transition, from writing articles on paper, to writing articles electronically. But the idiom of the research workflow, the way scientists and scholars cite sources, has not changed yet. We've merely swapped one medium for another. With this transition well on its way, it's time to re-think the whole scholarly workflow.

And the first step is to build a uniform mechanism for referencing objects. This is necessary for machines to follow the research workflow from one source to another, which will make it possible to build all kinds of services on top of the objects. Think recommenders on the basis of recent citations (recent as in: without the publication lag, which can run up to two years for slow-moving journals!). Think overlay journals that work. Think archiving services (LOCKSS) that function automatically. It could work! And if anyone can pull it off, it's van de Sompel, who brought us the OAI and OpenURL architectures (*).

But even if this particular direction won't be the way, it is refreshing to think beyond the normal event horizon. The digitisation of the research workflow can mean so much more than the way the system is growing now, which, in Herbert's words, is nothing more than scanning in printed journals with the paper left out. The revolution has only just begun.

(*) He's not alone, of course, he's surrounded by some really bright people, and LANL is the perfect place for this type of research. But he's the essential hub.

[Note: what I find very interesting about this proposal is that it is a step on the road towards the original vision of Ted Nelson's Project Xanadu: a hypertext system where source and target don't just blindly link, but are aware of each other's existence. This goes way beyond our limited corner of the web, of course, but still].

Friday, September 01, 2006

On mixing work and play

Hello readers from Library Stuff. I peeked at the stats in a moment of curiosity and was pleasantly surprised. Thanks Steven! And a nice blog to boot - it's listed.

Commenting on my opening post, Library Stuff makes a point about blurring the lines between work and play. Funnily enough, I'm writing this at nine on a Friday evening, as I'm about to go dancing.

Yes, I want to show my true self here, my honest and personal self. After all, what's the point? That does not mean a full mix of work and play in one place though. After all, I do not live in my library. There's a time for working and a time for dancing. I will do the occasional work at home, and the occasional dance at work (yes, true story!), but both have their own place.

Same "blog voice" talking, just a different focus. Though the occasional vintage photograph may seep through.

Thursday, August 31, 2006

Online conference content done right

University of Michigan is one of the partners in Google's mass digitization program. They are a partner, not a supplier: they're not just surrendering their content to the Big G, but also host the digitized books themselves, and do a lot of research into what this means. In March, they held a symposium, Scholarship and Libraries in Transition, with speakers such as Clifford Lynch and Tim O'Reilly.

The content is interesting, but I'll leave that for another time. The problems of mass digitization are, unfortunate as it may be, at the moment not yet relevant for most libraries. But we can also learn from the way the conference is presented online. Now that's the way to do it! Not only are all the talks available in streaming video, but there was also a symposium blog. A mixture of more and less official posts with sometimes lively comments, made during and shortly after the conference, in which the energy of the event comes through. What a huge improvement over a website with powerpoint slides! There is a conversation going on in the blog, giving pointers to which talks to watch.

The last post came three weeks after, officially closing the comments. The blog stays online. Well done.

Tuesday, August 29, 2006

Being eaten from two sides

The Register has an interview with Colly Myers of AQA. AQA offers an SMS answer-any-question-for-a-quid service, is apparently a phenomenon in the UK, and makes a decent profit. The interview centres on the 'demise of Google', though it might be a bit early for that, and Andrew Orlowski is always keen to predict the end is nigh for the big G - you've got to look through that (not always easy, but worth it with El Reg).

From a library POV, the interesting thing is the service itself. This is a service on our turf! This is the work of the good old reference librarian, and here they are, charging a quid and making a profit with nine heads aboard.

Two observations. First, this once more establishes that there is a market for information, which includes a segment that is willing to pay good money for quality. Second, why have libraries left this to the market?

It's a wake-up call. We are being eaten, and not just by Google on one end, but on the other as well.

Monday, August 28, 2006

Spring is here!

The end of August 2006. Outside it's raining cats and dogs; inside I've just set up a new blog with the name Library Spring. The irony is hard to ignore.

Why Library Spring? The world of libraries is hardly blossoming. Another name could just as well have been Identity Crisis. But it is all too easy to be cynical - a protective mode to slip into.

Third paragraph already - let me introduce myself. My name is Driek Heesakkers, I work in the library of the University of Amsterdam, and this blog is intended as a work blog. It is not official by any means, but a way to join the online discussion of the library world. I find myself adding more and more comments to my del.icio.us bookmarks tagged 'library'. The final straw was when, one day after the opening day of TICER 2006, I met one of the Dutch library bloggers just as I was bubbling over with observations and thoughts. Time for a blog on my thoughts on library innovation.

I have been blogging personally for a long time now (findable with a quick google) but have mostly avoided work topics. At the start of this experiment these barriers will stay up; let's see how it goes.

Finally - why in English? Not just because, in the words of Henk Ellerman, English is an open standard, because the interesting conversations happen there, or because I'm used to it (blogging in Dutch just feels wrong). I also like the distance it creates from the day-to-day routine (to which I hastily have to retreat now).