Semantic Web rant – automated travel agents in 1978

BBC – Radio 4: The Material World 29/04/2004: The Semantic Web: “Quentin Cooper is joined by Wendy Hall, Professor of Computer Science at the University of Southampton, and Jim Hendler, Professor of Computer Science at the University of Maryland, to find out how to add meaning to the mass of content on the web.”

The above page includes background info and a link to the RealAudio replay of the original 29th April programme/chat. The good news is that it’s a pretty good overview for general consumption. The bad news is that it promises a mainstream Semantic Web within 10 years, and does so in a cavalier fashion: earlier in the programme it invokes software agents such as Holly in Red Dwarf, which is misleading. The ubiquitous travel agent scenario is cited as an example, i.e. tools that, like good human travel agents, could help me cut through the morass of data and services.

This connection reminded me that the groundbreaking automated travel agent scenario was established some 27 years ago (!!) in a paper by six of the heaviest hitters in the field:

Daniel G. Bobrow, Ronald M. Kaplan, Martin Kay, Donald A. Norman, Henry Thompson, Terry Winograd: “GUS, A Frame-Driven Dialog System.” Artificial Intelligence. 8(2): 155-173 (1977).

Now that 27 years have gone by, the travel data is all online, and is straightforwardly searchable and retrievable. But the automated travel agent remains elusive. The data is there, the representations are there, the inference engines are there, the connectivity is there, the computing horsepower is there (in spades), but the automated travel agent remains elusive.

I’m not going to play the cynic and argue that it’s unattainable, or that being a good travel agent requires some mystical human intuition. On the contrary: I’m an AI person, and a psychologist, and I’m certain that the tools are there and that the implementation has pretty much already been achieved. But I’m not a sociologist, nor an anthropologist, and that’s probably what was needed to balance that team of heavy hitters back in 1977: the usage-in-context scenarios would have set off alarm bells for someone outside of AI and Cognitive Psychology, who might have said “wait a minute, folks: no one is going to USE an automated agent in the way you have envisaged.”

To make this a little more concrete, let me tell you a true story from 1978. So enamoured was I of GUS-like automated travel agent scenarios, computational linguistics, and intelligent agents, that I thought I’d test out a little scenario on a real person: a ticket agent at Waverley train station in Edinburgh, Scotland. I was travelling from The Open University in Milton Keynes up to the AI Department at the University of Edinburgh on a fairly regular basis (we were making a series of 16 TV programmes on AI with the BBC back then, for a new Open University course), and travel by train was dramatically improved if you could avoid going through London. There were some ‘unofficial’ or ‘unrecommended’ routes that involved, if you were lucky, a 2-minute interchange at Crewe (the recommended routes required a 20-minute connection time, to be on the safe side). If your first train was more than 2 minutes late, it cost you about 2 hours!

Anyway, enough background. I needed some specific information, and I needed it badly. I formulated my question very precisely, and even wrote it down on a piece of paper so I could ask it with no ambiguity. I strolled up to the ticket window, and proudly read out my natural language version of my GUS-like query:

“What is the latest I can leave Edinburgh tomorrow morning and still arrive in Milton Keynes by 1PM?”

“Sorry, me mate, I dinna understand ye.”

“Er… sorry… I have to be in Milton Keynes tomorrow afternoon, and was just wondering if I can get there in time.”

“Ach… why didna ye sae so?”

[addendum 29th April 2004, following some useful email interchanges with KMi colleagues]
In many ways, the key for me is not having the ‘system’ actually do much for me at all, other than lay out the space of possibilities and perhaps clarify trajectories and tradeoffs for me [no surprises here: this is precisely why Meet-O-Matic doesn’t actually arrange meetings!] — toward this end, common (and clearer) representation formalisms, better interfaces, expressive systems, etc. can all contribute; what I dispute is the premise, typified by GUS and the aforementioned radio programme, that agents will actually be negotiating or deciding on my behalf.

