ACM Computing Surveys 31(4), December 1999, Copyright © 1999 by the Association for Computing Machinery, Inc. See the permissions statement below.

The Evolution of Hypertext Link Services

Leslie Carr, Wendy Hall, and David De Roure
University of Southampton
Multimedia Research Group
Highfield Lane, Southampton, SO17 1BJ, UK

Abstract: Hypertext, a neologism of the 1960s indicating something which is more than text, has captured the attention of scholars, businesses and hobbyists in the form of the World Wide Web. The Web was developed as a hypertext framework for information distribution [Berners-Lee 1992], and its overseeing organisation (W3C) has insisted on maintaining and developing a suite of open standards for data formats, communication protocols and programming interfaces to allow all comers to participate in a globally shared information repository.

However, the Web is just one example of how the development of hypertext philosophy, design and deployment has led to practical solutions for information dissemination, manipulation and maintenance. This paper describes how hypertext systems have evolved to become distributed and open providers of information services and examines the nature of the linking that forms the basis of hypertext functionality.

Categories and Subject Descriptors: I.7.2 [Text Processing]: Document Preparation - hypertext/hypermedia; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems - Hypertext navigation and maps

General Terms: Design, Documentation

Additional Key Words and Phrases: Electronic publishing, link services, open hypermedia systems

1 Justification for Hypertext Services

Early visions of hypertext environments took the view that all information would be made available to everyone through a particular system. Bush's Memex [Bush 1945] was such a system, but it was parasitic on the existing network of research libraries; Nelson's Xanadu [Nelson 1987] by contrast proposed a whole new computer network of ``xanalogical storage'' sites with associated franchise opportunities. Other hypertext environments, while not assuming such a level of ubiquity, were still characterised by the need to import data into the control of the system, irrespective of the fact that information was originated, modified and consumed outside of any specific environment, using many tools and programs, for many purposes, perhaps on many different machines and computer architectures.

The recognition of the diverse nature of information environments drove other communities to develop standards such as SGML for open information interchange (subsequently adopted by the Web in simplified form) allowing documents and data to be interchanged between different applications and work processes. By contrast, the database community produced SQL, a standard for interoperation, so that instead of transferring data between different programs, applications could query any database which held the required information.

Early on in the history of the hypertext research community, there were calls to similarly extend the capabilities of hypertext systems to allow interoperability across a range of software components [Meyrowitz 1989]. Sun's Link Service [Pearl 1989] was the first practical implementation of this idea: part of a commercial software development environment, the link service was composed of a server providing link deployment features together with a library that could be used to turn any application into a client of the service. Several text editors, a file browser and a project scheduling tool were shipped as `link aware', together with various third-party CASE tools.

2 Open Hypermedia Systems

The term `open hypermedia' came to be associated with the provision of a hypertext service which enabled client applications to create, edit and activate links which were managed in separate link databases; it contrasted with the monolithic approach to hypertext systems in which the functionality of both data and link management was provided within a single indivisible application.
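The separation this implies can be sketched in a few lines of Python. All names here are hypothetical and drawn from no particular system: the point is only that links live in a database of their own, keyed by source anchors, and are resolved at view time without the documents ever being modified.

```python
# Sketch of open hypermedia's central idea: links are first-class
# objects held in a separate database, not markup embedded in documents.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    source_doc: str     # document the link starts from
    source_anchor: str  # selection within that document
    target_doc: str     # document the link leads to
    target_anchor: str  # position within the target

class LinkDatabase:
    """Holds links independently of any document's content."""
    def __init__(self):
        self._links = []

    def add(self, link: Link):
        self._links.append(link)

    def links_from(self, doc: str, anchor: str):
        """Resolve links at view time; the document itself is untouched."""
        return [l for l in self._links
                if l.source_doc == doc and l.source_anchor == anchor]

db = LinkDatabase()
db.add(Link("paper.txt", "Memex", "bush1945.txt", "top"))
targets = db.links_from("paper.txt", "Memex")
```

Because the documents carry no link markup, any `link aware' client can create, edit or activate links against such a database without owning the data.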

Microcosm [Fountain 1990] provided a similar link service solution from the perspective of a different application domain. Developed out of a need to be able to provide a hypertext environment for the presentation of existing archival data (video disks, sound and image libraries as well as textual documents), its aim was to provide scholarly and teaching access for those data sets. It did not have a client/server architecture, but instead distributed the various hypertext functions into a chain of processes which communicated by message passing [Davis 1992]. This enabled the behaviour of the system to be modified easily at run time, adding different link database processes or swapping history management and tours facilities in and out depending on the system behaviour required by the user (which in turn may be a function of their technical or subject sophistication). Document viewers and editors were simply specific instances of hypertext processes in the chain, and similarly communicated by message passing.
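Microcosm's filter-chain style can be illustrated with a toy sketch (the message format and filter names below are invented, not the actual Microcosm protocol): each hypertext function is a process in a chain that inspects a message, optionally responds to it, and passes it on, so reconfiguring the system at run time amounts to editing the chain.

```python
# Toy filter chain in the Microcosm style: hypertext functions are
# processes that communicate by message passing rather than API calls.
def linkbase_filter(message, responses):
    # Responds to link-dispatch requests it knows about (toy data).
    if message["action"] == "follow" and message["selection"] == "Xanadu":
        responses.append({"target": "nelson87.txt"})
    return message  # always pass the message down the chain

def history_filter(message, responses):
    # Records every selection; could be swapped out at run time.
    history.append(message["selection"])
    return message

history = []
chain = [linkbase_filter, history_filter]  # editable while running

def dispatch(message):
    """Send a message down the chain; collect any responses."""
    responses = []
    for process in chain:
        message = process(message, responses)
    return responses

hits = dispatch({"action": "follow", "selection": "Xanadu"})
```

Adding a second link database, or removing the history facility, is simply a matter of inserting or deleting an entry in `chain`; document viewers would be further processes in the same chain.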

Multicard [Rizk 1992] provided another example of this separation, with an extensible protocol (M2000) communicating requests between specially adapted document editors and a hypertext back end. Document editors were not required to implement the full protocol: a minimum of Open/Close Node was deemed sufficient in order to access each document from a networked storage database.

2.1 An Architectural Basis for Open Hypermedia Services

Database systems came to be seen as an ideal basis on which to build higher-level hypertext functionality: HyperBase [Schütt 1990], for example, was built on a relational foundation, and HB1 [Schnase 1993a] on a semantic database platform.

A more abstract but highly influential attempt to provide a standard decomposition of hypertext facilities resulted in the Dexter Hypertext Reference Model [Halasz 1990]. It established how the architecture of a hypertext system could be organised into separate layers concerned with component storage, run-time and document content issues. Some of the persistent ideas which emerged from the Dexter separation of concerns were the classification of hypertext objects into nodes, groups, anchors and links and the identification of the run time role of session management.

Particular hypertext features were deliberately excluded from the model, including a model of the contents of each node or document in the hypertext and the presentation and behavioural aspects of the components of a link. The runtime layer focussed on the dynamic interactions between the user and the various components of the system, most importantly the meaning of ``following a link'' in terms of identifying a target link anchor, mapping it onto a link component in the storage layer, determining the resulting anchor components and presenting them in the context of their stored nodes.
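The ``following a link'' sequence just described can be walked through with a toy data structure (everything below is invented for illustration, not the Dexter specification itself): a selected anchor is mapped to a link component in the storage layer, and the link's other endpoints are resolved back to anchored spans in their stored nodes.

```python
# Toy Dexter-style storage layer: nodes and links are components,
# and anchors map (component, anchor id) pairs to content locations.
storage = {
    "components": {
        "node1": {"type": "node", "content": "...about Memex..."},
        "node2": {"type": "node", "content": "...Bush 1945..."},
        "link1": {"type": "link",
                  "endpoints": [("node1", "a1"), ("node2", "a2")]},
    },
    "anchors": {  # (component id, anchor id) -> span within content
        ("node1", "a1"): (9, 14),
        ("node2", "a2"): (3, 12),
    },
}

def follow(component_id, anchor_id):
    """Resolve the other endpoints of any link attached to this anchor."""
    results = []
    for comp in storage["components"].values():
        if comp["type"] != "link":
            continue
        if (component_id, anchor_id) in comp["endpoints"]:
            for (target, a) in comp["endpoints"]:
                if (target, a) != (component_id, anchor_id):
                    results.append((target, storage["anchors"][(target, a)]))
    return results

# Following the anchor in node1 lands on the anchored span in node2.
destinations = follow("node1", "a1")
```

The runtime layer's remaining job, outside this sketch, is session management: presenting the resulting spans to the user in the context of their stored nodes.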

Despite providing a layered model, Dexter did not necessarily result in implementations providing identifiably separate services [Nürnberg 1997]; the Flag Taxonomy [Østerbye 1996] directly addressed this issue, modelling the decomposition of roles into interdependent services. The Open Hypermedia Protocol (OHP) [Davis 1996] was a collaborative development by the open hypermedia community to implement a universal protocol between these services, effectively allowing mix-and-match access to the storage and runtime layers of a range of hypermedia systems.

Layered systems based on the Dexter model started off in a single-computer environment, but over a period of several years were reworked to address issues of the network. Devise HyperMedia [Grønbæk 1993] used an object-oriented schema and an OODB to implement Dexter's storage layer; the authors subsequently proposed extensions to the Dexter model to accommodate open hypermedia systems [Grønbæk 1996] and ultimately technology for deploying these Dexter services in a Web environment [Grønbæk 1997] [Grønbæk 1999]. HyperDisco established an environment for tool integration, firstly on a single machine [Wiil 1996] and later across the Internet [Wiil 1997]. Chimera [Anderson 1994] also established a service model, not directly designed on Dexter, but which was later successfully used to apply link services to the Web [Anderson 1997].

A significant outcome of this work has been the demonstration of three independent hypertext systems interoperating in various roles across a network [Reich 1999]. Providing such hypertext linking across application boundaries is a principal goal of open hypermedia interoperability, and offers great hope for the usefulness of data stored in legacy systems.

3 Open Hypermedia Linking Services

The role of links is varied (structural, navigational, spatial etc.) and the kind of linking which can be used within an open environment may be of crucial importance [DeRose 1989]. XLink, the proposed Web standard for hypertext linking [DeRose 1999], allows databases of links to be maintained separately from the documents, with each link fixed to specific regions of specific documents. Hyper-G [Andrews 1995a] addressed the problem of broken links in the Web's open environment by providing a safe but restricted environment with guaranteed consistency for link editing. By contrast, the Distributed Link Service consisted of an unenclosed environment providing a navigational overlay to Web pages, based on within-node text analysis to identify implicit link opportunities such as key phrases, personal names and bibliographic citations [Carr 1998].
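The Distributed Link Service's notion of an implicit or `generic' link can be sketched roughly as follows (the linkbase contents and function names are invented): rather than being anchored at a fixed offset in one document, a link is keyed by a phrase and matches wherever that phrase occurs in any text.

```python
# Rough sketch of generic links: a phrase-keyed linkbase applied as a
# navigational overlay to arbitrary text, not embedded in documents.
import re

generic_links = {  # phrase -> target resource
    "open hypermedia": "ohs-overview.html",
    "Memex": "bush1945.html",
}

def find_link_opportunities(text):
    """Scan text for phrases the linkbase knows about, in document order."""
    hits = []
    for phrase, target in generic_links.items():
        for m in re.finditer(re.escape(phrase), text):
            hits.append((m.start(), phrase, target))
    return sorted(hits)

page = "Bush's Memex anticipated open hypermedia systems."
hits = find_link_opportunities(page)
```

A real service would also handle case folding, overlapping matches and richer analyses (personal names, bibliographic citations), but the essential point is that the documents being overlaid are never edited.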

To recall the opening theme, hypertext services must provide `more than' services to diverse sets of text and multimedia data. Hypertext links describe various kinds of relationships, not all of which can be easily modelled by `hard', point-to-point hypertext links anchored at fixed document offsets. Instead they may be computed or inferred based on attributes of the document [Parunak 1991], features of its contents [Hirata 1997], characteristics of the runtime environment [Stotts 1989] or traits of the community which uses the documents [Hill 1997].

Computations which process knowledge and meta-knowledge, data and metadata are proposed for providing the `more than' services of the Web [Berners-Lee 1997]. Metadata as well as analysis of the data contents themselves can be used as the basis for deriving linkable relationships. Initially used to encode keywords and publication cataloguing information, the Web standard RDF [Lassila 1999] can also make general statements about documents which a suitable future service can use to infer many kinds of link relationships (e.g. two documents are related because their metadata declares that they describe the same research project).
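The research-project example can be made concrete with a small sketch over RDF-style subject-predicate-object statements (the vocabulary term `describesProject' and the document names are made up for illustration): two documents become linkable because their metadata shares an object.

```python
# Inferring `soft' link relationships from RDF-style metadata triples:
# subjects that share the same object for a predicate are deemed related.
triples = [
    ("doc1.html", "describesProject", "Microcosm"),
    ("doc2.html", "describesProject", "Microcosm"),
    ("doc3.html", "describesProject", "Multicard"),
]

def inferred_links(triples, predicate="describesProject"):
    """Pair up subjects that share an object for the given predicate."""
    by_object = {}
    for s, p, o in triples:
        if p == predicate:
            by_object.setdefault(o, []).append(s)
    pairs = []
    for subjects in by_object.values():
        for i in range(len(subjects)):
            for j in range(i + 1, len(subjects)):
                pairs.append((subjects[i], subjects[j]))
    return pairs

related = inferred_links(triples)
```

No link is stored anywhere here: the relationship exists only as a consequence of the metadata, which is precisely what distinguishes such inferred links from hard, authored ones.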

Link services emerged as embedded parts of a monolithic system but have become value-added services in a heterogeneous and unco-ordinated environment. This has opened up enormous challenges of robustness and scalability, not the least of which is dealing with disappearing documents and broken links.

It is possible that link services which provide `soft' links (corresponding to inferable relationships between documents) will play a significant role in future multimedia information applications.


[Anderson 1994] Kenneth M. Anderson, Richard N. Taylor, and E. James Whitehead. "Chimera: Hypertext for Heterogeneous Software Environments" in Proceedings of the ACM European Conference on Hypermedia Technology (ECHT '94), Edinburgh, Scotland, 94-106, September 1994.

[Anderson 1997] Kenneth M. Anderson. "Integrating Open Hypermedia Systems with the World Wide Web" in Proceedings of ACM Hypertext '97, Southampton, UK, 157-167, April 1997.

[Andrews 1995a] Keith Andrews, Frank Kappe, and Hermann A. Maurer. "The Hyper-G Network Information System" in Journal of Universal Computer Science, 1(4), 206-220, April 1995.

[Berners-Lee 1992] Tim J. Berners-Lee, Robert Cailliau, Jean-François Groff, and Bernd Pollermann. "World-Wide Web: The Information Universe" in Electronic Networking: Research, Applications and Policy, 2(1), 52-58, 1992.

[Berners-Lee 1997] Tim J. Berners-Lee. "Realising the Full Potential of the Web", a presentation at a W3C Meeting, London, UK, March 1997.

[Bush 1945] Vannevar Bush. "As We May Think" in The Atlantic Monthly, 176(1), 101-108, July 1945.

[Carr 1998] Leslie A. Carr, Wendy Hall, and Steven Hitchcock. "Link Services or Link Agents?" in Proceedings of ACM Hypertext '98, Pittsburgh, PA, 113-122, June 1998.

[Davis 1992] Hugh C. Davis, Wendy Hall, Ian Heath, Gary J. Hill, and Rob J. Wilkins. "Towards an Integrated Information Environment with Open Hypermedia Systems" in Proceedings of the ACM Conference on Hypertext (ECHT '92), Milano, Italy, 181-190, December 1992.

[Davis 1996] Hugh C. Davis, Andy Lewis, and Antoine Rizk. "OHP: A Draft Proposal for an Open Hypermedia Protocol" presented at ACM Hypertext '96 Conference, Open Hypermedia Systems Workshop, Washington DC, March 1996.

[DeRose 1989] Steven J. DeRose. "Expanding the Notion of Links" in Proceedings of ACM Hypertext '89, Pittsburgh, PA, 249-257, November 1989.

[DeRose 1999] Steven J. DeRose, David Orchard, and Ben Trafford. "XML Linking Language (XLink)" (Working Draft 26 July 1999). Cambridge, Massachusetts: World Wide Web Consortium, 1999.

[Fountain 1990] Andrew M. Fountain, Wendy Hall, Ian Heath, and Hugh C. Davis. "Microcosm: An Open Model With Dynamic Linking" in Proceedings of the ACM European Conference on Hypertext (ECHT '90), Versailles, France, 298-311, November 1990.

[Grønbæk 1993] Kaj Grønbæk, Jens A. Hem, Ole L. Madsen, and Lennert Sloth. "Designing Dexter-Based Cooperative Hypermedia Systems" in Proceedings of ACM Hypertext '93, Seattle, WA, 25-38, November 1993.

[Grønbæk 1996] Kaj Grønbæk and Randall H. Trigg. "Toward a Dexter-Based Model for Open Hypermedia: Unifying Embedded References and Link Objects" in Proceedings of ACM Hypertext '96, Washington DC, 149-160, March 1996.

[Grønbæk 1997] Kaj Grønbæk, Niels Olof Bouvin, and Lennert Sloth. "Designing Dexter-Based Hypermedia Services for the World Wide Web" in Proceedings of ACM Hypertext '97, Southampton, UK, 146-156, April 1997.

[Grønbæk 1999] Kaj Grønbæk, Lennert Sloth, and Peter Ørbæk. "Webvise: Browser and Proxy Support for Open Hypermedia Structuring Mechanisms on the WWW" in Proceedings of the Eighth International World Wide Web Conference, 253-268, 1999.

[Halasz 1990] Frank G. Halasz and Mayer D. Schwartz. "The Dexter Hypertext Reference Model" in Proceedings of the Hypertext Standardization Workshop, National Institute of Standards and Technology (NIST), January 1990; reprinted in Communications of the ACM, 37(2), 30-39, February 1994.

[Hill 1997] Gary J. Hill, Gerard Hutchings, Roger James, Steven Loades, Jacques Halé, and Michael Hatzopoulos. "Exploiting Serendipity Amongst Users to Provide Support for Hypertext Navigation" in Proceedings of ACM Hypertext '97, Southampton, UK, 212-213, April 1997.

[Hirata 1997] Kyoji Hirata, Sougata Mukherjea, Yusaku Okamura, Wen-Syan Li, and Yoshinori Hara. "Object-Based Navigation: An Intuitive Navigation Style for Content-Oriented Integration Environment" in Proceedings of ACM Hypertext '97, Southampton, UK, 75-86, April 1997.

[Lassila 1999] Ora Lassila and Ralph Swick (editors). "Resource Description Framework (RDF) Model and Syntax Specification", World Wide Web Consortium Recommendation, 22 February 1999.

[Meyrowitz 1989] Norman K. Meyrowitz. "The Missing Link: Why We're All Doing Hypertext Wrong" in The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information, Edward Barrett (editor), Cambridge, MA: MIT Press, 107-114, 1989.

[Nelson 1987] Theodor Holm Nelson. Literary Machines, Edition 87.1, Sausalito, CA: Mindful Press, 1987.

[Nürnberg 1997] Peter J. Nürnberg, John J. Leggett, and Uffe K. Wiil. "An Agenda for Open Hypermedia Research" in Proceedings of ACM Hypertext '97, Southampton, UK, 198-206, April 1997.

[Østerbye 1996] Kasper Østerbye and Uffe K. Wiil. "The Flag Taxonomy of Open Hypermedia Systems" in Proceedings of ACM Hypertext '96, Washington DC, 129-139, March 1996.

[Parunak 1991] H. Van Dyke Parunak. "Don't Link Me In: Set-Based Hypermedia for Taxonomic Reasoning" in Proceedings of ACM Hypertext '91, San Antonio, TX, 233-242, December 1991.

[Pearl 1989] Amy Pearl. "Sun's Link Service: A Protocol for Open Linking" in Proceedings of ACM Hypertext '89, Pittsburgh, PA, 137-146, November 1989.

[Reich 1999] Siegfried Reich, Jon Griffiths, David E. Millard, and Hugh C. Davis. "Solent - A Platform for Distributed Open Hypermedia Applications" in Database and Expert Systems Applications, Trevor Bench-Capon, Giovanni Soda, and A Min Tjoa (editors), 10th International Conference, DEXA '99, Florence, Italy, Vol. 1677 of LNCS, 802-811, August 1999.

[Rizk 1992] Antoine Rizk and Louis Sauter. "Multicard: An Open Hypermedia System" in Proceedings of the ACM Conference on Hypertext (ECHT '92), Milano, Italy, 4-10, December 1992.

[Schnase 1993a] John L. Schnase, John J. Leggett, David L. Hicks, Peter J. Nürnberg, and J. Alfredo Sánchez. "Design and Implementation of the HB1 Hyperbase Management System" in Electronic Publishing: Origination, Dissemination and Design, 6(2), 35-63, 1993.

[Schütt 1990] Helge Schütt and Norbert A. Streitz. "HyperBase: A Hypermedia Engine Based on a Relational Database Management System" in Proceedings of the ACM European Conference on Hypertext (ECHT '90), Versailles, France, 95-108, November 1990.

[Stotts 1989] P. David Stotts and Richard Furuta. "Petri-Net-Based Hypertext: Document Structure with Browsing Semantics" in ACM Transactions on Office Information Systems (TOIS), 7(1), 3-29, 1989.

[Wiil 1996] Uffe K. Wiil and John J. Leggett. "The HyperDisco Approach to Open Hypermedia Systems" in Proceedings of ACM Hypertext '96, Washington DC, 140-148, March 1996.

[Wiil 1997] Uffe K. Wiil and John J. Leggett. "Workspaces: The HyperDisco Approach to Internet Distribution" in Proceedings of ACM Hypertext '97, Southampton, UK, 13-23, April 1997.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481.