May 23, 2019
Among other things, Brown CS history is a treasure trove of stories, and sometimes even the sequence of events that led to a recent symposium is a tale worth telling.
As last year began, Marc Weber ‘84 had hypertext on the brain. The Computer History Museum (CHM) was planning a conference on pioneering inventor Douglas Engelbart, and in Marc’s role as the Curatorial Director of CHM’s Internet History Program, he knew about an effort by two alums to revive Brown’s late 1960s hypertext system, FRESS.
Steven DeRose ‘81 AM ‘86 PhD ‘89 had been the final director of the FRESS project, building its last binary from 82,000 lines of assembly source. David Durand ‘83 had been a FRESS user in high school due to a policy that gave access to the children of faculty members. In 1989, they created a FRESS demo for the ACM Hypertext Conference after building an emulator for its original vector graphics display.
Marc asked Professor Andries “Andy” van Dam to revive the FRESS demo for CHM’s conference, but the emulator no longer worked, so Tyler Schicke ‘18 ScM ‘19 was pressed into service, working with David to create a new one. That’s where Norm Meyrowitz ‘81 enters our story.
“Andy, in his classic ‘the more the merrier’ style,” Norm remembers, “thought I should try to revive Intermedia, Brown’s hypertext system from the late 1980s. We thought it had been lost due to a degraded disk, but someone had saved our demo copy.” And so he started buying up 30-year-old hardware: mice, displays, video cards, cables.
“And it worked!” he says. “It got me thinking two things. The first was that we should find more of our old systems and create a symposium to celebrate Brown’s impact on the online universe, and the second was that I was about to create a lot of work for myself.”
A symposium was born, and on May 23, Brown CS held A Half-Century of Hypertext to recognize Brown’s many contributions to the linked world. The event featured a half-dozen demos and almost thirty presenters, ranging in age from their early twenties to late eighties.
In the article below, we retrace the symposium’s chronology to provide a brief history of five decades of research with a lasting impact.
Celebrating Brown’s Half-Century Of Hypertext
When Andy and his team of undergraduate researchers began building Brown’s first hypertext system in 1967, not even a hundred people on the planet had heard of hypertext. A half-century later, the web has over 4.4 billion users. At its most basic, hypertext is simply connecting documents to each other through integrated links. What makes it so important?
For one thing, how far it can reach. Picture hypertext as a bike ride that might start in your backyard but then follow any itinerary you choose across all the information in the universe. It’s like racing up a sequoia, then to the top of Everest, then out into the Crab Nebula. It’s terrifying, but it’s exhilarating, too: following one link to another, then another, you can go anywhere.
Few things are more daunting than infinite possibility, so you’ll need an expert guide. He’s walking his bike out from behind the sequoia as we speak: angular Dutch features, glasses, sweater over his shoulders. If you haven’t met him yet, this is Andy van Dam. For an introduction, let’s turn to one of his former students.
Terry Gross ‘68, now an attorney specializing in Internet law, was one of the symposium’s first presenters and worked on Brown’s first hypertext system. “All the information in the universe,” he says, “can be interconnected, and we can look at it on screens. Those connections already exist – text has always worked that way through footnotes and bibliographies and tables of contents – but we haven’t been able to see them. Andy was telling us that fifty years ago, and it was like we were the earliest physicists and he was explaining string theory to us. It was a psychedelic view of information.”
For the past half-century, Andy, his colleagues, and his hundreds of student researchers have used a succession of hypertext experiments to blaze a prismatic trail through an infinite universe.
This is their story. This is where they went.
For a half-century, they’ve been working to help you go anywhere.
“It’s multithreaded,” Andy says, throwing himself onto the couch in his famously cluttered office a few months before the symposium. “Out of necessity, the story of hypertext is a hyperlinked story.”
Pulling one of those threads away from the others to examine it for a moment, we find the theme of shared vision and inspiration. It begins with the narrative of four people whose prescience and unorthodoxy brought about our linked world, and it starts almost 75 years ago. Vannevar Bush, an engineer and influential policymaker who later spearheaded the creation of the National Science Foundation, used a 1945 essay (“As We May Think”) to describe what he called the memex, a device that would allow its user to organize, search, and navigate through a personal library of information stored on microfilm.
“It’s truly about the effortlessness of the human mind, jumping from one topic to a related topic with ease,” Andy says. He makes a jump of his own, finding the analogy he wants in Proust: “A madeleine is virtually tasteless, but the flavor can cause your mind to immediately take you to a visual scene, a moment in history. Bush influenced generations of scientists.”
One of them was Douglas Engelbart, the pioneering inventor whose creation of the computer mouse has received more popular recognition than the computerized collaboration system (oNLine System, or NLS) that he developed at the Stanford Research Institute. In 1968, he used his “Mother of All Demos” to show a conference audience an extraordinary array of revolutionary functionality, including windows, simple line graphics, dynamic linking, text and outline processing, video conferencing, and real-time collaboration.
“It was mind-blowing in the number of dimensions that went beyond the state of the art that we knew,” Andy remembers, “and the system was matched by the artfulness of the 90-minute-long demo itself – it was like a technical rock concert. I don’t think such a dramatic event will ever be replicated at a technical conference.”
Andy had read Bush’s essay and seen Engelbart’s demo live, but his connection to the third visionary was more personal. He met Ted Nelson, the philosopher and sociologist who later coined the words hypertext, hypermedia, and virtuality, as an undergraduate at Swarthmore College. The marks of iconoclasm were already evident on both: during freshman orientation, Andy and Ted violated campus parietal rules by chatting in a dorm room with a female classmate.
After Swarthmore, Andy went on to the University of Pennsylvania, where he earned the second CS doctorate in the country, focusing on the nascent field of computer graphics. Brown’s Division of Applied Mathematics hired him in 1965 to add computer science, still a very new field then, to the Division. His newly-formed research group needed an interactive graphics terminal, and thanks to the University’s close ties to IBM, they soon had one: the 2250, which featured a QWERTY keyboard, a programmed function keyboard, a light pen for selection and navigation, and 12 inches by 12 inches of usable display area. It was powered by Brown’s IBM 360 mainframe, whose 512 kilobytes of main memory served the entire campus. Although shared with the rest of campus, this multi-million-dollar machine, housed in a large machine room, became in effect the team’s personal computer between midnight and 4 AM.
“It was huge for us to have a personal display,” Andy says. Because of its cost (more than a million dollars in today’s money), it was the sort of device that only a General Motors or a Boeing had at its disposal – not most universities, and certainly not an individual professor. Their corporate owners used similar displays largely for what we now call CAD (computer-aided design).
After years apart, Andy and Ted ran into each other in 1967 at the Spring Joint Computer Conference and immediately began catching up. Ted had coined the word hypertext in 1963 to mean “non-sequential writing”, and in 1966, had proposed a system called Xanadu in which he wanted to include features such as linking, conditional branching, windows, indexing, undo, versioning, and comparison of related texts on an interactive graphics screen. At the time, hardly anyone in the world was doing interactive text editing in its simplest form, much less with the functionality that Nelson had described.
Ted’s theories of information processing resonated with Andy’s thesis work on 2D graphics, which used an associative memory simulator to store data structures. (Associative memory is the idea that related items can be retrieved efficiently even when they aren’t stored next to each other.) The two men saw clear parallels between Bush’s description of associative trails, van Dam’s associative memory simulation, and Nelson’s hypertext links as a way of implementing associations.
And now van Dam had a world-class graphics display at his disposal – what about using it for a hypertext system? The intellectual pull was immediate, but Andy says there was a personal appeal as well. “I was always interested in the way people interact by annotating each other’s work,” he admits. “I like knowing what they find important, what they talk about, what they argue about!”
For the first of many times (the theme of attempting the near-impossible is another thread that runs throughout this story), van Dam and his young research assistants were setting out to do something that they had never seen or worked on before, whose feasibility was unknown.
“There was simply no Brown CS at Brown University back then,” says Steven Carmody ‘71, later an IT Architect at Brown who led the development of the Shibboleth single sign-on login software. “Andy was part of Applied Math and John Savage was in Engineering. That was it.”
It was the autumn of 1967 when work began on van Dam’s first hypertext experiment: Hypertext Editing System, or HES. Cultural upheaval for Brown mirrored the turbulence of a rebellious period in American history, and the adoption of the Open Curriculum (an educational rebellion of sorts, then called the New Curriculum) was close at hand. The transformative glow of the Summer of Love still lingered, and Andy and his students fit the zeitgeist with their long hair and sandals, but Brown’s earliest computer scientists were already getting a little less fresh air than their classmates.
Steve had just arrived on campus, and hoping to enroll in an in-depth programming course, he went to van Dam’s office in the basement of 182 George Street. He found a student on top of a filing cabinet, crouching like a gargoyle. “That should have been a heads-up as to what I was in for,” he laughs. “By the end of the semester, I was already used to spending all night in the old Computing Lab.”
There was no immediate sense of history in the making. “It was much more modest,” Andy insists. “Hypertext was an addition to my research agenda – I thought of it as a side bet. HES was just an attempt to effectively enact Bush’s vision of linked, related materials as influenced by Nelson’s desired list of features based on digital computers and display screens.”
HES had a dual personality from the start, van Dam says, with online word processing and printing as important as hypertext navigation. (At the time, the concept of sophisticated, on-screen, direct-manipulation word processing didn’t exist for practical purposes.) He wanted his team to experiment with linking, but at the time, the lure of creating and editing documents was equally exciting. “The metaphor we used for document editing was a blank sheet of paper,” says Andy. “So often, that’s how creativity works – blasting out ideas, fragments, then refining them.”
To picture HES, think back to the IBM 2250: the user sits at the screen and employs a keyboard, light pen, and function keys. “Today,” Steve says, “we take a screen for granted and can barely think of a computer without one, but in 1967 the idea of using a computer via a TV screen was completely unfamiliar to people who had only used punched cards to get one or two batch submissions per day.”
For the first time, instead of typing commands and waiting to see a printout, users were interacting with their documents and seeing the formatted results directly. Text was divided into areas that could be any length, growing and shrinking automatically as the material within them changed. Links were conceptual bridges, shown on screen as asterisks: one at the point of departure, one at the arrival point somewhere else. Adding text, deleting it, copying, pasting, and linking from one piece of text to another all occurred with direct manipulation using the light pen on screen.
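The document model just described can be sketched in a few lines of modern Python. This is purely illustrative: HES itself was written in IBM 360 assembly, and the names here (`Area`, `Document`, `follow`, and so on) are invented for the example, not taken from the original system.

```python
# Illustrative sketch only: HES itself was IBM 360 assembly.
# All names and structure here are hypothetical.

class Area:
    """A stretch of text that can be any length, growing and
    shrinking as the material within it changes."""
    def __init__(self, text=""):
        self.text = text

class Document:
    def __init__(self):
        self.areas = []   # variable-length text areas
        self.links = []   # (from_area, from_pos, to_area, to_pos)

    def add_area(self, text=""):
        self.areas.append(Area(text))
        return len(self.areas) - 1

    def link(self, from_area, from_pos, to_area, to_pos):
        # Each link has two endpoints, shown on screen as asterisks:
        # one at the point of departure, one at the arrival point.
        self.links.append((from_area, from_pos, to_area, to_pos))

    def follow(self, from_area, from_pos):
        """Jump from a departure asterisk to its arrival point."""
        for fa, fp, ta, tp in self.links:
            if (fa, fp) == (from_area, from_pos):
                return ta, tp
        return None

doc = Document()
a = doc.add_area("See the appendix for details.")
b = doc.add_area("Appendix: full specification ...")
doc.link(a, 8, b, 0)
assert doc.follow(a, 8) == (b, 0)   # following the link jumps areas
```

In the real system, of course, all of this happened through direct manipulation with the light pen rather than through programmatic calls.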
After less than a year of development, it was 1968. Andy remembers multiple visits to New York City with his students, all well-scrubbed and in freshly-washed khakis. They were showing HES to a number of IBM customers, including TIME LIFE and The New York Times. To understand why there were no immediate takers for the system, we need a bit of context.
“Journalists dictated everything to secretaries!” Andy says, throwing his arms up. “They phoned in their stories, and we were told that it’d be at least a decade before we could expect them to sit in front of screens and type and edit their stories directly. We were trying to sell people the first document editor designed specifically for commercial equipment, and the idea of sitting at a workstation and writing text, then refining it, working in cycles with collaborators, printing – nobody did any of that or had even thought about it!”
“It was exciting work,” Steve says, “and very different, but at the time I had no framework in which to view it, no way of even imagining the potential impact. I was a freshman at the time, so I assumed that this was what all college students did – climb on a rocket and work on completely new and exciting stuff, ideas and systems that didn't exist in most people's imaginations. At the time, there was just the excitement of doing something I'd never been able to do before.”
We’ve already heard about Steve’s trip to 182 George Street, asking Andy’s permission to take his course, but we haven’t yet revealed van Dam’s response. Carmody remembers it clearly, and it was something that his fellow student researchers would hear again and again across the decades: "If you're crazy enough to try, then sure!"
We’ll return to this thread many times in the pages ahead: “crazily committed” students, as Andy has described them, working equally crazy hours on something whose feasibility van Dam wasn’t fully sure of. (We’ve provided a list of all known contributors to HES and all other hypertext projects at the end of this article.)
“When I came to Brown,” Andy says, “people didn’t know what a computer scientist was! We were scrappy underdogs.” Before long, he’d moved to a big room at the very top of the Applied Mathematics building. There were seven desks, including one for a secretary, and a perpetual flow of undergrads.
“A madhouse,” says Andy, who was barely older than his students and shared their stamina, working seven days a week and usually averaging about four hours of sleep per night. “And the Castle had been a very quiet place before we got there, with no tradition of using undergraduates as research assistants.”
“This was also the start of the Undergraduate Teaching Assistant program,” van Dam notes. “Our students have always been individualistic – quirky, but with a sense of humor, very smart, passionate, inner-directed.” Even today, he says, they’re more than willing to join him on a 30-mile bike ride to Bristol and back with lobster rolls for lunch. “I call it the classical Brown model of a super-engaged student, willing to experiment and embrace a certain lack of structure.”
And yet few people at Brown took the idea of undergraduates engaged in a team research project any more seriously than they did HES itself. Maybe it paralleled what Andy describes as a major struggle for acceptance of hypertext that was just beginning. “The idea of using a computer to process text rather than crunch numbers for science and engineering was considered sacrilegious,” Steve says, “a waste of a valuable resource.”
Even the senior faculty member from the Division of Applied Math who brought van Dam to Brown advised him to return to something serious, saying that hypertext had no future. Andy remembers a Chair of Applied Math some years later proclaiming that he knew all about CS because he’d written FORTRAN programs, and ending one conversation with a complete dismissal of the field: “Computer science is just bathroom thinking.”
Despite its detractors, it was almost time for the first hypertext system to give way to the second (FRESS, the File Retrieval and Editing SyStem). Unbeknownst to Andy at the time, IBM had found a client for HES: NASA, which used it for documentation on the Apollo missions. “Microfilm of that documentation went to outer space,” van Dam says. “I’m still proud of that!”
Hypertext was also starting to take hold on campus. It surprised him and brought real joy, Carmody says, when he saw Brown-based projects begin to use HES productively, for real work. “HES and later FRESS made it easy for people in the same locale to work together on documents,” he says, “and today's world expects people across the globe to collaborate.”
Striking a personal note, Steve picks up another thread that runs throughout this story, the effect that working on hypertext as a student has had on people’s careers for years afterward: “My work on HES put me on a career path of linking people and content. Over the last two decades, I've helped build infrastructure that makes it possible for teams spread around the world to collaborate easily and securely. This software is used by many commercial entities and by academic research projects around the world, including the LIGO project, which brought together a thousand physicists from six continents in the search for gravitational waves.”
That same group of scientists won the Nobel Prize for Physics in 2017.
“The goal for FRESS,” Andy says, “was to improve on Engelbart’s best ideas and add some things we really liked from HES. We wanted to let a large number of users share a system and give them an easy-to-use, flexible tool that wasn’t limited to a single display on campus but could be used on a variety of commodity display devices.”
Ed Lazowska ‘72, now holder of the Bill & Melinda Gates Chair in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, started working on FRESS in his sophomore year. His hypertext journey had begun in the HES era with a visit to the Watson machine room at midnight (time on the mainframe was precious), where he found a long-haired twentysomething working at the IBM 2250, light pen in hand. Hesitant to interrupt, Ed sat for a while in silence. A minute went by. Two minutes, maybe three. Then, without turning around, the man at the computer spoke.
“Well, chucklehead –” It was Andy, of course, although his choice of words was somewhat more vulgar. “–aren’t you going to ask any questions?”
In the days ahead, just as Lazowska was being drawn into van Dam’s orbit, Ted Nelson was moving in the opposite direction. “I came from an engineering background,” Andy says, “but Ted was a philosopher, and he wanted to keep hypertext pure. With the technology we had at the time, realizing his entire concept just wasn’t possible, and to me, building something that worked was paramount.”
In the end, Andy’s team had implemented a great deal of Nelson’s design, but not all of it, and had diverged from his vision by including a focus on word processing when Ted insisted that hypertext shouldn’t be printed. After making numerous contributions to HES and the ideas behind it, Nelson ended the partnership due to multiple creative differences.
“Ted wasn’t happy with what we did with his ideas,” Ed notes, “but we breathed life into them. We didn’t deliver everything that he wanted – maybe you could look at that as something less, but it was something real.”
Other disagreements were less theoretical and posed an imminent threat to continuing hypertext experimentation at Brown. Colleagues like Roderick Chisholm, a prominent philosopher, saw FRESS as a tremendous boost to their productivity, but many of Andy’s fellow scientists were hostile to what they saw as frivolous use of resources. One day, the Vice-President who controlled the computing budget told van Dam that the experiment was over. Time on the mainframe would be reserved for physics and engineering, not hypertext.
Hoping for a show of support, Andy contacted colleagues from other disciplines across the University, urging them to speak out on hypertext’s behalf. (Interdisciplinary outreach is another thread that we’ll return to later.) “Let the non-scientists use typewriters!” was the administration’s response to enthusiastic letters from Chisholm and others, but van Dam, not ready to give up, cited the official Brown policy for the mainframe, which proclaimed that it was a campus-wide resource.
“I told the administrator I’d go public,” he says, “and say that they were discriminating against the humanities. Either it was a university resource or it wasn’t.” In the end, Andy got his way. FRESS, which took its name from the Yiddish verb for eating gluttonously (it used 128 of the mainframe’s 512 kilobytes to run as a time-shared service), once again had the financial resources it needed. Designed to be both multi-user and multi-display, FRESS was the first hypertext system to run on readily-available commercial hardware, using one of the earliest virtual terminals to allow input from a range of keyboards, displays, and pointing devices.
FRESS soon moved to the University’s new mainframe, the IBM 360 Model 67, IBM’s first virtual-machine hardware and operating system. This gave FRESS a novel paging scheme that let files grow much larger than even the mainframe’s virtual memory would otherwise allow. FRESS files were stored as fixed-size “pages” on disk, which made autosave easy: before each edit, the pages changed by the previous one were written out, so no more than one edit could be lost if the system crashed. It also made undo easy: because the permanent file on disk hadn’t yet changed, the current edit could always be undone. Both functions are believed to be the first of their kind.
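The mechanism can be sketched in modern terms. This is a hypothetical Python model, nothing like FRESS’s actual 360 implementation: the durable copy on “disk” is a set of fixed-size pages, the current edit lives only in memory, starting a new edit flushes the previous one (autosave), and undo simply discards the unflushed pages.

```python
# Hypothetical sketch of FRESS-style page-based autosave and
# single-level undo. All names here are invented for illustration.

PAGE_SIZE = 4

class PagedFile:
    def __init__(self, text=""):
        # "Disk": the durable copy, stored as fixed-size pages.
        self.disk = [text[i:i + PAGE_SIZE]
                     for i in range(0, len(text), PAGE_SIZE)] or [""]
        self.pending = {}  # pages changed by the current edit, in memory

    def edit_page(self, index, new_text):
        # Starting a new edit first flushes the previous one (autosave),
        # so at most the current edit can be lost in a crash.
        self.flush()
        self.pending[index] = new_text

    def flush(self):
        for i, text in self.pending.items():
            self.disk[i] = text
        self.pending = {}

    def undo(self):
        # The durable copy hasn't changed yet, so undoing the current
        # edit just discards the in-memory pages.
        self.pending = {}

    def read_page(self, index):
        return self.pending.get(index, self.disk[index])

f = PagedFile("HES FRESS")
f.edit_page(0, "WEB ")       # edit held in memory only
f.undo()                     # durable copy untouched, edit discarded
assert f.read_page(0) == "HES "
f.edit_page(1, "LINK")       # a new edit...
f.edit_page(0, "WEB ")       # ...which flushes the previous one
assert f.disk[1] == "LINK"   # previous edit is now durable
```

Only the changed pages ever touch the disk, which is what made both functions cheap enough to run on a time-shared mainframe.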
It was more than a decade before undo and autosave reached the masses with the release of Microsoft Word in 1983. (Jonathan Prusky ‘79, who was the first product manager for all versions of Word, had been part of the FRESS team.) In a slightly different implementation, Word allowed multiple edits to be undone by saving an edit log at the end of the file, and provided a fast autosave by requiring only the last part of the file to be saved. Over time, Jonathan says, both functions became industry standards, but the original FRESS implementation, which only required a portion of the file to be changed on disk, remained superior to many of its successors.
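Word’s edit-log variant, as described above, can also be sketched. The following is a hypothetical illustration, not Microsoft’s actual file format: edits append to a log at the end of the file, so autosave rewrites only the tail, and popping log entries gives multi-level undo.

```python
# Hypothetical sketch of an edit-log design in the spirit of early Word:
# an append-only log at the end of the file gives multi-level undo and
# a fast autosave that rewrites only the file's tail.

class LoggedFile:
    def __init__(self, text=""):
        self.base = text   # body of the file, written once
        self.log = []      # appended edits: (position, old, new)

    def replace(self, pos, old, new):
        self.log.append((pos, old, new))

    def undo(self):
        # Popping log entries undoes edits, most recent first.
        if self.log:
            self.log.pop()

    def text(self):
        # Replay the log over the base text.
        t = self.base
        for pos, old, new in self.log:
            t = t[:pos] + new + t[pos + len(old):]
        return t

f = LoggedFile("hyper text")
f.replace(5, " ", "")    # delete the space
f.replace(0, "h", "H")   # capitalize
f.undo()                 # multi-level: just the last edit is undone
assert f.text() == "hypertext"
```

Note the trade-off the article points out: this design touches only the end of the file, whereas the FRESS scheme rewrote only the changed portion of the file itself.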
And the list of innovations goes on. Documents had no size limitations, and unlike HTML links, FRESS hyperlinks were intrinsically bidirectional, automatically supplying the destination with a bridge back to the origin. (In contrast to the World Wide Web and its unidirectional pointers, the non-global scale of FRESS made bidirectionality easier to implement.)
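On a single shared system, bidirectionality is straightforward to maintain, because creating a link can update both endpoints in one operation. A hypothetical sketch (again, nothing like FRESS’s actual code):

```python
# Hypothetical sketch of intrinsically bidirectional links of the kind
# FRESS provided; names here are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.outgoing = []   # links created from this node
        self.incoming = []   # back-bridges supplied automatically

def link(src, dst):
    # One operation updates both endpoints, so the destination always
    # has a bridge back to the origin - unlike a one-way HTML href,
    # where the target page never learns who points at it.
    src.outgoing.append(dst)
    dst.incoming.append(src)

essay = Node("essay")
note = Node("note")
link(essay, note)
assert note.incoming[0] is essay   # the destination knows its origin
```

On the global Web, no single system owns both endpoints, which is precisely why its pointers stayed unidirectional.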
FRESS also broke ground by giving users an overview mode that allowed them to make large-scale changes easily instead of getting bogged down in markup, editing individual instances of text. The advantages for handling large, complex documents were clear: FRESS was making a case for hypertext as a multipurpose tool equally well-suited for publishing, researching, browsing, exploring, and teaching.
One of the best examples came in 1976, when FRESS proved itself to be as revolutionary for education as it had been for computer science. Collaborating under a National Endowment for the Humanities (NEH) grant (“An Experiment in Computer Based Education Using Hypertext”), Andy and Professor Robert Scholes of Brown’s Department of English used the system to create a poetry textbook and linked corpus for students and instructors that contained course texts, supplementary materials, and an ongoing omnidirectional commentary and threaded discussion that they created themselves.
Not only was it one of the very few occasions for a computer scientist to receive an NEH grant, it was the first time hypertext had been used to teach the humanities, and arguably the first online scholarly community. Results were impressive: compared to a control group that took the course in a traditional setting, the FRESS users attained a deeper understanding of the material and higher satisfaction with the course. On average, students using hypertext wrote three times as much as their computerless counterparts.
Four years ago, the NEH celebrated its fiftieth anniversary. TIME magazine covered the occasion, including a photo of the edge-punched card used to summarize the grant. In preparation for the event, a program officer called van Dam and asked him if he’d ever produced the film (“HYPERTEXT: an Educational Experiment in English and Computer Science at Brown University”) that was promised as part of the grant application. He had (it’s now available at http://bit.ly/2LZX0n0), but it had never been publicly shown. Thus the idea was born of a public showing at the University of Maryland, followed by a panel discussion. “It was a huge thrill to see the entire NEH upper echelon come out to honor what we’d done,” Andy says.
With its room-sized computers, monochrome monitors, and massive magnetic disk drives, the documentary feels like a journey backward in time, but Ed Lazowska explains that what looks dated to us in 2019 seemed unbelievably futuristic more than forty years ago. “Using hardware as a time machine is the only way to describe what we were doing,” he says. “We made profligate use of hardware to bring word processing and cloud computing – the entire personal computing era that didn’t come until much later – to a whole university.”
“I also have to note,” he adds, “that people may think of Andy’s forte as graphics and interaction, but with FRESS, we were dealing with co-processing, distributed systems, OSes, compilers – all these things under his direction that would later get differentiated into separate major research areas. It was magical.”
Maybe due to the staggering workloads, near-impossible deadlines, and the need to blow off steam, it was sometimes also (here’s that word again) crazy. After forming a partnership under the name Text Systems, Inc. with a Connecticut company founded by the inventors of the CP/CMS virtual machine operating system that ran on the 360/67, Ed remembers working out of an office building downtown. There were almost two dozen students, all using terminals connected by a 4800-baud (approximately 600 bytes per second) dedicated line to a fleet of 360/67s. At the end of a particularly demanding week, the team had a massive crab feed on a Friday night, then left the shells and remnants and took off for the weekend. The stench that awaited them on Monday fell just short of requiring professional remediation.
Billing for the use of Brown’s mainframe was handled by the operating system, which punched accounting records at the end of each eight-hour shift and reconciled them once a week. But computers of that era were more error-prone, and Andy’s students soon realized that crashing the system at the end of a shift made proper accounting impossible. “Big Grace” was the head computer operator: she turned a blind eye in exchange for Girl Scout cookie purchases from the FRESS team, and her daughter was soon the leading seller of cookies in Rhode Island. Thus, forgiveness wasn’t free, but at least it was tasty and convenient.
Despite the youthful hijinks, Ed describes it as the first time that he was really treated like an adult. In building FRESS, just like HES, Andy was treating undergrads as equal partners in discovery. “It was what college should be and mostly isn’t,” says Lazowska. “We weren’t aware of how groundbreaking it was to have Andy trust us with things he wasn’t sure how to do yet.”
And so HES gave way to FRESS, and the 60s to the 70s, with Moore’s Law operating all the while, and a kind of confluence was taking shape. “You have to remember that for every step of technological progress with hypertext, Brown was there really early. We have the longest history with hypertext in academia by far,” says augmented reality pioneer Steven Feiner ‘73, who received his PhD in 1987 under Andy’s supervision. He later co-authored Computer Graphics: Principles and Practice with Georgia Tech’s Jim Foley, Andy, and John Hughes of Brown CS, and has been a Columbia University faculty member since 1985.
For years, vector graphics had been the way of the world, and the pride of Andy’s research team was the Brown University Graphics System (BUGS), a pair of Digital Scientific Meta4 microprogrammable processors combined with a 3D Vector General display, augmented with Hal Webber ‘72’s Super-Integral Microprogrammable Arithmetic and Logic Expediter (SIMALE). But Andy’s team of dreamers was finding that commercial hardware had started to catch up with them.
With the advent of raster graphics, early versions of the color screens that we know today were finally becoming affordable, providing new hope of making hypertext less text-dominant and more pictorial. At the same time, the U.S. Navy was realizing that there was an alternative to weighing ships and submarines down with thousands of pounds of printed technical manuals that were needed to operate, maintain, and repair their systems.
That alternative was hypertext documentation, ultimately to be used on briefcase-sized portable displays: effectively, laptops, which hadn’t been invented at the time. Thanks to funding from the Office of Naval Research, IGD (Interactive Graphical Documents, also known as Electronic Document System, or EDS) was born, and prototyped on state-of-the-art raster displays connected to a set of Digital Equipment Corporation VAX-11/780s, one of which is now housed in the CIT’s Computer Museum.
What was the new generation of hardware like? “We went to the National Computer Conference in New York in 1979 to look at potential systems,” Feiner remembers. “This was the Ramtek 9400, 8 bits of color at 1024 by 1280 resolution, which doesn’t sound like much until you think about the 24-line ASCII green screen terminals that we’d been working at!”
More than a decade into van Dam’s “side bet” supplement to his graphics research agenda, hypertext was still a home for crazily committed students. “I took Andy’s CS 100 course, then his 101,” Feiner says. “Back then, people went home for the holidays and then came back for exams in January. Or everybody else did, rather – people doing 100/101 stayed in Providence, spending their vacation in the Computer Center.”
The central idea behind IGD was that its documents were directed graphs whose nodes were like the pages of a book. Pages could contain both text and illustrations, buttons that triggered actions or linked to another page, and indexing information, such as keywords. Compared to its predecessors, IGD was wildly colorful, with a palette of 64 pickable color chips; users could draw straight and freehand lines as well as create and manipulate arcs, circles, and polygons. The picture layout system and the document system were separate, making spontaneous presentations easy, and so was on-the-fly navigation: choosing a keyword resulted in relevant pages not just being listed alphabetically but laid out in color-coded bars, each with a miniature screenshot icon that served as a link to that page. Thanks to a collaboration with Brown CS Professor Steven Reiss, IGD also featured a built-in relational database, ERIS, which maintained the structure of the document and gave authors the opportunity to write and execute new functions.
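In outline, an IGD document is a directed graph of pages, each carrying content, buttons, and keywords, with keyword queries driving the navigation display. The sketch below is hypothetical (the names `Page`, `IGDDocument`, and so on are invented; the real system ran on VAX-11/780s with a custom database), but it captures the structure the paragraph describes.

```python
# Hypothetical sketch of an IGD-style document: a directed graph whose
# nodes are page-like units with content, buttons, and keywords.

class Page:
    def __init__(self, title, keywords=()):
        self.title = title
        self.keywords = set(keywords)
        self.buttons = []   # (label, target page) - the graph's edges

    def add_button(self, label, target):
        # A button can trigger an action or link to another page.
        self.buttons.append((label, target))

class IGDDocument:
    def __init__(self):
        self.pages = []

    def add_page(self, page):
        self.pages.append(page)
        return page

    def pages_for_keyword(self, keyword):
        # Basis for keyword navigation: in IGD, each hit was laid out
        # as a color-coded bar with a miniature screenshot icon that
        # served as a link to that page.
        return [p for p in self.pages if keyword in p.keywords]

doc = IGDDocument()
intro = doc.add_page(Page("Intro", {"overview"}))
repair = doc.add_page(Page("Repair", {"maintenance", "engine"}))
intro.add_button("Go to repair", repair)
assert doc.pages_for_keyword("engine") == [repair]
```

In the real system, the document’s structure was maintained by the ERIS relational database rather than by in-memory objects like these.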
“My work with IGD absolutely led to my doctoral thesis, and beyond,” Feiner says. “I wanted to have images, beautiful images, generated on the fly for a customized user.” Equally useful to him, he says, was van Dam’s characteristic fondness for outreach and collaboration with colleagues from other disciplines: “I learned from Andy how to work with people in the humanities. They provide a real use case, use your tools to do real things. I owe a lot to that environment.”
Raster graphics research continued to accelerate in the 1980s with projects that included BALSA, the Brown ALgorithm Simulator and Animator. Created by Professor Robert Sedgewick and Marc Brown '80 ScM '82 PhD '87, BALSA ran in Sedgewick’s design of the world’s first electronic classroom, which included forty 512K Apollo graphics workstations. BRUWIN, created by Norm Meyrowitz ‘81 and Peggy Moser ‘81, was the first raster graphics-based Unix window manager. Another novel application was the series of 3D and 4D mathematics visualizations by Professor Thomas Banchoff of the Department of Mathematics and the late Charles Strauss, Andy’s first PhD student and later a faculty member in Applied Math. Multiple students worked on the project, including David Salesin ‘83.
But by 1983, four years after Brown CS became an academic department, even van Dam was reaching his limit. Andy, Norm, and William “Bill” S. Shipp (then Professor of Medical Science and Associate Provost for Computing) had been raising $15 million for the campus computing effort through large grants from IBM, the Annenberg/CPB Project, and others. “I was already working 100-hour weeks,” van Dam says, “and I couldn’t run a graphics group, a hypermedia group, and also serve as a Department Chair who was trying to raise funds for building the CIT and getting workstations and personal computers all over Brown.”
The solution was to pass the torch of Brown’s hypertext legacy by creating the Institute for Research in Information and Scholarship (IRIS), which eventually employed more than a dozen people, including many Brown CS alums. Founded by Bill, Andy, and Norm, it had two goals: to deploy workstations and personal computers across the entire campus, and to develop educational hypermedia software to take advantage of those hundreds of networked machines.
“We had seven faculty members,” Andy says, “competing against MIT and CMU, which had hundreds of faculty and research scientists in CS. We three were called ‘workstation schools’ and Brown was really the first to put workstations to use for undergraduate education, not just research and graduate work. Imagine not the Boston Red Sox and not even the Pawtucket Red Sox but the Barrington Eagles going up against the Yankees! We had the experience and demos to blow everyone away at a time when almost all schools were still using glass teletypes. With the workstation auditorium, innovative homegrown applications, and a state-of-the-art, campus-wide broadband network (under Bill Shipp’s leadership and Hal Webber’s implementation), Brown was absolutely a tech leader!”
For students, what was it like to be drawn into the linked world of hypertext and networked workstations? “Linking pages together has been my life,” Norm laughs. “I was really into newspaper layout in high school, and I met Andy during freshman week, in a seminar that only I attended. We ended up talking about computers and newspaper design. Afterward, I had two takeaways: how amazing it was to be able to talk one-on-one with a professor during my first week on campus, and also that linking between documents is something that everyone naturally wants to do – and not just with footnotes. Andy’s one silent takeaway was that he would wrangle me to be his hypertext minion someday.”
By 1984, hundreds of Macintosh computers and IBM RT workstations had arrived on campus: a cover of Brown Alumni Magazine depicted them flowing through the Van Wickle Gates. “It was totally radical – Brown was on the cutting edge of personal computing on campus,” says Norm. “Having them across the entire campus was a game-changer.”
IRIS designed its hypertext system, Intermedia, from the viewpoint that creating and following links should be an integral part of the desktop user interface metaphor invented by Xerox and popularized by the Macintosh. The cut-and-paste metaphor was a key part of the early Mac experience, and Intermedia put it to good use in its treatment of linking.
To create a link, users made a selection and issued the “Start Link” command. The selection, known as an anchor in Intermedia parlance, was stored on a linkboard, similar to the clipboard that we use today for copying and pasting. They could then continue browsing, opening documents and following links, until they found an appropriate place to link to. There, they simply made a second selection as the destination anchor and issued a “Complete Link” command to finish the process. The newly created link was bidirectional and could be tagged for searching and filtering or spontaneously edited by users across a network.
Links also resisted breaking: the system tracked destinations as they moved. Bidirectional hypertext was something that Brown’s systems had offered since FRESS, based on the belief that people should not just be able to follow links out of a document but see which links point into it, and from where.
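The linkboard workflow described above can be sketched in a few lines. This is a purely illustrative model, not Intermedia's actual implementation; all class and method names are hypothetical, but the key ideas match the article: links are stored separately from documents, a pending source anchor sits on a linkboard between the "Start Link" and "Complete Link" commands, and incoming links are just as queryable as outgoing ones.

```python
# Hypothetical sketch of Intermedia-style bidirectional linking.
# All names here are illustrative, not from the original system.

class Anchor:
    def __init__(self, doc, selection):
        self.doc = doc              # document containing the anchor
        self.selection = selection  # e.g. a character range

class Web:
    """Stores links separately from documents, so both directions are visible."""
    def __init__(self):
        self.links = []       # each link is a (source, dest, tags) triple
        self.linkboard = None # pending source anchor

    def start_link(self, anchor):
        # "Start Link": remember the source anchor on the linkboard.
        self.linkboard = anchor

    def complete_link(self, dest, tags=()):
        # "Complete Link": pair the pending source with the destination.
        link = (self.linkboard, dest, tuple(tags))
        self.links.append(link)
        self.linkboard = None
        return link

    def links_from(self, doc):
        return [l for l in self.links if l[0].doc == doc]

    def links_to(self, doc):
        # Bidirectionality: incoming links are first-class, not recomputed.
        return [l for l in self.links if l[1].doc == doc]

web = Web()
web.start_link(Anchor("essay.txt", (10, 25)))
web.complete_link(Anchor("notes.txt", (0, 12)), tags=["citation"])
print(len(web.links_to("notes.txt")))  # 1
```

Because every link records both endpoints in one shared store, "which documents link here?" is a simple query rather than a crawl, which is the design choice the next paragraphs contrast with the web's one-way links.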
At a time when personal computers couldn’t run multiple applications simultaneously, the Intermedia team built tools that included a text editor, timeline editor, video clip editor, and one of the first online dictionaries, and brought them together in a multi-window world where linking was omnipresent. True to its concept of an information environment, Intermedia offered links across various media types, full-text searching across all kinds of documents, a local map that visualized the links in and out of an individual document, and a history of all browsed documents.
That environment was enthusiastically embraced by multiple courses that spanned cell biology, English literature, and lunar geology, bringing a new level of collaboration among peers. Collaboration was made possible by more than a dozen networked workstations; links weren’t limited to documents on a single machine but could be made and accessed across a local area network. “Students thought it was really cool,” says Nicole Yankelovich, one of the main Intermedia product designers. “Suddenly, writing papers wasn’t just an exercise between you and the professor to try and get a good grade. You had to come up with ideas that were compelling to your fellow students, then create the appropriate links and annotations for everyone on the network to see – it made you better at expressing yourself.”
As the 1980s turned into the 1990s, an economic downturn caused a major funding shortage. IRIS itself shut down in 1991, but Norm has few regrets. “Running on machines that were 4,000 times slower than current ones,” he says, “with 2,000 times less memory than we have now, we had a basic version of a web browser and Google Docs twenty-five years ahead of schedule – cloud computing at a local area network scale in 1989!”
And they made a mark. As hypertext research began to accelerate worldwide, Intermedia, along with Xerox’s NoteCards, became one of the best-known systems. It popularized features such as generalized links on any type of object, not just text; full-text indexing and searching of documents and links across a network; a visual history of a user’s path; and many more. The third-oldest surviving HTML document, written by World Wide Web inventor Tim Berners-Lee, discusses Intermedia and its approach to linking. Intermedia coined the term anchor, now a tag in HTML, and Marc Andreessen (co-founder of Netscape) once told Norm that he’d read all of the Intermedia papers before designing the Mosaic/Netscape graphical browser.
“I’m happy we had an influence,” says Meyrowitz. “We helped people think through what the architecture of the online world should be, contributing to hypertext becoming ubiquitous. Pop culture would have you believe that a hit product arises fully-formed from the mind of a single brilliant person, but those brilliant people will freely acknowledge that they built on successes and failures by others that came before. Brown’s work in the field has had many ideas that influenced the thinking of hypertext as we know it today. You don’t have to make a billion dollars to be successful – you can contribute to a body of knowledge that brings about a big break later on.”
But just as it had been for Intermedia’s predecessors, Norm says, some of the technological changes that later had a profound impact were impossible to predict: “One of our key beliefs was that links should be bidirectional, so a user could not only follow links from a document to another but also see which documents link to the current one. We did this with a relational database that kept track of the anchors and updated itself whenever a document was edited so links pointed to the correct place. By the late 1990s, networking had gotten so fast and storage so amazingly cheap that you could crawl an entire corpus of documents and determine all the backlinks computationally rather than by precise tracking, which seemed insane in the late 1980s. But we were right that backlinks are important – they turned out to be one of the major components of the original PageRank algorithm that made Google the behemoth of search.”
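Norm's point about computing backlinks "computationally rather than by precise tracking" amounts to inverting a crawled forward-link graph. A minimal sketch, with made-up page names, of what a crawler-based approach does that Intermedia's link database did by bookkeeping:

```python
# Minimal sketch: derive backlinks by inverting a crawled forward-link graph,
# rather than maintaining a live bidirectional link database.
# Page names are illustrative only.

forward = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}

backlinks = {}
for page, outs in forward.items():
    for dest in outs:
        backlinks.setdefault(dest, []).append(page)

print(sorted(backlinks["c.html"]))  # ['a.html', 'b.html']
```

The trade-off is exactly the one Norm describes: the crawl-and-invert approach needs fast networks and cheap storage to revisit the whole corpus, but it requires no cooperation from document authors, which is what made it feasible at web scale.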
“The unidirectional nature in Tim Berners-Lee’s original World Wide Web design made sense,” Norm says, “because the technology to handle a worldwide database of bidirectional links was untenable at the time. But it made the web incredibly fragile – if someone renames a document, all the links pointing to that document break and produce the dreaded ‘404 error’ when followed. People can easily fall into the belief that the web is ‘done’, but there are tons of things, like links that don’t break, that existed in Brown’s systems in the past and hopefully will find their way into the web of the future.”
“It’s also frustrating that using hypertext has become more passive,” Andy says, “and that’s because of commercialization. When you’re on a commercial web page, there’s a rigid separation between creating and consuming, and it was a matter of religion for us that there would be no separation between reading, creating links, and authoring. You may only do one at a time, but we wanted an absolutely smooth transition between them. The only time we want people to be completely passive is if they’re reading fiction or enjoying a piece of art solely for pleasure. From the very beginning, we were all thinking about supporting creativity, and communities of creatives.”
Hypertext And Education
“If you had a way to create connections, what would you use it for?” George Landow, Brown University Professor Emeritus of English and Art History, is sitting on a red leather couch in his Providence living room. The author of Hypertext and other works on the subject is paraphrasing William Beeman, once the Associate Director of Programs Analysis for IRIS, and he’s asking a question that was of particular interest to the Intermedia designers: they had created a general-purpose tool, but how would it be used?
As the new computers flowed across campus, van Dam’s colleagues in the humanities were beginning to see the potential that a linked world could offer. Initially skeptical of hypertext as an educational tool, George was convinced by a demo of Intermedia that it might be a way to teach “the fragility of the text”, the idea that the book readers hold in their hands isn’t a perfect reflection of authorial intent. “My classes became my laboratory,” he says, and IRIS provided his chemistry set. Context 32 was one of his early creations, an Intermedia web that provided contextual information for a survey course covering almost 300 years of English literature.
George remembers that his students were eager to start working with features such as one-to-many linking and the Local Tracking Map, a dynamic resource that displayed icons for documents linked to the one selected. “They were transcribing letters between Hunt and Tupper, and they became obsessed with the idea of setting rules for hypertext collaboration – fistfights were practically breaking out!”
“It always amazed me how much hypertext inspired students as authors,” Landow says. “Some of them wrote 250-page documents when I asked for 20 pages.” One of them created a map of the New York City subway system with a hypertext entry at every stop to evoke the overlapping identities of being Chinese, gay, and a first-generation student. The project includes perhaps the highest possible statement of support for the new technology as an educational tool, the subsuming of it into self: “I am a hypertext.”
In his retirement, Landow continues to work as Editor-in-Chief and Webmaster for the “gigantic, all-consuming” Victorian Web, which he started as a small Intermedia project. Over decades, it grew to a corpus of more than 100,000 objects, with millions of views per month (before the rise of Wikipedia, the number once topped 14.3 million) from all over the Americas, Europe, and Asia. Accidental discovery reigns as the reader moves from Alma-Tadema paintings to recordings from The Singing Bourgeois to Indian railroads.
“Educators have found us too scholarly,” he says, “and scholars have said that we’re too much in the educational realm, but that’s our strength – it’s not a blog, not a silo. Not a day goes by when I don’t learn something new from working on it.”
Looking back on the half-century, George sees a significant symbiosis between van Dam’s evangelism for hypertext in education and the changes brought by the Open Curriculum. “We were getting,” he says, “a new caliber of students and an entirely new way of thinking at exactly the same time that Andy was making educational hypermedia so unbelievably fashionable. Faculty really did rethink their teaching, and just as they were encouraged to try new things, students were, too. And students at Brown don’t know what they don’t know, so they invent things.”
Hypertext And Electronic Literature
Still other faculty members, finding hypertext to be a tool for creativity as well as education, were producing inventions of their own. On the second floor of the then recently-built CIT, Robert Coover (once described as “the high priest of hypertext” by Wired, now Professor Emeritus in Literary Arts) was using a unique Intermedia installation to teach the first hyperfiction workshop. It opened the door to the linked world for published authors like Mary-Kim Arnold, Matt Derby, Andrew Sean Greer, and Will Oldham, letting them experiment with electronic interactive fiction for the first time. Before long, Robert’s gleefully anarchic Hypertext Hotel, a pioneering experiment in collaboration that filled a suite of virtual rooms with ever-changing narrative, was bringing electronic literature wide attention through features in National Geographic and The New Yorker and a cover story in The New York Times Book Review.
Just like George Landow, he was discovering that hypertext had a remarkable effect on his students. “Breaking them out of old habits has always been the one ‘teacherly’ thing I do,” Robert says, “and that was also facilitated by hypertext. I used to teach a course called Exemplary Ancient Fictions, in which we read everything from creation myths and the Bible to fairy tales and medieval romance. The class members did all the usual academic stuff, but also wrote one-page stories using the peculiarities of each genre, the purpose being to stimulate breakaway forms. It was hard work, but I saw on the very first day of the hypertext workshop that all resistance disappeared, and everyone was immediately writing in new forms.”
And Coover, like Landow and van Dam and others, was finding his students to be invaluable collaborators. One of them was Robert Arellano ‘91 MFA ‘94, now Professor of Emerging Media/Digital Arts at Southern Oregon University, who developed the initial hyperfiction workshop thanks to an Undergraduate Teaching and Research Award. As a student and later an adjunct lecturer, he co-taught this and other workshops with Coover and wrote a graduate thesis, Sunshine ‘69, that became the first full-length work of hypertext fiction on the web.
“1990-1991 stands out for me,” Arellano says, “as a golden moment of creativity, where everybody was exhilarated by the process of writing nonlinearly, but nobody was self-conscious about making headlines, much less history. There were future award-winning musicians, artists, and at least one Pulitzer Prize-winner in that first workshop. I was surprised how quickly my own creative process – which for a decade already had been relatively linear – took to affordances of interactive reading. During the first week of that first workshop, I opened ‘Room 212’ in the Hypertext Hotel, and the experience of writing it was so inspiring and generative that I still thrill to think of it – and continue to pursue that feeling of inspiration in my writing 30 years later.”
A second multidisciplinary bridge at Brown, the use of VR to explore electronic literature, dates back to the 1990s, when Andy and Professor David Laidlaw of Brown CS invited Creative Writing and Fine Arts faculty and students to collaborate in the CAVE (Cave Automatic Virtual Environment). Tom Meyer, Dan Robbins, Benjamin Shine, and others went on to create 3D interactive literature that included a reinterpretation of a Robert Coover short story (“Inside the Frame – Interactive”) and other works that were some of the first hyperfiction projects in virtual reality. Coover himself debuted “Cave Writing”, his spatial hypertext writing workshop, telling Wired in 2003 that immersive technologies were likely to become ubiquitous: “What we’re trying to do here is ensure that they develop as places for literature.”
And the experiments continue. Since his arrival at Brown in 2007, Professor of Literary Arts John Cayley has been using hypertext for poetic innovation in what he calls “writing in networked and programmable media”, using first the CAVE and now its successor, the YURT (Yurt Ultimate Reality Theater). Robert Arellano, now working with the Unity and Unreal programming environments, continues to extend the legacy of interactive storytelling. In yet another hypertext resurrection after recovering files from an elderly PowerBook G4, he and his students are creating a virtual reality version of the Hypertext Hotel for a generation of explorers who may be more familiar with VR headsets than 2400-baud modems and floppy disks.
“My current students,” he says, “are still working on and exploding what I considered the creative boundaries of innovating with hypertext on a daily basis. Over the past six or seven years of encountering students seemingly ‘born digital’, the generational leaps in hypertext-readiness seem to occur annually now.”
Hypertext And Scholarship
Students who don’t know what they don’t know, as George Landow told us, will invent things. One of those inventive students was Elli Mylonas, now Director of the Brown University Library’s Center for Digital Scholarship. During doctoral work in classics at Brown in the 1980s, she took a break to serve as Managing Editor of the Perseus Project, a digital library of Ancient Greek texts founded at Harvard University, now hosted at Tufts University and the University of Leipzig. When she returned to Providence, she found herself part of a rapidly-coalescing group of peers who had all heard the siren song of hypertext and digital media.
“We thought of it as a stealth seminar,” Elli remembers. This was the Computing in the Humanities Users’ Group (CHUG), an informal gathering mostly composed of student enthusiasts. “We were all really interested in what technology could do to solve what was basically the Tower of Babel problem: scholars around the world working with difficult corpora who needed digital tools to understand them properly. We met every two weeks, but there was no funding, so we could offer people cookies and fizzy water and that was about it.”
Entrepreneurship was in the air. 1990 saw the creation of Electronic Book Technologies, a spin-out from Brown CS founded by CEO Louis Reynolds, van Dam, and a cast of Andy’s grads: CHUG member Steven DeRose ‘81 AM ‘86 PhD ‘89, Greg Lloyd ‘70 MS ‘74, and Jeffrey Vogel ‘90. Initial employees included Dave Sklar MS ‘84 and Bill Smith.
EBT soon became a major player in the world of electronic documents, influencing stylesheet technologies such as DSSSL and CSS and working with clients like Novell and Boeing. Its best-known creation, DynaText, was a publishing tool and the first system to use style sheets to render arbitrarily large SGML (Standard Generalized Markup Language, which led to HTML) documents. Four years later, CHUG and its allies convinced Brown’s Computing and Information Services (CIS) that digital scholarship wasn’t just a necessity but an entrepreneurial opportunity, and the Scholarly Technology Group (STG) was born with an initial staff of three: Geoffrey Bilder, FRESS user Allen Renear PhD ‘88, and Elli.
Finding inspiration in FRESS, Intermedia, and Brown’s other hypertext efforts, STG and members of CHUG quickly became heavily involved with an international effort to develop new text and electronic document technologies. This included review and finalization of SGML, participation in working groups for XML (eXtensible Markup Language, a simplified successor to SGML), submissions to the HyTime standard for hypermedia, co-development of the Open eBook Publication Structure (OEBPS), and leadership in the creation of TEI, the now widely adopted Text Encoding Initiative schema and guidelines for encoding literary and historical texts.
“They’re tools,” Elli says, “and we couldn’t expect scholars to build those tools on their own.” Notable early efforts included the Women Writers Project, a massive attempt to make early women’s writing accessible and reclaim its cultural importance after long neglect.
“We started out with a philosopher, a classicist, and a historian,” Elli remembers, “all very excited by our ability to be humanists and help academics do things they couldn’t do before, to show that they could express ideas and share their research in ways that made more sense than conventional print.”
“We’ve always worked behind the scenes,” she explains, “worrying about things like user interfaces, encoding, and metadata conventions so researchers don’t have to engage more than necessary with this aspect of digital scholarship. It’s carefully planned, ongoing work – we’re turning this enormous crank and eventually amazing things start coming out of the texts, things that scholars can use.”
Recent projects include contributions to the Mellon Foundation-funded “Furnace and Fugue”, an enhanced digital edition of an alchemical text from 1617 that features new vocal recordings and interactive music representations of fifty fugues. Another notable effort is the Database of Indigenous Slavery in the Americas, a collaborative project to design and build a database of biographical references to enslaved indigenous people from 1492 to 1900.
“When you do the kind of work that CHUG and STG and the Center for Digital Scholarship have done for a long time,” Elli says, “you just develop a kind of hypertextual attitude to everything. It’s a way of probing into information that you already have, another kind of close reading. It’s just how we think now.”
Garibaldi on the Surface/LADS/TAG/TAG Nobel
After decades of Andy’s bridge-building with colleagues in the humanities, perhaps it’s unsurprising that Brown’s modern era of hypertext began in 2009 when one of those colleagues reached out to him. “I’d been working with 3D interactive graphics and virtual reality,” Andy says, “but the money had dried up, so I went into pen-and-touch interaction on newly-developed touch computers. Harriette Hemmasi, the University Librarian, had heard about that, and she offered to buy two Surfaces, one for the library and one for us, if we’d write some software for a very unusual asset.” The Surface was a newly-developed, coffee-table-sized touch screen from Microsoft.
That asset was the Garibaldi Panorama, a massive watercolor depicting the life of Italian patriot Giuseppe Garibaldi that runs for 260 feet on a single, continuous paper scroll. Too fragile to handle, it had been professionally digitized at high resolution, but needed what Massimo Riva, Royce Family Professor of Teaching Excellence and Professor of Italian Studies, calls “hybrid media” to be fully appreciated and understood.
“We wanted,” he says, “to use the Panorama itself as a great canvas...to add and collect documents that would explain and explore it in all its details, and do that in an interactive and collaborative mode.”
That was exactly what Andy’s team created: Garibaldi on the Surface, a hypermedia experience that put the vast scroll at the user’s fingertips for navigation with maximum freedom and a host of linked resources. As usual, they found it necessary to go beyond an existing feature set, so they supplemented the Surface’s pinch-zooming and swiping with a virtual magnifying glass that could focus on a small portion of the Panorama without taking over the entire screen. Users could also copy portions of the artwork for future reference or move them to an external screen with a simple flicking gesture. Numerous hotspots also embedded context-sensitive items such as images of historical correspondence and ephemera, videos, and scene-specific narration in multiple languages.
“But then we decided,” Andy says, “that it didn’t make sense to have software that was just for one piece of artwork, no matter how impressive.” Device and platform independence, the researchers soon realized, were equally important. And so Garibaldi on the Surface led to Large Artwork Displayed on the Surface (LADS) and then Touch Art Gallery (TAG), which allowed users to explore a variety of assets through a simple touch interface.
The decision paid off: TAG was soon used at the Seattle Art Museum, for the Thousands of Little Colored Windows exhibit at the John Hay Library, and in Professor Sheila Bonde’s courses on medieval art and architecture. Eventually, Andy received an inquiry from his sponsor at Microsoft Research to see if, on two weeks’ notice, he and his group would be willing to compete with another project Microsoft had sponsored. The competition was the much larger and better-funded ChronoZoom project at Berkeley, and the prize was hosting a digital exhibition for the Nobel Foundation.
In two weeks of day-and-night work, Andy’s team mocked up an interactive experience using whatever assets they could find on the Web, and despite being the underdogs, they were chosen as the bake-off winner on the basis of the mockup and TAG’s capabilities. This led to the creation of TAG Nobel, two interactive museum experiences that focused on Alfred Nobel's life and final will and the 900 Nobel laureates to date. As the main implementer of the specialized version of TAG, Trent Green ‘18, then a sophomore, got to travel to Singapore to install the exhibit; he also got to meet several of the prize’s recipients.
“The classic moment,” Andy says, “was when I brought the idea to the students: ‘We’re never gonna win, and we only have two weeks to do a mock-up – are you with me?’ There have been more than a few moments like that over the years.”
“We hoped more adoption would result from TAG,” Andy says, “but it didn’t, so with infinite reluctance, we put it in the deep freeze.” Its immediate successor was NuSys, which was partially inspired by Worktop, a project by Bob Zeleznik, Director of Research for van Dam’s graphics group. Initially envisioned as an integrated development environment for document-centric work, it grew too cumbersome for what Andy and his team wanted. “And so it was time for another turn on the wheel of reincarnation!” he says.
Dash (short for “Dashboard”) was the result. Tyler Schicke ‘18 ScM ‘19 was a junior who had just taken Andy’s graphics class when he joined the team of researchers. “I was excited at the chance to do interesting work with a large group of people, which doesn’t always happen,” he says. “At the core, Dash is a set of tools to help you gather, organize, and explore information. There’s a very big focus on ingesting any kind of information that you have, then organizing it by how you think about it, not how you find it.”
“It turns out that the ‘desktop metaphor’ for organizing information, invented by Xerox and popularized by the Macintosh and then Windows, isn’t a desktop metaphor at all,” Andy reflects. “It’s really a file cabinet metaphor – a fairly rigid, mostly hierarchical filing system, and it possesses very little of the ease with which people place physical documents and photos and drawings on a real desk, browse through them, rearrange them, and compose something new. Our goal for Dash is to have an integrated environment in which someone can gather and organize all their digital assets and then annotate and associate them in a variety of ways: with traditional hand-made links, through link-recommending automation, through keywording and searching, and through spatial representation, without the confines of the 35-year-old desktop model still in use today.”
With CPUs and graphics literally thousands of times faster than in the earliest days of the desktop metaphor, Andy’s team has designed Dash’s user experience to be smooth and immersive, not choppy and hierarchical. “Dash’s model,” Andy says, “is to be open and provide an easy way to interoperate in the existing ecosystem of tools that people are familiar with today.”
“Our goal is to break down the barriers between data types – text, audio, video, spreadsheets, PDFs, and so on,” says Bob Zeleznik, chief architect for Dash. “With today’s windowing systems, for example, if we want to make annotations or comments about a project that incorporates Excel, PowerPoint, Word, and PDF documents, it’s a nightmare because they all have different, incompatible methods of doing so. With Dash, any media type can be embedded in any other media type, so there’s a unified way to compose all of the varied content.”
The new possibilities, Bob says, are really exciting. “Imagine if you could simply place a video camera document indicator in the margin of a PDF document so your collaborator could instantly conference with you. Or imagine easily embedding an audio capture document into a photo gallery to allow grandparents to verbally comment on those photos. And that’s just the tip of the iceberg. Since all of the data types are treated in a unified model, they can be searched, sorted, and rearranged together.”
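Bob's "unified model" can be sketched abstractly: if every piece of content, regardless of media type, is the same kind of document, then embedding, searching, and annotating need only one mechanism. This is an illustrative toy, with hypothetical names, not Dash's actual architecture:

```python
# Illustrative sketch (all names hypothetical) of a unified document model:
# every piece of content is a "document", and any document can embed any
# other, so one traversal serves search across all media types.

class Doc:
    def __init__(self, kind, payload):
        self.kind = kind        # "text", "video", "audio", "pdf", ...
        self.payload = payload
        self.children = []      # embedded documents of any kind

    def embed(self, other):
        self.children.append(other)
        return other

    def search(self, kind):
        # Recursive search works uniformly, with no per-media-type code.
        found = [self] if self.kind == kind else []
        for child in self.children:
            found += child.search(kind)
        return found

# The article's examples: a video-call marker in a PDF's margin, and an
# audio comment embedded in a photo gallery.
pdf = Doc("pdf", "project-report.pdf")
pdf.embed(Doc("video", "conference-call.mp4"))
gallery = Doc("gallery", "family-photos")
gallery.embed(Doc("audio", "grandma-comments.wav"))
pdf.embed(gallery)
print([d.payload for d in pdf.search("audio")])  # ['grandma-comments.wav']
```

Because every node shares one interface, sorting, filtering, and annotation can likewise be written once and applied to text, video, audio, and spreadsheets alike, which is the barrier-breaking Bob describes.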
In January, the team began working on what’s essentially Dash 2. By using browser-based software, including TypeScript, React.js, Node.js, and other popular components, they achieved feature parity with the original Dash by the end of the semester, and they’re continuing to add improvements. Because it runs in the browser, the new version supports asynchronous collaboration – an obsession of van Dam’s dating back to the FRESS era – as well as synchronous, real-time collaboration.
Schicke doesn’t see himself dealing directly with hypertext in his current work at Nvidia, but he likes to think that he’ll continue to help with Dash in his spare time. “Two or three years ago, I wouldn’t have thought I’d be interested in this,” he notes, “but now I’m interested and invested.”
“Brown has a huge history with hypertext,” Tyler says. “We’re a great place to look for where the state of the art was, how it progressed, and where things are going next. In many cases, there was a clear progression from HES and FRESS to what we’re doing now. It’s nice to see that logical progress, that unbroken chain, and it was great to be part of it. For my generation, CS has gotten too important to ignore – we all want to use it to change the world in some way.”
Looking Out On The Hills
In a recent demo of Dash in Toronto, Andy took the opportunity to look back on Brown’s five decades of hypertext and revisit the promises, met and unmet, of each system. In one of many recent interviews, van Dam’s thoughts return to Vannevar Bush and the metaphor of trails, so let’s join him at a favorite spot, the Blue Hills Reservation in Canton, Massachusetts, where we’re standing at a trailhead. Explaining Bush’s metaphor of associative trails in terms of physical hiking trails comes naturally to Andy as he reviews hypertext’s ideals and the paths taken and not taken.
“I’m much less concerned now with what hypertext hasn’t done,” says Andy, “than I am with the weaponizing of misinformation. Hypertext doesn’t offer a solution for that, but Bush thought we’d all be active creators and followers of trails, and indeed he predicted a new profession of trailblazers. Learners and scholars would all be trailblazers, he thought, and that hasn’t happened. Now, every schoolchild does know how to follow trails, but we aren’t yet teaching them how to blaze trails and make great linked media and guided tours of that media.”
So what’s yet to come? The future is hypertextual, van Dam says, but hypertext itself is one tool among many. “Search has largely replaced explicit linking, but both of them are ways of capturing relationships. In the future, AI will definitely be a part of hypertext: intelligent crawlers that summarize and organize, collaborative filtering, taking data and building models, an entire recommendation system.”
The story doesn’t end here, but let’s take our leave with Andy paused on the trail for a moment, looking out over the hills. “You never have to go any further along a trail,” he says, “but a great hypertext system would analyze what you’ve seen, make expert recommendations for where to go next based on your interests, and then take you there if you wanted. That’s still intriguing to me, even after fifty years. It’s hard to believe that we’ve been doing this for a half-century!”
“It’s kind of a paradox,” Norm says. “Hypertext has been so second-nature to people at Brown that we often miss the fact that we were major participants in the biggest technological change affecting the world since the advent of television. Brown is still looking over the next hill, but it’s worthwhile to step back, be amazed at where we started, and appreciate the impact Brown has had on the linked world we all take for granted today.”
The slide deck from the symposium is available at bit.ly/HypertextSymposiumSlides, and a full recording is at bit.ly/HypertextSymposiumRecordings. The list of contributors below includes all names that we’ve been able to uncover thus far, but given that some of these projects are decades old, we may have accidentally omitted someone. That’s why we’d like to ask for your help. If you or someone you know was left off, please email Jesse C. Polhemus and we’ll be happy to correct our lists.
Ted Nelson, David Rice, Andy van Dam, Steve Carmody, Terry Gross, Mark Pozefsky, Marty Michel, Ken Prager, Bob Wallace, Richard Schmidt, Bob Batts, Alan Blitzbau, Bill Braden, Lynn Kelley, Ken Sloan, Dan Stein, Bill Turrentine, Larry Weissman
Rick Harrington, Wolfgang Millbrandt, Carol Chomsky, Clare Rabinow, Steve DeRose, Joe Strandberg, Steve Carmody, Craig Mathias, Mark Pozefsky, David Irvine, Alan Hecht, John Patberg, Jonathan Prusky, John Woodward, Jan Michel, Bob Wallace
An Experiment in Computer Based Education Using Hypertext
Jim Catano, Nancy Comley, Joe Strandberg, Carol Chomsky, Robert Scholes, David Irvine, Donald Brown, Karl Zinn, Rick Harrington, Marty Michel
Software developed by: Steve Feiner, Kurt Fleischer, Sandor Nagy, Joe Pato, Randy Pausch, Will Poole, Joel Reiser, David Salesin, Adam Seidman, Barry Trent, Mark Vickers, Jerry Weil, Andy van Dam
Documents created by: Steve Hansen, Imre Kovacs, Charlie Tompkins, Nicole Yankelovich
Paulette Bush, Karen Smith Catlin, Tim Catlin, Jim Coombs, Helen DeAndrade, Steve Drucker, Page Elmore, Charlie Evett, Matt Evett, George Fitzmaurice, Nan Garrett, Allan Gold, Jim Grandy, Ed Grossman, Bern Haan, Jill Huchital, Paul Kahn, Ann Loomis, Norman Meyrowitz, Marty Michel, Muru Palaniappan, Victor Riley, Bill Shipp, Tom Stambaugh, David Temkin, Ken Utting, Nicole (Mordecai) Yankelovich
Workstation Development, Deployment, and IRIS Management
Kenneth Anderson, Steve Andrade, Rosemarie Antoni, Gail Bader, William O. Beeman, Mike Braca, Brian Chapin, Page Elmore, William Graves, Paul Kahn, James Larkin, Larry Larrivee, Katie Livingston, James Nyce, Mike Pear, Augustine Rega, Julie Ryden, Jane Sanchez, Mark Shields, Dan Stone, Sindi Terrien, Marlene Tober, Todd Van derDoes
Hypertext and Education
George Landow, Gary Weissman, Graham Swift, Barry Fishman, Bob Coover, Bobby Arellano, Shoshana M. Landow, Ronald Weissman, Pat Malone, Cooper Abbott, Richard Smoke, Nicole Yankelovich, Martha Nicolson, Paul Kahn, James Head, Peter Heywood, Tom Banchoff, David John Burrows
Hypertext and Electronic Literature
Robert Arellano, Bob Coover, John Cayley, Shelley Jackson, Jeff Ballowe, Scott Rettberg
David Barnard, Wendy Hall, George Landow, Stuart Moulthrop, Mackenzie Smith, Steve Ramsay, Gene Golovchinsky, Patrick Svenson, Dorothea Salo, Geoffrey Russom, Paul Kahn, Paul Dourish, Neel Smith, Selmer Bringsjord, Noah Wardrip-Fruin, David Herlihy, Mark Bernstein, Greg Crane, Tony Molho, Jim Coombs, Espen Aarseth, Thomas Rommel, Marilyn Deegan, Johanna Drucker, Gabriel Bodard
Louis Reynolds, Andy van Dam, Steven DeRose, Greg Lloyd, Jeffrey Vogel, Dave Sklar, Bill Smith
Andy van Dam, Karthik Battula, Karishma Bhatia, Nate Bowditch, Miranda Chao, Gregory Chatzinoff, Tiffany Citra, John Connuck, David Correa, Mohsan Elahi, Aisha Ferrazares, Jessica Fu, Yudi Fu, Kaijian Gao, Trent Green, Jessica Herron, Alex Hills, Ardra Hren, Hak Rim Kim, Inna Komarovsky, Ryan Lester, Benjamin LeVeque, Josh Lewis, Jinqing Li, Jeffery Lu, Surbhi Madan, Xiaoyi Mao, Ria Marchandani, Julie Mond, Ben Most, Carlene Niguidula, Tanay Padhi, Jonathan Poon, Dhruv Rawat, Emily Reif, Jacob Rosenfeld, Lucy van Kleunen, Qingyun Wan, Jing Wang, David Weinberger, Anqi Wen, Natasha Wollkind, Dan Zhang, Libby Zorn
Trent Green, Miranda Chao, Tiffany Citra, Carlene Niguidula, Lucy van Kleunen, Rosemary Simpson, Andy van Dam
Tyler Schicke, Bob Zeleznik, Philipp Eichmann, Miranda Chao, Luke Murray, Abdullah Ahmed, Hannah Chow, Madeline Griswold, Julie Wang, Laura Wilson, Sam Wilkins, Stanley Yip, Emily Reif, Tiffany Citra, Carlene Niguidula