
The hallway is bathed in harsh white, a figment of LEDs. Along the walls, doors recede endlessly into the distance. Each flaunts a crown of blue light at its base, except for the doors you’ve walked through before, which instead emit a deep purple. But these are but specks of sand in the desert of gateways.

You are searching for something.

You prepare yourself for an arduous journey. Before the first door you come upon a pedestal. The box that lies on the pedestal gives airs of gildedness despite being as plain as the walls that surround it. It isn’t adorned with a title, but its name echoes in your mind, intuitively: the Answer Box. A plaque reads:

I have crawled through each and every door. Not just the doors in this hallway, but the doors in every hallway in existence, the doors within doors, as well as some doors that I dare not show you, doors that would make you flee in terror. I have seen everything. I am impartial. I have your best interests at heart. I understand what it is you want to know and it is knowable. I have the answer that you seek.

Your finger caresses the latch.

Cataloging the web was doomed from the start. In the summer of 1993, Matthew Gray created the World Wide Web Wanderer (WWWW), arguably the first internet bot and web crawler. During its first official attempt to index the web, the Wanderer returned from its expedition with 130 URLs. But even in the baby years of the internet, this list was incomplete.

To understand how a simple web crawler works, imagine making a travel itinerary that contains three cities: New York, Tokyo, Paris. While visiting each destination, listen for any mentions of other places and add those to your itinerary. Your world crawl is complete when you have visited all of the cities on your ever-growing list. Will you have seen a lot of places by the end of your journey? Undoubtedly. But will you have seen the whole world? Almost certainly not. There will always be cities, or entire webs of cities, that are effectively invisible to this process.

A web crawler similarly consults a list of URLs and recursively visits any links it sees. But the resulting index should not be confused with a comprehensive directory of the internet, which does not exist.
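The itinerary analogy can be sketched as a breadth-first traversal over a toy link graph. This is a minimal illustration, not a real crawler; the page names and the in-memory "web" are invented for the example:

```python
from collections import deque

# A toy "web": each page maps to the pages it links to.
toy_web = {
    "new-york": ["tokyo", "paris"],
    "tokyo": ["kyoto"],
    "paris": ["new-york", "lyon"],
    "kyoto": [],
    "lyon": ["marseille"],
    "marseille": [],
    # Nothing in the crawl ever links here, so it is never discovered --
    # an "invisible city".
    "atlantis": ["new-york"],
}

def crawl(seed_urls, links):
    """Visit each reachable page once, queueing every link we see."""
    frontier = deque(seed_urls)
    visited = set()
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in links.get(url, []):
            if link not in visited:
                frontier.append(link)
    return visited

index = crawl(["new-york"], toy_web)
print(sorted(index))
```

Starting from "new-york", the crawl finds six pages but never "atlantis": the resulting index covers only what the seed list can reach, which is exactly why no crawl yields a comprehensive directory.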

I have a theory of technology that places every informational product on a spectrum from Physician to Librarian:

The Physician’s primary aim is to protect you from context. In diagnosing or treating you, they draw on years of training, research, and personal experience, but rather than presenting that information to you in its raw form, they condense and synthesize. This is for good reason: When you go to a doctor’s office, your primary aim is not to have your curiosity sparked or to dive into primary sources; you want answers, in the form of diagnosis or treatment. The Physician saves you time and shelters you from information that might be misconstrued or unnecessarily anxiety-provoking.

In contrast, the Librarian’s primary aim is to point you toward context. In answering your questions, they draw on years of training, research, and personal experience, and they use that to pull you into a conversation with a knowledge system, and with the humans behind that knowledge system. The Librarian may save you time in the short term by getting you to a destination more quickly. But in the long term, their hope is that the destination will reveal itself to be a portal. They find thought enriching, rather than laborious, and understand their expertise to be in wayfinding rather than solutions. Sometimes you ask a Librarian a question and they point you to a book that is an answer to a question you didn’t even think to ask. Sometimes you walk over to the stacks to retrieve the book, only for a different book to catch your eye instead. This too is success to the Librarian.