Steve Krug’s Don’t Make Me Think is considered one of the canonical texts of web design, and as such was introduced to me as part of my web design studies at Sheridan College, which I undertook during the 2011-12 academic year. The title alone had always been offensive to me, someone who enjoys both ideas and thinking, and I always chafed at the mindlessness it encourages.
However, the education process I went through helped me become conscious of my web-browsing behaviour, and the book is narrowly contextual to those times when we are on a website for a purpose-driven reason: when we go to a theatre’s site to buy tickets, for example, or visit some other commerce site trying to find contact info or a business’s opening hours. Primarily, the ‘design philosophy’ behind don’t-make-me-think applies to commercial or other service-oriented websites.
In the film Hannah Arendt, we get a wonderful defense of thought as a human activity, and an explication that the evil in question (the Holocaust) was facilitated by classic, mid-century modernist bureaucracy, especially the German version, which was shaped by an education system that taught obedience and discipline. Such a system encourages people to disregard thought, which (as Arendt says in the film) dehumanizes us by ‘making people superfluous’ to the system. In other words, the indifference of a bureaucracy toward the individuals it is meant to serve means people end up serving the bureaucracy.
It’s worth noting that the German education system, as developed by the state of Prussia, was imported to North America a century ago to transform farm children into future factory or corporate employees by teaching a tolerance for boredom and a willing, mindless obedience to managerial directives. (See John Taylor Gatto’s essay, “Against School”.)
~
This decade incorporates the 20th anniversaries of everything web-related. The World Wide Web was first released in 1993 as an app that ran on the Internet (then an academic and government modem network). Now the W.W.W. is synonymous with the ‘Net, and the design principles promoted by Don’t Make Me Think have become so standardized that we recognize websites as bad when their straightforward principles are violated. We know how websites are supposed to work, we recognize header menus as such, and we understand what ‘home’ means.
Krug’s follow-up, Rocket Surgery Made Easy, was a second-semester text, and I found both books very hard to read, because they are so patronizing and because he’s continually stating what is now obvious. They were written for people for whom computers, and using them, were new. Now they feel more like historical documents.
Inasmuch as we have a ‘web 2.0’ nomenclature (which is itself about a decade out of date), I find the language shift from the ‘Net’ to ‘The Cloud’ indicative of where we are: the interconnected network was about siloed websites and email – essentially network nodes and lines of communication.
The Cloud (as a “post-Net 2.0” term) speaks to our ever-present interconnectivity, where we can download data to our devices out of thin air, and where server farms behind our screens can run the necessary compression algorithms to apply filters to our photos as we upload them.
The novelty of this technology has been intoxicating, and I’ve found it fascinating enough to want both to understand it and to participate in it professionally. But after 20 years, the novelty is beginning to wear off, and the inevitable transitions evident fifteen years ago have come to pass.
Publishing on paper is in decline (in some cases rightfully), whereas digital publishing is established and growing. This echoes the transition from Mediaeval manuscript propagation to the printed book, and if Gutenberg’s invention of 1452 echoes Berners-Lee’s of 1989, we are in the equivalent of the 1470s, by which time Gutenberg’s press had spread to France, Italy, England, and Poland.
That model of producing and propagating ideas in book form has lasted since that time, until our era, when we’ve figured out how to fluidly use a two-dimensional surface through the manipulation of electricity and light.
I spent a week last July helping put the finishing touches on a corporate website to publish a company’s annual report. Twenty years ago, the report would have been a booklet and print designers and typesetters would have been hired to produce it. As the novelty of working with computers is wearing off, and as our economy has shifted to incorporate them in our offices and studios, it is now obvious that this digital economy is essentially that of publishing: websites, apps and ebooks. It is supported, as it always has been, by ad money. And the big sites like eBay and Amazon represent the Platinum Age of mail-order. I grew up with Roch Carrier’s famous short story about a hockey sweater ordered from the Eaton’s catalogue. A future generation will probably know an equivalent that replaces Eaton’s with Amazon.
As I worked during that July week, it occurred to me that in 200 years I would not be described as a front-end developer, nor a web designer, but perhaps just as a publisher, in the same way that Benjamin Franklin is described as a printer, not a pamphlet designer, nor a typesetter. “To earn money he worked in publishing” may be all that needs to be said, for by then publishing will be digital by default, and will have been for two hundred years.
Last week at FITC Screens 13 I got to try Google Glass for the first time. Tom Emrich was there as part of the Xtreme Labs Lounge and I tried his device for about five minutes, long enough for him to show me how to use it and go through some basic commands.
The screen was a little out of focus, but it wasn’t important to me that it be perfectly fitted and adjusted. I took a picture, swiped, tapped, looked at the New York Times app, and had it read to me.
Here is a mock-up I made to record the general impression:
The rectangle was smaller than I expected, and the fact that it was back-lit / transparent-black gave it a bit of a purple, out-of-focus sheen. It’s a see-through screen hovering at your eyebrow, and I was thinking of this later when I tweeted:
I wrote that while sitting in Mike DiGiovanni’s Google Glass presentation, and watching him onstage I now understood the gestural language he was presenting: not that of someone with a neurological disorder, unable to focus on what’s in front of him, with eyes rolling upward, but someone who was experiencing a prosthetic hallucination in the right corner of his visual field.
I used the word ‘sociopathic’ specifically: a social pathology, that is, a social illness, where one behaves in a manner that is offensive, unfriendly, or unsociable.
Human interaction requires at least two people, but Glass is not a device meant for two people. It’s an internalized, private experience. When you’re wearing one, you are meant to forget that it’s on, in the same way that traditional corrective eyeglasses become forgettable to the wearer.
All the pictures we’ve seen are of other people wearing Glass, but of course this is because of how difficult it is to show the subjective experience, which is really what the product offers.
Google Glass is a beta product, and is the technological equivalent of 1990s cell phones with retractable antennas. In the 90s, owning a cell phone was a little offensive, because it signaled that you were either a busy-body big-shot or you were narcissistic enough to think you were that important. (I remember telling someone in 1999 that I didn’t want a cell phone because I wouldn’t want to be bothered when I was away from home.)
However, the utility of a cell phone soon meant that by 2005, almost everyone had one. By 2007, the year Apple released the iPhone, the executives and managers of the world were already carrying Blackberries and other email-capable cellphones, and I was used to seeing these people staring at their little machines while in line for coffee. It occurred to me then that the Blackberry was the wand of the overclass, and I wondered what their jobs were that they had to be checking email while in line. (At the time I carried a basic Nokia).
Now, people everywhere can be seen looking down at something they’re holding in their hands. This is such a common sight that you can find examples on Google Street View:
For this argument I’ll refer to this posture – or more specifically, this behaviour – as “digital attention behaviour”.
In 2007, in line at coffee shops, the future wasn’t yet evenly distributed, but now this digital attention behaviour has spread wide and become normalized.
Part of the normalization is that looking at a rectangle displaying digital messages isn’t that much different than looking at a pad of paper. We were already used to seeing people read things silently, which in itself was a revolution centuries ago, when reading was usually done aloud.
The rolling of eyes may eventually become a normalized digital attention behaviour, but right now, absent the even distribution allowing the rest of us to relate, it still looks strange and offensive.
Unintentionally, Google Glass manifests the Western narcissistic ego, where private experience on public display happens without care for how it affects others. The selfishness of Glass is expressed when the Other cannot tell if a picture is being taken or if the time is being checked. With a smartphone, a glance can tell you if the person is emailing, texting, web browsing, or playing a video game. The information leaks, and this information is valuable in contextualizing our person-to-person interaction.
Rendered completely private, the experience interferes with our Theory of Mind, our ability to imagine what others are doing and to relate to them. We can’t empathize without sufficient contextual clues. Inasmuch as Glass is a prosthetic for hallucination, it may also be a prosthetic for autism.
Having said all this …
I am nevertheless excited by the idea of Glass as both a prototype and an attempt to get us away from current digital attention behaviour, so that we can benefit from the data cloud while also being able to interact with one another as we did in the past. The irony is that Glass is, at present, such a private experience that it interferes with human-to-human interaction; this is one of the bugs that needs to be resolved.
I like to think of Glass as a pathfinder project to get us to casual augmented reality, via “smart” eyeglasses, contact lenses, and/or eventually implants, such as those described in Alastair Reynolds’s Poseidon’s Children novels, the second of which (On The Steel Breeze) has this scene:
“Wait, I’m hearing from Imris again.” Her face assumed the slack composure of aug trance, as if someone had just snipped all the nerves under her skin.
In that world of implant-enabled augmented reality, an aug trance is something everyone can relate to, and fits into everyone’s Theory of Mind. It is not disturbing to see, and is an understood appearance.
Having said all this, I suspect that a product like Glass will be successful. Again, its current design is reminiscent of the first cell phones. We know from the movies that portable radio-phones were available during World War II.
Original 1960s Star Trek communicators were more skeuomorphic of the walkie-talkie than of the phone, but when Motorola marketed the StarTAC phone in 1996 the reference to the fiction was obvious.
In the 2009 Star Trek movie, Simon Pegg as Scotty is seen wearing an eyepiece:
And in 1991, Star Trek The Next Generation featured holographic eyewear which took over the minds of the crew:
These examples show that the idea of a heads-up display is an old one, but Google decided to build it, tether it to an Android phone, and begin to market it. I don’t doubt something like it will eventually be successful.
What is especially interesting is how such a simple idea and device turns out to have complicated social side effects, but these side effects would never have become apparent if Google hadn’t taken the chance to implement this old idea to begin with.