We interact with the world through screens. This is so commonplace that we don't usually think in these terms. When we think of messaging, FaceTime, emails, group chats, Twitter, Facebook, and this blog post, we think of the content exchanged through those applications. But stop, and take a second to look at the screen that you're reading. This screen is a medium for you to interact with the world.
A brief history
There were fewer than 10,000 television sets in the United States in 1945; by 1950 there were 6 million, and by 1960 more than 60 million sets had been sold. Television dethroned radio, and screens became the preferred medium for receiving information. In 1975 the first personal computers were introduced, the MITS Altair 8800 and the IMSAI 8080, both using the Intel 8080 CPU. They provided word processing, electronic spreadsheet calculations, gaming, and many other methods of inputting, analyzing, and manipulating data. As their functionality and practicality increased, personal computers experienced the same type of explosive growth as television sets—by 1989, there were approximately 55 million personal computers in the United States. In addition to providing information, screens became the most common way to interact with and manipulate data and exercise computing power. Then the internet went nuclear in the mid-nineties, initiating a complete takeover of the global communication landscape. In 1993, the internet communicated approximately 1% of the information flowing through two-way telecommunications networks. By 2007, it communicated 97% of that information. The internet gave our screens an unprecedented level of connectivity, and they became our windows into the tech revolution.
The golden child of all of this innovation was the iPhone. Launched in 2007 and arguably the most successful product line ever, the iPhone turned Apple into the most valuable company in history. It was a beautiful, high-quality, portable screen—a medium that combined all of the aforementioned technological advances and fit them in your pocket. People used it because it was powerful and practical. They loved it because it was creative, fun, and elegant.
Thinking about layers
When thinking about the iPhone (and the rest of the smartphone market), it helps to think of it as layers of technology. The outermost layer is the screen, the primary subject of this post. Our interactions with all of the other layers of technology are mediated by this layer. We look at the screen to receive information, and we touch the screen to send, search, manipulate, and compute. The layers underneath provide the technological features that make the interactions useful.
Think about how the iPhone models upgrade—Apple adds or improves the layers of technology. The latest model has a faster processor, water-resistant design, a better camera, a larger screen, etc. Improvements to the interior layers make the phone more powerful. Improvements to the exterior layer make interactions more natural and intuitive. The screen has progressed from something we interacted with through physical buttons, to touchscreen buttons, to intuitive multi-touch gestures, to fingerprint-reading and force-sensitive technology.
All of this is good. Rapid, incremental improvements in technology are important for developing high-quality products. Technology companies in general have done an excellent job of executing this strategy (at least, the ones that are still around have). But if you're looking for "the next iPhone", you're looking for something revolutionary; incremental improvement isn't enough.
“Look Ma, no screen!”
The brief historical recap and proposed "layered view" of the smartphone were introduced to help us reframe a question. Rather than thinking, "How can we make screens more natural and intuitive?" we should be thinking, "What is the best possible exterior layer for our technology?" To some extent, I think that the term smartphone is holding us back. It calls to mind a handheld cellular device, usually with a touchscreen interface, internet connectivity, and an operating system to run applications. But why should we limit ourselves to a handheld device and a touchscreen interface? Isn't what we actually want cellular capability, internet connectivity, an operating system to run applications, and a seamless interface that ties it all together?
As an external layer serving as a technological interface, the screen has some major drawbacks. It demands attention in a rather selfish way, forcing us to turn away from the people and places in our lives in order to interact with it. Screen size creates a difficult trade-off between quality of entertainment and portability. Screens aren’t a particularly immersive interface, and can be more distracting than they are peripherally beneficial. Once we realize that the screen is expendable, we open up a whole new world of possibilities. We start to imagine more creative ways that technology can complement rather than dominate our lives. We start thinking about augmented and virtual reality.
Augmented reality vs virtual reality
Both AR and VR are trying to gain traction as alternative external layers. As technology has advanced, these interfaces have made great strides and garnered significant attention. Virtual reality headsets are slowly finding their way into mainstream culture, and the wild success of Pokémon Go has been encouraging for the augmented reality community. Both show significant promise, both are fun and trendy topics, and I wouldn't bet against either platform in the long run. However, I think that the ceiling for AR is much higher. AR has the potential to replace the screen as the most common interface for technology, and a high-quality AR product could be "the next iPhone" that everyone is desperately searching for.
Despite virtual reality’s incredibly immersive nature and ability to bring a high level of emotion to technological experiences, it’s not at all suited to be a day-to-day interface. It might be able to replace the stationary screen, but not the mobile one. Though some see VR headsets as possible laptop replacements, I find the idea highly unlikely. I think it’s much more plausible that VR headsets could replace game consoles and large TVs. As VR technology advances, the headset will seem less and less isolating. Communities will start to form around shared and participatory VR experiences, much like the video game communities that exist online now. But virtual reality will only make its way out of the living room to the extent that it starts to become augmented reality.
Augmented reality starts to solve the problems we have with the screen. A heads-up display, similar in concept to Google Glass, is a tantalizing possibility for a new technology interface. It's immersive but doesn't need to be distracting or overbearing; it doesn't sacrifice display size for portability; and it can operate on the periphery, serving as a complementary view rather than an alternative one. Google may have botched their product launch, but I think it was a step in the right direction. Snap Inc. seems to have learned a few lessons from Google's mistakes, rolling out their Spectacles with a very different marketing approach. The functionality of Snap's first Spectacles won't be anything close to the power that Google Glass boasted, but I'm confident that if they can create market interest they'll be able to quickly roll out more powerful versions.
Let’s allow our imaginations to run wild with this idea of augmented reality. Let’s capture some of the creativity, daring, and joy that seem to come in waves in the world of technology. Imagine if your smartphone were replaced by a pair of glasses or contacts, perhaps wirelessly connected to an additional component that could be kept in your pocket but didn’t require much interaction. Imagine if you could access technology with discreet gestures rather than by typing on a screen. Imagine if you could overlay data onto the real world, things like directions or recipes. Imagine if you were able to send interactive emojis to the periphery of other people’s augmented reality interfaces, little visual gifts that seemed set in the real world. Thinking about these things might seem ludicrous. It might seem far-fetched and unrealistic. But it’s fun—the type of fun we need to help us usher in a new era of interacting with technology.