Icons are everywhere. These “little miracle workers” (as John Hicks described them) help us reinforce meaning in the interfaces we design and build. Their popularity in web design has never been greater; the conciseness and versatility of pictograms in particular make them a lovely fit for displays large and small.
But icons on the web have had their fair share of challenges. They were time-consuming to prepare for every intended display size and color. When high-resolution displays hit the market, icons looked particularly low-res and blocky compared to the text they often accompanied.
So it’s really no wonder that icon fonts became such a hit. Icons displayed via @font-face were resolution-independent and customizable in all the ways we expected text to be. Sure, delivering icons as a typeface was definitely a hack, but it was also useful, versatile, and maybe even a little fun.
But now we need to stop. It’s time to let icon fonts pass on to Hack Heaven, where they can frolic with table-based layouts, Bullet-Proof Rounded Corners and Scalable Inman Flash Replacements. Here’s why…
Screen Readers Actually Read That Stuff
Most assistive devices will read aloud text inserted via CSS, and many of the Unicode characters icon fonts depend on are no exception. Best-case scenario, your “favorite” icon gets read aloud as “black favorite star.” Worst-case scenario, it’s read as “unpronounceable” or skipped entirely.
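If an icon font is unavoidable, the standard mitigation is to hide the glyph from assistive technology and supply real text alongside it. A minimal sketch, assuming a typical (hypothetical) visually-hidden utility class:

```html
<button type="button">
  <!-- aria-hidden keeps the glyph's Unicode character out of the accessibility tree -->
  <span class="icon icon-favorite" aria-hidden="true"></span>
  <!-- Real text, clipped offscreen by a visually-hidden utility class (assumed to exist) -->
  <span class="visually-hidden">Favorite</span>
</button>
```

With this markup, a screen reader announces “Favorite” rather than “black favorite star” or silence.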
They’re a Nightmare if You’re Dyslexic
Many dyslexic people find it helpful to swap out a website’s typeface for something like OpenDyslexic. But icon fonts get replaced as well, which makes for a frustratingly broken experience.
They Encroach on Emoji Turf
Most of the time, we rely on automated tools to choose which Unicode characters are assigned to which icon. But Unicode’s also where our beloved emoji live. If you aren’t careful, they can overlap in confusing (albeit hilarious) ways. My favorite example: Etsy’s “four stars and a horse” bug. More recently, our own Jason Grigsby encountered random fist-bumps on ESPN’s site.
They Fail Poorly and Often
When your icon font fails, the browser treats it like any other font and replaces it with a fallback. Best-case scenario, you’ve chosen your fallback characters carefully and something weird-looking but communicative still loads. Worst-case scenario (and the far more common one), the user sees something completely incongruous, usually the dreaded “missing character” glyph.
Custom fonts shouldn’t be mission-critical assets. They fail all the time. One look at Bootstrap’s icon-related issues and it’s no wonder they’re removing icon fonts entirely from the next version.
Worse still, many users will never see those fonts. Opera Mini, which frequently rivals iOS Safari in global usage statistics with hundreds of millions of users worldwide, does not support @font-face at all.
They Never Looked Right
The way browsers hint fonts to optimize legibility was never a good fit for our custom iconography, and support for tweaking that behavior is all over the place.
Multicolor icons are even worse. The technique of overlaying multiple glyphs to achieve the effect is impressively resourceful, but the results often look like their printing registration is misaligned.
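For reference, that layering technique typically stacks one glyph per color in the same position; the class names and colors below are hypothetical. Any sub-pixel rounding difference between layers reads as a registration error:

```html
<!-- Two glyphs from the icon font stacked on top of each other, one per color -->
<span class="icon-flag" style="position: relative;" aria-hidden="true">
  <span class="icon-flag-pole"  style="position: absolute; left: 0; color: brown;"></span>
  <span class="icon-flag-cloth" style="position: absolute; left: 0; color: red;"></span>
</span>
```

An SVG icon gets the same result with a single element whose paths each carry their own fill, with no alignment gymnastics.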
You’re Probably Doing It Wrong
“But Tyler,” I hear you say. “You’ve completely ignored Filament Group’s Bulletproof Icon Fonts, complete with feature tests and sensible, content-driven fallback solutions.”
And you’re right. Those techniques are great! If you’re using an icon font, you should definitely follow their recommendations to the letter.
But you probably won’t.
What you’ll probably do is adopt whatever your framework of choice has bundled, or drop in some massive free icon font you can use right away. You won’t modify how they work out of the box because that’s really hard to prioritize, especially when they look great on your monitor with virtually no effort at all.
Or maybe you will do the work, designing and curating a custom icon font, choosing your Unicode characters carefully, documenting and evangelizing the importance of implementing your icons in an accessible way with appropriate fallbacks. Then one day, Dave forgets to add a fallback image to that iconographic button he added (which looks great, by the way), which Roberta reuses for her related Pull Request, and before you know it, your app has devolved into a fragile, hack-littered wasteland once more.
These examples are not hypothetical (though names have been changed to protect the innocent). I’ve seen them happen to multiple organizations, all of them starting with the best possible intentions.
There’s Already a Better Way
That better way is SVG. But I hear a lot of excuses for why teams avoid using it, even for brand-new projects. Here are a few…
“SVGs can’t be combined into sprites.”
They totally can. There are even really great tools like svg-sprite and IcoMoon that can help automate that process.
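A combined sprite is just an SVG full of `<symbol>` elements that individual icons reference with `<use>`. A minimal hand-rolled sketch (the ID and path data are illustrative; tools like svg-sprite generate this for you):

```html
<!-- The sprite: included once per page, hidden from display -->
<svg style="display: none;">
  <symbol id="icon-search" viewBox="0 0 24 24">
    <!-- Illustrative magnifying-glass path -->
    <path d="M10 2a8 8 0 1 0 4.9 14.3l5.4 5.4 1.4-1.4-5.4-5.4A8 8 0 0 0 10 2zm0 2a6 6 0 1 1 0 12 6 6 0 0 1 0-12z"/>
  </symbol>
</svg>

<!-- Each icon references a symbol by ID -->
<svg class="icon"><use xlink:href="#icon-search"></use></svg>
```

One network request (or zero, if inlined), any number of icons.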
“SVGs are larger in file size.”
Usually when I hear this, the team’s comparing a single binary icon font to multiple, uncompressed SVG files. The gap narrows dramatically when you optimize your SVGs, combine reusable ones into sprites, and deliver those with active Gzip compression or embedded in-page.
Occasionally I’ve heard the gap is still too wide when hundreds of icons are included. This raises the question: Why are you including hundreds of icons on every page?
“The icon markup is too verbose by comparison.”
<!-- Typical @font-face icon: -->
<span class="icon icon-search" aria-hidden="true"></span>
<!-- Typical SVG icon: -->
<svg class="icon">
  <use xlink:href="#icon-search"></use>
</svg>
The SVG markup is barely more verbose, and it’s far more descriptive and semantic than some empty, ARIA-hidden <span>.
“Browser support for SVG is worse.”
As of this writing, global support for SVG is up to 96%… 4% higher than the same stat for @font-face. Plus, SVGs are way more accessible and their fallbacks are much more straightforward.
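For icons delivered as standalone images, one widely used fallback pattern of this era was an inline error swap; this is a sketch, and the file names are made up:

```html
<!-- If the SVG fails to load or render, swap in a PNG of the same icon -->
<img src="icons/search.svg"
     onerror="this.onerror=null; this.src='icons/search.png';"
     alt="Search">
```

Compare that one-line degradation path with an icon font, where failure means a missing-character glyph and no obvious recovery.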
“The framework we chose already has an icon font.”
If your framework told you to jump off a bridge, would you?
Don’t Be “Table Guy”
I was in school when the Web Standards movement hit critical mass. While the majority of my instructors saw the merits of semantic markup and embraced it wholeheartedly, one passionately held out. “Table Guy” argued that no layout tool could usurp <table>, that it was inherently better-suited for crafting grid-based designs. He boasted of how quickly and easily he could achieve the “Holy Grail” layout with his trusty table cells. He cited the wealth of cross-browser inconsistencies that continued to plague CSS.
Table Guy and I kept in touch. Today, he freely admits he was wrong about CSS. He feels embarrassed to have been so married to a technique that, in hindsight, was so clearly the wrong tool for the job.
If you won’t stop using icon fonts for people with screen readers, people with dyslexia, people with browsers that don’t support @font-face, people who randomly didn’t load the icon font once for some reason, or designers who just want their icons to look right on-screen…
Then do it for yourself. Don’t be Table Guy.
Fun fact: Cloud Four’s design team really digs SVG. Our enthusiasm for the image format accumulated gradually over many months, thanks in large part to Sara Soueidan’s tireless documentation of its most mysterious features and quirks. It was during the process of designing the Responsive Field Day site that our collective interest level hit fever pitch, which caused our coworkers to wonder what all the fuss was about!
It turned out to be a difficult question to answer. Most of the resources we found online either covered the very basics of the format, or jumped right into the nitty-gritty of coordinate systems, complex animation, automated sprite-building, etc. So fellow Cloud Four designer Sara Lohr and I decided to put together an internal presentation with reveal.js to bring everyone up to speed.
SVG 101: A Gentle Introduction →
In spite of those challenges, the talk was a hit. I think we introduced concepts in a way that made sense for the audience, emphasizing the sorts of things they’d find most useful day-to-day.
Then again, maybe it’s just easy to win people over with demos like “Jasonflower”:
See the Pen Jasonflower: CSS by Tyler Sticka (@tylersticka) on CodePen.
It’s hard to argue that SVG isn’t the greatest format ever once you’ve seen that.
Apple announced the new Apple TV yesterday. As many expected, it didn’t come with Safari. What was unexpected is that it doesn’t appear to have WebKit at all.
The utility of WebKit for app developers seems straightforward. Apps often use embedded web views to display information that doesn’t make sense to duplicate in native code, or to render links that people share.
But without WebKit available for tvOS, there will be no embedded web views and no third-party browsers.
We now have both Apple TV and Android TV without the web, and it’s a bloody shame.
I’m well aware of the argument that people don’t want to browse the web on TVs. I believe the jury is still out on that one, but even if I concede that point, there is still tremendous utility in using web technology for building apps for TVs.
The reason I started researching the web on TVs dates back to the original Google TV Showcase. There was a Vimeo app in that showcase that I loved.
I used that app for several months before I accidentally hit a button that converted it from the TV app that I knew into the standard Vimeo web page.
I had inadvertently discovered that the Vimeo app was just a different view of Vimeo’s normal web page. Vimeo calls this couch mode.
Nintendo, Samsung, LG, and others have all built app platforms on top of HTML. For years, Netflix built all of their TV apps on HTML5 before recently going native.
Web apps on TVs can be great experiences. Maybe it’s because we don’t notice the lag as much when we’re using remote controls instead of touching the interface directly. Maybe it’s because the interfaces for most non-game TV apps are fairly simple.
Whatever the reason, building TV apps using web technology just seemed to work. Bridging the gap between native and web apps on TVs was easier than it was on mobile.
But year after year at Google I/O when I’d try to talk to people about Chrome on TVs, I wouldn’t get anywhere. Google TV shipped with Chrome, but it was a forked version that the Chrome team wasn’t responsible for and grumbled about.
I remember trying desperately to figure out who to talk to about the browser on Google TV. I was repeatedly and humorously pointed to Chris Wilson. Chris hadn’t worked on Google TV in months. It became a running joke between the two of us.
A couple years later, Google announced that their TV product would be called Android TV. Google touted how the TV would finally be running the same version of Android as phones and tablets. It would be kept up to date.
Except it would no longer have a browser.
At the time, Microsoft showed more interest in the web on TVs than Google did. That may still be the case; I haven’t checked in a while. Firefox OS has recently moved to TVs, and Opera still has a TV browser. So all hope isn’t lost.
Back in 2012, I was trying to muster enthusiasm among browser makers for working on the web on TVs. It seemed likely that TVs were going to be the next platform and, instead of playing catch up like the web did on phones, we could be ready for the web on TVs from the beginning.
I feared playing catch up again. In retrospect, I should have feared much worse.
The two biggest mobile operating systems are now on TVs. One started with a browser, but no longer has one. The other just shipped without even an embedded web view.
From what I’ve seen, the web on TV could have been a star. What a missed opportunity.
We’re about an hour away from the Apple event where they will announce the new Apple TV. Here are the things I’m going to be watching for based on my time researching Smart TVs, game consoles, and set top boxes.
How does the remote control work?
Input remains the biggest challenge for all attempts to bring computer smarts to the screens on our walls. While the software and content options for the new Apple TV will matter, if Apple truly revolutionizes TVs, I suspect it will come from an improvement in input.
Historically, improved input has accompanied Apple innovation. The Mac’s mouse. The iPod’s scroll wheel. The iPhone’s touch screen.
The other lesson here is that none of these inputs were wholly Apple inventions. In each case, the input technology had been used by other companies in the past. The iPhone’s touch screen seemed ho-hum until people actually used it and realized how much attention to detail Apple had put into perfecting the input.
So I’ll be surprised if the remote control has some feature that we haven’t seen on remote controls in the past, but I also suspect that if Apple TV is a game changer, it will be because of the remote control.
Is there a pointer? And where is it used?
The current Apple TV is limited to d-pad interactions—up, down, left, right. There are a lot of interactions that need the ability to select an arbitrary point on the screen instead of navigating to that point by successive d-pad button presses.
The most obvious need is in games. The rumors are strong that the new Apple TV will focus on games. The remote control has been described as Wii-like in its ability to detect motion.
The question is whether there will be any interfaces where you see a pointer on the screen. I strongly suspect app developers will build apps that include pointers, but will any of Apple’s own apps include one? And if so, where is it used and how does it work?
Where is web technology used? Is there a browser?
I have little doubt that the new TV operating system will support embedded web views. Web views are critical for many apps.
So the big question is whether or not Apple will include a browser as well. I’ve explored some of the arguments for and against a browser in the past.
And whatever other surprises Apple has in store.
I’ve been looking forward to today’s announcement since I started researching the web on TVs in 2012. I can’t wait to see what it looks like when Apple is no longer treating TVs as a hobby.
The fact that Apple may soon release an App Store for TVs has me revisiting a couple of questions that have troubled me these last few years: Where does the common device context continuum start and end? And more importantly, how do we know?
But before I look at those questions in detail, let’s talk about device context.
The Device Context Continuum
We now design for a continuum of devices. Responsive web design provides us with the techniques we need to design for varying screen sizes.
But responsive web design techniques wouldn’t be effective if there wasn’t a common context—or perhaps more accurately, a lack of context—between devices.
Put a different way, if people did demonstrably different things on mobile phones than they did on desktop computers, then responsive web design wouldn’t be a good solution.
We design for different screen sizes confident in our knowledge that people will do similar things whether they are on phone, tablet or desktop devices. This is our common device context and the continuum that it applies to.
But it hasn’t always been this way.
The Mobile Context Debate
In the early days of responsive web design, people often debated whether or not mobile context was a thing that should be considered in our designs.
At the time, I wrote about my conflicted thoughts on mobile context. I advocated for keeping context in mind. But by 2013, I had concluded mobile context didn’t exist.
Now we have a lot of experience to back up this perspective. Chris Balt, a Senior Web Product Manager at Microsoft, told Karen McGrane and Ethan Marcotte on the Responsive Web Design podcast:
Our data shows us quite plainly and clearly that the behavior of those on our mobile devices and the small screens is really not all that different than the behavior of those on the desktop. And the things they are seeking to do and the tasks they are seeking to accomplish are really quite the same.
Karen and Ethan have been doing a weekly podcast for a year. In that time, regardless of the company or industry being discussed, their guests say they see no difference in what people want to do based on whether they’re using a mobile, tablet, or desktop device.
I still think Luke Wroblewski nailed it when he wrote:
But if there’s one thing I’ve learned in observing people on their mobile devices, it’s that they’ll do anything on mobile if they have the need. Write long emails? Check. Manage complex sets of information? Check. And the list goes on. If people want to do it, they’ll do it on mobile, especially when it’s their only or most convenient option.
What about new devices? TVs? Watches?
It seems that not a day goes by without a new device form factor being introduced. Watches. TVs. Virtual reality goggles. Augmented reality glasses.
Where do these new devices fit in on this device context continuum? Do they share the same context?
The consensus at the moment seems to be that they are not part of the same continuum as phones, tablets and computers. When you read the guidelines for designing for watches or TVs, designers are advised to take context into consideration.
At Responsive Day Out, Rosie Campbell, who works in Research and Development for the BBC, gave a compelling presentation entitled Designing for displays that don’t yet exist. She shared research on what it would take to build a compelling smart wallpaper experience in the future when such technology might become commonplace.
In the talk, Rosie made two comments that I’ve been thinking about ever since. She addressed what we need to do as screens get weirder:
It’s not just about making content look beautiful on those different screens. We also need to think about what is appropriate for each device because you’re probably not going to want the same kind of information on your smart watch as you want on your smart wallpaper.
This makes intuitive sense to me. For whatever reason, my Apple Watch feels very different than my phone or my computer.
But Rosie also used browsers on Smart TVs to illustrate a point that just because a technology makes something possible, doesn’t mean that we should design experiences around it:
Suddenly, we all got Smart TVs. And it was great. We got Internet on our TVs. But actually browsing the web on the TV was a really clunky experience. It was not very pleasant. And no one really wanted to do it especially when you’ve got a mobile or tablet next to you that makes it easier.
Again, what Rosie states here is the popular consensus that people won’t browse the web on their TVs. Steve Jobs famously said that:
[People] don’t want a computer on their TV. They have computers. They go to their wide-screen TVs for entertainment. Not to have another computer. This is a hard one for people in the computer industry to understand, but it’s really easy for consumers to understand. They get it.
I’ve spent the last three years researching the web on TVs wondering about exactly this question. And it isn’t clear cut to me whether or not people will browse the web on TV screens in the future.
The consensus on mobile context has changed
The popular consensus used to be that no one wanted to browse the web on their phones. If you dared to build something targeting phones, you were advised to keep the mobile context in mind:
- People are on the go.
- Devices are slow and clunky.
- Phones are hard to type on.
Even after the iPhone came out, people argued that yes, the iPhone had a good browser, but that we’ve had browsers on phones for years and no one used them. People simply don’t want to browse the web on their phones.
This seems laughable now, but it was the accepted consensus at the time.
The reason we had a debate about mobile context when responsive design first arrived is that responsive design challenged the widely accepted idea that people wanted to do different things on their phones.
What do we know about how people will use new devices?
Rosie shared solid research on smart wallpaper. The BBC tested their theories and watched people interact with prototypes. Those observations led to their conclusions about where that future technology would go.
But I found myself wondering what researchers in the early 2000s found when they observed people using their phones. Might they have said something like this:
Suddenly, we all got smart [phones]. And it was great. We got Internet on our [phones]. But actually browsing the web on the [phone] was a really clunky experience. It was not very pleasant. And no one really wanted to do it especially when you’ve got a [computer] next to you that makes it easier.
I’m not picking on Rosie here. I do this myself. My gut instinct is to agree with her in many ways.
I find myself thinking, “Well, clearly watches are a different thing.” I have similar thoughts about screens in odd places like refrigerators. They don’t feel like they’re part of the same device context continuum.
But how do I know? I used to think that phones were a different thing.
Predicting future behavior is difficult
Because I was on the wrong side of the mobile context debate, I’ve become leery of our ability to predict future behavior.
In 1994, the New York Times published an article asking “Has the Internet been overhyped?” People were looking at usage of AOL and Prodigy and trying to understand what the impact of the web was going to be.
On a smaller scale, we’re often told that a web site doesn’t need to worry about mobile because the analytics show that people don’t use that site on mobile.
To which I counter, “Why would anyone use your site on mobile if it isn’t designed to work well on those devices? How do you know what people will do after it has been designed to work well on small screens?”
I now have a fundamental rule: we cannot predict future behavior from a current experience that sucks.
Where does the device context continuum end?
All of which brings me to back to my original questions: Where does the common device context continuum start and end? And more importantly, how do we know?
I’m uncomfortable with the current consensus. Particularly when it comes to TVs. It feels like Justice Potter Stewart saying “I know it when I see it.” It makes me wonder if we’re in the feature phone era of TVs.
I want some guidelines to help me know when something is going to be part of the device context continuum and when it won’t. Some questions we can ask ourselves about devices that will help us see if our current views on a particular device’s context are real or simply artifacts of a current, flawed implementation.
And I wonder if what I wish for is simply impossible in the present.
Thanks to Tyler Sticka for the illustrations.