
Responsive Design for Apps — Part 1

By Jason Grigsby

Published on January 3rd, 2013

A few months ago I was tasked with finding a good solution for a client who wanted to move to responsive design, but had a web app that they needed to support as well. The question they asked is one that I’ve seen others argue about in the past: does responsive design make sense for apps?

In this case, the client was using a JavaScript UI framework from Telerik. This framework has been deprecated in favor of Kendo UI.

Kendo UI follows a pattern that I see in many of these frameworks: there are desktop widgets and mobile widgets. The same is true of nearly every framework that I could find.

The pattern seems clear at least from a framework perspective. There are widgets for desktop web development and widgets for mobile. They look and behave differently.

But does that make sense? Is that where things will head in the long run?

Why aren’t these frameworks responsive?

I know within these frameworks, that portions are responsive. jQuery Mobile, for example, is designed to be responsive. But there is still a separation between the mobile/tablet UIs of jQuery Mobile and the desktop widgets of jQuery UI. Why is that the case?

When it comes to Kendo UI, we know what the developers are thinking. They wrote about their thoughts on responsive design back in September:

Responsive design is great for creating mobile sites, but it’s not as useful for creating mobile apps. Responsive design can help you hide, show, resize, and reformat UI for screens of varying size, but it is less suited for presenting completely different modes of usability on different form factors.

On the face, it seems like a reasonable enough argument. But what stuck in my craw was something else they wrote about why they have a separation of tablet UI versus phone UI:

It’s not that we’re technically incapable, but adapting a phone UI to a tablet UI is not so dissimilar from trying to automatically adapt desktop UI to a phone. They are fundamentally different platforms with different usability considerations, and something that makes sense on phones may or may not belong on tablets.

These two sentences struck me as odd and difficult to reconcile with what we’ve seen happen in the market over the last few months.

Where is the line between tablets and phones?

So what separates a phone from a tablet? I’m going to assume they’re not talking about the fact that one can make phone calls and the other cannot.

In truth, I don’t know why the Kendo UI folks think the platforms are different.1 What I can say is that when the iPad came out, the lesson was clear that simply increasing the size of an iPhone app to make it fit on a 10″ screen was not sufficient. And since then, I’ve heard a lot of people talk about how tablets are different than phones.

So let’s assume for a second that the major difference is screen real estate because so many other things are similar (touch screens, operating system, etc. are all consistent between phones and tablets). Let’s take a closer look at screen real estate:2

| Model | Type | Size (W) | Size (H) | Display | Resolution (W × H) | Viewport (W × H) |
| --- | --- | --- | --- | --- | --- | --- |
| Samsung Galaxy Note 2 | Phone | 3.17” | 5.95” | 5.5” | 720 × 1280 | 360 × 640 |
| Motorola RAZR HD | Phone | 2.67” | 5.19” | 4.7” | 720 × 1280 | 360 × 519 |
| Motorola Atrix HD | Phone | 2.75” | 5.26” | 4.5” | 720 × 1280 | 540 × 812 |
| HTC Droid DNA | Phone | 2.78” | 5.5” | 5” | 1080 × 1920 | 360 × 640 |
| Nexus 7 | Tablet | 4.72” | 7.81” | 7” | 800 × 1280 | 600 × 793 |
| Kindle Fire | Tablet | 4.72” | 7.44” | 7” | 600 × 1024 | 600 × 819 |
| Kindle Fire HD | Tablet | 5.4” | 7.6” | 7” | 800 × 1280 | 533 × 731 |

In the table above, I’ve picked some of the larger phones and smaller tablets. It seems that phones stop at around 5 inch displays and tablets pick up at 7 inches. So there is a gap in physical size between the two device classes—even if that gap is getting smaller over time.

But for web developers, the screen resolution—and more specifically the viewport size—make a bigger difference than the physical size. And when it comes to viewport size, the differences between tablets and phones are less clear.

Quick, without looking at the table above, identify which of the following viewport measurements belongs to a phone and which belongs to a tablet:

  1. 640 px
  2. 600 px
  3. 519 px
  4. 640 px
  5. 622 px
  6. 533 px
  7. 812 px

Can’t tell the difference, can you?

Those of you paying close attention will notice that I used the widths of tablets and the heights of phones. Now before you accuse me of cheating, do you really think no one uses their phone in landscape orientation?

HTC Pro 7 phone with slide out keyboard that can only be used in landscape orientation

(Quiz answers: Phones—1,3,4,7; Tablets—2,5,6)
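The overlap is easy to demonstrate in code. Here is a minimal sketch (the classifier and its 600px cutoff are hypothetical, chosen only to illustrate the problem) showing how a width-only rule mislabels devices from the table above:

```javascript
// Hypothetical, naive rule: treat any viewport 600px or wider as a "tablet".
// Both example values come from the table above.
function naiveClassify(viewportWidth) {
  return viewportWidth >= 600 ? 'tablet' : 'phone';
}

// Motorola Atrix HD (a phone) in landscape: 812px-wide viewport.
naiveClassify(812); // "tablet" (wrong: it's a phone)

// Kindle Fire HD (a tablet) in portrait: 533px-wide viewport.
naiveClassify(533); // "phone" (wrong: it's a tablet)
```

Whatever cutoff you pick, some real device on the other side of the line will cross it.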

Is tablet UI different than phone UI?

So is it true that phones and tablets “are fundamentally different platforms with different usability considerations, and something that makes sense on phones may or may not belong on tablets”?

Fundamentally different? With the exception of the ability to make a call, the data suggests that they aren’t so different and that the differences between phones and tablets are narrowing all the time.

Ok, but desktop UI is definitely different, right?

Surely desktop is different, right? Every JavaScript framework that I looked at makes a distinction between mobile and desktop. Even Apple has different SDKs for iOS and OS X.

This seems to be the common opinion particularly when it comes to building intranet or enterprise applications that “will only be used on desktop”.

Earlier this year, my friend Boris Smus made a compelling argument for why web apps need more than media queries:

To create a good user experience, you need to know who your users are and what devices they are using. If you build a user interface for a desktop user with a mouse and a keyboard and give it to a smartphone user, your interface will be a frustration because it’s designed for another screen size, and another input modality.

I highly recommend reading Boris’s article because he does a good job of describing a method for classifying devices into form factors not based on whether they are sold as a “phone” or a “desktop computer”, but instead based on the characteristics of the device.

Boris offers a middle ground between responsive design and separate code bases for every device:

Here’s a compromise: classify devices into categories, and design the best possible experience for each category. What categories you choose depend on your product and target user. Here’s a sample classification that nicely spans popular web-capable devices that exist today.

  • small screens + touch (mostly phones)
  • large screens + touch (mostly tablets)
  • large screens + keyboard/mouse (mostly desktops/laptops)
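Boris’s three categories can be sketched as a small, feature-based classifier. This is my own illustrative code, not from his article, and the 600px breakpoint is an assumed cutoff; note that small screens without touch fall through to the last bucket here, which a real app would want to handle more deliberately:

```javascript
// Classify by device characteristics (viewport width + touch support),
// not by marketing labels like "phone" or "desktop".
// The 600px breakpoint is an assumption for illustration.
function classifyFormFactor(viewportWidth, hasTouch) {
  if (hasTouch && viewportWidth < 600) return 'small-touch';  // mostly phones
  if (hasTouch) return 'large-touch';                         // mostly tablets
  return 'large-keyboard-mouse';                              // mostly desktops/laptops
}

// In a browser, the inputs would come from feature detection, e.g.:
// classifyFormFactor(window.innerWidth, 'ontouchstart' in window);
```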

This made a lot of sense to me at the time. Designing a complex application that is finely tuned to keep someone in the flow while working with a keyboard and mouse is different than designing something tuned to touch.

That is, it made a lot of sense to me until…

Microsoft Surface

Windows 8 obliterates the distinctions between tablets and desktop

Jeremy Keith once wrote that web design was always full of a bunch of unknowns including things like screen size, but that web designers had:

this unspoken agreement to pretend that we had a certain size. And that size changed over the years. For a while, we all sort of tacitly agreed that 640 by 480 was the right size, and then later that changed to 800 by 600, and 1024; we seem to have settled on this 960 pixel as being this like, default. It’s still unknown. We still don’t know the size of the browser; it’s just like this consensual hallucination that we’ve all agreed to participate in: “Let’s assume the browser has a browser width of at least 960 pixels.”

I’ve always loved this idea of a consensual hallucination that we all agreed to participate in. I still remember nervously presenting work to clients or bosses and hoping that they had their browser set to the default font. I crossed my fingers and hoped they also believed in the hallucination that people didn’t adjust the font size in their browser.

I bring this up because we have a similar consensual hallucination about the distinctions between tablets and desktop. At the same event that Steve Jobs introduced the iPad, he also unveiled the iPad Keyboard Dock.

Steve Jobs introducing the iPad Keyboard Dock

How many people have you seen carrying an iPad case with a built-in keyboard? I was in a meeting recently where nearly everyone in the room had iPads with keyboards.

Yet, in our collective hallucination, we believe large screen and touch equals tablet whereas large screen plus keyboard and mouse equals desktop.

Jeremy points out that mobile didn’t create more unknowns for web designers. It just forced us to recognize the unknowns that were already there.

The same is true of Windows 8. Our illusion that there are sharp differences between tablets and desktop is destroyed by a whole slew of devices that can change between tablets and desktop machines on a whim.

Ultrabooks by HP, Dell, Asus and Toshiba that can switch from tablets to laptops.

And it’s not just these laptop/tablet hybrids that break our preconceived notions of what desktop means. Many manufacturers are also producing Windows 8 desktop computers that feature touch screens, or touch screen monitors that can be added to any Windows 8 machine.

Toshiba All-in-One LX830 with touch screen

And just to confuse things further, Ubuntu is introducing phones that act like desktop computers when docked.

Touchscreens on desktop are just a fad, right?

I’ve seen a fair amount of criticism of Microsoft for incorporating touchscreens into their laptops and desktop devices. John Gruber wrote:

A touch-optimized UI makes no more sense for a non-touch desktop than a desktop UI makes for a tablet. Apple has it right: a touch UI for touch devices, a pointer UI for pointer (trackpad, mouse) devices. Windows 8 strikes me as driven by dogma — “one Windows, everywhere”.

Windows 8 may not have all of the pieces worked out yet, but I believe they are on the right track and Apple will follow suit at some point. Intel has published detailed findings of their usability studies of touch on notebook computers:

Users who were presented with a way to interact with their computers via touch, keyboard, and mouse found it an extremely natural and fluid way of working. One user described it using the Italian word simpatico, literally, that her computer was in tune with her and sympathetic to her demands.

They go on to dispute the conventional wisdom that people get fatigued using touchscreens. The people who I’ve talked to who have Windows 8 touchscreens talk about how natural it is and how quickly they stopped thinking about it and just flow from using their trackpad or mouse to touching the screen. They say simply, “Don’t knock it until you try it.”

Chart: device interaction by input type (touchscreen 77%, mouse 12%, keyboard 8%, trackpad 3%).

And really, why wouldn’t this be true? We’ve seen children who are confused that computer screens don’t respond to touch the same way the other screens around them do. We laugh at ourselves when we absentmindedly reach out and touch our screen expecting it to do something.

We call these touch interfaces natural user interfaces. Is it any surprise then that we would want these interfaces on our desktop machines as well?

Touch as a baseline experience

Luke Wroblewski has neatly summarized our current device landscape in a single graphic:

Chart showing devices covering the full range of sizes and touch nearly everywhere

We have devices at nearly every screen size and we have multiple types of input at each resolution. The small gaps that exist are either things that seem inevitable (high-dpi on large screens) or so small as to be inconsequential (does it matter that we don’t have six-inch displays?).

Luke produced an under-appreciated series of videos for Intel that takes a closer look at what it means to design applications for this new class of touch laptops. In that series, he looks at what it would mean to design targets for mouse versus touch:

Keyboard/mouse first: 5mm minimum control size, 7mm recommended size, 10mm common control size. Touch first: 7mm minimum control size, 10mm recommended size, 10mm+ for common controls, errors, and edges.

In the video, Luke makes the point that an app designed with targets appropriate for a keyboard/mouse UI will be difficult for someone to interact with using touch. But the opposite isn’t the case. If targets are designed for touch, they will by necessity be larger and will be easier for all users to hit due to Fitts’s Law.
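To translate those millimeter guidelines into something usable in a stylesheet, you can convert via the CSS reference density of 96px per inch. A quick sketch (my own arithmetic, not from Luke’s video; real physical pixel density varies by device, so treat these values as starting points):

```javascript
// Convert a physical size in millimeters to CSS pixels, assuming the CSS
// reference density of 96px per inch (1 inch = 25.4mm).
function mmToCssPx(mm) {
  return Math.round((mm / 25.4) * 96);
}

mmToCssPx(7);  // about 26px: minimum touch target size
mmToCssPx(10); // about 38px: recommended touch target size
```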

Josh Clark came to a similar realization recently and argues that Every Desktop Design Has To Go Finger-Friendly.

To me, it seems like nearly every lesson we’ve learned about designing for mobile and tablets—whether it is designing larger targets for touch, using larger typefaces for readability, or simplifying interfaces—is something that desktop applications can benefit from. And this is why you see both Apple and Microsoft incorporating the lessons learned from mobile into their desktop operating systems.

Perhaps in the past desktop UI was something completely different from mobile UI, but that is no longer the case.

Lines in the sand do not persist

Any attempt to draw a line around a particular device class has as much permanence as a literal line in the sand. Pause for a moment and the line blurs. Look away and it will be gone.

Let’s take the absolute best case scenario. You’re building a web app for internal users for whom you get to specify what computer is purchased and used. You can specify the browser, the monitor size, keyboard, etc.

How long do you think that hardware will be able to be found? Three years from now when a computer dies and has to be replaced, what are the chances that the new monitor will be a touchscreen?

By making a decision to design solely for a “desktop UI”, you are creating technical debt and limiting the longevity of the app you’re building. You’re designing to a collective hallucination. You don’t have to have a crystal ball to see where things are headed.

And once you start accepting the reality that the lines inside form factors are as blurry as the lines between them, then responsiveness becomes a necessity.

I’m not saying there isn’t usefulness in device detection or looking for ways to enhance the experience for specific form factors and inputs. This isn’t a declaration that everything must be built with a single HTML document across all user agents.

What I am saying is that even in scenarios where you’re fine-tuning your app code and UI as much as possible for a form factor, the differences in screen size and the various forms of input within that form factor alone will be enough to require you to design in a responsive fashion.

And once you start designing in a responsive fashion for a given UI widget, you’re going to find that you have to think about what happens to that widget across a wide range of screen resolutions.

To do otherwise means ignoring the reality of our device landscape and requires you to buy into a collective hallucination.

This is your last chance. After this, there is no turning back. You take the blue pill: the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill: you stay in Wonderland, and I show you how deep the rabbit hole goes.

Continue reading in Part 2


  1. I worry that it will seem like I’m picking on the Kendo UI folks, but that isn’t what I set out to do. It just so happens that our client was using their tools and my investigation started with their framework. Their article on responsive design spurred a ton of thought which I’ve captured here.

    FWIW, I think the tools they provide are pretty damn cool, and we’re all still grappling with what Windows 8 means for us.
  2. Finding viewport sizes for all of these devices proved difficult. If there is an error, please let me know.

Comments

Jeremy Keith said:

Great stuff, Jason!

Although I do have a question regarding the introduction…

What’s a web app?

Replies to Jeremy Keith

Jason Grigsby (Article Author) replied:

On further reflection, the question, which we both know is unanswerable, does deserve some further clarification in this context.

The problem that I was given was a client whose developers were building enterprise applications and had come to rely on web application frameworks like Telerik and Kendo UI to do their development. If they need to build something like an editable grid widget, using an existing widget instead of reinventing the wheel made a lot of sense.

In the move to a responsive design, they didn’t want to give up the utility of those frameworks. So I spent a lot of time looking for other JavaScript UI frameworks that were responsive. I found none. I asked around and no one could point me to one.

Faced with that reality, I started to dig into why no one seemed to be making responsive versions. I stumbled onto the Kendo UI post and decided to try on that logic for awhile. I don’t spend my days building UI frameworks. Maybe they’re right?

It was around this same time that I started really exploring Windows 8. The two lines of research converged into this post.

The second part of my research was to look at the “web app” that the client had created using desktop UI widgets from Telerik and try to envision how that would work responsively. That’s part II.

So the question of how I define a web app is less material than the fact that all of these frameworks seem to be looking at mobile/tablet as being distinct from desktop.

Hans Verhaegen said:

All screens are equal in a way. You can talk, touch, turn, hold, mouse and keyboard them all. How long before we also see square screens? And round ones? And resizable or foldable screens! And screens that are just a layer of dense floating light adapting itself to the context where it ‘hangs’! Prepare to respond to that! Great article! Looking forward to Part II.

aetherpoint said:

Excellent article. I was sitting in a Microsoft Store with a Surface tablet and wondering what it will mean for responsive design’s navigation patterns. Sure, Windows 8 has an incredibly small market share, but tablets are on the rise. Additionally, is there really any reason not to make a desktop screen touchable?

The Global Moxie article shows the thumb positions on the left and right side of the screen. A swipe/touch side flyout or push navigation like Facebook’s seems like it would be the most accessible on both traditional mobile devices and hybrid “desklets”.

Itamar Rogel said:

Great article! Very insightful. Thank you.

What’s funny is that, as you point out – this is just a snapshot in time. I wonder how reading back this article in 5 years would feel like 🙂

Keith Richnafsky said:

Excellent article at a point in time where I am proposing moving our internal enterprise application to a responsive design. I get odd looks when I point out what the future could be and how we need to design for where devices will be in a few years. The odd looks may be from us being a Java shop where several folks think what Oracle says is Gold, i.e. JSF, ADF etc.. (yuck!). However, why stick to those standard rules when you can leverage techniques to get your code base to work on multiple devices?

Your article gives me another reference point to show others why thinking of future devices is far more important than designing just for one device. Thanks!

Alex Debkaliuk said:

Thanks for a great insight. A lot of things you mention resonate with what I have in mind for a while now.

What the industry is lacking now, I guess, is clear practical instructions on implementing responsive design in medium to small projects.

For now only the big boys like Google and Facebook can afford true responsive cross platform experience. And they too aren’t all good at it. Mostly because many features are missing when you switch from platform to platform. Add consistency issues too…

Jon Arne Sæterås said:

A great article, Jason.
Have been reflecting a bit on similar issues lately… This is where I am at:
viewport- and screen size is not what’s REALLY important.
The important thing is the relationship between 1) Where is the device? (in your hand, in your handS, placed on your desk/lap, mounted on the wall, etc.)
2) What’s the distance from your eyes and interaction means (hands) to your screen? (too far away so that you need some kind of “remote control” (mouse, keyboard, voice, etc.)?)
This will tell something about how big buttons, images, text etc. need to be. Regardless of the actual viewport or screen size and interaction model.
How to group devices based on this? No idea. Maybe I’m drunk… Cheers! (You’ll probably take us into the rabbit hole in pt 2, anyway)

Jason Lander said:

Awesome post Jason. I love your comparisons to the viewports of tablets and phones, as well as the comments about designing targets for touch across all devices. We’ve been experimenting with the Windows 8 tablet. Although there are many things I like about it, I still personally struggle with the interaction mix of trackpad, keyboard and touch. It feels awkward to me. I find myself just using the trackpad, even though I could touch. As soon as I get in keyboard mode for some reason I don’t think about touching my screen. It will be interesting to see if the continued use of these devices changes that.

Replies to Jason Lander

Jason Grigsby (Article Author) replied:

Thanks Jason. I recommend reading the Intel usability study. One of the more interesting things they found was that “users reported that touch transforms the notebook from a work device into a play device”. If you’re just experimenting with it, but not adopting it, I think it would be hard to get into the mode of thinking of the laptop as a play device.

There is also a certain sense of inevitability about it to me. You’ve got all of these screens being created for phones, tablets, and Windows 8 devices that incorporate touch. At some point, the price differential between traditional screens and touch screens is going to be so small that even Apple will be tempted.

It is hard for me to look out 10 years from now and believe that the screens on our desktop will not support touch, but everything else seemingly will. It is much more likely that touch creeps into desktop paradigms in some fashion.

Replies to Jason Grigsby
Lou Powell replied:

Jason, great article. Lots of great conversation. I agree that touchscreens will become more prevalent on desktops and laptops. From a Fitts’s Law perspective there is no comparison in the speed and ergonomics involved in directing a mouse vs. pointing with my finger. Jobs said that Apple’s testing showed that a vertical touchscreen was a bad UX. However, I believe we have now been conditioned to the touch interface. For information gathering (surfing), whether on desktop, laptop or mobile, I think touch will be preferred.

Brett Jankord said:

Great post Jason, I think you touch on a lot of great topics here.

The collective hallucination we have chosen to take part in, knowingly or unknowingly, does seem to be the main culprit behind the thinking that we can group the web into set categories like “mobile”, “tablet”, “desktop”.

I believe the desire to classify or categorize devices/browsers is natural. Lines in the sand give us a sense of control in a landscape that is constantly changing. Though what criteria we draw those lines in the sand will determine if they are future friendly or not.

I would encourage developers to group devices/browsers based on feature support instead of grouping them based on the type of device they are: mobile, tablet, desktop.

I’ve gone down the path of grouping devices by type: mobile, tablet, desktop and it’s a fruitless exercise. Or as Jeremy might tell you, an endless arms race.

Following the mindset of progressive enhancement and classifying devices based on capabilities, such as CSS3 support, or advanced JavaScript support will provide better solutions that are also easier to maintain.

In regards to touch optimization, I believe there are two factors. Designing for touch and developing for touch. I would say it is safe and probably best to always design for touch. Large touch targets do not have any negative effect on non-touch devices.

As for developing for touch, starting with non-touch interactions and building touch interactions on top will help to make sure you site works on a wide range of devices, even touch-devices that don’t support touch events. There has been some recent discussion about how Modernizr handles touch that is worth checking out.

I see the most difficult issue not being waking other developers from this collective hallucination, but rather our employers/clients.

Todd said:

Good post, Jason. Thanks for referencing my Kendo UI blog post. 🙂

Frankly, I think you make some good points regarding the increasingly blurred lines between phone, tablet, and desktop. As more desktops become touch-enabled, this blur will likely continue, especially for certain kinds of apps.

That said, the reality we observe today is that developers are building distinct apps for phones, tablets, and desktop. Good or bad, these are the realities of today’s requirements.

Why separate phone and tablet apps? Because iOS and Android do. Regardless of the device similarities, iOS and Android prescribe specific UX guidance for their phone and tablet form factors. With Kendo UI, we aid developers in building apps that follow these distinct guidelines.

In order for those lines in the sand to go away, Apple and Google (and even Microsoft with Win8 vs. WinPhone) must not distinguish between phone and tablet. It may happen eventually, but not soon in our opinion.

I think you must also be careful with how broadly you define “responsive design.” We agree that apps should be built with layouts that can respond to varying screen sizes/input methods. But there is a big difference between using RD to hide/show/reflow an app UI and trying to use RD to introduce/remove usability paradigms unique to various form factors (for example, split views on tablets). The Kendo UI tools aim to allow the latter. We think the best implementations use RD + Kendo UI widgets.

Hope that clarifies a bit. Looking forward to Part II. Perhaps I will expand on this feedback on the Kendo UI blogs soon, too. 🙂

-Todd

Replies to Todd

Jason Grigsby (Article Author) replied:

Thanks Todd. I enjoyed your post as well. And like I said in the footnote, I didn’t intend this post as criticism. Your post was one of several things (Windows 8, Stack Overflow threads, etc) that I was ruminating on as I was thinking through the issues. You just happened to be the most articulate on the subject and one of the first people I read. 🙂

You wrote:

“Why separate phone and tablet apps? Because iOS and Android do. Regardless of the device similarities, iOS and Android prescribe specific UX guidance for their phone and tablet form factors. With Kendo UI, we aid developers in building apps that follow these distinct guidelines.”

Do you think that the apps that developers are building are intended to go into app stores?

One of the things that I’ve spoken about at conferences, but have yet to write about at any length is my belief that with a handful of exceptions, we have yet to develop web-specific UI conventions for mobile and tablet form factors. Most of the UI conventions that we have ape either iOS or Android. And if we’re being honest, they ape iOS.

That isn’t the case on the desktop web to the same degree. Yes, we might find our web UIs inspired by native UIs. But if someone said they were building a word processing web app for desktop, we would immediately be able to visualize how that web app would differ from a native word processor.

When it comes to mobile, our design language for web is limited. When we visualize a mobile web app, we visualize native ui components.

I don’t think that will always be the case. But it seems true now.

I think you must also be careful with how broadly you define “responsive design.” We agree that apps should be built with layouts that can respond to varying screen sizes/input methods. But there is a big difference between using RD to hide/show/reflow an app UI and trying to use RD to introduce/remove usability paradigms unique to various form factors (for example, split views on tablets). The Kendo UI tools aim to allow the latter. We think the best implementations use RD + Kendo UI widgets.

I understand where you’re coming from when you say that I should be careful not to expand the definition of responsive design too broadly. Ethan has in the past resisted calling designs that don’t include the three main components—media queries, fluid grids, and flexible images—responsive designs.

But that’s not where I’m coming from. If I look at an app being built for the web of today and what the interaction model for that app is going to be, I’m going to start from the assumption that the app will need to handle a wide range of screen resolutions. That I honestly can’t know anything about what device might hit it.

From that point, I’ll start to figure out what the end points for the design might look like. How should the app work on small screens? On large screens?

After that, maybe we code a simplified version in HTML and CSS so we can resize the browser and figure out where things look awkward. Where do we need to make changes to the layout or add features?

At that point then, we can start figuring out the best way to implement that design in production code. Maybe that means multiple code bases. Maybe we can use progressive enhancement to pull in pieces. Maybe we use RESS like Luke Wroblewski talks about.

What I’m describing is what Stephanie Rieger wrote about so eloquently earlier this year:

choosing responsiveness, as a characteristic shouldn’t necessarily define the wider implementation approach

One of the things Ethan did best in his book on responsive design was to describe the perspective, a philosophy, of what it means to embrace the web as a fluid medium and design to that constraint. I fully recognize that in some cases building things completely as one document isn’t possible. But even in those scenarios, responsiveness as a characteristic and a philosophy is critical.

Todd said:

I think we’re in violent agreement. 🙂

I agree that the mobile *web* is still up for grabs. That said, research shows most mobile users still rely far more on *apps* today for things they could also do using a browser. By most measurements, 80%+ of mobile time is spent in apps. So while mobile web is important, we see more time and emphasis on apps.

To answer your question, yes, I think many of the mobile apps we see people building today are designed for app stores. Or perhaps more accurately, they are designed to be installed on the device and thus are striving for “native” app UX conventions.

I also see this trend spilling over to the desktop, with platforms like OS X and Windows 8 ushering in the “app store era” for desktop users. As that happens, I think we’ll see some web apps return to the client.

Why? What problem did the web originally solve for apps? Distribution.

The beauty of the web for apps is that it eliminates the challenges of installing/upgrading client software. With app stores everywhere, that benefit can now be had with client apps.

Of course, there are still cross-platform concerns with client apps. And that’s where we see HTML5/JavaScript (and Kendo UI) increasingly playing a larger role in building “native” apps with a code base that can easily be ported across platforms. (Perhaps this is the root of the debate confusion: in our world view, HTML/JS are no longer the languages of the web…they are cross-platform languages used for the web and native apps. No browser required.)

But look at us. We’ve probably got 3 blog posts in the comments now. 🙂

The most important conclusions at this point are:

1. Responsive design is definitely important for developers to embrace as widely varying form factors proliferate.
2. Responsive design is not necessarily the “silver bullet” for building “native” app experiences for different form factors.
3. (For the record) Kendo UI tries to enable developers to pick the path that works best for their web app (Kendo UI Web) or mobile app (Kendo UI Mobile). We’re simply here to save time and make building apps with HTML and JavaScript (installed or delivered via the browser) easier to do.

Replies to Todd

Jason Grigsby (Article Author) replied:

I suspect we are in agreement. 🙂

The challenge for our client, and others like them, is that building for the app store makes no sense for them. They have come to rely on JavaScript UI frameworks to make their development easier, but find that the current frameworks are either focused on UI conventions that map to native mobile and tablet apps, or based on assumptions from when we designed for the desktop only.

There are a ton of applications for which the app store makes no sense. Intranet applications are a prime example of this. These apps will make sense as responsive apps.

I suspect there will be a growing market for responsive app tools. I’m looking forward to seeing what you guys build to enable those apps. 🙂

geoff said:

Phone, tablet, laptop, desktop. Screen sizes and resolutions. Touch versus trackpad versus mouse and keyboard. It will all seem rather quaint in a decade or so when we’re all using AR glasses and voice/gestural/direct neural interfaces.

Steve said:

I liked the point you made about keeping touch in mind when designing for the laptop. Intel did one study, but the way the experimenter presented tasks, the type of tasks, etc. could have produced flawed results. Apple has said they’ve done a lot of studies and that touch on a vertical surface doesn’t work. I like the idea of touch as an option, but it is much more exerting than using the mouse, and in my opinion people will not accept it as a substitute. The mouse is a pretty elegant pointing device, and the pen on a tablet is as well. When screens fold down for everyone and people can draw on them like they do at Disney, then Photoshop users may move on, but I think the mouse is here for quite a while.

Dustin K said:

The line in the sand is CONTEXT of use, not technical specs.

Apps should not be designed in a responsive manner across all screen sizes because the context of use is different. Phone and tablet sizes are blurred, but the context of use is more focused. People purchase different devices depending on the contexts and tasks they want to use them for (e.g., sitting at an office desk, navigating roads in a car, lounging in a chair, or hiking a trail).

When designing an app, if you are creating a single responsive design that works across all devices, then you are not taking into consideration the context of use. The result may be functional, but it is also likely to be very unusable in certain contexts. More likely, there are certain contexts that you care less about, or not at all. While technical specs will change over time, the context of use will not.

If the “App” is thought to be used across all devices and context doesn’t matter, then you have a website masquerading as an app. At that point, defer to responsive website designs.

Brett Jankord said:

@Dustin K. – I’m sorry, but the “context” argument is BS. It’s part of the collective hallucination.

You can’t tell what “context” a user is in by their device. You can’t tell if I want to read just snippets of news or full stories when I come to your site on my phone. There’s a chance a phone is all I have to access your site, and I want the full experience I’ve come to know.

You can ASSUME that I want just a watered down version when I’m on my phone, but we all know what happens when we assume…

Samo said:

I’m sorry, but this reads like a dream scenario that some developers would want to be in. Magical frameworks that adapt to every platform you want your app to run on and somehow still provide a halfway decent experience? The same kind of revolution that Java promised with “write once, run anywhere” on the desktop, right? How did that turn out?

The fact of the matter is that designing interfaces means you define the restrictions you have to work with and find solutions within their boundaries. That means cutting functionality, changing what and how much data is presented, and changing how your user can interact with it.

Assuming the “context” is always as simple as “reading snippets or whole articles” just shows that the thinking itself here is too simplistic. If reading articles is what you think applications are about then, sure, a responsive design is pretty damn easy to make for everything from a phone to a desktop computer.

I am sure that there are plenty of cases where responsive design can cover all devices (like an article reader). But trying to come up with an _automated_ (responsive) solution that works on screen sizes from 3.5″ to 27″? At best—and that’s assuming someone will manage to make a framework that handles this well—you’ll end up with a subpar experience compared to “native” (in terms of having their own UI) apps that can make full use of the environment they are running in.

There is no magic pill and I don’t think there ever will be. Developers will either have to make a lot of compromises if they want the same app to run on all devices using one framework, or bite the bitter pill and develop multiple apps. Choosing to develop your apps as web apps using JavaScript frameworks and then complaining that there is no magic system that would make your apps work well across multiple platforms without much effort is naïve. I’ve only known “theoretical” people talk about it, too; not a single person I would consider a serious developer thinks a one-for-all solution is feasible or can be expected anytime soon. And it’s not for lack of trying!

Even _if_ you somehow managed to architect and develop a framework that would cover all those bases—do people demanding that have any idea how much effort that is? And someone will just do that for free for you to use? Perhaps a JavaScript engineer with no experience in system and interface framework development?

Don’t hold your breath. 🙂

Replies to Samo

Tessa Thornton replied:

I totally agree with you here. I often wonder if responsive design evangelists have actually developed a fully-featured complex web *app* responsively. I have, and it’s sometimes impossible, always a nightmare, and will cost exorbitant amounts of time and money in bug fixes and testing.

I really don’t see how the surface poses that much of a problem, as long as you’re keeping it in mind when you’re designing the ‘large screen’ experience, and check to make sure your targets are finger-friendly.

I’ve been experimenting with a variant of Boris Smus’ approach (a rails app that serves different views to different devices) and it seems like a good compromise to me.

Jason Grigsby (Article Author) replied:

@samo wrote:

Magical frameworks that adapt to every platform you want your app to run on and somehow still provide a halfway decent experience?

Choosing to develop your apps as web apps using JavaScript frameworks and then complaining that there is no magic system that would make your apps work well across multiple platforms without much effort is naïve.

Even _if_ you somehow managed to architect and develop a framework that would cover all those bases—do people demanding that have any idea how much effort that is? And someone will just do that for free for you to use?

FWIW, not a single thing you argued against is something that I wrote in the original piece. I never said it would be easy. I didn’t say that you could write once run everywhere. There was no “complaining that there is no magic system”. In fact, there was no complaining. And I certainly don’t expect anyone to solve this for me for free.

You’re reading a lot into what I’ve written that isn’t there.

Francesco Belladonna said:

I like the article a lot; still, I don’t think it is correct. Here is the point where I think you are wrong:

“Now before you accuse me of cheating, do you really think no one uses their phone in landscape orientation?”:

Yes, there are landscape phones, but most devices are touch and are used vertically; just think about the iPhone, which covers 50%+ market share in the USA and Europe and is used vertically. I don’t know the numbers for Android devices, but most of them are also touch and used vertically.
And yes, at least for me and the people I’ve met, using your phone horizontally is something you don’t want. Not only this: using it with two fingers is something you would like to avoid when possible. I don’t even like that I have to use two fingers to unlock my screen (one to push the button, and then I can use it).

Sure, you can make a responsive design that works on everything, but it will not be, at least for now, as comfortable as something designed for that device. So this schema is the right path to understanding devices, but it’s definitely incomplete:

– small screens + touch (mostly phones)
– large screens + touch (mostly tablets)
– large screens + keyboard/mouse (mostly desktops/laptops)

Instead, consider triples of (“screen type”, “input”, “rating (1 to 5, where 5 is best) for that screen type + input combination”):

– small vertical screen + touch (thumb), 5 stars = you can reach only about 3/5 of the screen from its lowest part
– small vertical screen + touch (index finger), 3 stars = you can reach the whole screen, but you need two hands
– small vertical screen + touch (two thumbs, thumb + index finger, etc.), 2 stars = you can do almost anything
– small horizontal screen + touch, 1 star (at least in my experience) = the phone is “heavy” for one hand held horizontally, not as comfortable as vertically
– large horizontal screen + touch, 4 stars if sitting, 3 (or maybe 2?) if standing = sitting, you have two hands available; standing, only one, and that’s a huge difference
– large horizontal screen + keyboard/mouse, 5 stars = we have used it for a long time and I actually feel comfortable with it
– large horizontal screen + keyboard/mouse/touch, unknown = this is a different device, and that’s the important part you are missing

It’s important to notice that where the rating is 5, you also need less focus to use it; it becomes more intuitive.

This could go deeper, but I don’t have enough time to write everything. Anyway, we can summarize it a bit by removing the “fingers” part and making some assumptions which are usually almost true:

1) small vertical screen + touch (thumb) = This is the “default” way to use your phone; it comes from “old” mobile phones that had a keyboard under the screen.
2) large horizontal screen + touch (one hand/two hands) = Usually you put the tablet on your leg/bed/table because it’s heavy.
3) large horizontal screen + keyboard + mouse = This is the computer (we can even have no mouse; in a company this may happen if you work on a non-graphical UI).
4) large horizontal screen + keyboard + mouse + touch = This is very strange; at least for now I consider “touch” a plugin. Working with keyboard + mouse is comfortable because you don’t move your arms a lot. With touch, if you move your arms too much (so you use touch a lot) you may feel tired. I don’t know, anyway, because I haven’t used it. Also, moving a lot from mouse to screen may “increase your input lag”; I mean it may take you more time to send some input. What if instead we used mouse + touch? That could be interesting; it’s like having one slower, focused hand and one fast hand.

As you can see, we have at least 3 “devices” with the 4th being a big ?. All of them have special features, for example:

1) You can reach only the lower part of the screen (maybe 3/5 of it); it can be used while standing/walking; you need big buttons here.
2) Usually used in a relaxed position: on the sofa/bed, I imagine, like paper.
3) A “working” position; you are very active.
4) Same as 3, but maybe you would like to use “touch” input like a blackboard, which is the most intuitive way, at least.

Sure, I got lost inside the comment because it’s really long, but I feel that responsiveness is interesting if the content is read-only. If you have to consider some user input, first you must understand how much input you’ll have: using a keyboard involves a lot of input, using only your thumb involves much less input.

Responsiveness between devices is something we want in order to reduce the UI design cost in the budget, but the truth is that every screen paired with a different input device should have its own UI design.

That being said, we need to make some compromises, but as I’ve stated, input is something we must consider if we aren’t building a read-only website; it is not a matter of “screen size” only.

Replies to Francesco Belladonna

Jason Grigsby (Article Author) replied:

Francesco Belladonna wrote:

Yes, there are landscape phones, but most devices are touch and are used vertically; just think about the iPhone, which covers 50%+ market share in the USA and Europe and is used vertically. I don’t know the numbers for Android devices, but most of them are also touch and used vertically.

The iPhone can be used in landscape mode. I’ve read and kept track of a lot of statistics and studies, and I haven’t seen any that measure the amount of time people spend in each orientation on devices that can rotate.

Do I agree that the majority of use is probably in portrait mode? Yes. How much is in landscape? 10%? 20%? 30%?

It’s important to notice that where the rating is 5, you also need less focus to use it; it becomes more intuitive.

I’m afraid I lost the thread in this section with the ratings, etc. Any chance you could write a blog post to extend your thinking here? I can tell you’ve got some interesting ideas there, but I can’t quite figure out how it all fits together.

If you have to consider some user input, first you must understand how much input you’ll have: using a keyboard involves a lot of input, using only your thumb involves much less input.

I think input often matters more than screen size. This was one of my big takeaways from the research I did on TV browsers. I plan on writing more about input at a later date.
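To make that concrete, here is a minimal, hypothetical sketch of the idea. The `classifyDevice` helper, its thresholds, and its category names are all my own illustration, not from any framework discussed here; the point is simply that pointer accuracy, not just viewport width, can drive which UI you present.

```javascript
// Hypothetical sketch: treat pointer accuracy as a first-class input
// signal alongside viewport width. Names and thresholds are illustrative.
function classifyDevice(widthPx, pointer) {
  // pointer is 'coarse' (finger) or 'fine' (mouse/trackpad/pen)
  if (widthPx < 600 && pointer === 'coarse') return 'small touch';
  if (pointer === 'coarse') return 'large touch';
  if (widthPx >= 600) return 'large screen + pointer';
  return 'small screen + pointer';
}

// In a browser that supports the CSS `pointer` media feature, the second
// argument could be derived like this (support varies, so treat it as a
// progressive enhancement):
//   const pointer = matchMedia('(pointer: coarse)').matches ? 'coarse' : 'fine';

console.log(classifyDevice(360, 'coarse')); // → small touch
console.log(classifyDevice(1024, 'fine')); // → large screen + pointer
```

Note how a 1024px-wide tablet and a 1024px-wide laptop land in different buckets here, which a width-only media query cannot distinguish.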

Nathan Cross said:

Excellent article! This stuff has been on my mind lately as I use my new Lenovo Yoga Windows 8 laptop/tablet hybrid alongside my HTC DNA phone with a 5″ screen. The section about blurring the lines in the sand really hit home. Screen size is no longer a meaningful measure of the type of device being used, and this will only become more true. As to the touch-screen laptop issue… I never thought I wanted a touch-screen laptop, but I am completely amazed at how quickly using it in conjunction with the keyboard has become second nature, and I cannot imagine being without it.

Thomas said:

Great article, which largely reflects my view; it’s the opposite of the jQuery Mobile folks’ view.
Regarding the classification, I want to point out this library:

https://github.com/n-fuse/pointeraccuracy.js

which exactly serves the purpose of deciding which “input modality” to choose.

George Vettath said:

I think there is going to be a divide between ‘on the go’ applications and ‘serious applications’, but responsive design will be here to stay for both given the large variations in devices we see.

‘On the go’ applications will be mostly for viewing with little inputting, and used on smaller devices (mobiles + tablets). Serious applications involve a lot of inputting, and will be designed for tablets, laptops and high-res smartphones used in landscape mode.

As enterprise application developers, we have to get the standards right for ‘serious applications’ with responsive design, and it’s going to be fun to see how that plays out.