It’s hard to believe it’s been almost three years since the publication of Ethan Marcotte’s seminal Responsive Web Design article on A List Apart. The ideas and techniques described therein blew our minds while forcing us to drastically reconsider our design processes. Even today, the struggle continues to determine which design deliverables make sense in a post-PSD era.
Responsive deliverables should look a lot like fully-functioning Twitter Bootstrap-style systems custom tailored for your clients’ needs. These living code samples are self-documenting style guides that extend to accommodate a client’s needs as well as the needs of the ever-evolving multi-device web.
The whole post is great, and it got me thinking… along with solid content strategy, design and engineering processes, what steps can we take to ensure our “tiny bootstraps” are comprehensive enough to remain relevant and useful long after launch?
Cue Jason with a cool idea: We could document patterns in existing frameworks. A list of what’s typically included might serve as a good starting point, something to measure our client deliverables against to make sure we haven’t overlooked anything obvious.
In which I willingly make a spreadsheet
I combed through a (non-exhaustive) collection of suitably broad or noteworthy links from Anna Debenham’s list of front end styleguides and pattern libraries, recording instances of observed patterns and adding new ones as necessary. I skipped over anything that seemed too idiosyncratic, and grouped elements of similar intent even if their description or implementation differed.
I found this to be a worthwhile exercise. It helped me wrap my head around the elastic scope of a “tiny bootstrap.”
I thought there’d be more overlap between frameworks than there is. I recorded over 160 distinct patterns, none of them ubiquitous. Some came pretty close, especially headings 2 through 4, typographic elements and pre-HTML5 form elements. No single framework included even half of the total recorded patterns (Bootstrap had the most).
Sometimes the most infrequent elements surprised me with how obvious they seemed in retrospect. For example, color guides and font stacks only occurred in a couple of places.
The thought of maintaining the document indefinitely makes me queasy, but I’ve already started referring to it frequently. I’d love to know if anyone finds it as interesting or useful as I have.
Simply matching the image breakpoints to the major breakpoints being used for the design.
While the first method is more efficient and will probably result in better image sizes, my suspicion is that defining “sensible jumps in file size” is so nebulous that most web developers are going to choose to do the second, easier option.
That is, unless we can find a formula to calculate what constitutes a sensible jump in file size, and that’s what got me thinking about performance budgets.
What is a performance budget?
I’m not sure how long the idea of a performance budget has been around, but I first became cognizant of the idea when Steve Souders talked about creating a culture of performance on the Breaking Development podcast.
So that’s the basic idea. Establish a performance budget and stick to it. If you add a new feature to the page and you go over budget, then you have three options, according to Steve (and transcribed by Tim):
Optimize an existing feature or asset on the page.
Remove an existing feature or asset from the page.
Don’t add the new feature or asset.
What is the performance budget for flexible images?
Let’s apply this idea of a performance budget to responsive design. In particular, let’s treat the idea of flexible images as a feature. Because flexible images are a feature, we need a budget for that feature.
And as long as we’re making up the rules, let’s establish a few more hypotheticals:
The page we’re working with has 10 images on it of varying formats and visual content.
We haven’t reached our performance budget yet so we don’t have to remove other features, but we still need to make sure that flexible images do not add too much to the page weight.
We’ve concluded that flexible images can add up to 200k to the page above what the size of the page would be if we provided fixed width images. We picked 200k because it is ~1 second at HSDPA (recent mobile) speeds. And well, 200k is a nice even number for this thought experiment.
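As a sanity check on that “~1 second” figure, here’s the arithmetic, assuming an illustrative 1.8 Mbps HSDPA throughput (my own stand-in number, not one from the budget itself):

```python
budget_kb = 200              # extra page weight we're willing to accept
assumed_hsdpa_mbps = 1.8     # illustrative "recent mobile" throughput
# Kilobytes -> kilobits, divided by kilobits per second.
seconds = (budget_kb * 8) / (assumed_hsdpa_mbps * 1000)
print(round(seconds, 2))
```

That lands just under a second, which is close enough for a thought experiment.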
Because this page has 10 images on it, each image has a 20k budget for flexible images.
One thing to keep in mind: 200k isn’t the cap for the file size of all ten images combined. Instead, it is the price we’re willing to pay for using flexible images instead of images that are perfectly sized for how they are used in the page.
For example, say you had a responsive web page with the following image on it:
That image is 500×333 pixels and 58K in file size.
Now imagine a visitor views that web page and, based on the size of their viewport, the image is displayed at 300×200 pixels, but the source image is still the same. The cost of using flexible images is the difference between the file size the image would have if saved and optimized at 300×200 and the file size of the 500×333 image that was actually downloaded.
In this case, I’ve taken that example image, resized it to 300×200, and saved it with the same compression level as the 500×333 image to see the file size cost of using that flexible image.
In this example, the visitor is downloading an extra 34k of image data because they are downloading a flexible image instead of downloading one that had been resized to the exact size being used in the page.
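In code form, the cost is just the delta between the two saved file sizes (the 24K figure below is derived from the 58K and 34K numbers above, not measured independently):

```python
downloaded_kb = 58   # the 500×333 image the visitor actually received
resized_kb = 24      # same image saved and optimized at 300×200 (derived: 58 - 34)
flexibility_cost_kb = downloaded_kb - resized_kb
print(flexibility_cost_kb)  # 34
```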
Translating the budget into breakpoints
Let’s go back to the page we want to optimize—the one with ten images on it and a total performance budget of 200k for flexible images. How do we translate that into image breakpoints?
Thinking back to the example above, the price for using flexible images is the difference between the size of the file that is downloaded and the size the file would have been if perfectly sized for its use in the page.
Our budget says that we can only download up to an extra 20K per image. Therefore, we need to make sure that we have a new image breakpoint every time the size of the image increases by 20K.
We now have a methodology for picking sensible jumps in image file size that is tied to user experience instead of picking them arbitrarily.
Finding the breakpoints
How would this translate into a heuristic that could be used to find the breakpoints? You would need the following:
What is the minimum size this image will be used at? (In our example, let’s say 320×213)
What is the file size between breakpoints? In other words, what is your per image budget? (20k for our example)
A high quality source file to use for resizing.
Optionally, the largest size the image will be used at. (990×660 for this example).
Once you have this information, the basic logic looks like this:
Take the source image and resize it to the smallest size the image will be used at.
Keep the file size of that image handy.
Start a series of tests that create new image files from the source that are gradually getting bigger.
Check each image created. If the difference between the file size of the new image and the image file size you stored is less than your budget, discard the new image.
When you find an image that hits your budget, save that image and replace the previous file size that you stored with the new file size.
Repeat steps 2 through 5 until you reach either the maximum resolution of the source image or the largest size the image will be used at.
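The steps above can be sketched in Python. Note that `fake_jpeg_size` is a stand-in model (file size roughly proportional to pixel count), not a real encoder; an actual tool would resize and save the source image at each candidate width:

```python
def find_breakpoints(encoded_size, min_w, max_w, budget_bytes, aspect=2/3):
    """Walk widths from the smallest use to the largest, keeping a new
    breakpoint each time the encoded file grows by budget_bytes."""
    breakpoints = [min_w]
    last_size = encoded_size(min_w, round(min_w * aspect))
    for w in range(min_w + 1, max_w + 1):
        size = encoded_size(w, round(w * aspect))
        if size - last_size >= budget_bytes:
            breakpoints.append(w)
            last_size = size
    return breakpoints

def fake_jpeg_size(w, h):
    # Stand-in encoder: ~0.55 bits per pixel, roughly what a
    # mid-quality JPEG of a busy scene might need. A real tool
    # would resize and save the actual source image instead.
    return int(w * h * 0.55 / 8)

bps = find_breakpoints(fake_jpeg_size, 320, 990, 20 * 1024)
print(bps)
```

Run against the example’s 320-to-990 range and a 20K budget, this yields an increasing list of widths, one per budgeted jump in file size; a busier image (more bytes per pixel) would produce more breakpoints.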
I’m tickled to say that my co-founder John Keith got excited by this idea and built a rough prototype of how this might work.
Using the script that John built, I created a demo page containing ten images. The source images were 990 by 660 pixels and all but one of them were saved as JPEGs at 50% quality. The one exception is a PNG8 image with an optimized color palette.
I tried to pick a variety of images so we can see how each image might have different breakpoints using our budget. Let’s take a look at three sample images.
Times Square — 8 Image Breakpoints
This image has a lot of visual diversity. The variations in colors and textures mean that JPEG’s lossy compression cannot do as much without damaging the image quality.
Because of this, there are eight image breakpoints—set at 20k intervals—between the smallest size of the image (320×213) and the largest size of the image (990×660).
This is a simple PNG8 file. At its largest size (990×660), it is only 13K. Because of this, it fits into our 20K budget without any modifications.
On a recent consulting project with a company that has over 800,000 images on its site, we identified a class of images—some icons, little badges, etc.—where the size of the image on desktop retina was not much different than the size used on mobile either because the image resolution doesn’t vary much or because the image compresses well. For those images, we decided to deliver the same size image to all screen sizes.
Take a look at the other images on the sample page we created. See how the number of breakpoints varies even though all the images start with the same resolution end points.
This diversity exists despite the fact that with the exception of the Microsoft logo, all of the images start at the same size with the same compression settings. On a real site, we would see even more diversity with varying levels of JPEG quality, PNG8 with gradients going horizontally instead of vertically, and PNG32 images in the mix.
But what intrigues me about this approach to setting breakpoints is that we wouldn’t be setting one-size-fits-all image breakpoints. Instead, we would make decisions about where the breakpoints should exist based on our goals for user experience—the performance budget—and the unique characteristics of the image and how well it can be compressed.
What conclusions can we draw from this thought experiment?
The point of this thought experiment wasn’t to provide a complete methodology for setting responsive image breakpoints. I started by simply asking whether we might be able to use performance budgets to calculate what constitutes a sensible jump in image sizes.
But the outcome of this exercise has led me to some interesting conclusions and sparked more questions about responsive images:
Images do contain clues that can tell us where the breakpoints should be. Last year I wrote that “the problem is there is nothing intrinsic to the image that would guide you in deciding where you should switch from one size of the image to another.” But this experiment shows that images do have intrinsic information—how well the image compresses, what type of compression is being used, the range in size between the smallest and largest use of an image—that can be used to decide when you should switch from one source file to another.
We can set a performance budget for flexible images. There’s no reason why we can’t treat the use of flexible images like any other feature that we add to a page and define a performance budget for its use. In fact, setting a performance budget for flexible images could be the key to making informed decisions about where image breakpoints should be set.
Can we set a performance budget for flexible images across an entire site? For our sample page, setting the performance budget to 200K for the whole page worked well. But in the real world, we often don’t know how many images are going to be on a given page. Similarly, we may not know what pages a given image is going to be added to. It seems like it would be useful to be able to say that for any given flexible image on the site, we’ve established a 20K budget. It would be less precise than a per page limit, but it may be the only practical way to translate this thought experiment into a production environment.
An image and its breakpoints could be stored as a bundle. The outcome of this approach to image breakpoints is that the breakpoints could be specific to the image no matter what context the image is used in. You could store the calculated breakpoints with the image and whenever the image is displayed on a page, no matter what size the image is used at within the page, the same breakpoints could be used.
An image and breakpoint bundle would be difficult to use with the proposed picture and srcset standards. Image breakpoints calculated this way depend on knowing the size of the element in the page. Both picture and srcset make the switching of image sources contingent on the size of the viewport instead of the element. This means you’d have to find a way to translate your image breakpoints to viewport sizes, which would undermine a lot of the utility of storing breakpoints with the image.
Crazy? Or crazy like a fox?
Phew, you made it to the end. So what do you think? Is there merit in using performance budgets for flexible images to determine responsive image breakpoints?
Thanks to John for creating the sample script and for being my partner in crime on this crazy idea, and to Lyza for being an amazing photographer and publishing her photos under Creative Commons.
A few months ago I was tasked with finding a good solution for a client who wanted to move to responsive design, but had a web app that they needed to support as well. The question they asked is one that I’ve seen others argue about in the past: does responsive design make sense for apps?
Kendo UI follows a pattern that I see for many of these frameworks: there are desktop widgets and mobile widgets. The same is true of nearly every framework that I could find:
The pattern seems clear at least from a framework perspective. There are widgets for desktop web development and widgets for mobile. They look and behave differently.
But does that make sense? Is that where things will head in the long run?
Why aren’t these frameworks responsive?
I know that within these frameworks, portions are responsive. jQuery Mobile, for example, is designed to be responsive. But there is still a separation between the mobile/tablet UIs of jQuery Mobile and the desktop widgets of jQuery UI. Why is that the case?
Responsive design is great for creating mobile sites, but it’s not as useful for creating mobile apps. Responsive design can help you hide, show, resize, and reformat UI for screens of varying size, but it is less suited for presenting completely different modes of usability on different form factors.
On the face of it, that seems like a reasonable enough argument. But what stuck in my craw was something else they wrote about why they have a separation of tablet UI versus phone UI:
It’s not that we’re technically incapable, but adapting a phone UI to a tablet UI is not so dissimilar from trying to automatically adapt desktop UI to a phone. They are fundamentally different platforms with different usability considerations, and something that makes sense on phones may or may not belong on tablets.
These two sentences struck me as odd and difficult to reconcile with what we’ve seen happen in the market over the last few months.
Where is the line between tablets and phones?
So what separates a phone from a tablet? I’m going to assume they’re not talking about the fact that one can make phone calls and the other cannot.
In truth, I don’t know why the Kendo UI folks think the platforms are different.1 What I can say is that when the iPad came out, the lesson was clear that simply increasing the size of an iPhone app to make it fit on a 10″ screen was not sufficient. And since then, I’ve heard a lot of people talk about how tablets are different than phones.
So let’s assume for a second that the major difference is screen real estate because so many other things are similar (touch screens, operating system, etc. are all consistent between phones and tablets). Let’s take a closer look at screen real estate:2
Samsung Galaxy Note 2
Motorola RAZR HD
Motorola Atrix HD
HTC Droid DNA
Kindle Fire HD
In the table above, I’ve picked some of the larger phones and smaller tablets. It seems that phones stop at around 5 inch displays and tablets pick up at 7 inches. So there is a gap in physical size between the two device classes—even if that gap is getting smaller over time.
But for web developers, the screen resolution—and more specifically the viewport size—make a bigger difference than the physical size. And when it comes to viewport size, the differences between tablets and phones are less clear.
Quick, without looking at the table above, identify which of the following viewport measurements belongs to a phone and which belongs to a tablet:
Can’t tell the difference, can you?
Those of you paying close attention will notice that I used the widths of tablets and the heights of phones. Now before you accuse me of cheating, do you really think no one uses their phone in landscape orientation?
(Quiz answers: Phones—1,3,4,7; Tablets—2,5,6)
Is tablet UI different than phone UI?
So is it true that phones and tablets “are fundamentally different platforms with different usability considerations, and something that makes sense on phones may or may not belong on tablets”?
Fundamentally different? With the exception of the ability to make a call, the data suggests that they aren’t so different and that the differences between phones and tablets are narrowing all the time.
Ok, but desktop UI is definitely different, right?
This seems to be the common opinion particularly when it comes to building intranet or enterprise applications that “will only be used on desktop”.
To create a good user experience, you need to know who your users are and what devices they are using. If you build a user interface for a desktop user with a mouse and a keyboard and give it to a smartphone user, your interface will be a frustration because it’s designed for another screen size, and another input modality.
I highly recommend reading Boris’s article because he does a good job of describing a method for classifying devices into form factors not based on whether they are sold as a “phone” or a “desktop computer”, but instead based on the characteristics of the device.
Boris offers a middle ground between responsive design and separate code bases for every device:
Here’s a compromise: classify devices into categories, and design the best possible experience for each category. What categories you choose depend on your product and target user. Here’s a sample classification that nicely spans popular web-capable devices that exist today.
small screens + touch (mostly phones)
large screens + touch (mostly tablets)
large screens + keyboard/mouse (mostly desktops/laptops)
This made a lot of sense to me at the time. Designing a complex application that is finely tuned to keep someone in the flow while working with a keyboard and mouse is different than designing something tuned to touch.
That is, it made a lot of sense to me until…
Windows 8 obliterates the distinctions between tablets and desktop
this unspoken agreement to pretend that we had a certain size. And that size changed over the years. For a while, we all sort of tacitly agreed that 640 by 480 was the right size, and then later that changed to 800 by 600, and then 1024; we seem to have settled on 960 pixels as being, like, the default. It’s still unknown. We still don’t know the size of the browser; it’s just like this consensual hallucination that we’ve all agreed to participate in: “Let’s assume the browser has a browser width of at least 960 pixels.”
I’ve always loved this idea of a consensual hallucination that we all agreed to participate in. I still remember nervously presenting work to clients or bosses and hoping that they had their browser set to the default font. I crossed my fingers and hoped they also believed in the hallucination that people didn’t adjust the font size in their browser.
I bring this up because we have a similar consensual hallucination about the distinctions between tablets and desktop. At the same event that Steve Jobs introduced the iPad, he also unveiled the iPad Keyboard Dock.
How many people have you seen carrying an iPad case with a built-in keyboard? I was in a meeting recently where nearly everyone in the room had iPads with keyboards.
Yet, in our collective hallucination, we believe large screen and touch equals tablet whereas large screen plus keyboard and mouse equals desktop.
Jeremy points out that mobile didn’t create more unknowns for web designers. It just forced us to recognize the unknowns that were already there.
The same is true of Windows 8. Our illusion that there are sharp differences between tablets and desktop is destroyed by a whole slew of devices that can change between tablets and desktop machines on a whim.
And it’s not just these laptop/tablet hybrids that break our preconceived notions of what desktop means. Many manufacturers are also producing Windows 8 desktop computers that feature touch screens, or touchscreen monitors that can be added to any Windows 8 machine.
I’ve seen a fair amount of criticism of Microsoft for incorporating touchscreens into their laptops and desktop devices. John Gruber wrote:
A touch-optimized UI makes no more sense for a non-touch desktop than a desktop UI makes for a tablet. Apple has it right: a touch UI for touch devices, a pointer UI for pointer (trackpad, mouse) devices. Windows 8 strikes me as driven by dogma — “one Windows, everywhere”.
Users who were presented with a way to interact with their computers via touch, keyboard, and mouse found it an extremely natural and fluid way of working. One user described it using the Italian word simpatico: literally, that her computer was in tune with her and sympathetic to her demands.
They go on to dispute the conventional wisdom that people get fatigued using touchscreens. The people who I’ve talked to who have Windows 8 touchscreens talk about how natural it is and how quickly they stopped thinking about it and just flow from using their trackpad or mouse to touching the screen. They say simply, “Don’t knock it until you try it.”
And really, why wouldn’t this be true? We’ve seen children who are confused that computer screens don’t respond to touch the same way the other screens around them do. We laugh at ourselves when we absentmindedly reach out and touch our screen expecting it to do something.
We call these touch interfaces natural user interfaces. Is it any surprise then that we would want these interfaces on our desktop machines as well?
Touch as a baseline experience
Luke Wroblewski has neatly summarized our current device landscape in a single graphic:
We have devices at nearly every screen size and we have multiple types of input at each resolution. The small gaps that exist are either things that seem inevitable (high-dpi on large screens) or so small as to be inconsequential (does it matter that we don’t have six-inch displays?).
In the video, Luke makes the point that an app designed with targets appropriate for a keyboard/mouse UI will be difficult for someone to interact with using touch. But the opposite isn’t the case. If targets are designed for touch, they will by necessity be larger and will be easier for all users to hit, thanks to Fitts’s Law.
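Fitts’s Law models pointing time as T = a + b·log2(D/W + 1), growing with the distance D to a target and shrinking with its width W. A quick illustration (the a and b constants are arbitrary placeholders, and the 44-pixel width is borrowed from Apple’s familiar touch-target guidance):

```python
import math

def fitts_time_ms(distance, width, a=100, b=150):
    # Shannon formulation of Fitts's Law; a and b are
    # illustrative device constants, not measured values.
    return a + b * math.log2(distance / width + 1)

small_target = fitts_time_ms(500, 20)   # mouse-sized target
touch_target = fitts_time_ms(500, 44)   # touch-sized target
print(round(small_target), round(touch_target))
```

Whatever the constants, the larger target always comes out faster to hit, for touch and pointer users alike.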
To me, it seems like nearly every lesson we’ve learned about designing for mobile and tablets—whether it is designing larger targets for touch, using larger typefaces for readability, or simplifying interfaces—is something that desktop applications can benefit from. And this is why you see both Apple and Microsoft incorporating the lessons learned from mobile into their desktop operating systems.
Perhaps in the past desktop UI was something completely different from mobile UI, but that is no longer the case.
Lines in the sand do not persist
Any attempt to draw a line around a particular device class has as much permanence as a literal line in the sand. Pause for a moment and the line blurs. Look away and it will be gone.
Let’s take the absolute best case scenario. You’re building a web app for internal users for whom you get to specify what computer is purchased and used. You can specify the browser, the monitor size, keyboard, etc.
How long do you think that hardware will remain available? Three years from now, when a computer dies and has to be replaced, what are the chances that the new monitor will be a touchscreen?
By making a decision to design solely for a “desktop UI”, you are creating technical debt and limiting the longevity of the app you’re building. You’re designing to a collective hallucination. You don’t have to have a crystal ball to see where things are headed.
And once you start accepting the reality that the lines inside form factors are as blurry as the lines between them, then responsiveness becomes a necessity.
I’m not saying there isn’t usefulness in device detection or in looking for ways to enhance the experience for specific form factors and inputs. This isn’t a declaration that everything must be built with a single HTML document across all user agents.
What I am saying is that even in scenarios where you’re fine-tuning your app code and UI as much as possible for a form factor, the differences in screen size and the various forms of input within that form factor alone will be enough to require you to design in a responsive fashion.
And once you start designing in a responsive fashion for a given UI widget, you’re going to find that you have to think about what happens to that widget across a wide range of screen resolutions.
To do otherwise means ignoring the reality of our device landscape and requires you to buy into a collective hallucination.
This is your last chance. After this, there is no turning back. You take the blue pill: the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill: you stay in Wonderland, and I show you how deep the rabbit hole goes.
I worry that it will seem like I’m picking on the Kendo UI folks, but that isn’t what I set out to do. It just so happens that our client was using their tools and my investigation started with their framework. Their article on responsive design spurred a ton of thought which I’ve captured here.
FWIW, I think the tools they provide are pretty damn cool, and we’re all still grappling with what Windows 8 means for us.
Finding viewport sizes for all of these devices proved difficult. If there is an error, please let me know.
I doubt Apple cares how much HTML Facebook uses in its app. And if it does, it is being hypocritical. All of the following Apple-made iOS apps use embedded webviews in some capacity:
And that’s not counting the fact that both the iAd and iBooks formats are built on HTML5. Why aren’t people clamoring for Apple to create a “native” version of iBooks or the App Store?
Apple has a lot invested in HTML5. How quickly we forget that WebKit—the rendering engine used by Google Chrome, Android Browser, Samsung’s Dolfin Browser, Blackberry Browser, and numerous others I’m forgetting—was partially created by Apple because of the need to embed web content in a native application.1
On the mobile front, Apple has pushed the browser more quickly and further than other companies. The advances still aren’t fast enough for my tastes, and I hope competitors catch up and turn the heat on Apple’s browser effort. Regardless, you’d be hard pressed to make a case that Apple isn’t a major contributor to HTML5.
I suspect Apple does what we do at Cloud Four when we look at apps. Apple likely takes a look at the features of the app and tries to determine if the feature is better as native, web or some combination thereof. When I took a look at the traffic from Apple’s own apps, I certainly saw indications of that thought process. I found:
Screens that were fully native and received binary plist files
Screens that were native and received JSON data
Screens that were embedded webviews with the full HTML document, associated CSS and JS downloaded
Screens that were mostly native, but received JSON with HTML encapsulated inside of it for display in certain areas of the screen
So do I think Apple cares if Facebook uses HTML5 in their app? No. I think Apple cares that users have a great experience using Facebook on its platforms. And it is clear that the old Facebook app wasn’t a good experience, and they needed to improve it.
Does that mean that Facebook had to go “fully native” to create a great experience in the eyes of Apple? Obviously not, given that Apple is using a mixture of web and native in its own apps.
There’s a common saying in startups: Ideas don’t matter. Execution does.
For apps, a similar statement can be made: Languages don’t matter. The experience does.
Focus on a great experience, use whatever tools you need to create that experience, and if you succeed, no one will care how the app is built.
When Steve Jobs announced Safari at Macworld Expo 2003, he also announced the WebCore framework and how it had been included in Sherlock. Did the need for a browser come first? Or did the need to embed web pages in apps like Sherlock and later iTunes come first? Probably the browser, but it is clear Apple thought early on about how the browser rendering engine could be reused inside apps.
I recently did some research into the HTML that Facebook was using in the old version of its iOS app. More on that in a future post. In the meantime, I thought I’d share how to inspect what an iOS app is sending over the network using Charles Proxy.
Before I begin, I must disclose a few things:
I am not an expert at using a proxy server nor even at how to use Charles Proxy. I am likely using Charles Proxy in naive ways. I have resisted writing this for some time because I know I am an amateurish hack.
Charles Proxy runs on Windows, Mac OS, and Linux. I have only used the Mac version and will be talking about it.
I am a HUGE fan of Charles Proxy. It gives me tremendous joy to be able to see into activity that I otherwise would have no insight into. I feel like the app gives me super powers. I am unabashedly biased about this product.
Ok, let’s start looking at an app, shall we?
Setting up Charles Proxy for iOS
First things first, you have to buy Charles Proxy. You can get a free trial to begin with. But the full price for the app is $50 with discounts if multiple licenses are purchased.
You will be surprised to learn that I think it is well worth the $50 and one of the best purchases I’ve made. ;-)
After you install Charles and have it running, getting your iOS device to recognize Charles is fairly easy. First, make sure your iOS device is on the same network as the machine you’re running Charles on.
Go into your network settings on the iOS device and select the wifi network.
At the bottom of the network settings is the HTTP Proxy settings. They are likely off. Select Manual. Enter the IP address of the machine running Charles for the Server and 8888 for the Port.
If you open Safari (or anything else that makes a network connection), you should receive a prompt in Charles asking if it is ok to let the device connect to your proxy server.
After you approve your device, all future network traffic will be routed through Charles. You can record the traffic by hitting the record button in Charles.
If you see a lot of noise coming from your Mac, turn off the Mac OS X Proxy by unchecking it under the Proxy menu or by pressing Shift-Command-P.
Enabling SSL Proxying
Now you can launch the app you want to inspect and see what it downloads. I’ll use the profile page in the new Facebook app as an example.
When you first load the profile page in Charles Proxy, things look promising. You can see a series of requests, how long they took to download, and their size.
Unfortunately, if you try to see what the server sent in response to the request, or even any details on the request itself, you’ll find that you can’t see much or what you do see will be gibberish.
That’s because Facebook is using SSL to encrypt most of the communication between the app and the server. To see what is going on inside that communication, you need to trick your phone into thinking Charles Proxy’s SSL Certificate is valid for the domains you want to inspect.
Setting this up is a fairly easy two step process.
Step 1: Install the Charles Proxy SSL Certificate on your iOS device
Step 2: Configure Charles Proxy to support SSL proxies for the domains you need
In the menu under Proxy, select Proxy Settings. Select the SSL tab. Enable SSL Proxying if it isn’t enabled.
You can now add as many domains as you need to complete your task. Wildcards work. For examining the Facebook app, I added the following domains:
Basically, I watched the queries, noted which domains had requests I couldn’t see because they were under SSL, and then added those domains to the list.
Looking at the requests again
Now that we’ve got SSL proxying in place, let’s take another look at those Facebook requests for the profile page and see if we can see more of what is going on.
Ah… that’s more like it. We can now see the full path requested. If we tap on the response section, we can also see what the content of the response was. In this case, the profile page is still an HTML page even in the new “native” Facebook app.
Now that you have the requests and responses, you can do some really cool stuff. Select all of the relevant requests to look at the total payload delivered.
The overview tab will tell you how many requests were made, how many errors occurred, etc.
The Summary tab will let you know the total size of the assets downloaded and the amount of time it took. You can easily sort the requests to find the largest files or the files that took the most time to download.
Exporting your results
Charles allows you to save your session in a Charles Session file or export it as a HAR (HTTP Archive Specification) file or other common formats like CSV which can be shared with others.
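Since HAR files are just JSON, you can also total things up yourself outside of Charles. A minimal sketch (the HAR below is a hand-built fragment with hypothetical URLs, far sparser than a real Charles export):

```python
import json

# A pared-down HAR document; real exports contain many more
# fields per entry (timings, headers, cache info, etc.).
har_text = json.dumps({
    "log": {"entries": [
        {"request": {"url": "https://example.com/profile"},
         "response": {"bodySize": 41200}, "time": 310},
        {"request": {"url": "https://example.com/app.css"},
         "response": {"bodySize": 18050}, "time": 95},
    ]}
})

entries = json.loads(har_text)["log"]["entries"]
total_bytes = sum(e["response"]["bodySize"] for e in entries)
total_ms = sum(e["time"] for e in entries)
print(len(entries), total_bytes, total_ms)
```

Handy when you want to drop payload totals into a report without screenshotting the Summary tab.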
Remember to turn off the HTTP Proxy setting
After you’re done testing, you’ll need to turn off the HTTP Proxy settings on your phone. I know it seems obvious, but there have been multiple times where I’ve thought a site was down or my network wasn’t working only to realize I forgot to turn HTTP Proxy off.
X-ray vision for your little black rectangle with rounded corners
That’s it. Easy huh?
You can use Charles Proxy to examine mobile web sites or any network requests so long as the device you are testing on supports HTTP Proxies. It’s a great tool to have in your mobile toolbox.