About the author
Michael Hanson is a technologist and entrepreneur living in Silicon Valley. He is currently an entrepreneur in residence at Greylock Partners.

What's special about the web? (part two)


Yesterday, I wrote about what's special about the web. Today, I'm exploring what mobile and desktop operating systems do better - leading up to how the two platforms complement, and can mutually reinforce, each other.

So, what are the strengths of native application platforms? What makes them valuable and compelling?

  • Native applications are enhancers of your device - people who have been working with computers all their lives forget how magical this is. You have a gadget (laptop, phone, whatever) that you've already bought -- then you put this application on it, and suddenly it does something completely new. This can feel GREAT - and that emotional satisfaction is part of the experience.
  • Native applications are fully-installed, and more or less guaranteed to work, regardless of your network situation. Unless the app is completely dependent on a network service, of course. The manual-update cycle imposed by iOS is a pain and one that will probably be addressed in a future release.
  • Native applications use your device to its fullest. A good application quietly maximizes its use of your CPU, GPU, storage, network, camera, GPS, accelerometer... the list goes on. You feel clever for having a device that has all these capabilities, and clever for finding an application that makes elegant, useful, or fun use of them.
  • Native applications have access to a richer set of interactions with you. They can push notifications, present dialog boxes, run background services, read your contacts, and more. The why of this one is subtle -- there is no direct technical impediment to these APIs being exposed to the web, but there is a very practical reason why they are only available to native apps: the implementors of web browser runtimes haven't agreed on them, and haven't become comfortable with making them available.
    • This touches on an important point of difference between the web and native: native apps live, almost entirely, in a world of vendor-controlled distribution. This means, among other things, that vendors can be more comfortable with exposing APIs to apps, because they have more ways to be confident that the app hasn't been changed maliciously out from under them (see my earlier post about apps and signed code). When web apps are under the control of a vendor (see Chrome, Mozilla add-ons, Firefox OS) this problem tends to disappear.
  • Native applications live in a world of brand-, category- and keyword-based distribution. To find an app on the iTunes Store, Google Play, or Windows Marketplace, you either browse a category, look for a specific title or brand, or investigate a keyword that leads to your app. This is great if the user knows to look for your app - but it can be a real problem if the user has no idea that your app exists. Native apps are weak at content-based discovery -- there's no way to find a great medical-and-personal-health app by searching for "post-ACL-surgery recovery". And, outside of the work that Facebook is doing with App Center, native apps have weak social discovery.

And - as an aside - I'll note that all of these advantages apply to HTML5 apps that are installed in Firefox OS. Not an accident, that.

So, what do these differences mean, for those of us who are in the business of providing compelling user experiences and services?

1. You want an installed app, because it gives you consistent organic interactions and permission to engage with the user's device.

If you are installed on the user's device, you have permission to do great stuff. So do it. Earn that place on their home screen.

You have access to notification APIs - so when there is high quality new stuff, let them know. You have the in-app payment API - think about how you can make the user's life easier, and change your business model, with it. You know what time it is, where the user is, and what they did the last five times they ran your app. On some platforms you can read the ambient light sensor, detect facial proximity, orientation, and acceleration. You can download assets in the background, save whatever you want to local storage, save screenshots to the photo stream, sync data through iCloud or gDrive, register as a document-type handler... get creative.

Remember, too, that when the user touches your icon, you are being handed user intent. The user wants to interact with your app - they have a specific goal in mind, and probably a very short time in which they want to achieve it. Maximize your understanding of that user intent, and shorten the distance from intent-to-satisfaction.

2. You need a web app, because new users will encounter you there.

Do not assume that this means you need a feature-complete app that runs in every browser!

Instead, design your web app to address three groups of users:

  1. People on a device you support, who don't have your app installed yet. If they want to see content, show it to them, beautifully. Then you have a brief window in which to try to convince them to install.
  2. People on an unsupported device. If you don't care about them, let them down gently and let them get on with their lives. If you have content they'd like to see, present it beautifully in their browser.
  3. Current users on a device you support, who ended up on the web somehow. Help them figure out what just happened, and get them back to engagement with your app.

What this does not mean is putting a giant interstitial ad for your app in front of your web content! If the value of your application is largely in content (rather than device interactions), then there is a very, very good chance that the user came to your web site out of explicit desire to see the content. Putting anything in front of that content will only cause user rage.

Rather, make use of intention cues that are present in the web interaction to make the user happy, and then pitch your installed app. This doesn't have to be complicated - you can use the HTTP Referer header to tell whether the user just arrived from a search engine, for example. And if the user arrived from a search - satisfy the search first!
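This check doesn't have to be elaborate. Here's a minimal sketch in plain JavaScript - the search-engine hostnames are illustrative, not exhaustive, and in a real page you would pass in document.referrer:

```javascript
// Illustrative list of search-engine hosts; extend as needed.
var SEARCH_HOSTS = ['www.google.com', 'www.bing.com', 'duckduckgo.com'];

// Classify a referrer so the landing page can satisfy search intent first.
function referrerKind(referrer) {
  if (!referrer) return 'direct'; // typed URL, bookmark, or native app
  var host;
  try {
    host = new URL(referrer).hostname;
  } catch (e) {
    return 'unknown'; // malformed referrer
  }
  if (SEARCH_HOSTS.indexOf(host) !== -1) return 'search';
  return 'link'; // arrived from some other site
}
```

In a browser you'd call `referrerKind(document.referrer)` on page load and adjust the pitch (or suppress it) accordingly.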

If the user is landing on a user-generated content page, especially if they're landing from a social referral, make the author look great. That's why they shared this content - they want to amuse, impress, or affect the viewer. Make it happen. Then explain to the viewer that they can make an artifact that's just like this, too. Instagram executed this strategy brilliantly - but it is relevant across all forms of user-generated content.

If the user is arriving because of a directed engagement push - an email reminder, for example - remind them of the value of your installed application. If you have something new in your app, let them know right away. If their content has been popular, or triggered interactions, let them know about that right away too!

If your app is about an experience - a game, a camera, a contact list manager - use modern web features to show them how great it is. Embed a taste of your experience in the web, streamed directly into their browser, so they can try it out.

3. When users engage with your app - especially organically - you should generate artifacts that will drive engagement, and re-engagement

When the user interacts with your app, he or she is making something. It could be posts, reviews, and recommendations - or maybe just high scores and best times. But there is data there, which you should use to improve your experience, and help new users find you.

The single most obvious thing to do is to help users share their experiences in a way that leads new users to discover and install your app. This sounds simple, but it is amazing how few apps really get it right. Adam Nash's great posts on user acquisition are hugely helpful for understanding this - if you haven't read them already, you must: The Five Sources of Traffic, Viral Factor Basics, and Mobile Applications and the Mobile Web.

Remember, too, that you have an opportunity to re-engage the user, not just attract new users. Did they make it halfway through your content and then disappear? Post obsessively for a week and then fade away? You have a small window - maybe two interactions, maximum - to re-engage with them and try to win them back. If you can't win them back, maybe you can get them to tell you why. Timing is critical for this; you don't want to annoy them or remind them of why they hated your app. Do NOT hit them with an email or notification push at 9:00 in the morning on a Monday. If the user launches your app after a pause of weeks or months, you have a chance to re-explain your value, or let them know what's new.

4. Embedding the web into your app lets you keep the user engaged while seeing the world

Most social applications have this figured out by now: embedding a browser element into your application lets users enjoy sharing and curating the web, while keeping users engaged in the flow of the app. Before handing off control to the system browser or Safari, ask yourself if you can do a better job managing the user's web experience yourself. (Often, the answer is no - but in some specific cases, like a Twitter feed, you really can do it better). Stay alert to how the browser landscape is evolving - as read-it-later platforms evolve and system intents get better at handling web content, you may need to add features to keep up.

5. Embedding your app into the web gives you flexibility and reach

Give some thought to how your web application could be embedded or repurposed. Does it make sense for you to offer a content plugin for blog platforms? Are you serving good Open Graph metadata so you render well when linked on Facebook and Twitter? How do you look when shared on Pinterest? Should you have an API or a per-user RSS feed?

Make sure your web content looks good when viewed in the native apps from the big social networks, and that your engagement strategy works there, too.

Too often, analysis of the application landscape falls into a simple narrative of installed vs. web. The truth is that we will use both, and our products will need to understand them both to keep our users happy and engaged.

Tags: web mobile

What's special about the web? (part one)


The Verge's Paul Miller posted a thought-provoking article about Desktop 2.0 and the future of the networked operating system the other day. Go on and read it, I'll wait.

Back? Okay, cool. So, it got me thinking - what is the web, anyway? And what properties does it have, which a desktop operating system does not, that have made it successful in the past? And are some of those properties no longer relevant?

There's a potentially huge post here, but I just want to jot down some quick thoughts instead. What are the properties of the "web platform" that make it valuable and compelling? And how do those properties differ from those of mobile, and desktop, operating systems?

  • The web is a "universal platform": web content and web applications can be deployed onto any device. The web platform defeated earlier attempts at this, including Java, and subsumed others, like Flash. But, in order to achieve cross-platform agreement, the "universal platform" is limited to the features that a sometimes fractious implementer community can agree on. Examples of limitations of the web platform are legion - though the limitations are sometimes enforced to preserve one of the properties I describe below.
  • The web is on-demand, streamed functionality: Unlike the many prior attempts at on-demand software installation, "instant-on" web applications actually work. There is over a megabyte of JavaScript behind Gmail, for example. Immense amounts of effort have gone into making web applications responsive during "first run", and the system is very far from perfect. But the fact remains that you can visit a new web application and experience it instantly. The ubiquity of JavaScript is a big part of this - but I believe that it could have been another language. The important thing was that the implementors of the web agreed on the language, and that competition occurred to make the implementation of the language fast.
  • The web is a super-aggressively sandboxed execution environment: The security model of the web is "you can load any page on the web and it won't hurt you". Anything that violates this security model is considered a critical flaw, and triggers a "chem spill" level of security response. Maintaining this property has become a real challenge for browser designers, as the Web API has grown to include access to disk, camera, mouse pointer, and 3D hardware. The networking stack, inter-page interaction model, and JavaScript runtimes are all carefully interdependent - to a much greater degree than most developers understand - to preserve this property.
  • Every application, and most application state, is externally referenceable: The humble hyperlink, combining two tiny technologies - the URL and the Anchor tag - has created billions of dollars of wealth and generated petabytes of human effort. It is easy to underestimate how the flexible, protean technology of the URL has grown, morphed, adapted, and blossomed as the web has grown. No other platform has a completely portable and easily persisted mechanism to allow one application to refer to the deep internals of another -- and it is this technology that has allowed search engines, social recommendation systems, blogging networks, advertising platforms, and much more to spring into being.
  • Every application is also internally embeddable: Through a somewhat unexpectedly powerful combination of frames and inter-window JavaScript messaging, any web application can embed functionality from any other. Through the sandboxed sharing of user state data, this has powered the creation of amazing applications, which the original implementers of the browser hardly dreamed of.
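The frames-plus-messaging combination in that last point can be sketched in a few lines. This is a hypothetical example - widget.example.com stands in for any other web application, and the callback stands in for whatever the host page does with the data - but it shows the two essential moves: embed a frame from another origin, and accept messages only from that origin:

```javascript
// Embed another web application and listen for its messages.
// The widget origin here is hypothetical.
function embedWidget(doc, win, onMessage) {
  // Embed the other application in a frame.
  var frame = doc.createElement('iframe');
  frame.src = 'https://widget.example.com/embed.html';
  doc.body.appendChild(frame);

  // Accept messages only from the embedded application's origin.
  win.addEventListener('message', function (ev) {
    if (ev.origin !== 'https://widget.example.com') return;
    onMessage(ev.data);
  });
  return frame;
}
```

In a real page you would call `embedWidget(document, window, handler)`, and the embedded app would reply with `parent.postMessage(data, hostOrigin)`.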

Notice that I'm not really interested in the networking layer, the layout engine, the multi-tab interface, or all that. Those are all table stakes now -- every consumer computing platform has them or something like them, and they all have to be fast and good. A web engine that doesn't have those features will disappear pretty quickly. I'm not even jumping into the question of "who can create" -- while the techno-social implications of gatekeepers and deployment matter, a lot, I'm sticking to technical properties of the platform, for now.

Note, too, that I'm assuming that the web will outgrow its awkward "browser-centric" adolescence. There is nothing intrinsic to the web platform that requires a tabbed window with Bookmark and History menus; browser makers have been working towards eliminating those from "app mode" for years.

Tomorrow: How are these properties different from desktop and mobile operating systems? How is the web learning from these (especially from mobile)? And how can mobile learn from the web? (Read it here)

Tags: web mobile

Just show my movie, okay?


I was reading through the coverage of Google's decision to "postpone" the Nexus Q yesterday. The decision was hardly surprising -- the Q was late to the market, was priced MUCH higher than the competition (Apple TV, Roku, Boxee), with a much, much shorter feature list. It was clearly rushed for I/O, and the cancellation of its launch makes sense.

So no surprise there. But I was struck by how much of the commentary dinged Google for the lack of a phone-to-Nexus, or laptop-to-Nexus, streaming solution. That observation crystallized for me as:

Apple is NAILING it on seamless melding of local and wide-area networking.

It pains me to even have to mention LAN vs. WAN networking in a consumer video conversation. Users absolutely do not care - and, in fact, if they are forced to think about it, will have a negative reaction to the entire product category.

The Apple TV doesn't care whether you're streaming live, streaming something that was downloaded from iTunes, streaming from your Mac, AirPlaying from your iPad, or AirPlaying from your Mountain Lion-equipped laptop. It just shows your movie and plays your songs. The sheer amount of networking, rendering, and licensing effort to present the illusion of homogeneity is massive.

This shows Apple's DNA as a hardware maker. They are very comfortable with LAN-only solutions -- in fact, they've spent decades trying to create auto-configuring networks, largely in service of the "OMG where's my printer" use case. The degree to which the iOS team has worked with cloud content providers - both within Apple and externally - is a testament to how well the company has grown in the last decade.

To developers and entrepreneurs who cut their teeth on the web, LAN networking is a horrifying, desperately ignored afterthought. Dealing with local storage and peer-to-peer wireless video sharing feels like a huge, difficult, expensive waste of time. The painful truth is that, for many users, much of the time, that's right. But by tackling those problems, and solving them, Apple is creating a level of hardware ecosystem lock-in (and margin capture) that is stunning to behold. The switching costs associated with getting off Apple's LAN integration create a constant back-pressure towards repeat purchases.

A new beginning


It is with no small amount of excitement that I begin a new gig as Entrepreneur-in-Residence at Greylock Partners this week.

After three years of exciting and passionate work with the entire Mozilla community - and a number of great product and feature introductions - I found myself hungering again for the intensity of a small start-up environment. When the opportunity to work with Greylock's partners, analysts, and other entrepreneurs-in-residence came up, the decision was surprisingly easy to make.

As I turn the page on my Mozilla Labs tenure, I'd like to give a special shout out to a couple people. Ben Adida, Dan Mills, and Lloyd Hilaiel took the BrowserID technology vision and are turning it into a real, viable, distributed identity solution for the web. Pascal Finette is showing how open innovation can create products that work for real users, and create real value. And the entire team working on Firefox OS and the open web applications ecosystem is showing how distributed, open-source technology can work in the mobile future. Thank you all.

If you are seeking email contact, you can reach me as mhanson; as always, @gmail is my stable long-term mail host.

LIFDing the web


Locally Isolated Feature Domains for graceful browser feature rollout

How do we gracefully introduce new browser features that require cross-site data storage? We can use addons and extensions, of course, but those require per-browser development and require the user to take an explicit action to trust us with new powers.

It would be far better if we can simply introduce the feature into web content in a gradual way, using cross-browser technology, and add it natively to the browser when we're sure that the design is right.

Libraries intended to add forthcoming browser features have been around for years; the most common name for them is "polyfill" (after the UK name for the product Americans would call "spackle"). The word "shim" is also used in many cases. A typical polyfill, however, is stateless - it provides logic that is missing from the browser, but does not introduce new data persistence or communication features.

The tricky part of deploying a new feature with cross-browser technology is doing it in a way that mirrors the benefits of doing it natively:

  • All code running on the client
  • No dependencies on external services
  • Access to all the user's data from all web content, subject to the user's control
  • Protection from the introduction of untrusted remote code

In the last year or two, a technique that has all of these properties has emerged and been used in a number of projects. At Mozilla, we're using it for BrowserID and the Open Web Apps projects; Google is using it for Belay, and probably in other projects as well. The first use of it that I saw in the wild was in the xAuth project started by Meebo.

Here's how it works:

  • A developer wants to introduce a new browser API. Let's call it window.newFeature.
  • The developer implements the new feature in JavaScript, using HTML5-only APIs, and places it on a specific website; let's say it's newfeature.org.
  • The developer places the trusted parts of the feature in a JavaScript file served up from newfeature.org; for example https://newfeature.org/trusted.js. This code uses HTML5 local storage to store user data, under the newfeature.org origin.
  • The developer then creates another JavaScript file that is intended to be included by a website to add the new feature - for example https://newfeature.org/include.js.
  • The include script does a couple things:
    1. It checks to see if the new feature is already there, e.g. by looking for the presence of window.newFeature. If the feature is found, it stops immediately and does nothing.
    2. Otherwise, it opens a hidden IFRAME to a page on the trusted feature domain (which loads the trusted JS), and constructs a postMessage channel to it.
    3. It then defines the new method (window.newFeature) and defines the body of it to be a remote-method-invocation through the postMessage channel to the trusted IFRAME.
  • At runtime, messages are securely passed to the trusted JS, which runs JavaScript code in the newfeature.org domain, and has control over its own data access and APIs. Data can be safely passed between domains, under the control of the developer of the new feature.
  • If the new feature needs to open a window, it can create a popup window in its own domain and present user interface elements there. The popup window can communicate back to the hidden IFRAME since it is in the same domain.
  • Websites "opt-in" to the new feature by including the include.js file from the trusted domain; if they don't include the file, the system doesn't touch them at all.
  • If, someday, the feature is picked up by browsers and implemented natively, the feature domain fades from use.
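The include-script side of those steps can be sketched in a few dozen lines. Everything here uses the hypothetical names from the text (window.newFeature, newfeature.org), and the trusted frame is assumed to answer each message with a reply carrying the same id:

```javascript
// Minimal sketch of a LIFD include.js. window.newFeature and
// newfeature.org are the hypothetical examples from this post;
// trusted.html is assumed to load trusted.js and echo back an
// { id, result } message for each { id, args } request.
function installFeature(win) {
  if (win.newFeature) return; // feature already present (native or shimmed)

  // 1. Open a hidden IFRAME to the trusted feature domain.
  var frame = win.document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = 'https://newfeature.org/trusted.html';
  win.document.body.appendChild(frame);

  var nextId = 0;
  var pending = {}; // message id -> caller's callback

  // 2. Route replies from the trusted frame back to the right caller,
  //    accepting messages only from the feature domain.
  win.addEventListener('message', function (ev) {
    if (ev.origin !== 'https://newfeature.org') return;
    var cb = pending[ev.data.id];
    if (cb) {
      delete pending[ev.data.id];
      cb(ev.data.result);
    }
  });

  // 3. Define the new API as a remote method invocation over postMessage.
  win.newFeature = function (args, callback) {
    var id = nextId++;
    pending[id] = callback;
    frame.contentWindow.postMessage({ id: id, args: args },
                                    'https://newfeature.org');
  };
}
```

A real include script would also wait for the frame to signal readiness before sending messages, and probably time out callbacks; this sketch keeps only the skeleton of the technique.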

At the Internet Identity Workshop this week, we held a short workshop to try to come up with a name for this technique. The winner was LIFD: Locally-Isolated Feature Domain.

The technique is quite similar to the one used by Facebook Connect and other federated data authorization systems, but the goal is different; rather than using the embedded message channel to communicate with a server, we are using it to communicate with local storage. The "feature domain" exists only to create a firewalled sandbox in which the browser will store the user's data.

Threats and Downsides

This technique isn't perfect, of course. It falls short of the native code ideal in a couple significant ways.

  1. Performance: It adds latency to the host webpage, since additional network steps are included during page load. This can be mitigated in part by intelligent caching of the JS files.
  2. Hosting Security: The security of the user's data is entirely subject to how well the developer controls access to, and authenticates reads from, the trusted feature domain. The code should, of course, be served up over SSL. An attacker who can man-in-the-middle the feature domain, or tamper with the JS code hosted on it, could steal all of the user's data.

The best LIFD deployment should counter these problems by making sure the JavaScript that implements the feature is entirely static, and served securely from a fast CDN with long-lived caching.

Calling websites that require the highest security should serve the include.js file locally from their own domain, instead of trusting the feature domain to serve it.

The Content Security Policy proposal, which is moving from Mozilla to an official W3C standardization track very soon, includes ideas that, if adopted, could make this technique even safer. The sandbox attribute could be applied to the hidden IFRAME, and the feature domain could serve its page with a connect-src of 'none', to forbid all remote network access from the LIFD frame.
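Concretely, the include script could create its hidden frame along these lines. This is a sketch under assumptions: allow-same-origin is kept so the trusted page can reach its own local storage, allow-scripts so it can run at all, and the feature domain is assumed to additionally serve the page with a connect-src 'none' CSP header:

```javascript
// Sketch: create the hidden LIFD frame with the sandbox attribute.
// allow-same-origin lets the trusted page reach its own localStorage;
// allow-scripts lets it execute. Network lockdown comes from the CSP
// header (connect-src 'none') served by the feature domain itself.
function makeSandboxedFrame(doc, url) {
  var frame = doc.createElement('iframe');
  frame.setAttribute('sandbox', 'allow-scripts allow-same-origin');
  frame.style.display = 'none';
  frame.src = url;
  return frame;
}
```

Note that granting both allow-scripts and allow-same-origin weakens the sandbox considerably, which is exactly why the server-side connect-src restriction matters here.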
