New Tricks for an Old Dog

posted: 17 Dec 2016

Tom Ashworth pops a few mince pies in the oven to warm through as he shares with us experiences learned when on-boarding new front-enders into his team. From frameworks to refactoring, code reviews to componentisation, it’s got everything bar the brandy butter.

Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy.

I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on.

In this article, I hope to share some experiences and techniques that will prove useful in your own situation, and that will let you impress your friends in some new and exciting ways!

How do you do, fellow kids?

To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by TweetDeck although, as a company, Twitter is largely moving to React.

Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us.

Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built-in to Flight, and it made reusing components a real challenge.

There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it.

Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on.

To find fixes for these problems, we looked around at other frameworks. We liked React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone.

But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort!

Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we were already using.

Boiled down, what we liked seemed quite simple:

  • Component nesting & composition
  • Easy, predictable state management
  • Normal functions for data manipulation

Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app.
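As a rough illustration of the shape these ideas can take (this is a sketch, not our actual implementation), imagine a tiny store that keeps state in one predictable place and is updated only by plain functions:

function createStore(initialState) {
  var state = initialState;
  var listeners = [];

  return {
    getState: function () { return state; },
    // Apply a plain update function, then notify anyone who is interested.
    update: function (fn) {
      state = fn(state);
      listeners.forEach(function (listener) { listener(state); });
    },
    subscribe: function (listener) { listeners.push(listener); }
  };
}

// Normal functions for data manipulation: easy to test, no events required.
function addColumn(column) {
  return function (state) {
    return Object.assign({}, state, { columns: state.columns.concat(column) });
  };
}

var store = createStore({ columns: [] });
store.subscribe(function (state) { console.log('columns:', state.columns.length); });
store.update(addColumn({ type: 'home' }));

Because updates are just functions of the previous state, they are easy to test on their own and they compose as readily as the components that consume them.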

While the specifics of our situation and Flight aren’t really important, this experience taught me something:

Distill good tech into great ideas. You can apply great ideas anywhere.

You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?

Times, they are a changin’

Apart from stealing ideas from the new and shiny, how can we keep making the most of improved tooling and techniques? Times change and so should the way we write code.

Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate.

In fact, we found ourselves with a mix of both AMD syntaxes:

define(["lodash"], function (_) { // . . . }); define(function (require) { var _ = require("lodash"); // . . . });

This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax:

import _ from "lodash";

These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there?

To explain, I want to introduce you to codemods and jscodeshift.

A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL("...") to new URL("...").

jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree — a data-structure representation of the code — finding and modifying it in place as it goes.

Here’s an example that renames all instances of the variable foo to bar:

module.exports = function (fileInfo, api) {
  return api
    .jscodeshift(fileInfo.source)
    .findVariableDeclarators('foo')
    .renameTo('bar')
    .toSource();
};

It’s a seriously powerful tool, and we’ve used it to write a series of codemods that:

  • rename modules (see the sketch after this list),
  • unify our use of AMD to a single syntax,
  • transition from one testing framework to another, and
  • switch from AMD to CommonJS.
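For a flavour of what these transforms look like, here’s a hypothetical sketch along the lines of the module-renaming codemod. The module paths are invented, and a real transform has to handle more cases (some parsers report string literals as StringLiteral rather than Literal, for instance):

module.exports = function (fileInfo, api) {
  var j = api.jscodeshift;

  return j(fileInfo.source)
    .find(j.CallExpression, { callee: { name: 'require' } })
    .forEach(function (path) {
      var arg = path.node.arguments[0];
      // Rewrite require("utils/old-name") to require("utils/new-name").
      if (arg && arg.type === 'Literal' && arg.value === 'utils/old-name') {
        arg.value = 'utils/new-name';
      }
    })
    .toSource();
};

Run across a whole codebase with the jscodeshift CLI, a transform like this turns an error-prone find-and-replace into a single, reviewable change.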

These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS:

commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc
Date:   Tue Jul 12 2016

    Run AMD -> CommonJS codemod

 418 files changed, 47550 insertions(+), 48468 deletions(-)

Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone!

From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes:

  1. Find all the existing patterns
  2. Choose the two most similar
  3. Unify with a codemod
  4. Repeat.

For example:

  1. For module loading, we had 2 competing AMD patterns plus some use of CommonJS
  2. The two AMD syntaxes were the most similar
  3. We used a codemod to unify the AMD patterns
  4. Later we returned to AMD to convert it to CommonJS

It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer.

Welcome aboard!

As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data, and the layers of authentication, security and business logic around them, make for an overwhelming amount of information to hand to a newbie.

Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding process that each of our new hires goes through, so they can make the most of their first few weeks.

Joining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding!

And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help.

This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies.

After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help!

For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts:

  • What are our dependencies?
  • Where are the potential points of failure?
  • Where does authentication live? Storage? Caching?
  • Who owns “X”?

Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track.

To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review.

In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started:

Every pull request gets a +1 from someone else.

That’s all — it’s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master.

At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things:

  • Don’t review for more than an hour [1]
  • Keep reviews smaller than ~400 lines [2]
  • Code review your own code first [2]

After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for a review, that review is blocking them doing other work. It’s your job to unblock them.

On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration.

But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool.

It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours!

And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace.

For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results.

If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this!

More, please!

There are a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations.

And finally, thanks to Amy for proof reading this and to Passy for feedback on the original talk.

  1. Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE 2000: 467-476.

  2. Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software.

About the author

Tom is an engineer at Twitter, leading frontend on TweetDeck. When not wrangling code, he plays the sousaphone in a New Orleans brass band.


Front-End Developers Are Information Architects Too

posted: 16 Dec 2016

Francis Storr delves deep into our HTML and considers if the choice and application of semantics in front-end code contributes in a meaningful way to the information architecture of our sites. Perhaps the true meaning of Christmas lies in the markup.

The theme of this year’s World IA Day was “Information Everywhere, Architects Everywhere”. This article isn’t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People’s ability to be successful online is unequivocally connected to the quality of the code that is written.

Places made of information

In information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In Understanding Context, Andrew Hinton stated:

People live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.

We’re creating structures which people rely on for significant parts of their lives, so it’s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.


The word “semantic” has its origin in Greek words meaning “significant”, “signify”, and “sign”. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building’s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively, consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don’t need an “entrance” sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building’s purpose happens, but the effect is similar: the building is signifying something about its purpose.

HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:

Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the <ol> element represents an ordered list, and the lang attribute represents the language of the content.

HTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It’s also what a front-end developer does, whether they realise it or not.

A brief introduction to information architecture

We’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. He said an information architect is:

the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.

To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.

Just as there are many definitions of what an information architect is, there are many for information architecture itself. I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:

  • The structural design of shared information environments.
  • The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.
  • The art and science of shaping information products and experiences to support usability, findability, and understanding.
Information Architecture For The World Wide Web, 4th Edition

To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of <table>s. He discovered that by simply not using the semantically-correct <th> element to mark up <table> headings, in some situations browsers will decide that a <table> is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.

Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we’re going to use. Dan defines these terms as:

Ontology: The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate. What we mean when we say what we say.

Taxonomy: The arrangements of the parts. Developing systems and structures for what everything’s called, where everything’s sorted, and the relationships between labels and categories.

Choreography: Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.

We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:

… data is not information. Information is data endowed with relevance and purpose.

Managing For The Future

If we use the correct semantic element to mark up content then we’re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we’re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an <h2> element should be relevant to its parent, which should be the <h1>. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:

  • by creating a list of headings so users can quickly scan the page for information
  • by using a keyboard command to cycle through one heading at a time

If we had a document for Christmas Day TV we might structure it something like this:

<h1>Christmas Day TV schedule</h1>
  <h2>BBC1</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>BBC2</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>ITV</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>Channel 4</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>

If I use VoiceOver to generate a list of headings, I get this:

Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the <h2>s:

If we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated.

Putting this together

Let’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button>. Every browser understands what a <button> is, how it works, and what keyboard shortcuts should be used with it. The HTML specification for the <button> element says:

The <button> element represents a button labeled by its contents.

The contents that a <button> can have include the type attribute, any relevant ARIA attributes, and the actual text label that the user sees. This information is more important than the visual design: it doesn’t matter how beautiful or obtuse the design is, if the underlying code is non-semantic and poorly labelled, people are going to struggle to use it. Here are three buttons, each created with the same HTML but with different designs:

Regardless of what they look like, because we’ve used semantic HTML instead of a bunch of meaningless <div>s or <span>s, people who use assistive technology are going to benefit. Out of the box, without any extra development effort, a <button> is accessible and usable with a keyboard. We don’t have to write event handlers to listen for people pressing the Enter key or the space bar, which we would have to do if we’d faked a button with non-semantic elements. Our <button> can also be quickly findable: for example, in the same way it’s possible to create a list of headings with a screen reader, I can also create a list of form elements and then quickly jump to the one I want.

Now we have our <button>, let’s add the panel we’re toggling the appearance of. Here’s our code:

<button aria-controls="panel"
        aria-expanded="false"
        class="settings"
        id="settings"
        type="button">Settings</button>

<div class="panel hidden" id="panel">
  <ul aria-labelledby="settings">
    <li><a href="…">Account</a></li>
    <li><a href="…">Privacy</a></li>
    <li><a href="…">Security</a></li>
  </ul>
</div>

There’s quite a bit going on here. We’re using the:

  • aria-controls attribute to architect a connection between the <button> element and the panel whose appearance it controls. When some assistive technology, for example the JAWS screen reader, encounters an element with aria-controls it audibly tells a user about the controlled expanded element and gives them the ability to move focus to it.
  • aria-expanded attribute to denote whether the panel is visible or not. We toggle this value using JavaScript, setting it to true when the panel is visible and false when it’s not (there’s a sketch of this after the list). This important attribute tells people who use screen readers about the state of the elements they’re interacting with. For example, VoiceOver announces Settings expanded button when the panel is visible and Settings collapsed button when it’s hidden.
  • aria-labelledby attribute to give the list a title of “Settings”. This can benefit some users of assistive technology. For example, screen readers can cycle through all the lists on a page, so being able to title them can improve findability. Being able to hear list Settings three items is, I’d argue, more useful than list three items. By doing this we’re supporting usability and findability.
  • <ul> element to contain our list of links in our panel.
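To make the aria-expanded behaviour concrete, here’s a minimal sketch of the JavaScript that could drive the toggle, reusing the settings and panel ids from the markup above (illustrative only, not production code):

var button = document.getElementById('settings');
var panel = document.getElementById('panel');

button.addEventListener('click', function () {
  var expanded = button.getAttribute('aria-expanded') === 'true';
  // Flip the state: aria-expanded tells assistive technology what just happened.
  button.setAttribute('aria-expanded', String(!expanded));
  // Hide the panel when it was expanded, show it when it wasn't.
  panel.classList.toggle('hidden', expanded);
});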

Let’s look at the choice of <ul> to contain our settings choices. Firstly, our settings are related items, so they belong in a structure that semantically groups things. This is something that a list can do that other elements or patterns can’t. This pattern, for example, isn’t semantic and has no structure:

<div><a href="…">Account</a></div>
<div><a href="…">Privacy</a></div>
<div><a href="…">Security</a></div>

All we have there is three elements next to each other on the screen and in the DOM. That is not robust code that signifies anything.

Why are we using an unordered list as opposed to an ordered list or a definition list? A quick look at the HTML specification tells us why:

The <ul> element represents a list of items, where the order of the items is not important — that is, where changing the order would not materially change the meaning of the document.

The HTML 5.1 specification’s description of the <ul> element

    Will the meaning of our document materially change if we moved the order of our links around? Nope. Therefore, I’d argue, we’ve used the correct element to structure our content.

    These coding decisions are information architecture

    I believe that what we’ve done here is pure information architecture. Going back to Dan Klyn’s model, we’ve practiced ontology by looking at the meaning of what we’re intending to communicate:

    • we want to communicate there is an interactive element that toggles the appearance of an element on a page so we’ve used one, a <button>, with those semantics.
    • programmatically we’ve used the type='button' attribute to signify that the button isn’t a menu, reset, or submit element.
    • visually we’ve designed our <button> to look like something that can be interacted with and, importantly, we haven’t removed the focus ring.
    • we’ve labelled the <button> with the word “Settings” so that our users will hopefully understand what the button is for.
    • we’ve used an <ul> element to structure and communicate our list of related items.

    We’ve also practiced taxonomy by developing systems and structures and creating relationships between our elements:

    • by connecting the <button> to the panel using the aria-controls attribute we’ve programmatically created a relationship between two elements.
    • we’ve developed a structure in our elements by labelling our <ul> with the same name as the <button> that controls its appearance.

    And finally we’ve practiced choreography by creating elements that foster movement and interaction. We’ve anticipated the way users and information want to flow:

    • we’ve used a <button> element that is interactive and accessible out of the box.
    • our aria-controls attribute can help some people who use screen readers move easily from the <button> to the panel it controls.
    • by toggling the value of the aria-expanded attribute we’ve developed a system that tells assistive technology about the status of the relationship between our elements: the panel is visible or the panel is hidden.
    • we’ve made sure our information is more usable and findable no matter how our users want or need to interact with it. Regardless of how someone “sees” our work they’re going to be able to use it because we’ve architected multiple ways to access our information.

    Information architecture, robust code, and accessibility

    The United Nations estimates that around 10% of the world’s population has some form of disability which, at the time of writing, is around 740,000,000 people. That’s a lot of people who rely on well-architected semantic code that can be interpreted by whatever assistive technology they may need to use.

    If everyone involved in the creation of our places made of information practiced information architecture it would make satisfying the WCAG 2.0 POUR principles so much easier. Our digital construction practices directly affect the quality of life of millions of people, and we have a responsibility to make technology available to them. In her book How To Make Sense Of Any Mess, Abby Covert states:

    If we’re going to be successful in this new world, we need to see information as a workable material and learn to architect it in a way that gets us to our goals.

    I believe that the world will be a better place if we start treating front-end development as information architecture.

    About the author

    Francis Storr is the lead designer for Intel’s Software Accessibility Program. An ex-pat from the UK, he now lives in Portland, Oregon where he appreciates not having to commute from Kent to London five days a week.


    Animation in Design Systems

    posted: 15 Dec 2016

    Sarah Drasner drops down on the sofa, turns on the TV and puts on some Christmas classics. Yes, it’s time to talk animation, and not just any animation, but how you prescribe and document animation in your design systems. Perhaps you’ll be moved to make some changes.


    Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts.

    But while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let’s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences.

    Understand the importance of animation

    Part of the reason we treat animation like a second-class citizen is that we don’t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion.

    We are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place.

    “Where did that menu go? Oh it’s in there.”

    For a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks.

    An animation flow on mobile.

Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing users that they cared, they kept people around and the bounce rates dropped.

    14 second generic loading screen.
    22 second custom loading screen.

    This also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn’t as long.

    Eli Fitch gave a talk at CSS Dev Conf called: “Perceived Performance: The Only Kind That Really Matters”, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable–and therefore easier to measure–but that measuring how a user feels when visiting the site is more important and worth the time and attention.

    In his talk, he states “Humans over-estimate passive waits by 36%, per Richard Larson of MIT”. This means that if you’re not using animation to speed up how fast the wait time of a form submission loads, users are perceiving it to be much slower than the dev tools timeline is recording.

Rein it in

    Unlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they’re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way.

The first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren’t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.)

Not sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has buckets of research and great ideas.

    Even some beautiful component libraries that have animation in the docs make this mistake. You don’t need every kind of animation, just like you don’t need every kind of font. This bloats our code. Ask yourself questions like: do you really need a flip 180 degree animation? I can’t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I’ve seen have a mixin that does just this.

    Which leads to…

    Have an opinion

    Many people are confused about Material Design. They think that Material Design is Motion Design, mostly because they’ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that’s good branding.

    By using Google’s motion design language and not your own, you’re losing out on a chance to be memorable on your own website.

    What does having an opinion on motion look like in practice? It could mean you’ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks “gliding” and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they’re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don’t have to communicate again and again to make things feel cohesive.

    Create good developer resources

    Sometimes people don’t incorporate animation into a design system because they aren’t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow):

  • Create timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5, which is a bit like h5 in that it’s the default (usually around .25 seconds or thereabouts).
  • Keep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation.
  • Use the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette, having too many can unnecessarily bulk up your codebase and keep it from feeling cohesive. Think critically about what you need here.

    See the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen.

The example above is pared-down, but you can see how, in a robust system, having interchangeable pieces cached across the whole system would save time for iterations and prototyping, not to mention make it easy to adjust the same animation for different-feeling movement.
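To sketch the same modular idea in plain JavaScript, here’s a rough example using the Web Animations API; the token values, easing curves and the .toast selector below are all invented for illustration:

// Hypothetical timing and easing tokens, echoing the t1–t5 idea above.
var timing = { t1: 1000, t2: 700, t3: 500, t4: 350, t5: 250 }; // milliseconds
var ease = {
  entrance: 'cubic-bezier(0, 0, 0.2, 1)',
  exit: 'cubic-bezier(0.4, 0, 1, 1)'
};

// Reuse the shared pieces wherever an element needs to animate in.
document.querySelector('.toast').animate(
  [
    { opacity: 0, transform: 'translateY(10px)' },
    { opacity: 1, transform: 'translateY(0)' }
  ],
  { duration: timing.t5, easing: ease.entrance, fill: 'both' }
);

Whether the tokens live in Sass, CSS custom properties or JavaScript matters less than the fact that everyone pulls from the same small, named set.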

One low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which is really heavy and mushy on retina. You can make singular pieces that look really refined and are reusable.

    React and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example!


    At the very least we should ensure that interaction also works well on mobile, but if we’d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well.

    Responsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before calling action. Interaction for touch events must either start from a larger touch-target (40px × 40px or greater) or use @media(pointer:coarse) as support allows.


Sometimes people don’t create animation resources simply because it gets deprioritized. But design systems were something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them:

    This helps people you’re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You’re also more likely to get something through if you’re proving that what you’re making is needed and can be reused.

    Good compromises can be made this way: “we’re not going to go all out and create an animated ‘About Us’ page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.”

    Successfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can’t be overstressed that good communication is key.

    Get started!

    With these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.

    About the author

    Sarah is an award-winning Speaker, Consultant, and Staff Writer at CSS-Tricks. Sarah is also the co-founder of Web Animation Workshops, with Val Head. She’s given a Frontend Masters workshop on Advanced SVG Animations, and is working on a book for O’Reilly on SVG Animations. She’s formerly Manager of UX Design & Engineering at Trulia (Zillow).

Last year Sarah won CSS Dev Conf’s “Best of Best of Award” as well as “Best Code Wrangler” from CSS Design Awards. She has worked for 15 years as a web developer and designer, and at points worked as a Scientific Illustrator and an Undergraduate Professor.


HTTP/2 Server Push and Service Workers: The Perfect Partnership

    posted: 14 Dec 2016

    Dean Hume pops on his gown and slippers and opens up his Christmas stocking to discover the high performance gifts of Server Push and Service Workers. It’s the gift that keeps on giving.

    Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push.

During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination.

    If you’ve never heard of HTTP/2 Server Push before, fear not - it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users.

    What is HTTP/2 Server Push?

    When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource that it needs and begin downloading one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower.

    Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this:

    As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs.

This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster.

    Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like.

    Whoa, that looks different - let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better!

    There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings.
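To give a rough idea of the shape such an implementation can take, here’s a minimal Node.js sketch using the built-in http2 module. It is not the implementation from that post, and the file names and certificate paths are invented:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet before the browser has asked for it...
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (!err) {
        pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
      }
    });
    // ...then answer the original request as normal.
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);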

HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache, but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs.

    HTTP/2 Server Push & Service Workers

So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network, and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand.

Using Service Workers, you can easily cache assets on a user’s device. This means that when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in the cache on the user’s device. If it does, it can simply serve it directly from the device instead of ever hitting the server.
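In its simplest form, that check might look something like the generic cache-first sketch below. This isn’t the sw-toolbox approach used later in this article, just the underlying idea:

// Answer from the cache when we can, fall back to the network when we can't.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});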

    Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load!

    Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>HTTP2 Push Demo</title>
</head>
<body>
  <h1>HTTP2 Push</h1>
  <img src="./images/beer-1.png" width="200" height="200" />
  <img src="./images/beer-2.png" width="200" height="200" />
  <br>
  <br>
  <img src="./images/beer-3.png" width="200" height="200" />
  <img src="./images/beer-4.png" width="200" height="200" />
  <!-- Scripts -->
  <script async src="./js/promise.min.js"></script>
  <script async src="./js/fetch.js"></script>
  <script>
    // Register the service worker
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('./service-worker.js').then(function(registration) {
        // Registration was successful
        console.log('ServiceWorker registration successful with scope: ', registration.scope);
      }).catch(function(err) {
        // registration failed :(
        console.log('ServiceWorker registration failed: ', err);
      });
    }
  </script>
</body>
</html>

In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker Toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push.

    The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns.

I’ve added the following code to the service-worker.js file.

(global => {
  'use strict';

  // Load the sw-toolbox library.
  importScripts('/js/sw-toolbox/sw-toolbox.js');

  // The route for any requests
  toolbox.router.get('/push', global.toolbox.fastest);
  toolbox.router.get('/images/(.*)', global.toolbox.fastest);
  toolbox.router.get('/js/(.*)', global.toolbox.fastest);

  // Ensure that our service worker takes control of the page as soon as possible.
  global.addEventListener('install', event => event.waitUntil(global.skipWaiting()));
  global.addEventListener('activate', event => event.waitUntil(global.clients.claim()));
})(self);

    Let’s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too.

The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again. You may also notice the code global.toolbox.fastest - this is important because it gives you the compromise of fulfilling the request from the cache immediately, while also firing off an additional HTTP request that updates the cache for the next visit.

But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user’s device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing!

    Using this technique, the waterfall chart for a repeat visit should look like the image below.

    If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache.

    Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance.


    When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t just simply “push” everything; it could actually lead to having slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information.

    All of the code in this example is available on my Github repo - if you have any questions, please submit an issue and I’ll get back to you as soon as possible.

If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this year’s Chrome Developer Summit.

    I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live!

    About the author

    Dean Hume is an author, Google Developer Expert, and all-around web performance geek. He regularly writes articles based on software development on his blog at


    Information Literacy Is a Design Problem

    posted: 13 Dec 2016

    Lisa Maria Martin harnesses the huskies of information to the sled of knowledge and takes us on a ride through the snowy peaks and valleys of information design. Plan your route carefully and make your intentions clear, for in the wintery wastelands there are no go-arounds.

Information literacy, wrote Dr. Carol Kuhlthau in her 1987 paper “Information Skills for an Information Society,” is “the ability to read and to use information essential for everyday life”—that is, to effectively navigate a world built on “complex masses of information generated by computers and mass media.”

Nearly thirty years later, those “complex masses of information” have only grown wilder, thornier, and more constant. We call the internet a firehose, yet we’re loath to turn it off (or even down). The amount of information we consume daily is staggering—and yet our ability to fully understand it all remains frustratingly insufficient.

    This should hit a very particular chord for those of us working on the web. We may be developers, designers, or strategists—we may not always be responsible for the words themselves—but we all know that communication is much more than just words. From fonts to form fields, every design decision that we make changes the way information is perceived—for better or for worse.

    What’s more, the design decisions that we make feed into larger patterns. They don’t just affect the perception of a single piece of information on a single site; they start to shape reader expectations of information anywhere. Users develop cumulative mental models of how websites should be: where to find a search bar, where to look at contact information, how to filter a product list.

    And yet: our models fail us. Fundamentally, we’re not good at parsing information, and that’s troubling. Our experience of an “information society” may have evolved, but the skills Dr. Kuhlthau spoke of are even more critical now: our lives depend on information literacy.

    Patterns from words

    Let’s start at the beginning: with the words. Our choice of words can drastically alter a message, from its emotional resonance to its context to its literal meaning. Sometimes we can use word choice for good, to reinvigorate old, forgotten, or unfairly besmirched ideas.

    One time at a wedding bbq we labeled the coleslaw BRASSICA MIXTA so people wouldn’t skip it based on false hatred.

    — Eileen Webb (@webmeadow) November 27, 2016

    We can also use clever word choice to build euphemisms, to name sensitive or intimate concepts without conjuring their full details. This trick gifts us with language like “the beast with two backs” (thanks, Shakespeare!) and “surfing the crimson wave” (thanks, Cher Horowitz!).

But when we grapple with more serious concepts—war, death, human rights—this habit of declawing our language gets dangerous. Using more discreet wording serves to nullify the concepts themselves, euphemizing them out of sight and out of mind.

    The result? Politicians never lie, they just “misspeak.” Nobody’s racist, but plenty of people are “economically anxious.” Nazis have rebranded as “alt-right.”

    I’m not an asshole, I’m just alt-nice.

    — Andi Zeisler (@andizeisler) November 22, 2016

    The problem with euphemisms like these is that they quickly infect everyday language. We use the words we hear around us. The more often we see “alt-right” instead of “Nazi,” the more likely we are to use that phrase ourselves—normalizing the term as well as the terrible ideas behind it.

    Patterns from sentences

    That process of normalization gets a boost from the media, our main vector of information about the world outside ourselves. Headlines control how we interpret the news that follows—even if the story contradicts it in the end. We hear the framing more clearly than the content itself, coloring our interpretation of the news over time.

    Even worse, headlines are often written to encourage clicks, not to convey critical information. When headline-writing is driven by sensationalism, it’s much, much easier to build a pattern of misinformation. Take this CBS News headline: “Donald Trump: ‘Millions’ voted illegally for Hillary Clinton.” The headline gives no indication that this is an objectively false statement; instead, this word choice subtly suggests that millions did, in fact, illegally vote for Hillary Clinton.

    Headlines like this are what make lying a worthwhile political strategy.

    — Binyamin Appelbaum (@BCAppelbaum) November 27, 2016

    This is a deeply dangerous choice of words when headlines are the primary way that news is conveyed—especially on social media, where it’s much faster to share than to actually read the article. In fact, according to a study from the Media Insight Project, “roughly six in 10 people acknowledge that they have done nothing more than read news headlines in the past week.”

    If a powerful person asserts X there are 2 responsible ways to cover:
    1. “X is true”
    2. “Person incorrectly thinks X”

    Never “Person says X”

    — Helen Rosner (@hels) November 27, 2016

    Even if we do, in fact, read the whole article, there’s no guarantee that we’re thinking critically about it. A study conducted by Stanford found that “82 percent of students could not distinguish between a sponsored post and an actual news article on the same website. Nearly 70 percent of middle schoolers thought they had no reason to distrust a sponsored finance article written by the CEO of a bank, and many students evaluate the trustworthiness of tweets based on their level of detail and the size of attached photos.”

    Friends: our information literacy is not very good. Luckily, we—workers of the web—are in a position to improve it.

    Sentences into design

    Consider the presentation of those all-important headlines in social media cards, as on Facebook. The display is a combination of both the card’s design and the article’s source code, and looks something like this:

    A large image, a large headline; perhaps a brief description; and, at the bottom, in pale gray, a source and an author’s name.
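    Where does that data come from? In most cases the card is assembled from Open Graph meta tags in the head of the article’s source, roughly like this (the values and URLs here are invented for illustration):

    <meta property="og:title" content="An example headline goes here">
    <meta property="og:description" content="A one-sentence summary of the article.">
    <meta property="og:image" content="https://example.com/lead-image.jpg">
    <meta property="og:site_name" content="Example News">
    <meta property="og:url" content="https://example.com/example-headline">

    The markup only supplies the fields; it’s the card template that decides which of them to emphasise.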

    Those choices convey certain values: specifically, they suggest that the headline and the picture are the entire point. The source is so deemphasized that it’s easy to see how fake news gains a foothold: daily exposure to this kind of hierarchy has taught us that sources aren’t important.

    And that’s the message from the best-case scenario. Not every article shows every piece of data. Take this headline from the BBC: “Wisconsin receives request for vote recount.”

    With no image, no description, and no author, there’s little opportunity to signal trust or provide nuance. There’s also no date—ever—which presents potentially misleading complications, especially in the context of “breaking news.”

    And lest you think dates don’t matter in the light-speed era of social media, take the headline, “Maryland sidesteps electoral college.” Shared into my feed two days after the US presidential election, that’s some serious news with major historical implications. But since there’s no date on this card, there’s no way for readers to know that the “Tuesday” it refers to was in 2007. Again, a design choice has made misinformation far too contagious.

    More recently, I posted my personal reaction to the death of Fidel Castro via a series of twenty tweets. Wanting to share my thoughts with friends and family who don’t use Twitter, I then posted the first tweet to Facebook. The card it generated was less than ideal:

    The information hierarchy created by this approach prioritizes the name of the Twitter user (not even the handle), along with the avatar. Not only does that create an awkward “headline” (at least when you include a full stop in your name), but it also minimizes the content of the tweet itself—which was the whole point.

    The arbitrary elevation of some pieces of content over others—like huge headlines juxtaposed with minimized sources—teaches readers that these values are inherent to the content itself: that the headline is the news, that the source is irrelevant. We train readers to stop looking for the information we don’t put in front of them.

    These aren’t life-or-death scenarios; they are just cases where design decisions noticeably dictate the perception of information. Not every design decision makes so obvious an impact, but the impact is there. Every single action adds to the pattern.

    Design with intention

    We can’t necessarily teach people to read critically or vet their sources or stop believing conspiracy theories (or start believing facts). Our reach is limited to our roles: we make websites and products for companies and colleges and startups.

    But we have more reach there than we might realize. Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base—every single one of us contributes, a little bit, to the way information is perceived.

    Are we changing it for the better?

    While it’s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It’s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact.

    As Chappell Ellison put it much more eloquently than I can:

    Every single action and decision a designer commits is a political act. The question is, are you a conscious actor?

    — Chappell Ellison (@ChappellTracker) November 28, 2016

    As shapers of information, we have a responsibility: to create clarity, to further understanding, to advance truth. Every single one of us must choose to treat information—and the society it builds—with integrity.

    About the author

    Lisa Maria Martin is an independent UX consultant and an editor with A Book Apart. She lives in Boston, MA. Follow her on Twitter, or learn more about her work on her website.

    More articles by Lisa Maria

    Designing Imaginative Style Guides

    posted: 13 Dec 2016

    Andy Clarke unpacks his tinsel and untangles the Christmas illuminations to add some brilliance to the subject of style guides. Whether you choose a boxed pre-decorated tree, or lovingly choose each adornment to your Norwegian spruce, just remember to switch the lights on.

    (Living) style guides and (atomic) pattern libraries are “all the rage,” as my dear old Nana would’ve said. If articles and conference talks are to be believed, making and using them has become incredibly popular. I think there are plenty of ways we can improve how style guides look and make them better at communicating design information to creatives, without that getting in the way of the information that technical people need.

    Guides to libraries of patterns

    Most of my consulting work and a good deal of my creative projects now involve designing style guides. I’ve amassed a huge collection of brand guidelines and identity manuals as well as, more recently, guides to libraries of patterns intended to help designers and developers make digital products and websites.

    Two pages from one of my Purposeful style guide packs. Designs © Stuff & Nonsense.

    “Style guide” is an umbrella term for several types of design documentation. Sometimes we’re referring to static style or visual identity guides, other times voice and tone. We might mean front-end code guidelines or component/pattern libraries. These all offer something different but more often than not they have something in common. They look ugly enough to have been designed by someone who enjoys configuring a router.

    OK, that was mean. Not everyone’s going to think an unimaginative style guide design is a problem. After all, as long as a style guide contains the information people need, how it looks shouldn’t matter, should it?

    Inspiring not encyclopaedic

    Well here’s the thing. Not everyone needs to take the same information away from a style guide. If you’re looking for markup and styles to code a ‘media’ component, you’re probably going to be the technical type, whereas if you need to understand the balance of sizes across a typographic hierarchy, you’re more likely to be a creative. What you need from a style guide is different.

    Sure, some people¹ need rules:

    “Do this (responsive pattern)” or “don’t do that (auto-playing video).”

    Those people probably also want facts:

    “Use this (hexadecimal value)” and “not that (inaccessible colour combination).”

    Style guides need to do more than list facts and rules. They should demonstrate a design, not just document its parts. The best style guides are inspiring, not encyclopaedic. I’ll explain by showing how many style guides currently present information about colour.

    Colours communicate

    I’m sure you’ll agree that alongside typography, colour’s one of the most important ingredients in a design. Colour communicates personality, creates mood and is vital to an easily understandable interactive vocabulary. So you’d think that an average style guide would describe all this in any number of imaginative ways. Well, you’d be disappointed, because the most inspiring you’ll find looks like a collection of chips from a paint chart.

    Lonely Planet’s Rizzo does a great job of separating its Design Elements from UI Components, and while its ‘Click to copy’ colour values are a thoughtful touch, you’ll struggle to get a feeling for Lonely Planet’s design by looking at their colour chips.

    Lonely Planet’s Rizzo style guide.

    Lonely Planet’s approach is a common way to display colour information, and it’s one that you’ll also find at Greenpeace, Sky, The Times, and in countless more style guides.

    Greenpeace, Sky and The Times style guides.

    GOV.UK—not a website known for its creative flair—varies this approach by using circles, which I find strange as circles don’t feature anywhere else in its branding or design. On the plus side though, their designers have provided some context by categorising colours by usage such as text, links, backgrounds and more.

    GOV.UK style guide.

    Google’s Material Design offers an embarrassment of colours but most helpfully it also advises how to combine its primary and accent colours into usable palettes.

    Google’s Material Design.

    While the ability to copy colour values from a reference might be all a technical person needs, designers need to understand why particular colours were chosen as well as how to use them.

    Inspiration not documentation

    Few style guides offer any explanation and even less by way of inspiring examples. Most are extremely vague when they describe colour:

    “Use colour as a presentation element for either decorative purposes or to convey information.”

    The Government of Canada’s Web Experience Toolkit states, rather obviously.

    “Certain colors have inherent meaning for a large majority of users, although we recognize that cultural differences are plentiful.”

    Salesforce tell us, without actually mentioning any of those plentiful differences.

    I’m also unsure what makes the Draft U.S. Web Design Standards colours a “distinctly American palette” but it will have to work extremely hard to achieve its goal of communicating “warmth and trustworthiness” now.

    In Canada, “bold and vibrant” colours reflect Alberta’s “diverse landscape.”

    Adding more colours to their palette has made Adobe “rich, dynamic, and multi-dimensional” and at Skype, colours are “bold, colourful (obviously) and confident” although their style guide doesn’t actually provide information on how to use them.

    The University of Oxford, on the other hand, is much more helpful by explaining how and why to use their colours:

    “The (dark) Oxford blue is used primarily in general page furniture such as the backgrounds on the header and footer. This makes for a strong brand presence throughout the site. Because it features so strongly in these areas, it is not recommended to use it in large areas elsewhere. However it is used more sparingly in smaller elements such as in event date icons and search/filtering bars.”

    OpenTable style guide.

    The designers at OpenTable have cleverly considered how to explain the hierarchy of their brand colours by presenting them and their supporting colours in various size chips. It’s also obvious from OpenTable’s design which colours are primary, supporting, accent or neutral without them having to say so.

    Art directing style guides

    For the style guides I design for my clients, I go beyond simply documenting their colour palette and type styles and describe visually what these mean for them and their brand. I work to find distinctive ways to present colour to better represent the brand and also to inspire designers.

    For example, on a recent project for SunLife, I described their palette of colours and how to use them across a series of art directed pages that reflect the lively personality of the SunLife brand. Information about HEX and RGB values, Sass variables and when to use their colours for branding, interaction and messaging is all there, but in a format that can appeal to both creative and technical people.

    SunLife style guide. Designs © Stuff & Nonsense.

    Purposeful style guides

    If you want to improve how you present colour information in your style guides, there’s plenty you can do.

    For a start, you needn’t confine colour information to the palette page in your style guide. Find imaginative ways to display colour across several pages to show it in context with other parts of your design. Here are two CSS gradient filled ‘cover’ pages from my Purposeful style sheets.

    Colour impacts other elements too, including typography, so make sure you include colour information on those pages, and vice-versa.

    Purposeful. Designs © Stuff & Nonsense.

    A visual hierarchy can be easier to understand than labelling colours as ‘primary,’ ‘supporting,’ or ‘accent,’ so find creative ways to present that hierarchy. You might use panels of different sizes or arrange boxes on a modular grid to fill a page with colour.

    Don’t limit yourself to rectangular colour chips: use circles or other shapes created using only CSS. If irregular shapes are a part of your brand, fill SVG silhouettes with CSS and then wrap text around them using CSS Shapes, as in the sketch below.
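    As a rough sketch of that last idea (the class name and the polygon values here are just placeholders), something like this cuts a colour chip into an irregular shape and wraps the supporting text around the same outline:

    .colour-chip {
      float: left;
      width: 20em;
      height: 20em;
      background-color: #39bb82;  /* the brand colour being documented */
      clip-path: polygon(0 0, 100% 0, 65% 100%, 0 75%);      /* cut the chip */
      shape-outside: polygon(0 0, 100% 0, 65% 100%, 0 75%);  /* wrap text around the cut */
      shape-margin: 1.5em;
    }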

    Purposeful. Designs © Stuff & Nonsense.

    Summing up

    In many ways I’m as frustrated with style guide design as I am with the general state of design on the web. Style guides and pattern libraries needn’t be dull in order to be functional. In fact, they’re the perfect place for you to try out new ideas and technologies. There’s nowhere better to experiment with new properties like CSS Grid than on your style guide.

    The best style guide designs showcase new approaches and possibilities, and don’t simply document the old ones. Be as creative with your style guide designs as you are with any public-facing part of your website.

    Purposeful is a set of HTML and CSS style guide templates designed to help you develop creative style guides and pattern libraries for your business or clients. Save time while impressing your clients by using easily customisable HTML and CSS files that have been designed and coded to the highest standards. Twenty pages covering all common style guide components including colour, typography, buttons, form elements, and tables, plus popular pattern library components. Purposeful style guides will be available to buy online in January.

    1. Boring people ↩︎

    About the author

    Andy Clarke is an art director and web designer at the UK website design studio ‘Stuff & Nonsense.’ There he designs websites and applications for clients from around the world. Based in North Wales, Andy’s also the author of two web design books, ‘Transcending CSS’ and the new ‘Hardboiled Web Design Fifth Anniversary Edition’ and is well known for his many conference presentations and over ten years of contributions to the web design industry. Jeffrey Zeldman once called him a “triple talented bastard.” If you know of Jeffrey, you’ll know how happy that made him.

    More articles by Andy

    First Steps in VR

    posted: 12 Dec 2016

    Shane Hudson dusts the snow from his jacket and helps us take our first tentative steps into the gloomy world of virtual reality. So mark his footsteps good my page, tread thou in them boldly. Thou shalt find the virtual world spin thy head less coldly.

    The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not.

    I think of these two mindsets as production and prototyping, though of course there’s a lot of overlap and plenty of phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.

    Every year we see articles saying it will be the “year of virtual reality”, and that prediction was especially prevalent this year. 2016 has been a year of progress: VR isn’t quite mainstream, but with efforts like PlayStation VR and Google Cardboard we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.

    WebVR is an API for connecting to devices and retrieving continuous data from them, such as position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make that easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that take care of much of the heavy lifting.
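    To give you a flavour of the raw API, here is a minimal sketch of the WebVR 1.1 flow; you won’t need any of this once you pick up A-Frame, but it shows the kind of data the API hands you:

    // Find a connected headset and read its pose on every frame
    navigator.getVRDisplays().then(function (displays) {
      if (!displays.length) { return; }
      var display = displays[0];
      var frameData = new VRFrameData();

      function tick() {
        display.getFrameData(frameData);  // fills in the pose plus view/projection matrices
        var pose = frameData.pose;        // pose.position and pose.orientation
        // ...feed the pose into your own rendering...
        display.requestAnimationFrame(tick);
      }
      display.requestAnimationFrame(tick);
    });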

    Getting Started with A-Frame

    I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such; I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components: you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR, but in such a way that it quite drastically reduces the learning curve. That’s not to say you can’t build complex things; you have complete access to write JavaScript and shaders.

    I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It’s fun.

    For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:

    • Must work on Google Cardboard at a minimum, because of price
    • Therefore, it must not rely on having a controller
    • Auto-moving around a maze would be a good example
    • Move in direction you look
    • Stretch goal: Scoring, time until you hit a wall or get stuck in maze
    • Stretch goal: Levels, so the map doesn’t need to be random
    • Stretch goal: Snow!

    I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground.

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width">
        <title>24 ways</title>
        <script src=""></script>
        <script src="//"></script>
      </head>
      <body>
        <a-scene>
          <a-entity id="player" camera universal-controls kinematic-body position="0 1.8 0"></a-entity>
          <a-entity id="walls"></a-entity>
          <a-grid id="ground" static-body></a-grid>
          <a-sky id="sky" color="#AADDF0"></a-sky>
          <!-- Lighting -->
          <a-light type="ambient" color="#ccc"></a-light>
        </a-scene>
        <script>
          document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
            var MAP_SIZE = 10,
                PLATFORM_SIZE = 5,
                NUM_PLATFORMS = 50;
            var platformsEl = document.querySelector('#walls');
            var v, box;
            for (var i = 0; i < NUM_PLATFORMS; i++) {
              // y: 0 is ground
              v = {
                x: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE,
                y: PLATFORM_SIZE / 2,
                z: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE
              };
              box = document.createElement('a-box');
              platformsEl.appendChild(box);
              box.setAttribute('color', '#39BB82');
              box.setAttribute('width', PLATFORM_SIZE);
              box.setAttribute('height', PLATFORM_SIZE);
              box.setAttribute('depth', PLATFORM_SIZE);
              box.setAttribute('position', v.x + ' ' + v.y + ' ' + v.z);
              box.setAttribute('static-body', '');
            }
            console.info('Platforms loaded.');
          });
        </script>
      </body>
    </html>

    As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene> which is the container for everything that is going to show up on the screen. There are a few <a-entity> which can be compared to <div> as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.


    Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around; you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze, but it ended up being dark and foggy, as I really liked the feeling it gave. Note: you will probably need a local server running for images to work. You can do this by running python -m SimpleHTTPServer in the folder where the code is, then going to localhost:8000 in your browser.


    Unless you are going for a cartoony style, you probably want to find some textures. I found a couple online; one image worked well for the walls and the other for the floor.

    <a-assets>
      <img id="texture-floor" src="floor.jpg">
      <img id="texture-wall" src="wall.jpg">
    </a-assets>

    The <a-assets> is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures.

    To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image. So the ground, which started out as this:

    <a-grid id="ground" static-body></a-grid>

    becomes this:

    <a-grid id="ground" static-body material="src: #texture-floor"></a-grid>

    This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.

    box.setAttribute('material', 'src: #texture-wall');

    That’s it for the textures, for now at least. These will not look completely realistic, as the light will bounce off the flat rectangular wall rather than off the texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object.
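    For example, if you export a matching normal map image and register it in <a-assets> (the #texture-wall-normal id below is made up, and support depends on your version of A-Frame), you can add it to the same material:

    box.setAttribute('material', 'src: #texture-wall; normalMap: #texture-wall-normal');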


    The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.

    There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.

    To start with I wanted to light up the scene with a general light, type="ambient", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky.

    <a-light type="ambient" color="#92455E" intensity="0.4"></a-light>
    <a-sky id="sky" color="#0000ff"></a-sky>

    I felt that the maze looked good with a red tinge, but it was a bit flat; everything was the same colour and it was a bit dark. So I added a light within the #player entity. This could have been an attribute, but I set it as a child a-light instead. By using type="point" with a high intensity and low distance, it showed close walls as being lighter. It also gave the player something of a physical presence: it isn’t a walking human or anything, but moving the light with the player makes it feel a bit more physical.

    <a-light color="#fff" distance="5" intensity="0.7" type="point"></a-light>

    By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the <a-scene>, with type: exponential so it looks thicker the further away it is, and a middling density, so you feel a bit lost but can still see.
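    The exact values are a matter of taste; the attribute ends up looking something like this, with the colour and density here just a starting point to play with:

    <a-scene fog="type: exponential; color: #92455E; density: 0.1">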

    I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.


    One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.

    AFRAME.registerComponent('automove-controls', {
      init: function () {
        this.speed = 0.1;
        this.isMoving = true;
        this.velocityDelta = new THREE.Vector3();
      },
      isVelocityActive: function () {
        return this.isMoving;
      },
      getVelocityDelta: function () {
        this.velocityDelta.z = this.isMoving ? -this.speed : 0;
        return this.velocityDelta.clone();
      }
    });




    The new component is then listed in the player entity’s universal-controls attribute:

    universal-controls="movementControls: automove, gamepad, keyboard"

    This works by creating a component, automove-controls, that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction; it just checks whether isMoving is true and, if so, moves the player forward at the set speed. Components can be created to add all kinds of functionality with relative ease, which makes A-Frame very powerful for people of all skill levels.

    Building a map

    Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.

    I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images); I used one tile for the wall and another for the player. Then I exported it as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future.

    var map = {
      "data": [
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 0, 0, 0, 0, 0, 0, 1, 0, 1,
        1, 0, 1, 1, 0, 1, 1, 1, 0, 1,
        1, 0, 1, 0, 0, 0, 0, 0, 0, 1,
        1, 0, 1, 0, 1, 0, 1, 0, 0, 1,
        1, 0, 0, 0, 1, 0, 1, 1, 0, 1,
        1, 0, 1, 0, 0, 0, 0, 1, 1, 1,
        1, 0, 1, 1, 1, 1, 0, 0, 0, 1,
        1, 0, 0, 1, 0, 0, 0, 1, 2, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1
      ],
      "height": 10,
      "width": 10
    }

    As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.

    document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
      var WALL_SIZE = 5,
          WALL_HEIGHT = 3;
      var el = document.querySelector('#walls');
      var wall;
      for (var x = 0; x < map.height; x++) {
        for (var y = 0; y < map.width; y++) {
          var i = y * map.width + x;
          var position = (x - map.width / 2) * WALL_SIZE + ' ' + 1.5 + ' ' + (y - map.height / 2) * WALL_SIZE;
          if (map.data[i] === 1) {
            // Create wall
            wall = document.createElement('a-box');
            el.appendChild(wall);
            wall.setAttribute('color', '#fff');
            wall.setAttribute('material', 'src: #texture-wall;');
            wall.setAttribute('width', WALL_SIZE);
            wall.setAttribute('height', WALL_HEIGHT);
            wall.setAttribute('depth', WALL_SIZE);
            wall.setAttribute('position', position);
            wall.setAttribute('static-body', '');
          }
          if (map.data[i] === 2) {
            // Set player position
            document.querySelector('#player').setAttribute('position', position);
          }
        }
      }
      console.info('Walls added.');
    });

    With this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.

    It’s Not All Fun And Games

    A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including storytelling, data visualisation and even meditation.

    There have been a number of cases where virtual reality has been shown to help as a tool for therapies.

    These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.

    Wrapping Up

    Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, and that WAP 2.0 included the XHTML Mobile Profile, meaning it would be familiar to web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.”

    We can look at that and laugh a little; we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to.

    So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble; we don’t need to learn any new languages. If, in the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well, it’s fun… have a go anyway.

    About the author

    Despite being a constant presence on Twitter, Shane Hudson occasionally does some work. He is a developer interested in all things web. Currently focussing on completing a degree in Artificial Intelligence, Shane has previously written a book called JavaScript Creativity, worked on a web-based geographic data portal at Plymouth Marine Laboratory and freelanced as a front-end developer.

    More articles by Shane

    What next for CSS Grid Layout?

    posted: 12 Dec 2016

    Rachel Andrew opens the door on next generation CSS to bring us tidings of Grid joy. But rest not on your laurels, we must look to what comes next for the specification. Gold may be just around the corner, but whence will the frankincense and myrrh appear?

    Brought to you by The CSS Layout Workshop. Does developing layouts with CSS seem like hard work? How much time could you save without all the trial and error? Are you ready to really learn CSS layout?

    In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn’t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking.

    As I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added.

    The CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is, to all intents and purposes, finished. The discussions now happening are on fine implementation details, not new feature ideas. It makes sense to draw a line under a specification so that browser vendors can ship complete, interoperable implementations. That approach is good for all of us: it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However, it doesn’t mean that work stops here, or that new use cases and features can’t be proposed for future levels of Grid Layout. Therefore, in this article I’m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have.

    Subgrid - the missing feature of Level 1

    The implementation of CSS Grid Layout in Chrome, Firefox and WebKit is comparable and very feature-complete. There is, however, one standout feature that has not been implemented in any browser as yet: subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves: set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent.
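    In code, the situation looks roughly like this (a simplified sketch rather than the demo itself); the nested grid defines its own tracks, and nothing lines them up with the tracks of the parent:

    .wrapper {
      display: grid;
      grid-template-columns: repeat(12, 1fr);
    }
    .wrapper > .nested {
      grid-column: 1 / 7;                     /* placed on the parent grid */
      display: grid;                          /* but this creates a brand new grid */
      grid-template-columns: repeat(2, 1fr);  /* with tracks unrelated to the parent's */
    }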

    Nesting Grids by Rachel Andrew (@rachelandrew) on CodePen.

    The subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above.

    The specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows; you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other.
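    Under that draft, the idea looked roughly like this (a sketch of the proposed syntax, not something any browser implements):

    .wrapper {
      display: grid;
      grid-template-columns: repeat(12, 1fr);
    }
    .wrapper > form {
      grid-column: 3 / 11;
      display: grid;
      grid-template-columns: subgrid;  /* share the parent's column tracks */
      grid-template-rows: auto auto;   /* while defining independent rows */
    }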

    In an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. As we now cannot specify a subgrid on rows or columns separately, this limits us to a subgrid that works in two dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as “at risk” in the Level 1 Candidate Recommendation. With regard to ‘at-risk’, this is explained as follows:

    “‘At-risk’ is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.”

    If we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I’m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable.

    Further reading about subgrid

    Styling cells, tracks and areas

    Having defined a grid with CSS Grid Layout you can place child elements into that grid, however what you can’t do is style the grid tracks or cells. Grid doesn’t even go as far as multiple column layout, which has the column-rule properties.

    In order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I’m using a 1 pixel grid gap to fake lines between grid cells, and empty div elements, and some generated content to colour those cells.
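    The trick in that demo boils down to something like this (class names invented for the sketch): the container’s background shows through the one pixel gaps as “grid lines”, and the otherwise empty elements carry the cell colours.

    .grid {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      grid-gap: 1px;           /* the gaps become the visible lines... */
      background-color: #999;  /* ...because the container shows through them */
    }
    .grid > .cell {
      background-color: #fff;     /* empty divs paint each cell */
    }
    .grid > .cell.highlight {
      background-color: #ffd8d8;  /* and can carry a highlight colour */
    }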

    Faked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen.

    I think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that.

    More control over auto placement

    If you haven’t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid.

    Items auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen.

    The auto-placement algorithm is very cool. We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid.
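    A minimal sketch of those pieces working together (the track sizes and class names here are arbitrary):

    .grid {
      display: grid;
      grid-template-columns: repeat(4, 100px);
      grid-auto-rows: 100px;
      grid-auto-flow: dense;  /* backfill any gaps left by larger items */
    }
    .grid > .wide {
      grid-column: span 2;    /* some items span more than one track */
    }
    .grid > .featured {
      grid-row: 1;            /* explicitly placed; everything else auto-places */
    }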

    Websafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen.

    I think, however, this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Björklund in this post to the www-style list prior to spec discussion moving to GitHub. I think there are probably similar issues; if you can think of one, add a comment here.

    Creating non-rectangular grid areas

    A grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows or by creating the area in the value of the grid-template-areas property as shown below. Those areas however must be rectangular - you can’t create an L-shaped or otherwise non-regular shape.

    Grid Areas by Rachel Andrew (@rachelandrew) on CodePen.

    Perhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area.

    .wrapper {
      display: grid;
      grid-template-areas:
        "sidebar header  header"
        "sidebar content quote"
        "sidebar content content";
    }

    Flowing content through grid cells or areas

    Some use cases I have seen are perhaps not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there is a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec.

    Being able to create a grid layout, then flow content through the areas could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work.

    In a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own.

    Solving the keyboard/layout disconnect

    One issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard as when tabbing around the document you find yourself jumping to unexpected places. The problem is explained by Léonie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect.

    The grid layout specification currently warns against creating such a disconnect; however, I think it will take careful work by web developers to prevent this. It’s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and in others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion.

    Bringing your ideas to the future of Grid Layout

    When I’m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout.

    As a “product” grid has been well thought through by many people. Yet however hard we try there will be use cases we just didn’t think of. You may well have one in mind right now. That’s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now.

    This is where you get to contribute to the future of layout on the web. When you hit up against the things you cannot do, don’t just mutter about how the CSS Working Group don’t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture, just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can’t be solved in exactly the way you suggest. However once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated.

    I opened this article by explaining I’d written about grid layout four years ago, and how we’re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers takes time. This is actually a good thing, as it’s impossible to take back CSS once it is out there and being used by production websites. We want CSS in the wild to be well thought through and that takes time. So don’t feel that because you don’t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we’re dealing with now and in the future.

    About the author

    Rachel Andrew is a Director of a UK web development consultancy, the creators of the small content management system Perch, and a W3C Invited Expert to the CSS Working Group. She is the author of a number of books, a regular columnist for A List Apart and a Google Developer Expert for Web Technologies.

    She curates a popular email newsletter on CSS Layout, and is passing on her layout knowledge over at her CSS Layout Workshop.

    When not writing about business and technology on her blog or speaking at conferences, you will usually find Rachel running up and down one of the giant hills in Bristol.

    More articles by Rachel

    Watch Your Language!

    posted: 09 Dec 2016

    Annie-Claude Côté gathers us round the hearth to tell a tale of many languages. Like choosing the right Christmas sweater to wear while building a snowman, we must choose the language we code in wisely. Make a poor choice and risk getting left out in the cold as the darkness draws in.

    I’m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can’t be expressed in some languages, while other languages express these concepts with ease.

    It also helped me understand the way we label languages. English: business. French: romance. Here’s an example of how words, or the absence thereof, can affect the way we think:

    In French we love everything. There’s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It’s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn’t grasped the difference. Needless to say, I’ve scared away plenty of first dates!

    Learning another language made me realize the limitations of my native language and revealed concepts I didn’t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas.

    When I lived in Montréal, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient.

    I’m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job.

    When I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn’t consider accessibility. All this made editing views a laborious process.

    Grep. Replace all. Test. Ship it. Repeat.

    This wasn’t sustainable at Shopify’s scale, so the newly-formed front end team was given two missions:

    • Make the app responsive (AKA Let’s Make This Thing Responsive ASAP)
    • Make the view layer scalable and maintainable (AKA Let’s Build a Pattern Library… in Ruby)

    Let’s make this thing responsive ASAP

    The year was 2015. The Shopify admin wasn’t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries.

    It seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn’t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool.

    Writing the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive. But with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn’t performant at all. Components would jarringly jump around the page before settling in on first paint.

    It was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn’t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout.
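    In broad strokes, the swap looked something like this (the class names are made up, and the real components are more involved):

    /* Before: JavaScript measured each container and toggled size classes.
       After: the browser lays the cards out natively. */
    .card-list {
      display: flex;
      flex-wrap: wrap;
    }
    .card-list > .card {
      flex: 1 1 20rem;  /* grow and shrink around an ideal width */
    }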

    In other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly.

    Let’s build a pattern library… in Ruby

    In order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping.

    When I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buy-in from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn’t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan.

    We put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks.

    Since the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn’t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn’t going to cut it.

    I don’t know the end of this story, because we haven’t written it yet. We’ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven’t determined the winner yet. Only time (and data gathering) will tell.

    Ruby is great but it doesn’t speak the browser’s language efficiently. It was not the right language for the job.

    Learning a new spoken language has had an impact on how I write code. It has taught me that you don’t know what you don’t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you’re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.

    About the author

    Annie-Claude Côté is a Senior Developer who actively gives a shit about UX. She works at Shopify based out of Ottawa, Ontario, Canada. She’s serious about code quality, optimization and finding all the edge cases. All of them.

    More articles by Annie-Claude

    How to Make a Chrome Extension to Delight (or Troll) Your Friends

    posted: 09 Dec 2016

    Leslie Zacharkow presents the purrfect solution for anyone who’s ever dreamt of creating their own Chrome browser extension. So kick back, and while your chestnuts roast on an open fire, roast your friends and colleagues in an open tab.

    If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius.

    But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right?


    If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves.

    How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun.

    Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension.

Here’s a sneak peek at my final result featuring my partner, my cat, and me in cheerfully weird accessories. Your result will look however you want it to.

    Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful.

    What you’ll need

    • Adobe Illustrator (or a similar illustration program to export PNG)
    • Some images of your friends
    • A text editor

    Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it.

    Getting started

    1. Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome.
    2. Open index.html in Chrome. You should see a grey page that says “Noname”.
    3. Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template.

    Note: We’re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay.

    Step 1: Gather your friends

    The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family.

    Create your artwork

    For each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew:

    As you can see, my use of the word “family” extends to cats. There’s no judgement here.

    Notice that some of my photos don’t completely fill the artboard–that’s fine. The images will be clipped into ovals when they’re rendered in the application.

    Now, export your images by following these steps:

    1. Turn the Template layer off and export the images as PNGs.
    2. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces.
    3. Export at 72ppi to keep things running fast.
    4. Save your images into the images/ folder in your project.

    Add your images to config.js

    Open scripts/config.js. This is where you configure your extension.

Add key-value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image.

faces: {
  leslie: 'images/face_leslie.png',
  kyle: 'images/face_kyle.png',
  beep: 'images/face_beep.png'
}

The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that’s special about the faces object is that the person’s name will also be displayed when their face is chosen.
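
If you’re curious what that random selection looks like in code, the gist is picking one entry per group. The boilerplate’s internals aren’t reproduced in this article, so treat the snippet below (including the pickRandom name) as an illustrative sketch of the pattern rather than the app’s actual source:

    // Illustrative sketch: choose one entry at random from a config group
    function pickRandom(group) {
      if (Array.isArray(group)) {
        return group[Math.floor(Math.random() * group.length)];
      }
      var names = Object.keys(group);
      var name = names[Math.floor(Math.random() * names.length)];
      // For the faces object we want the name too, so it can be shown on the page
      return { name: name, src: group[name] };
    }

    // e.g. pickRandom(config.faces) might return
    // { name: 'leslie', src: 'images/face_leslie.png' }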

    Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this:

    Congrats, you’re off and running!

    Step 2: Add adjectives

    Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort.

    Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering…

    prefixes: [ 'Loving', 'Drunk', 'Chatty', 'Merry', 'Creepy', 'Introspective', 'Cheerful', 'Awkward', 'Unrelatable', 'Hungry', ... ]

    When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila!

    Step 3: Choose your color palette

    Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead.

    How to choose colors

To create a color palette, I start by going to Coolors, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like.

    Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later.

    Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors.

    Add your background colors to config.js

    Copy your hex codes into the bgColors array in config.js.

    bgColors: [ '#FFDD77', '#FF8E72', '#ED5E84', '#4CE0B3', '#9893DA', ... ]

    Now when you go back to Chrome and refresh the page, you’ll see your new palette!

    Step 4: Accessorize

    This is the fun part. We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends.

    Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person.

    Create a group of accessories

    To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead.

    Next, follow the same steps as you did when you exported the faces. Here they are again:

    1. Turn the Template layer off and export the images as PNGs.
    2. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats.
    3. Export at 72ppi to keep things running fast.
    4. Save your images into the images/ folder in your project.

    Add your accessories to config.js

    In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array:

customProps: {
  hats: [
    'images/hat_crown.png',
    'images/hat_santa.png',
    'images/hat_tophat.png',
    'images/hat_antlers.png'
  ]
}

    Refresh Chrome and behold, accessories!

    Create as many more accessories as you want

Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final Illustrator file looks like this:

    The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses:

customProps: {
  hair: [
    'images/hair_bowl.png',
    'images/hair_bob.png'
  ],
  hats: [
    'images/hat_crown.png',
    'images/hat_santa.png',
    'images/hat_tophat.png',
    'images/hat_antlers.png'
  ],
  glasses: [
    'images/glasses_aviators.png',
    'images/glasses_monacle.png'
  ]
}
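
The pre-built app handles the rendering for you, but if you want a feel for why the listing order matters, a layering routine along these lines appends one image per group in the order the groups appear, so later groups paint on top of earlier ones. This is only a sketch; renderProps and the class names are hypothetical, not the boilerplate’s real API:

    // Illustrative sketch: layer one randomly chosen image per accessory group.
    // Groups listed later in customProps end up later in the DOM, so they
    // render on top of earlier ones (assuming the layers are absolutely
    // positioned on top of the face, as in this kind of app).
    function renderProps(customProps, container) {
      Object.keys(customProps).forEach(function (groupName) {
        var options = customProps[groupName];
        var img = document.createElement('img');
        img.src = options[Math.floor(Math.random() * options.length)];
        img.className = 'prop prop--' + groupName;
        container.appendChild(img); // DOM order controls the stacking
      });
    }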

    And, there you have it! Randomly generated friends with random accessories.

    Feel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress…

    Step 5: Publish it

    It’s time to put this in your new tabs! You have two options:

    1. Publish it as a Chrome extension in the Chrome Web Store.
    2. Host it as a website and point to it with the New Tab Redirect extension.

    Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2.

    How to make a simple Chrome extension to replace the new tab page

    All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement:

{
  "manifest_version": 2,
  "name": "Your extension name",
  "version": "1.0",
  "chrome_url_overrides": {
    "newtab": "index.html"
  }
}

    To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that:

    1. Go to the Extensions page in Chrome by navigating to chrome://extensions/.
    2. Tick the checkbox in the upper-right corner labelled “Developer Mode”.
    3. Click “Load unpacked extension…” and select this project.
    4. If everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page.

    Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you.

    Share the love

    Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats.

    At the end of the day, let’s make something that makes us happy. Cheers!

    About the author

    Leslie Zacharkow is a designer/developer and the creator of Tabby Cat. She likes spending time outside and contemplating where to get the best soft pretzel. Follow her on Twitter at @lslez.

    More articles by Leslie

    Get the Balance Right: Responsive Display Text

    posted: 09 Dec 2016

    Richard Rutter shepherds a tea towel onto our collective heads and describes a technique for responsively scaling display text to maintain a consistent feel in both landscape and portrait screen orientations. That should put things back into proportion.

    Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers’ attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout.

    When setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader’s preferences. You can take the same approach with display text by choosing larger sizes from the same scale.

    However, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window.

    Introducing viewport units

With CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference pixels1:

• vw - viewport width
• vh - viewport height
• vmin - viewport height or width, whichever is smaller
• vmax - viewport height or width, whichever is larger

    In one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width:

h1 { font-size: 13vw; }

    So for a selection of widths, the rendered font size would be:

Viewport width (px)    Rendered font size at 13vw (px)
320                    42
768                    100
1024                   133
1280                   166
1920                   250

    A problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation.

    Landscape text is much bigger than portrait text when using vw units.

The proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression.

However, if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.

    h1 { font-size: 13vmin; }

    Landscape text is consistent with portrait text when using vmin units.

    Comparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:

Viewport (px)      Rendered at 13vw (px)    Rendered at 13vmin (px)
320 × 480          42                       42
414 × 736          54                       54
768 × 1024         100                      100
1024 × 768         133                      100
1280 × 720         166                      94
1366 × 768         178                      100
1440 × 900         187                      117
1680 × 1050        218                      137
1920 × 1080        250                      140
2560 × 1440        333                      187

    Hybrid font sizing

Using viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading:

h1 { font-size: 13vmin; }
h2 { font-size: 5vmin; }

    Using vmin alone for supporting text can scale it too quickly

    The balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens.

A solution to this is to use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example:

    h2 { font-size: calc(0.5rem + 2.5vmin); }

    For a 320 px wide screen, the font size will be 16 px, calculated as follows:

    (0.5 × 16) + (320 × 0.025) = 8 + 8 = 16px

    For a 768 px wide screen, the font size will be 27 px:

    (0.5 × 16) + (768 × 0.025) = 8 + 19 = 27px

    This results in a more balanced subheading that doesn’t take emphasis away from the main heading:

    To give you an idea of the effect of using a hybrid approach, here’s a side-by-side comparison of hybrid and viewport text sizing:


Viewport width (px)          320   480   768   960   1080   1280   1440
calc() hybrid method (px)    16    20    27    32    35     40     44
vmin only (px)               16    24    38    48    54     64     72

Over this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.
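
As a starting point for that experimentation, here are two variants of the subheading rule. The numbers are arbitrary examples rather than recommendations; the point is how shifting the balance between the two parts changes the behaviour:

    /* Weighted towards rem: a steadier size that grows gently with the viewport */
    h2 { font-size: calc(0.75rem + 1.25vmin); }

    /* Weighted towards vmin: behaves more like pure viewport scaling */
    h2 { font-size: calc(0.25rem + 3.75vmin); }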

    1. A reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays. ↩︎

2. For even more sophisticated uses of hybrid text sizing, see the work of Mike Riethmuller. ↩︎

    About the author

    Richard Rutter is a user experience consultant and director of Clearleft. In 2009 he cofounded the webfont service, Fontdeck. He runs an ongoing project called The Elements of Typographic Style Applied to the Web, where he extols the virtues of good web typography. Richard occasionally blogs at Clagnut, where he writes about design, accessibility and web standards issues, as well as his passion for music and mountain biking.

    More articles by Richard

    What the Heck Is Inclusive Design?

    posted: 06 Dec 2016

    Heydon Pickering questions whether accessibility is really the name of the game, and asks if perhaps inclusivity might be a more broadly acceptable term for the valuable design work we do. Would a cinnamon spiced latte by any other name smell as sweet? Someone has to call it.

    Naming things is hard. And I don’t just mean CSS class names and JSON properties. Finding the right term for what we do with the time we spend awake and out of bed turns out to be really hard too.

    I’ve variously gone by “front-end developer”, “user experience designer”, and “accessibility engineer”, all clumsy and incomplete terms for labeling what I do as an… erm… see, there’s the problem again.

    It’s tempting to give up entirely on trying to find the right words for things, but this risks summarily dispensing with thousands of years spent trying to qualify the world around us. So here we are again.

    Recently, I’ve been using the term “inclusive design” and calling myself an “inclusive designer” a lot. I’m not sure where I first heard it or who came up with it, but the terminology feels like a good fit for the kind of stuff I care to do when I’m not at a pub or asleep.

    This article is about what I think “inclusive design” means and why I think you might like it as an idea.

    Isn’t ‘inclusive design’ just ‘accessibility’ by another name?

    No, I don’t think so. But that’s not to say the two concepts aren’t related. Note the ‘design’ part in ‘inclusive design’ — that’s not just there by accident. Inclusive design describes a design activity; a way of designing things.

    This sets it apart from accessibility — or at least our expectations of what ‘accessibility’ entails. Despite every single accessibility expert I know (and I know a lot) recommending that accessibility should be integrated into design process, it is rarely ever done. Instead, it is relegated to an afterthought, limiting its effect.

    The term ‘accessibility’ therefore lacks the power to connote design process. It’s not that we haven’t tried to salvage the term, but it’s beginning to look like a lost cause. So maybe let’s use a new term, because new things take new names. People get that.

    The ‘access’ part of accessibility is also problematic. Before we get ahead of ourselves, I don’t mean access is a problem — access is good, and the more accessible something is the better. I mean it’s not enough by itself.

    Imagine a website filled with poorly written and lackadaisically organized information, including a bunch of convoluted and confusing functionality. To make this site accessible is to ensure no barriers prevent people from accessing the content.

    But that doesn’t make the content any better. It just means more people get to suffer it.


    Access is certainly a prerequisite of inclusion, but accessibility compliance doesn’t get you all the way there. It’s possible to check all the boxes but still be left with an unusable interface. And unusable interfaces are necessarily inaccessible ones. Sure, you can take an unusable interface and make it accessibility compliant, but that only placates stakeholders’ lawyers, not users. Users get little value from it.

    So where have we got to? Access is important, but inclusion is bigger than access. Inclusive design means making something valuable, not just accessible, to as many people as we can.

    So inclusive design is kind of accessibility + UX?

    Closer, but there are some problems with this definition.

    UX is, you will have already noted, a broad term encompassing activities ranging from conducting research studies to optimizing the perceived affordance of interface elements. But overall, what I take from UX is that it’s the pursuit of making interfaces understandable.

    As it happens, WCAG 2.0 already contains an ‘Understandable’ principle covering provisions such as readability, predictability and feedback. So you might say accessibility — at least as described by WCAG — already covers UX.

    Unfortunately, the criteria are limited, plus some really important stuff (like readability) is relegated to the AAA level; essentially “bonus points if you get the time (you won’t).”

    So better to let UX folks take care of this kind of thing. It’s what they do. Except, therein lies a danger. UX professionals don’t tend to be well versed in accessibility, so their ‘solutions’ don’t tend to work for that many people. My friend Billy Gregory coined the term SUX, or “Some UX”: if it doesn’t work for different users, it’s only doing part of the job it should be.

    SUX won’t do, but it’s not just a disability issue. All sorts of user circumstances go unchecked when you’re shooting straight for what people like, and bypassing what people need: device type, device settings, network quality, location, native language, and available time to name just a few.

    In short, inclusive design means designing things for people who aren’t you, in your situation. In my experience, mainstream UX isn’t very good at that. By bolting accessibility onto mainstream UX we labor under the misapprehension that most people have a ‘normal’ experience, a few people are exceptions, and that all of the exceptions pertain to disability directly.

    So inclusive design isn’t really about disability?

    It is about disability, but not in the same way as accessibility. Accessibility (as it is typically understood, anyway) aims to make sure things work for people with clinically recognized disabilities. Inclusive design aims to make sure things work for people, not forgetting those with clinically recognized disabilities. A subtle, but not so subtle, difference.

    Let’s go back to discussing readability, because that’s a good example. Now: everyone benefits from readable text; text with concise sentences and widely-understood words. It certainly helps people with cognitive impairments, but it doesn’t hinder folks who have less trouble with comprehension. In fact, they’ll more than likely be thankful for the time saved and the clarity. Readable text covers the whole gamut. It’s — you’ve got it — inclusive.

    Legibility is another one. A clear, well-balanced typeface makes the reading experience less uncomfortable and frustrating for all concerned, including those who have various forms of visual dyslexia. Again, everyone’s happy — so why even contemplate a squiggly, sketchy typeface? Leave well alone.

    Contrast too. No one benefits from low contrast; everyone benefits from high contrast. Simple. There’s no more work involved, it just entails better decision making. And that’s what design is really: decision making.

    How about zoom support? If you let your users pinch zoom on their phones they can compensate for poor eyesight, but they can also increase the touch area of controls, inspect detail in images, and compose better screen shots. Unobtrusively supporting options like zoom makes interfaces much more inclusive at very little cost.
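
One small, concrete way to support zoom (a general sketch, not something prescribed by this article) is simply not to switch it off in the viewport meta tag:

    <!-- Inclusive: pinch zoom stays available to anyone who needs it -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- Exclusive: blocking zoom shuts out the people who rely on it -->
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">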

    And when it comes to the underlying HTML code, you’re in luck: it has already been designed, from the outset, to be inclusive. HTML is a toolkit for inclusion. Using the right elements for the job doesn’t just mean the few who use screen readers benefit, but keyboard accessibility comes out-of-the-box, you can defer to browser behavior rather than writing additional scripts, the code is easier to read and maintain, and editors can create content that is effortlessly presentable.
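
A tiny example of what “the right elements” buys you (a sketch, with save() standing in as a placeholder handler): a native button is focusable, keyboard operable and announced as a button for free, whereas a scripted div needs all of that bolted on:

    <!-- Inclusive: focus, Enter/Space activation and a button role come built in -->
    <button type="button" onclick="save()">Save</button>

    <!-- Exclusive: needs tabindex, a role and key handling added by hand to catch up -->
    <div onclick="save()">Save</div>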

    Wait… are you talking about universal design?

    Hmmm. Yes, I guess some folks might think of “universal design” and “inclusive design” as synonymous. I just really don’t like the term universal in this context.

    The thing is, it gives the impression that you should be designing for absolutely everyone in the universe. Though few would adopt a literal interpretation of “universal” in this context, there are enough developers who would deliberately misconstrue the term and decry universal design as an impossible task. I’ve actually had people push back by saying, “what, so I’ve got to make it work for people who are allergic to computers? What about people in comas?”

    For everyone’s sake, I think the term ‘inclusive’ is less misleading. Of course you can’t make things that everybody can use — it’s okay, that’s not the aim. But with everything that’s possible with web technologies, there’s really no need to exclude people in the vast numbers that we usually are.

    Accessibility can never be perfect, but by thinking inclusively from planning, through prototyping to production, you can cast a much wider net. That means more and happier users at very little if any more effort.

    If you like, inclusive design is the means and accessibility is the end — it’s just that you get a lot more than just accessibility along the way.


    That’s inclusive design. Or at least, that’s a definition for a thing I think is a good idea which I identify as inclusive design. I’ll leave you with a few tips.

    Involve code early

    Web interfaces are made of code. If you’re not working with code, you’re not working on the interface. That’s not to say there’s anything wrong with sketching or paper prototyping — in fact, I recommend paper prototyping in my book on inclusive design. Just work with code as soon as you can, and think about code even before that. Maintain a pattern library of coded solutions and omit any solutions that don’t adhere to basic accessibility guidelines.

    Respect conventions

    Your content should be fresh, inventive, radical. Your interface shouldn’t. Adopt accepted conventions in the appearance, placement and coding of interface elements. Users aren’t there to experience interface design; they’re there to use an interface. In other words: stop showing off (unless, of course, the brief is to experiment with new paradigms in interface design, for an audience of interface design researchers).

    Don’t be exact

“Perfection is the enemy of good”. But the pursuit of perfection isn’t just to be avoided because nothing ever gets finished. Exacting design also makes things inflexible and brittle. If your design depends on elements retaining precise coordinates, they’ll break easily when your users start adjusting font settings or zooming. Choose not to position elements exactly or give them fixed, “magic number” dimensions. Make fewer decisions in the interface so your users can make more decisions for it.
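
For instance (a contrived sketch, not taken from the article), compare a pixel-exact component with one that flexes with the reader’s font settings:

    /* Brittle: fixed "magic numbers" that break when text is resized or zoomed */
    .card {
      width: 300px;
      height: 120px;
      font-size: 14px;
    }

    /* Flexible: sized relative to the text, so it adapts to the user's settings */
    .card {
      max-width: 20em;
      min-height: 7.5em;
      font-size: 1rem;
      padding: 1em;
    }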

    Enforce simplicity

    The virtue of simplicity is difficult to overestimate. The simpler an interface is, the easier it is to use for all kinds of users. Simpler interfaces require less code to make too, so there’s an obvious performance advantage. There are many design decisions that require user research, but keeping things simple is always the right thing to do. Not simplified or simple-seeming or simplistic, but simple.

    Do a little and do it well, for as many people as you can.

    About the author

Heydon Pickering is the accessibility editor for Smashing Magazine and works with The Paciello Group as an accessibility and UX consultant. He invented the lobotomized owl selector, his grid system Fukol is just 93 bytes in size, and his book Inclusive Design Patterns is available now.

    More articles by Heydon

    Public Speaking with a Buddy

    posted: 05 Dec 2016

Lara Hogan stands up and goes it alone to expound on the benefits of presenting on stage with a buddy. Preparing and delivering a presentation to a room full of people can be a daunting task, and sometimes two heads are better than one. Not even Rudolph could pull that sleigh alone.

    My book Demystifying Public Speaking focuses on the variety of fears we each have about giving a talk. From presenting to a client, to leading a team standup, to standing on a conference stage, there are lots of things we can do to prepare ourselves for the spotlight and reduce those fears.

    Though it didn’t make it into the final draft, I wanted to highlight how helpful it can be to share that public speaking spotlight with another person, or a few more people. If you have fears about not knowing the answer to a question, fumbling your words, or making a mistake in the spotlight, then buddying up may be for you!

    To some, adding more people to a presentation sounds like a recipe for on-stage disaster. To others, having a friendly face nearby—a partner who can step in if you fumble—is incredibly reassuring. As design director Yesenia Perez-Cruz writes, “While public speaking is a deeply personal activity, you don’t have to go it alone. Nothing has helped my speaking career more than turning it into a group effort.”

    Co-presenting can level up a talk in two ways: an additional brain and presentation skill set can improve the content of the talk itself, and you may feel safer with the on-stage safety net of your buddy.

    For example, when I started giving lengthy workshops about building mobile device labs with my co-worker Destiny Montague, we brought different experience to the table. I was able to talk about the user experience of our lab, and the importance of testing across different screen sizes. Destiny spoke about the hardware aspects of the lab, like power consumption and networking. Our audience benefitted from the spectrum of insight we included in the talk.

Moreover, Destiny and I kept each other energized and engaging while teaching our audience, having way more fun onstage. Partnering up alleviated the risk (and fear!) of fumbling; when one person makes a mistake, the other person is right there to help. Buddy presentations can be helpful if you fear saying “I don’t know” to a question, as there are other people around you who will be able to help answer it from the stage. Partnering with someone whom I trust and respect, and whose work and knowledge augment my own, made the experience—and the presentation!—significantly better.

    Co-presenting won’t work if you don’t trust the person you’re onstage with, or if you don’t have good chemistry working together. It might also not work if there’s an imbalance of responsibilities, both in preparing the talk and giving it. Read on for how to make partner talks work to your advantage!


    If you want to explore co-presenting, make sure that your presentation partner is trustworthy and can carry their weight; it can be stressful if you find yourself trying to meet deadlines and prepare well and your partner isn’t being helpful. We’re all about reducing the fears and stress levels surrounding being in that spotlight onstage; make sure that the person you’re relying on isn’t making the process harder.

    Before you start working together, sketch out the breakdown of work and timeline you’re each committing to. Have a conversation about your preferred work style so you each have a concrete understanding of the best ways to communicate (in what medium, and how often) and how to check in on each other’s progress without micromanaging or worrying about radio silence. Ask your buddy how they prefer to receive feedback, and give them your own feedback preferences, so neither of you are surprised or offended when someone’s work style or deliverable needs to be tweaked.

    This should be a partnership in which you both feel supported; it’s healthy to set all these expectations up front, and create a space in which you can each tweak things as the work progresses.

    Talk flow and responsibilities

    There are a few different ways to organize the structure of your talk with multiple presenters. Start by thinking about the breakdown of the talk content—are there discrete parts you and the other presenters can own or deliver? Or does it feel more appropriate to deliver the entirety of the content together?

    If you’re finding that you can break down the content into discrete chunks, figure out who should own which pieces, and what ownership means. Will you develop the content together but have only one person present the information? Or will one person research and prepare each content section in addition to delivering it solo onstage?

    Rehearse how handoffs will go between sections so it feels natural, rather than stilted. I like breaking a presentation into “chapters” when I’m passionate about particular aspects of a topic and can speak on those, but know that there are other aspects to be shared and there’s someone else who can handle (and enjoy!) talking about them. When Destiny and I rehearsed our “chapter” handoffs, we developed little jingles that we’d both sing together onstage; it indicated to the audience that it was a planned transition in the content, and tied our independent work together into a partnership.


    Alternatively, you can give the presentation in a way that’s close to having a rehearsed conversation, rather than independently presenting discrete parts of the talk. In this case, you’ll both be sharing the spotlight at the same time, throughout the duration of the talk. Preparation is key, here, to make sure that you each understand what needs to be communicated, and you have a sense of who will be taking responsibility for communicating those different pieces of information. A poorly-prepared talk like this will look like the co-presenters are talking over each other, or hesitating awkwardly to give the other person more room to speak; the audience will feel how uncomfortable this is, and will probably be distracted from the talk content. Practice the talk the whole way through multiple times so you know what each person is planning on covering and how you want to interact with each other while you’re both holding microphones; also figure out how you’ll be standing in relation to each other. More on that next!

    Sharing the stage

    If you choose to give a talk with a partner, determine ahead of time how you’ll stand (or sit). For example, if you each take “chapters” or major sections of the presentation, ensure that it’s clear who the audience should focus their attention on. You could sit in a chair off to the side (or stand). I recommend placing yourself far enough away that you’re not distracting to the audience; you don’t want them watching you while your partner is speaking. If the audience can still see you, but their focus should be on your buddy, be sure to not look distracted; keep your eyes on your buddy, and don’t just open your laptop and ignore what’s happening! Feel free to smile, laugh, or react how the audience should be reacting as your partner is speaking.

    If you’re both sharing the spotlight at the same time and having a rehearsed conversation, make sure that your body language engages the audience and you’re not just speaking to each other, ignoring the folks watching. Watch this talk with Guy Podjarny and Assaf Hefetz who have partnered up to talk about security; they have clearly identified roles onstage, and remain engaged with the audience.

    Consider whether or not you will share a microphone, or if you will both be mic’d. (Be sure that the event organizer, or the A/V team, has a heads-up well in advance to ensure they have the equipment handy!) Also talk through how you’d like to handle Q&A time during or after the talk, especially if you have clear “chapters” where Q&A might happen naturally during a handoff. The more clarity you and your partner have about who is responsible for which pieces of information sharing, the more you can feel and appear prepared.

    Co-presenting does take a lot of preparation and requires a ton of communication between you and your partner. But the rewards can be awesome: double the brains onstage to help answer questions and communicate information, and a friendly face to help comfort you if you feel nervous.

    About the author

    Lara Hogan is an Engineering Director at Etsy and the author of Demystifying Public Speaking, Designing for Performance, and Building a device lab. She champions performance as a part of the overall user experience, striking a balance between aesthetics and speed, and building performance into company culture. She also believes it’s important to celebrate career achievements with donuts.

    More articles by Lara

    We Need to Talk About Technical Debt

    posted: 04 Dec 2016

    Harry Roberts addresses the hastily-coded elephant in the room and helps us to identify what is technical debt, and what is just bad code. Hark the herald Harry sings, time for some refactoring.

    In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt.

    One thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this:

    Not all technical debt is bad code, and not all bad code is technical debt.

    Sometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work.

It is an oft-misunderstood phrase, and when mistaken for meaning ‘anything legacy, old, hacky, nasty, or bad’, technical debt is swept under the carpet along with all of the other parts of the codebase we’d rather not talk about, and therein lies the problem.

    We need to talk about technical debt.

    What We Talk About When We Talk About Technical Debt

    The thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn’t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on.

    An Example

    You’re a front-end developer working on a SaaS product, and your sales team is courting a large customer – a customer so large that you can’t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line… the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn’t currently a nice, clean way to incorporate a theme into the codebase.

    You and the business make the decision that you will hack a theme into the product in two days. It’s going to be messy, it’s going to be ugly, but you can’t afford to lose a huge customer just because your CSS isn’t quite right, right now. This is technical debt.
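
To put some invented code to that scenario (a sketch only; the class names are made up, and custom properties are just one possible way to “do it properly”): the two-day hack might be a bolt-on stylesheet of per-customer overrides, while the repayment folds theming into the architecture as a first-class citizen:

    /* The two-day hack: a bolt-on stylesheet of one-off overrides for one customer */
    .theme-megacorp .btn    { background: #005eb8; }
    .theme-megacorp .navbar { background: #002d5c; }
    /* ...and dozens more overrides scattered through the codebase... */

    /* The repayment: theming as a first-class citizen, e.g. via custom properties */
    :root {
      --color-brand: #e21b22;
      --color-brand-dark: #8c1014;
    }
    .theme-megacorp {
      --color-brand: #005eb8;
      --color-brand-dark: #002d5c;
    }
    .btn    { background: var(--color-brand); }
    .navbar { background: var(--color-brand-dark); }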

    You deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make:

    1. Do we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework?
    2. Do we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted?

    Option 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.

    With Option 1, you’re acknowledging that you did what you could given the constraints, but, free of constraints, you’d have done something different. Now, you are choosing to implement that something different.

    With Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that…

    1. your SaaS product now offers theming to one of your customers;
    2. another potential customer might also demand the ability to theme their instance of your product;
    3. you can’t refuse them that request, nor can you quickly fulfil it;
    4. you hack in another theme, thus adding to the balance of your existing debt;
    5. and so on (plus interest) for every subsequent theme you need to implement.

    Here you have increased entropy whilst making little to no attempt to address what you already knew to be problems.

    Your second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you’re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You’ve made a loss; your strategic debt ultimately became a loss-making exercise.

    The important thing to note here is that you didn’t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment.

    Good Debt and Bad Debt

    Technical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn’t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt.

    Good debt might be…

    • a mortgage;
    • a student loan, or;
    • a business loan.

    These are types of debt that will secure you the means of repaying them. These are well considered debts whose very reason for being will allow you to make the money to pay them off—they have real, tangible benefit.

    A business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back.

    These kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back.

    Conversely, bad debt might be…

    • borrowing $1,000 from a loan shark so you can go to Vegas, or;
    • taking out a payday loan in order to buy a new television.

    Both of these kinds of debt will leave you paying for things that didn’t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long term financial responsibility: i.e., was it worth it?

    A good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment.

    The earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don’t). A calculated decision to do something ‘wrong’ in the short term with the promise of better payoffs later on.

    Bad Technical Debt

    The majority of my work is with front-end development teams—CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply:

!important

    All front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why?

    It’s not necessarily because we’re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms.
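
To make that trade-off concrete (a contrived sketch, not code from a real project):

    /* An over-specific legacy rule that's awkward to change right now */
    #sidebar .widget ul li a { color: #333; }

    /* The debt: seconds to type, but it papers over the specificity problem */
    .link--promo { color: #e21b22 !important; }

    /* The repayment: flatten the legacy selector so a simple class wins on its own */
    .widget-link { color: #333; }
    .link--promo { color: #e21b22; }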

    This is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense.

    However, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left.

    So many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write.

    Bad Code

    Now we’ve got a good idea of what constitutes technical debt, let’s take a look at what constitutes bad code. Something I hear time and time again in my client work goes a little like this:

    We’ve amassed a lot of technical debt and we’d like to get a strategy in place to begin dealing with it.

    Whilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they’re not looking at technical debt at all—sometimes they’re just looking at bad code, plain and simple.

    Where technical debt is knowing that there’s a better way, but the quicker way makes more sense right now, bad code is not caring if there’s a better way at all.

    Again, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that…

    • the developers knew they were implementing a sub-par solution, but…
    • the developers also knew that a better solution was out there, which…
    • implies that it can be tidied up relatively simply.

    Developers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully—and usually—bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn’t happen for a good enough reason, and is therefore much harder to justify.

Technical debt often represents an exercise in judgement, whereas bad code often represents a gap in skills.


    Take time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality.

    Understanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team.

    • Sometimes it’s okay to cut corners if there is a tangible gain to be had in the immediate term.
    • Technical debt is okay provided it is a sensible debt and you have intentions to pay it off.
    • Technical debt is not necessarily synonymous with bad code, and bad code isn’t necessarily technical debt. Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future.
    • Technical debt is not inherently bad—failure to make repayments is. Periodically, it is justifiable—encouraged, even—to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge.
    • Bad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.

    About the author

    With a client list including Google, Unilever, and the United Nations, Harry is an award-winning Consultant Front-end Architect who helps organisations and teams across the globe to plan, build, and maintain product-scale UIs.

A Google Developer Expert, he writes on the subjects of CSS architecture, performance, and scalability; develops and maintains inuitcss; authored CSS Guidelines; and Tweets at @csswizardry.

    More articles by Harry

Flexible Project Management in Inflexible Environments

    posted: 03 Dec 2016

    Gillian Sibthorpe rearranges the doors on our advent calendar by looking at how to manage those tougher projects. Like a game of limbo at the office holiday party, it pays to remain flexible in the face of adversity.

    Handling unforeseen circumstances is an inevitable part of any project. It’s also often the most uncomfortable, and there is no amount of skill or planning that will fully eradicate the need to adapt to change. The ability to be flexible, responsive, and unafraid of facing not only problems, but also potentially positive scope changes and new ideas, isn’t an easy one to master. I am by no means saying that I have, but what I have learned is that there is often the temptation to shut out anything that might derail your plan, even sometimes at the cost of the quality you’re committed to.

The reality is that as someone leading a project you know there will be challenges, but, in general, it’s a hassle to try to keep the landscape open. Problems are bridges we should cross when we come to them, but intentional changes to the plan, and adapting for the sake of improving your first idea, are harder. There are tight schedules, resource is planned miles ahead, and you’re already juggling twenty other things. If you’re passionate about the quality of work you deliver and are working somewhere that considers itself expert within the field of digital, then having an attitude of flexibility is extremely important. It’s important when you’re overcoming a challenge or problem, but it’s also important for allowing ideas to evolve and be refined as much as they can be throughout the course of a project.

    Where theory falls short

    The premise of any Agile methodology, Scrum for example, is based around being able to work efficiently, react quickly and deliver relevant chunks of a product in manageable increments. It’s often hailed as king of flexible management and it can work really well, especially for in-house software products developed over a long or even an indefinite period of time. It holds off defining scope too far ahead and lets teams focus on smaller amounts of work, and allows them to regularly reprioritise. Unfortunately though, not all environments lend themselves as easily to a fully Agile setup. Even the ones that do may be restrained from putting it fully into practice for an array of other internal reasons.

Delivering digital services to clients—within an agency setting or as a freelancer—often demands a more rigid structure. You need clear sign-off points, and there’s a lot less flexibility in defining features or working within budgets and timeframes. To start with, for a project to warrant a fully Agile team working on it, and especially for agencies, you need clients big enough and rich enough to justify the resource. You also need a lot of client trust to propose defining features and scope as you go. Although this is achievable—and there are agencies that operate an agile setup—it takes a long journey to reach that scale in the full sense of the word. Building a reputation that commands unconditional trust, and reaching the point where your projects are consistently of a certain size, often requires the backing of a long history of success and excellence.

    So there is a lot of room left for understanding how we can best strive to still deliver excellent projects within more constrained structures. We know that rigid waterfall planning, more often than not, falls over as soon as a project gets anything past a basic brochure site. There are many critiques of the system, but one of the main ones tends to be that nobody considers each other’s work properly, which can result in very expensive and inefficient development.

    Equally, for reasons we’ve already touched upon, running fully agile teams often isn’t the right answer. So many companies, individuals, and organisations look for a middle-ground that balances being flexible and adaptive, but also provides enough upfront commitment to agree budgets, get client/stakeholder sign off, and effectively coordinate internal resource across multiple parallel projects.

    Although I don’t have a perfect formula—and can very much assert there is no one perfect way of managing a project because every project is different to the next—I’ve identified a few different ways you can approach flexibility that have really helped me in running projects more smoothly within more realistic constraints.

    Planned Flexibility

    Drawing on some of the traditional methodologies such as PRINCE2, a good starting point for aspiring to be flexible is by planning for it from the start.

    Planning flexibility comes in a few forms. For one, you can regularly identify and log potential risks as a generally good, on-going habit over the course of the project. This essentially just involves scanning the horizon for potential blips on a regular basis (for example weekly) by consulting with your team and documenting it somewhere. It means you have a checkpoint when you sit down and make sure you’re minimising what will or may catch you by surprise. A good time to do this is in a weekly catch up meeting. It’s not going to fix all your problems, but it will make sure you have a head start on the ones you can see coming.

On the subject of team meetings, set up recurring project events: a weekly call, a weekly team meeting and, depending on the size of the project, a stand-up as often as possible. Keeping everyone involved and bought in to a project is going to help you infinitely when you need to spot a problem or manage changes to the plan. It will be the difference between your designer spotting an issue and making a mental note to ‘tell you later’, and them actually coming over to tell you directly and immediately. Despite the overhead of meetings, and of looping people into stages that they aren’t directly responsible for, the business benefit is clear: the chances of success are drastically increased. Planning these in, and being aware of how important your team is, will help you be flexible.

Building contingency (formally known as slack) into your project plan from the word go is another well-known and essential way of planning to be flexible. Your project plan will change a lot over the course of a project, but there are still the days that you estimate a job will take, and the days you should actually plan in. Most sensible management teams understand that budgets need to be agreed with this slack in mind or you will not be able to deliver a quality service. I believe that commercial awareness is one of the most valuable skills a project manager can have, but penny pinching will ruin client and team relationships, destroy buy-in and creativity, and often end you up with a much more expensive, hacky, and resented product.

It’s not a justification to let budgets spiral out of control, but a way of thinking about the bigger picture and wider plan of the company itself. It’s unlikely you want high staff turnover because everyone fell out while you were screaming about money at them and they didn’t feel like they could do a good job. It’s also unlikely that you will be able to deliver quality products, which will win you a strong reputation and subsequently bigger and better projects. Evaluating risk factors and building in the right amount of slack from the start will give you more wriggle room when you need to adapt and react. On the flip side, also keeping an overview of the wider workload (that you’re not necessarily responsible for), and knowing who to talk to if resource is becoming free or needs filling, is another handy way of being able to react quickly and ensuring your management system is respected. You want pockets of backup time planned in, but you also want everyone being as productive as they can most of the time. Never run at 100% capacity: as soon as something does need to change, you’re left with nowhere to move.


    Having a client or stakeholder that trusts you is a really powerful aid in any regard, but especially so when you need to communicate an issue or new suggestion. Positioning yourself and your team as experts and taking the time to delve into the wider picture—and the goals surrounding your client’s reasons to commission the project in the first place—will make you more valuable to them. Clients and stakeholders will always be different, and sometimes you will get people who are just plain difficult, but more often than not people will listen if you’re willing to talk and explain things.

    As I’m sure all of us have realised at one point or another, a lot of people think they know what they want, and it’s usually the wrong thing. Managing key stakeholders in your project is arguably your biggest challenge; if they are on your side and feel like the team is genuinely working to give them something of quality and value, they will make your job easier. It’s often down to you to educate them, and to help them recognise and understand the work involved and you and your team’s reasoning behind your decisions.

    Being overly submissive or overly secretive will foster a dynamic in which they feel expected to steer the project. In this situation they may not respect the team’s suggestions or may come up with some unreasonable and counterproductive ideas that are likely to hinder progress and lower morale. Getting the stakeholder on board and making them feel a part of the wider picture will make things easier. Pushing back and challenging ideas or working hard to justify something they don’t quite understand will often work in your favour and protects your team. On quite a basic level it also shows you care and are invested; on another, it shows you feel confident in your expertise within your field and that is ultimately the reason they hired you.

    Taking the time to think about and be aware of this relationship will make it easier to be flexible and handle new ideas or suggestions that pop up as the project goes along. Change doesn’t need to be ‘scope creep’ if it’s raised in a practical, value-orientated, and level-headed discussion. There is usually a way forward for new ideas, as long as they’re valuable and support the wider goals. Maybe the deadline gets pushed back, maybe you get more budget, maybe the client is happy to forgo something else. As long as there’s value and reason, it shows integrity to the project and respect for its success. You can’t expect this to go smoothly without having invested in the client relationship, so it’s a large part of paving the way to handling change well.

    Reactive Flexibility

    Finally, if you’ve been doing this for a while, you’ll know by now that you can’t anticipate everything. Sometimes you will have to react and change the plan under circumstances that aren’t easy. When an unexpected problem first rears its head—a client’s casual afterthought that’s threatening the scope of the project, an internal resource conflict, a junior member of staff that’s not grasping the ropes quite as quickly as you’d hoped—you have to react quickly.

    In his book, ‘Pitch Anything’, Oren Klaff talks about people’s first reactions being processed by their ‘crocodile brain’ before they’ve had a chance to refine and digest the information more intelligibly. As project managers, product owners, or scrum masters, it’s natural for our immediate reactions to an unexpected problem to cause a pang of stress. But after that initial jolt you need to turn to practical solutions and start racking your brain for different ways forward. It’s here you need to remember to not let your imagination get the better of you, especially if you’ve been putting in the legwork with your team and your client. There is always a way forward and moments like this can be a good opportunity to develop your negotiation and diplomacy skills. Don’t let your immediate reaction be shutting the problem down; instead, take a second to think about it before you decide on the best direction. In a stressful situation, your first idea probably won’t be your best one.

    From an internal point of view, it’s very important that whatever went wrong doesn’t turn into a finger pointing exercise and that you don’t lose your cool. Getting caught up in a blame game or a witch hunt is never productive. Relationship cultivating can sometimes be the pillar that gets you through a stressful blip. My biggest tip for staying flexible when you’re reacting to a problem—apart from, obviously, thinking of ways forward—is to communicate. Don’t go quiet until you feel like you have a plan; you’ll often need to put everyone else at ease before you can move things forward. Problem solving is part of the job and will need to happen in even the most flexible of product delivery systems.

    In conclusion, being flexible is never simple but there are things you can do to make your life easier. Owning a position of expertise, putting together a team that’s involved in each other’s work and cultivating a client/stakeholder relationship that’s as transparent and respectful as possible will get you a long way. In times of crisis, believe in your skills and be open to adapting over getting frustrated.

    About the author

    Gill is an Anglo-Bulgarian Digital Project Manager based in Leeds, UK. With a background in International Relations and languages, she is now a Certified ScrumMaster® coordinating multiple teams to deliver projects at leading full-service agency Epiphany Solutions.

    More articles by Gillian

    A Favor for Your Future Self

    posted: 02 Dec 2016

    Alicia Sedlock embodies the Ghost of Code Reviews Yet-to-Come with a call to start testing. Do you know your unit from your integration, your acceptance from your visual regression? And will you pass the ultimate Christmas test; are you naughty or nice?

    We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens.

    We comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later.

    I want to talk about a situation that Past Alicia and Team couldn’t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn’t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it’s the ultimate “I really hope this doesn’t break something else” situation. It was a stressful and tedious effort of triple checking that the things we were removing weren’t dependencies elsewhere. To be honest, we wouldn’t have been able to do this with any amount of success or confidence without our test suite.

    Writing tests for code is one of those things that developers really, really don’t want to do. It’s one of the easiest things to cut in the development process, and there’s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can’t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren’t in a tough spot later on because we only thought of the best case scenarios.


    Regardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you’re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests.

    Unit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.

    /*
     * This example uses a test framework called Jasmine
     */
    describe("Calculator Operations", function () {
      it("Should add two numbers", function () {
        // Say we have a calculator
        Calculator.init();

        // We can run the function that does our addition calculation...
        var result = Calculator.addNumbers(7,3);

        // ...and ensure we're getting the right output
        expect(result).toBe(10);
      });
    });

    Even though these teeny bits work in isolation, we should ensure that connecting the large pieces work, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let’s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn’t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).

    it("Should remember the last calculation", function () {
      // Run an operation
      Calculator.addNumbers(7,10);

      // Expect something else to have happened as a result
      expect(Calculator.updateCurrentValue).toHaveBeenCalled();
      expect(Calculator.currentValue).toBe(17);
    });

    Unit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you’ll be able to catch problems far sooner than you would without a test suite, and hopefully never push those failures to your production environment.


    Regardless of how you’re building something, it most definitely has some kind of interface. Whether you’re using a very barebones structure, or you’re leveraging a whole design system, these things can be tested as well.

    Acceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable.

    For example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.

    /*
     * This example uses Jasmine along with an add-on called jasmine-integration
     */
    describe("Acceptance tests", function () {
      // Go to our signup page
      var page = visit("/signup");

      // Fill our signup form with invalid information and submit it
      page.fill_in("input[name='email']", "Not An Email");
      page.fill_in("input[name='name']", "Alicia");"button[type=submit]");

      // Check that we get an expected error message
      it("Shouldn't allow signup with invalid information", function () {
        expect(page.find("#signupError").hasClass("hidden")).toBeFalsy();
      });

      // Now, fill our signup form with valid information and submit it
      page.fill_in("input[name='email']", "");
      page.fill_in("input[name='name']", "Gerry");"button[type=submit]");

      // Check that we get an expected success message and the error message is hidden
      it("Should allow signup with valid information", function () {
        expect(page.find("#signupError").hasClass("hidden")).toBeTruthy();
        expect(page.find("#thankYouMessage").hasClass("hidden")).toBeFalsy();
      });
    });

    In terms of visual design, we’re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga.

    Tools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. The code would look something like this:

    /*
     * This example uses PhantomCSS
     */
    casper.start("/home").then(function(){

      // Initial state of form
      phantomcss.screenshot("#signUpForm", "sign up form");

      // Hit the sign up button (should trigger error)"button#signUp");

      // Take a screenshot of the UI component
      phantomcss.screenshot("#signUpForm", "sign up form error");

      // Fill in form by name attributes & submit
      casper.fill("#signUpForm", {
        name: "Alicia Sedlock",
        email: ""
      }, true);

      // Take a second screenshot of success state
      phantomcss.screenshot("#signUpForm", "sign up form success");
    });

    You run this code before starting any development, to create your baseline set of screen captures. After you’ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed the margins on your form elements – the image diff would highlight exactly which parts of the form had shifted.

    This is a great tool for ensuring not just your site retains its expected styling, but it’s also great for ensuring nothing accidentally changes in the living style guide or modular components you may have developed. It’s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored.
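    Once both sets of screenshots exist, the comparison itself is kicked off from the same CasperJS run. A minimal sketch of that step, based on PhantomCSS’s documented compareAll API and assuming the same casper and phantomcss setup as above, might look like this:

    casper.then(function () {
      // Compare every screenshot taken in this run against the baseline set
      phantomcss.compareAll();
    });

    casper.run(function () {
      // Exit with a non-zero status if any visual differences were found
      phantom.exit(phantomcss.getExitStatus());
    });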


    The shape and size of what you’re testing for your site or app will vary. You may not need lots of unit or integration tests if you don’t write a lot of JavaScript. You may not need visual regression testing for a one page site. It’s important to assess your codebase to see which tests would provide the most benefit for you and your team.

    Writing tests isn’t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that’s broken breaks trust with our users, and it’s our responsibility as developers to make sure that trust isn’t broken. Testing shouldn’t be considered a “nice to have” - it should be an integral piece of our workflow and our day-to-day job.

    About the author

    Alicia Sedlock is a front-end web developer who encourages and teaches developers to build responsibly. She thought she’d be able to make a living building MySpace and Livejournal themes, but has since progressed to helping teams build out extensible front-end systems, and is currently a UI Architect at ThirdChannel. When not building things, Alicia is an avid gamer, and tweets a lot of pictures of her hedgehog, Mabel.

    More articles by Alicia

    Creating a Weekly Research Cadence

    posted: 01 Dec 2016

    Wren Lanier sets aside time to explore the benefits of a regular schedule for user research. Santa’s elves quickly discovered the benefits of working to a fixed schedule, which is of course why we don’t get presents at Easter.

    Working on a product team, it’s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development.

    As my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers—always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development.

    I wanted to find a way for our team to test ideas and validate assumptions sooner and more often—but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate.

    Setting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred—pushing us out of our comfort zone into a process of more rapid experimentation.

    Best of all, frequent conversations with users helped us become more customer-focused. After just a few weeks in a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like “Remember what Jenna said last week, about not being able to customize her lists?” would pop up as frequent reference points to guide our decisions. As discussions become less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions.

    Establishing an efficient recruitment process

    The key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process—along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users.

    The process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. Almost any team could use this as a starting point and adapt it to their own needs.

    Pick a dedicated time each week for research

    In order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn’t easy to do! Nevertheless, it’s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn’t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team.

    I blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours.

    Divide your time into several research slots

    After my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users’ busy schedules. Depending on your work, you may need to schedule longer sessions—but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own.

    I used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall.

    Invite users to sign up for a time that’s convenient for them

    With a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product or occasionally promoting the opportunity to get a sneak peek at new features we’re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule.

    Setting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block.

    It may take a bit of experimentation to fine tune your process, but it’s worth the effort to get it right. (The worst thing that’s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) I can now fill most of an afternoon with back-to-back user research sessions by sending just one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design.

    Getting the most out of your research sessions

    As you get comfortable with the rhythm of talking to users each week, you’ll find more and more ways to get value out of your conversations. At first, you may prefer to just show work in progress—such as mockups or a simple prototype—and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features—either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that’s the best way to validate the assumptions your team is working from.

    Whatever you do, plan some time a day or two later to come back together and review what you’ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation.

    Over time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don’t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you’re ready.

    About the author

    Wren Lanier is a designer with a passion for creating beautiful digital products. During her 12+ years of working on the web, she’s helped a diverse array of companies build delightful user experiences—from advising startups on UX best practices to pushing pixels for Fortune 500 companies. She lives in Durham, NC with her husband & daughter, where she leads design and UX at Windsor Circle.

    More articles by Wren

    Internet of Stranger Things

    posted: 30 Nov 2016

    Seb Lee-Delisle lights up our 2016 advent series with an illuminating guide to making your own Stranger Things style fairy lights to pick up messages from the upside-down (also known as the Internet).

    This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension1.

    What you’ll see

    You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the “Send a message” button and wait your turn.)

    It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.

    What you’ll need

    • Raspberry Pi any of the following models: Zero (will need straight male header pins soldered2 and Micro USB OTG adaptor), A+, B+, 2, or 3
    • Micro SD card at least 4Gb Class 10 speed3
    • Micro USB power supply at least 2A
    • USB Wifi dongle (unless you have a Pi 3 - that has wifi built in).
    • Addressable fairy lights
    • Logic level shifter (with pins soldered unless you want to do it!)
    • Breadboard
    • Jumper wires (3x male to male and 4x female to male)

    Optional but recommended

    • Base board to hold the Pi and Breadboard (often comes with a breadboard!)

    Find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $1004.

    Setting up the Raspberry Pi

    You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher5.

    Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details6.

    network={
        ssid="mywifi"
        psk="mypassword"
    }

    Save the file, eject the card from your computer and plug it into the Raspberry Pi.

    If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle7 and power supply, but don’t plug it in yet!


    Time to wire everything up!

    First of all, push the Logic Level Converter into the middle of the breadboard:

    Logic Level Converter

    The logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top.

    Raspberry Pi pins only output 3.3v but the lights need 5v. That’s why we need the logic level converter in there to boost up the signal.

    Connect the first two wires between the Raspberry Pi pins and the breadboard:

    Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram.

    Then the next two:

    This is what you should have so far:


    Now to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.

    Make sure that you connect the right end of the lights, mine has a male connector at the wrong end so it’s impossible to do this, but double check.

    Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in).

    You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out!

    That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!

    The Moment of Truth!

    Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down!

    However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it’ll come back up.

    Rig up the lights!

    Fix the lights up on the wall however you want, pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50 light string so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi.

    Check the photo here to see how the lights line up, note that there are spare unused lights in-between each row:

    Now visit and you’ll see this:

    If you’re the only one online you’ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn.

    How it works - the geeky details!


    The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer).
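    To give a flavour of what that looks like from Node.js, here’s a minimal sketch using the third-party onoff npm package to switch a single pin on and off. It’s purely illustrative: it isn’t necessarily the library this project uses, and addressable fairy lights need much more precise timing than this.

    // Illustrative only: blink whatever is wired to GPIO17 (BCM numbering)
    var Gpio = require('onoff').Gpio;

    var pin = new Gpio(17, 'out'); // claim the pin as an output

    pin.writeSync(1);              // drive the pin high

    setTimeout(function () {
      pin.writeSync(0);            // drive the pin low again
      pin.unexport();              // release the pin when we're done
    }, 1000);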

    Addressable LEDs or “Neopixels”

    We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string.

    The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels8 and I used them on my Laser Light Synths project.

    Neopixels with the chip and the LED all in one - it’s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!
    A Laser Light Synth! Covered with around 800 super bright neopixels!
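    If you want to experiment with driving a WS281x string from Node.js yourself, one popular option is the rpi-ws281x-native npm package. This is a hedged sketch using that package’s original API, not the code from this project:

    // Illustrative only: light the first two LEDs in a 50-light string
    var ws281x = require('rpi-ws281x-native');

    var NUM_LEDS = 50;
    ws281x.init(NUM_LEDS);

    // One 24-bit colour value per LED (some strips expect GRB ordering)
    var pixelData = new Uint32Array(NUM_LEDS);
    pixelData[0] = 0xff0000;
    pixelData[1] = 0x00ff00;

    ws281x.render(pixelData); // push the colours down the data line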

    Logic Level Converter

    The logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip.


    Neopixels can often draw a lot of current so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s Neopixel Uberguide.


    There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at for the Raspberry Pi and for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server.

    The server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.


    I’m using the excellent library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server.

    When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers9.
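    As a rough illustration of that relay, the heart of the server boils down to something like the sketch below. It assumes for the WebSocket layer and uses a made-up ‘letter’ event name, so treat it as a simplified picture rather than the actual stranger-lights-server code:

    var http = require('http');
    var fs = require('fs');

    // Serve the single-page web interface (an index.html is assumed here)
    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(fs.readFileSync(__dirname + '/index.html'));
    });

    var io = require('')(server);

    io.on('connection', function (socket) {
      socket.on('letter', function (data) {
        // Relay the clicked letter to every connected client:
        // the Raspberry Pi and all of the open browsers
        io.emit('letter', data);
      });
    });

    server.listen(3000);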

    The Raspberry Pi code

    The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file:

    node /home/pi/strangerthings/client.js < /dev/null &

    Anything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn’t hold up the boot process.

    Working with the Raspberry Pi headless

    You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh10. If you’re on a Mac you can find your Pi on the network through the name raspberrypi.local11, otherwise you’ll need to find its IP address. There’s more in the Remote Access instructions on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial.


    This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style.

    Where next?

    Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that’s something you’d be interested in, or sign up to my mailing list at

    There are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice!

    Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources.

    I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly.

    1. Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things,, and loads of examples on Instructables ↩︎

    2. Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎

    3. Slower cards will work but performance may suffer ↩︎

    4. Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎

    5. You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots.  ↩︎

    6. SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide ↩︎

    7. Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎

    8. Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎

    9. So you can see other people sending messages in the browser ↩︎

    10. ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎

    11. You can change this default hostname using raspi-config ↩︎

    About the author

    Seb Lee-Delisle is a digital artist and speaker who uses computers to engage with people and inspire them.

    As an artist, he likes to make interesting things from code that encourage interaction and playfulness from the public. Notable projects include Lunar Trails, featuring a 3m wide drawing machine, and PixelPyros, the Arts Council funded digital fireworks display that toured nationwide in 2013.

    As a speaker he demystifies programming and explores its artistic possibilities. His presentations and workshops enable artists to overcome their fear of code and encourage programmers of all backgrounds to be more creative and imaginative.

    His recent work Laser Light Synths won the 2016 Lumen Interactive Prize. He won 3 Microsoft awards in 2013, and he was Technical Director on Big and Small, the BBC project that won a BAFTA in 2009.

    More articles by Seb

    Solve the Hard Problems

    posted: 23 Dec 2015

    Drew McLellan brings our 2015 calendar to a motivational close with some encouragement for the year ahead. Year’s end is a time for reflection and finding new purpose and enthusiasm for what we do. By tackling the thorniest design and development problems, we can make the greatest impact – and have the most fun. Merry Christmas and a happy New Year!

    So, here we find ourselves on the cusp of 2016. We’ve had a good year – the web is still alive, no one has switched it off yet. Clients still have websites, teenagers still have phone apps, and there continue to be plenty of online brands to meaningfully engage with each day. Good job team, high fives all round.

    As it’s the time to make resolutions, I wanted to share three small ideas to take into the new year.

    Get good at what you do

    “How do you get to Carnegie Hall?” the old joke goes. “Practise, practise, practise.”

    We work in an industry where there is an awful lot to learn. There’s a lot to learn to get started and then once you do, there’s a lot more to learn to keep your skills current. Just when you think you’ve mastered something, it changes.

    This is true of many industries, of course, but the sheer pace of change for us makes learning not an annual activity, but daily. Learning takes time, and while I’m not convinced that every skill takes the fabled ten thousand hours to master, there is certainly no escaping that to remain current we must reinvest time in keeping our skills up to date.

    Picking where to spend your time

    One of the hardest aspects of this thing of ours is just choosing what to learn. If you, like me, invested any time in learning the Less CSS preprocessor over the last few years, you’ll probably now be spending your time relearning Sass instead. If you spent time learning Grunt, chances are you’ll now be thinking about whether you should switch to Gulp. It’s not just that there are new types of tools; there are also new tools and frameworks to do the things you’re already doing, but, well, differently.

    Deciding what to learn is hard and the costs of backing the wrong horse can seriously mount up; so much so that by the time you’ve learned and then relearned the tools everyone says you need for your job, there’s rarely enough time to spend really getting to know how best to use them.

     Practise, practise, practise

    Do you know how you don’t get to Carnegie Hall? By learning a new instrument each week. It takes time and experience to really learn something well. That goes for a new JavaScript framework as much as a violin. If you flit from one shiny new thing to another, you’re destined to produce amateurish work forever.

    Learn the new thing, but then stick with it long enough to get really good at it – even if Twitter trolls try to convince you it’s not cool. What’s really not cool is living as a forevernoob.

    If you’re still not sure what to learn, go back to basics. Considering a new CSS or JavaScript framework? Invest that time in learning the underlying CSS or JavaScript really well instead. Those skills will stand the test of time.

    Audience and purpose

    Back when I was in school, my English teacher (a nice Welsh lady, who I appreciate more now than I did back then) used to love to remind us that every piece of writing should have an audience and a purpose. So much so that audience and purpose almost became her catch phrase. For every essay, article or letter, we were reminded to consider who we were writing it for and what we were trying to achieve.

    It’s something I think about a lot; certainly when writing, but also in almost every other creative endeavour. Asking who is this for and what am I trying to achieve applies equally to designing a logo or website, through to composing music or writing software.

    Being productive

    It seems like everyone wants to have a product these days. As someone who used to do client services work and now has a product company, I often talk with people who are interested in taking something they’ve built in-house and turning it into a product. You know the sort of thing: a design agency with its own CMS or project management web app; the very logical thought process of: if this helps our business, maybe others will find it valuable too; the question that inevitably follows: could we turn this into a product?

    Whether consciously or not, the audience and purpose influence nearly every aspect of your creative process. Once written or designed or developed or created, revising a work to change the audience and purpose can be quite a challenge. No matter how much you want to turn the tension-building, atmospheric music for a horror film into a catchy chart hit, it’s going to be a struggle. Yes, it’s music, but that’s neither the audience nor purpose for which it was created.

    The same is absolutely true for your in-house tools – those were also designed for a specific audience and purpose. Your in-house CMS would have been designed with an audience of your own development team, who are busy implementing sites for clients. The purpose is to make that team more productive overall, taking into account considerations of maintaining multiple sites on a common codebase, training clients, a more mature and stable platform and all the other benefits of reusing the same code for each project. The audience is your team and the purpose increased productivity.

    That’s very different from a customer who wants to buy a polished system to use off-the-shelf. If their needs perfectly aligned with yours then they wouldn’t be in the market for your product – they would have built their own.

    Sometimes you hear the advice to “scratch your own itch” when it comes to product design. I don’t completely agree. Got an itch? Great. Find other itchy people and sell them a backscratcher.

    Building a product, like designing a website, is a lot of work. It requires knowing your audience and purpose inside out. You can’t fudge it and you can’t just hope you’ll find an audience for some old thing you have lying around.

    Always consider the audience and purpose for everything you create. It’s often the difference between success and failure.

    Solve the hard problems

    Human beings have a natural tendency to avoid hard problems. In digital design (websites, software, whatever) the received wisdom is often that we can get 80% of the way towards doing the hard thing by doing something that’s not very hard.

    Do you know what you get at the end of it? Paid. But nothing really great ever happens that way.

    I worked on a client project a while back where one of the big challenges was making full use of the massive image library they had built up over the years. The client had tens of thousands of photographs, along with a fair amount of video and a large MP3 audio library too. If it wasn’t managed carefully, storage sizes would get out of control, content would go unattributed, and everything would get very messy very quickly.

    I could tell from the outset that this aspect of the project was going to be a constant problem. So we tackled it head-on. We designed and built a media management system to hold and process all the assets, and added an API so the content management system could talk to it. Every time the site needed a photo at a new size, it made an API request to the system and everything was handled seamlessly.

    It was a daunting job to invest all the time and effort in building that dedicated system and API, but it really paid off. Instead of having the constant troubles of a vast library of media, it became one of the strongest parts of the project.

    Turn your hardest problems into your biggest strengths

    There’s a funny thing about hard problems. The hardest problems are the most fun to solve and have the biggest impact.

    Maybe you’re the sort of person who clocks in for work, does their job and clocks out at 5pm without another thought. But I don’t think you are, because you’re here reading this. If you really love what you do, I don’t think you can be satisfied in your work unless you’re seeking out and working on those hard problems. That’s where the magic is.

    The new year is a helpful time to think about breaking bad habits. Whether it’s smoking a bit less, or going to the gym a bit more, the ticking over of the calendar can provide the motivation for a new start. I have some suggestions for you.

    1. Get good at what you do. Practise your skills and don’t just flit from one shiny thing to the next.
    2. Remember who you’re doing it for and why. Consider the audience and purpose for everything you create.
    3. Solve the hard problems. It’s more interesting, more satisfying, and has a greater impact.

    As we move into 2016, these are the things I’m going to continue to work on. Maybe you’d like to join me.

    About the author

    Drew McLellan is lead developer on your favourite content management systems, Perch and Perch Runway. He is Director and Senior Developer at in Bristol, England, and is formerly Group Lead at the Web Standards Project. When not publishing 24 ways, Drew keeps a personal site covering web development issues and themes, takes photos, tweets a lot and tries to stay upright on his bicycle.

    More articles by Drew

    Blow Your Own Trumpet

    posted: 22 Dec 2015

    Andy Clarke encourages us to have confidence in the way we communicate with potential clients. Being open and genuine, and providing an insight into what working with you will be like can help prospective clients choose you over your competitors. So before you refresh your glass, refresh your website’s copy!

    Even if your own trumpet’s tiny and fell out of a Christmas cracker, blowing it isn’t something that everyone’s good at. Some people find selling themselves and what they do difficult. But, you know what? Boo hoo hoo. If you want people to buy something, the reality is you’d better get good at selling, especially if that something is you.

    For web professionals, the best place to tell potential business customers or possible employers about what you do is on your own website. You can write what you want and how you want, but that doesn’t make knowing what to write any easier. As a matter of fact, writing for yourself often proves harder than writing for someone else.

    I spent this autumn thinking about what I wanted to say about Stuff & Nonsense on the website we relaunched recently. While I did that, I spoke to other designers about how they struggled to write about their businesses.

    If you struggle to write well, don’t worry. You’re not on your own. Here are five ways to hit the right notes when writing about yourself and your work.

    Be genuine about who you are

    I’ve known plenty of talented people who run a successful business pretty much single-handed. Somehow they still feel awkward presenting themselves as individuals. They wonder whether describing themselves as a company will give them extra credibility. They especially agonise over using “we” rather than “I” when describing what they do. These choices get harder when you’re a one-man band trading as a limited company or LLC business entity.

    If you mainly work alone, don’t describe yourself as anything other than “I”. You might think that saying “we” makes you appear larger and will give you a better chance of landing bigger and better work, but the moment a prospective client asks, “How many people are you?” you’ll have some uncomfortable explaining to do. This will distract them from talking about your work and derail your sales process. There’s no need to be anything other than genuine about how you describe yourself. You should be proud to say “I” because working alone isn’t something that many people have the ability, business acumen or talent to do.

    Explain what you actually do

    How many people do precisely the same job as you? Hundreds? Thousands? The same goes for companies. If yours is a design studio, development team or UX consultancy, there are countless others saying exactly what you’re saying about what you do. Simply stating that you code, design or – God help me – “handcraft digital experiences” isn’t enough to make your business sound different from everyone else. Anyone can and usually does say that, but people buy more than deliverables. They buy something that’s unique about you and your business.

    Potentially thousands of companies deliver code and designs the same way as Stuff & Nonsense, but our clients don’t just buy page designs, prototypes and websites from us. They buy our taste for typography, colour and layout, summed up by our “It’s the taste” tagline and bowler hat tip to the PG Tips chimps. We hope that potential clients will understand what’s unique about us. Think beyond your deliverables to what people actually buy, and sell the uniqueness of that.

    Describe work in progress

    It’s sad that current design trends have made it almost impossible to tell one website from another. So many designers now demonstrate finished responsive website designs by pasting them onto iMac, MacBook, iPad and iPhone screens that their portfolios don’t fare much better. Every designer brings their own experience, perspective and process to a project. In my experience, it’s understanding those differences which forms a big part of how a prospective client makes a decision about who to work with. Don’t simply show a prospective client the end result of a previous project; explain your process, the development of your thinking and even the wrong turns you took.

    Traditional case studies, like the one I’ve just written about Stuff & Nonsense’s work for WWF UK, can take a lot of time. That’s probably why many portfolios get out of date very quickly. Designers make new work all the time, so there must be a better way to show more of it more often, to give prospective clients a clearer understanding of what we do. At Stuff & Nonsense our solution was to create a feed where we could post fragments of design work throughout a project. This also meant rewriting our Contract Killer to give us permission to publish work before someone signs it off.

    Outline a client’s experience

    Recently a client took me to one side and offered some valuable advice. She told me that our website hadn’t described anything about the experience she’d had while working with us. She said that knowing more about how we work would’ve helped her make her buying decision.

    When a client chooses your business, they’re hoping for more than a successful outcome. They want their project to run smoothly. They want to feel that they made a correct decision when they chose you. If they work for an organisation, they’ll want their good judgement to be recognised too. Our client didn’t recognise her experience because we hadn’t made our own website part of it. Remember, the challenge of creating a memorable user experience starts with selling to the people paying you for it.

    Address your ideal client

    It’s important to understand that a portfolio’s job isn’t to document your work, it’s to attract new work from clients you want. Make sure that work you show reflects the work you want, because what you include in your portfolio often leads to more of the same.

    When you’re writing for your portfolio and elsewhere on your website, imagine that you’re addressing your ideal client. Picture them sitting opposite and answer the questions they’d ask as you would in conversation. Be direct, funny if that’s appropriate and serious when it’s not. If it helps, ask a friend to read the questions aloud and record what you say in response. This will help make what you write sound natural. I’ve found this technique helps clients write copy too.

    Toot your own horn

    Some people confuse expressing confidence in yourself and your work as boastfulness, but in a competitive world the reality is that if you are to succeed, you need to show confidence so that others can show their confidence in you. If you want people to hear you, pick up your trumpet and blow it.

    About the author

    Andy Clarke is an art director and web designer at the UK website design studio ‘Stuff & Nonsense.’ There he designs websites and applications for clients from around the world. Based in North Wales, Andy’s also the author of two web design books, ‘Transcending CSS’ and the new ‘Hardboiled Web Design Fifth Anniversary Edition’ and is well known for his many conference presentations and over ten years of contributions to the web design industry. Jeffrey Zeldman once called him a “triple talented bastard.” If you know of Jeffrey, you’ll know how happy that made him.

    More articles by Andy

    How Tabs Should Work

    posted: 21 Dec 2015

    Remy Sharp picks that old chestnut – tabs – and roasts it afresh on the open fire of JavaScript to see how a fully navigable, accessible and clickable set of tabs can work. Everybody knows some scripting and some CSS can help to make your website bright. Although it’s been said many times, many ways, please be careful to do it right.

    Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented.

    So this post is my definition of how a tabbing system should work, and one approach of implementing that.

    But… tabs are easy, right?

    I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:

    var tabs = $('.tab').click(function () {
      tabs.hide().filter(this.hash).show();
    }).map(function () {
      return $(this.hash)[0];
    });

    $('.tab:first').click();

    Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution.

    Requirements: what makes the perfect tab?

    1. All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).
    2. ARIA roles.
    3. The tabs are anchor links that:
      • are clickable
      • have block layout
      • have their href pointing to the id of the panel element
      • use the correct cursor (i.e. cursor: pointer).
    4. Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.
    5. Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected.
    6. Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).

    The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work.

    The last three points are JavaScript problems. Let’s investigate that.

    The shitmus test

    Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented:

    • Change tab, then use the Back button (or keyboard shortcut) and it breaks
    • The tab isn’t a link, so you can’t open it in a new tab

    These two basic things are, to me, the bare minimum that a tabbing system should have.

    Why is this important?

The people who push their so-called native apps on users don’t need any more reasons to claim the web sucks. If something as basic as a tab doesn’t work properly, that’s just more ammo for pushing a closed native app or platform on your users.

    If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. </rant> :breath:

    URI fragment, absolute URL or query string?

A URI fragment (AKA the # hash bit) could be used to show the correct content panel. Alternatively, a fully addressable URL could change with each tab, or a query string could be used (by way of filtering the page).

This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes.

    For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system.

I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used instead. The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense for controlling multiple tabs (since they don’t actually link to anything).

    For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs.


    I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…):

    <ul class="tabs"> <li><a class="tab" href="#dizzy">Dizzy</a></li> <li><a class="tab" href="#ninja">Ninja</a></li> <li><a class="tab" href="#missy">Missy</a></li> </ul> <div id="dizzy"> <!-- panel content --> </div> <div id="ninja"> <!-- panel content --> </div> <div id="missy"> <!-- panel content --> </div>

    It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code.

    URL-driven tabbing systems

    Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free.

With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will); it depends on relatively new (polyfillable) tech like classList and dataset, which are generally supported from IE10 upwards and in all other browsers.
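To give a flavour of that vanilla version (this is my own sketch, not the library-free code linked from this article, and it assumes the panels share a panel class just as the jQuery code below does), classList takes over from addClass/removeClass and dataset from jQuery’s .data():

// my sketch of the vanilla flavour, not the linked library-free version
var tabs = document.querySelectorAll('.tab');
var panels = document.querySelectorAll('.panel');

function show(id) {
  // classList stands in for jQuery's addClass/removeClass
  Array.prototype.forEach.call(tabs, function (tab) {
    if (tab.hash === id) {
      tab.classList.add('selected');
    } else {
      tab.classList.remove('selected');
    }
  });

  // show only the matching panel
  Array.prototype.forEach.call(panels, function (panel) { = ('#' + === id) ? '' : 'none';
    // dataset stands in for jQuery's .data(), e.g. panel.dataset.oldId =;
  });
}

window.addEventListener('hashchange', function () {
  show(location.hash);
}, false);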

    Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors.

function show(id) {
  // remove the selected class from the tabs,
  // and add it back to the one the user selected
  $('.tab').removeClass('selected').filter(function () {
    return (this.hash === id);
  }).addClass('selected');

  // now hide all the panels, then filter to
  // the one we're interested in, and show it
  $('.panel').hide().filter(id).show();
}

$(window).on('hashchange', function () {
  show(location.hash);
});

// initialise by showing the first panel
show('#dizzy');

    This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box.

    However, there’s a number of problems we need to fix:

    1. The initialised tab is hard-coded to the first panel, rather than what’s on the URL.
    2. If there’s no hash on the URL, all the panels are hidden (and thus broken).
    3. If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system.
    4. I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying.

From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s tackle problems 1 through 3 from the list above next.

    Using the URL to initialise correctly and protect from breakage

    Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available.

    The problem is: what if the hash on the URL isn’t actually for a tab?

    The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached.

So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to validate the values we give the show function (during initialisation, for instance).

// collect all the tabs
var tabs = $('.tab');

// get an array of the panel ids (from the anchor hash)
var targets = () {
  return this.hash;
}).get();

// use those ids to get a jQuery collection of panels
var panels = $(targets.join(','));

function show(id) {
  // if no value was given, let's take the first panel
  if (!id) {
    id = targets[0];
  }

  // remove the selected class from the tabs,
  // and add it back to the one the user selected
  tabs.removeClass('selected').filter(function () {
    return (this.hash === id);
  }).addClass('selected');

  // now hide all the panels, then filter to
  // the one we're interested in, and show it
  panels.hide().filter(id).show();
}

$(window).on('hashchange', function () {
  var hash = location.hash;
  if (targets.indexOf(hash) !== -1) {
    show(hash);
  }
});

// initialise
show(targets.indexOf(location.hash) !== -1 ? location.hash : '');

    The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab.

The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was because the hashchange event fired and the code didn’t validate the hash that was passed. Now that validation happens, the panels don’t break.

    So far we’ve got a tabbing system that:

    • Works without JavaScript.
    • Supports right-click and Shift-click (and doesn’t select in these cases).
    • Loads the correct panel if you start with a hash.
    • Supports native browser navigation.
    • Supports the keyboard.

    The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause.

    Removing the jump to tab

    You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support.

    There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said.

    We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page.

    The change involves the following:

1. Add a click handler that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4).
    2. In the same click handler, set the location.hash to the current link’s hash. This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line).
    3. For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id).
    4. When the hashchange event fires, if we have a target value, let’s put the id back on the panel.

    These changes result in this final code:

/*global $*/

// a temp value to cache *what* we're about to show
var target = null;

// collect all the tabs
var tabs = $('.tab').on('click', function () {
  target = $(this.hash).removeAttr('id');

  // if the URL isn't going to change, then hashchange
  // event doesn't fire, so we trigger the update manually
  if (location.hash === this.hash) {
    // but this has to happen after the DOM update has
    // completed, so we wrap it in a setTimeout 0
    setTimeout(update, 0);
  }
});

// get an array of the panel ids (from the anchor hash)
var targets = () {
  return this.hash;
}).get();

// use those ids to get a jQuery collection of panels
var panels = $(targets.join(',')).each(function () {
  // keep a copy of what the original id was
  $(this).data('old-id',;
});

function update() {
  if (target) {
    target.attr('id','old-id'));
    target = null;
  }

  var hash = window.location.hash;
  if (targets.indexOf(hash) !== -1) {
    show(hash);
  }
}

function show(id) {
  // if no value was given, let's take the first panel
  if (!id) {
    id = targets[0];
  }

  // remove the selected class from the tabs,
  // and add it back to the one the user selected
  tabs.removeClass('selected').filter(function () {
    return (this.hash === id);
  }).addClass('selected');

  // now hide all the panels, then filter to
  // the one we're interested in, and show it
  panels.hide().filter(id).show();
}

$(window).on('hashchange', update);

// initialise
if (targets.indexOf(window.location.hash) !== -1) {
  update();
} else {
  show();
}

    This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add.

    ARIA roles

    This article on ARIA tabs made it very easy to get the tabbing system working as I wanted.

    The tasks were simple:

1. Add a role attribute set to tab for the tabs, and tabpanel for the panels.
    2. Set aria-controls on the tabs to point to their related panel (by id).
    3. I use JavaScript to add tabindex=0 to all the tab elements.
    4. When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false.
    5. When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false.

    And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible.
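As a rough illustration (my own sketch rather than the final linked code), steps 3 to 5 could slot into the existing setup and show function along these lines:

// one-off setup: roles, relationships and keyboard focus
tabs.attr('role', 'tab').attr('tabindex', 0).each(function () {
  // point each tab at the panel it controls (the hash minus the '#')
  $(this).attr('aria-controls', this.hash.slice(1));
});
panels.attr('role', 'tabpanel');

function show(id) {
  if (!id) {
    id = targets[0];
  }

  // keep the selected class and aria-selected in sync
  tabs.removeClass('selected').attr('aria-selected', 'false')
    .filter(function () {
      return (this.hash === id);
    })
    .addClass('selected').attr('aria-selected', 'true');

  // hidden panels are also marked aria-hidden
  panels.hide().attr('aria-hidden', 'true')
    .filter(id).show().attr('aria-hidden', 'false');
}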

    Check out the final version (and the non-jQuery version as promised).

    In conclusion

There are a lot of tab implementations out there, but an equal number break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something relatively simple, it’s the small details that make or break the user experience.

    Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year!

    About the author

    Remy Sharp is the founder and curator of Full Frontal, the UK based JavaScript conference. He also ran jQuery for Designers, co-authored Introducing HTML5 (adding all the JavaScripty bits) and likes to grumble on Twitter.

When he’s not writing articles or running and speaking at conferences, he runs his own development and training company in Brighton called Left Logic. And he built these too: Confwall, nodemon, inliner, 5 minute fork and plenty more besides!

    More articles by Remy

What’s Ahead for Your Data in 2016?

    posted: 20 Dec 2015

    Heather Burns outlines the most important international legal issues whose effects will ripple through our work on the web in 2016 and beyond. Like the Ghost of Christmas Yet To Come, these trade agreements have approached slowly, gravely, silently. Perhaps now’s the time to take action.

Who owns your data? Who decides what you can do with it? Where can you store it? What guarantee do you have over your data’s privacy? Where can you publish your work? Can you adapt software to accommodate your disability? Is your tiny agency subject to corporate regulation? Does another country have rights over your intellectual property?

    If you aren’t the kind of person who is interested in international politics, I hate to break it to you: in 2016 the legal foundations which underpin our work on the web are being revisited in not one but three major international political agreements, and every single one of those questions is up for grabs. These agreements – the draft EU Data Protection Regulation (EUDPR), the Trans-Pacific Partnership (TPP), and the draft Transatlantic Trade and Investment Partnership (TTIP) – stand poised to have a major impact on your data, your workflows, and your digital rights. While some proposed changes could protect the open web for the future, other provisions would set the internet back several decades.

    In this article we will review the issues you need to be aware of as a digital professional. While each of these agreements covers dozens of topics ranging from climate change to food safety, we will focus solely on the aspects which pertain to the work we do on the web.

    The Trans-Pacific Partnership

    The Trans-Pacific Partnership (TPP) is a free trade agreement between the US, Japan, Malaysia, Vietnam, Singapore, Brunei, Australia, New Zealand, Canada, Mexico, Chile and Peru – a bloc comprising 40% of the world’s economy. The agreement is expected to be signed by all parties, and thereby to come into effect, in 2016. This agreement is ostensibly about the bloc and its members working together for their common interests. However, the latest draft text of the TPP, which was formulated entirely in secret, has only been made publicly available on a Medium blog published by the U.S. Trade Representative which features a patriotic banner at the top proclaiming “TPP: Made in America.” The message sent about who holds the balance of power in this agreement, and whose interests it will benefit, is clear.

    By far the most controversial area of the TPP has centred around the provisions on intellectual property. These include copyright terms of up to 120 years, mandatory takedowns of allegedly infringing content in response to just one complaint regardless of that complaint’s validity, heavy and disproportionate penalties for alleged violations, and – most frightening of all – government seizures of equipment allegedly used for copyright violations. All of these provisions have been raised without regard for the fact that a trade agreement is not the appropriate venue to negotiate intellectual property law.

Other draft TPP provisions would restrict the digital rights of people with disabilities by banning the workarounds they use every day. These include no exemptions for the adaptations of copyrighted works for use in accessible technology (such as text-to-speech in ebook readers), a ban on circumventing DRM or digital locks in order to convert a file to an accessible format, and requiring the takedown of adapted works, such as a video with added subtitles, even if that adaptation would normally have fallen under the definition of fair use.

    The e-commerce provisions would prohibit data localisation, the practice of requiring data to be physically stored on servers within a country’s borders. Data localisation is growing in popularity following the Snowden revelations, and some of your own personal data may have been recently “localised” in response to the Safe Harbor verdict. Prohibiting data localisation through the TPP would address the symptom but not the cause.

    The Electronic Frontier Foundation has published an excellent summary of the digital rights issues raised by the agreement along with suggested actions American readers can take to speak out.

    Transatlantic Trade and Investment Partnership

    TTIP stands for the Transatlantic Trade and Investment Partnership, a draft free trade agreement between the United States and the EU. The plan has been hugely controversial and divisive, and the internet and digital provisions of the draft form just a small part of that contention.

    The most striking digital provision of TTIP is an attempt to circumvent and override European data protection law. As EDRI, a European digital rights organisation, noted:

“the US proposal would authorise the transfer of EU citizens’ personal data to any country, trumping the EU data protection framework, which ensures that this data can only be transferred in clearly defined circumstances. For years, the US has been trying to bypass the default requirement for storage of personal data in the EU. It is therefore not surprising to see such a proposal being [introduced] in the context of the trade negotiations.”

This draft provision was written before the Safe Harbor data protection agreement between the EU and US was invalidated by the Court of Justice of the European Union. In other words, there is no longer any protective agreement in place, and our data is as vulnerable as this political situation suggests. However, data protection is a matter for its own law: the existing Data Protection Directive and the draft EU Data Protection Regulation that will succeed it. A trade agreement, be it the TTIP or the TPP, is not the appropriate place to revamp a law on data protection.

Other digital law issues raised by TTIP include the possibility of renegotiating standards on encryption (which in practice means lowering them) and renegotiating intellectual property rights such as copyright. The spectre of net neutrality has even put in an appearance, with attempts to write rules on access to the internet itself into the agreement’s provisions.

TTIP is still under discussion, and this month the EU trade representative said that “we agreed to further intensify our work during 2016 to help negotiations move forward rapidly.” Cleverly worded as that is, it means the agreement has little chance of being passed or coming into effect in 2016, which buys civil society more precious time to speak out.

    The EU Data Protection Regulation

On 15 December 2015 the European Commission announced their agreement on the text of the draft General Data Protection Regulation. This law will replace its predecessor, the EU Data Protection Directive of 1995, which has done a remarkable job of protecting data privacy across the continent throughout two decades of constant internet evolution.

The goal of the reform process has been to return power over data, and its uses, to citizens. Users will have more control over what data is captured about them, how it is used, how it is retained, and how it can be deleted. Businesses and digital professionals, in turn, will have to restructure their relationships with client and customer data. Compliance obligations will increase, and difficult choices will have to be made. However, this time should be seen as an opportunity to rethink our relationship with data. After Snowden, Schrems, and Safe Harbor, it is clear that we cannot go back to the way things were before. In an era where every one of our heartbeats is recorded on a wearable device and uploaded to a surveilled data centre in another country, the need for reform has never been more acute.

While texts of the draft GDPR are available, there is not enough mulled wine in the world to help you get through them. Instead, the law firm Fieldfisher Waterhouse has produced a helpful infographic which will give you a good idea of the changes we can expect to see:

    The most surprising outcome announced on 15 December was the new regulation’s teeth. Under the new law, companies that fail to heed the updated data protection rules will face fines of up to 4% of their global turnover. Additionally, the law expands the liability for data protection to both the controller (the company hosting the data) and the data processor (the company using the data). The new law will also introduce a one-stop shop for resolving concerns over data misuse. Companies will no longer be able to headquarter their European operations in countries which are perceived to have relatively light-touch data protection enforcement (that means you, Ireland) as a means of automatically rejecting any complaints filed by citizens outside that country.

    For digital professionals, the most immediate concern is analytics. In fact, I am going to make a prediction: in 2016 we will begin to see the same misguided war on analytics that we saw on cookies. By increasing the legal liabilities for both data processors and controllers – in other words, the company providing the analytics as well as the site administrator studying them – the new regulation risks creating disproportionate burdens as well as the same “guilt by association” risks we saw in 2012. There have already been statements made by some within the privacy community that analytics are tracking, and tracking is surveillance, therefore analytics are evil. Yet “just don’t use analytics,” as was suggested by one advocate, is simply not an option. European regulators should consult with the web community to gain a clear understanding of why analytics are vital to everyday site administrators, and must find a happy medium that protects users’ data without criminalising every website by default. No one wants a repeat of the crisis of consent, as well as the scaremongering, caused by the cookie law.

    Assuming the text is adopted in 2016, the new EU Data Protection Regulation would not come into effect until 2018. We have a considerable challenge ahead, but we also have plenty of time to get it right.

    About the author

    Heather Burns is a digital law specialist in Glasgow, Scotland. Her focus is researching, writing, and speaking about internet laws and policies which impact the professions of web design and development. She has been a professional web designer since 2007 and earned a postgraduate certification in internet law in 2015.

    More articles by Heather

    Make a Comic

    posted: 19 Dec 2015

    Rebecca Cottrell sharpens her trusty HB pencil and sketches out the steps to making a comic, before inking and colouring the whole deal with a few Photoshop tips for anyone unwilling to part with a screen over the festive season. Put that eggnog down and get drawing!

    For something slightly different over Christmas, why not step away from your computer and make a comic?

    Definitely not the author working on a comic in the studio, with the desk displaying some of the things you need to make a comic on paper.

    Why make a comic?

    First of all, it’s truly fun and it’s not that difficult. If you’re a designer, you can use skills you already have, so why not take some time to indulge your aesthetic whims and make something for yourself, rather than for a client or your company. And you can use a computer – or not.

    If you’re an interaction designer, it’s likely you’ve already made a storyboard or flow, or designed some characters for personas. This is a wee jump away from that, to the realm of storytelling and navigating human emotions through characters who may or may not be human. Similar medium and skills, different content.

    It’s not a client deliverable but something that stands by itself, and you’ve nobody’s criteria to meet except those that exist in your imagination!

    Thanks to your brain and the alchemy of comics, you can put nearly anything in a sequence and your brain will find a way to make sense of it. Scott McCloud wrote about the non sequitur in comics:

    “There is a kind of alchemy at work in the space between panels which can help us find meaning or resonance in even the most jarring of combinations.”

    Here’s an example of a non sequitur from Scott McCloud’s Understanding Comics – the images bear no relation to one another, but since they’re in a sequence our brains do their best to understand it:

    Once you know this it takes the pressure off somewhat. It’s a fun thing to keep in mind and experiment with in your comics!

    Materials needed

    • A4 copy/printing paper
    • HB pencil for light drawing
    • Dip pen and waterproof Indian ink
    • Bristol board (or any good quality card with a smooth, durable surface)

    Step 1: Get ideas

    You’d be surprised where you can take a small grain of an idea and develop it into an interesting comic. Think about a funny conversation you had, or any irrational fears, habits, dreams or anything else. Just start writing and drawing. Having ideas is hard, I know, but you will get some ideas when you start working.

    One way to keep track of ideas is to keep a sketch diary, capturing funny conversations and other events you could use in comics later.

    You might want to just sketch out the whole comic very roughly if that helps. I tend to sketch the story first, but it usually changes drastically during step 2.

    Step 2: Edit your story using thumbnails

    How thumbnailing works.

    Why use thumbnails? You can move them around or get rid of them!

    Drawings are harder and much slower to edit than words, so you need to draw something very quick and very rough. You don’t have to care about drawing quality at this point.

    You might already have a drafted comic from the previous step; now you can split each panel up into a thumbnail like the image above.

    Get an A4 sheet of printing paper and tear it up into squares. A thumbnail equals a comic panel. Start drawing one panel per thumbnail. This way you can move scenes and parts of the story around as you work on the pacing. It’s an extremely useful tip if you want to expand a moment in time or draw out a dialogue, or if you want to just completely cut scenes.

    Step 3: Plan a layout

So you’ve got the story more or less down: you now need to know how the panels will look on the page. Sketch a layout and arrange the thumbnails into the layout.

    The simplest way to do this is to divide an A4 page into equal panels — say, nine. But if you want, you can be more creative than that. The Gigantic Beard That Was Evil by Stephen Collins is an excellent example of the scope for using page layout creatively. You can really push the form: play with layout, scale, story and what you think of as a comic.

    Step 4: Draw the comic

    I recommend drawing on A4 Bristol board paper since it has a smooth surface, can tolerate a lot of rubbing out and holds ink well. You can get it from any art shop.

    Using your thumbnails for reference, draw the comic lightly using an HB pencil. Don’t make the line so heavy that it can’t be erased (since you’ll ink over the lines later).

    Step 5: Ink the comic

    Image before colour was added.

    You’ve drawn your story. Well done!

    Now for the fun part. I recommend using a dip pen and some waterproof ink. Why waterproof? If you want, you can add an ink wash later, or even paint it.

    If you don’t have a dip pen, you could also use any quality pen. Carefully go over your pencilled lines with the pen, working from top left to right and down, to avoid smudging it. It’s unfortunately easy to smudge the ink from the dip pen, so I recommend practising first.

    You’ve made a comic!

    Step 6: Adding colour

Comics traditionally had a limited colour palette before computers (here’s an in-depth explanation if you’re curious). You can actually do a huge amount with a restricted colour palette. Ellice Weaver’s comics show very nicely how you can paint your work using a restricted palette. So for the next step, resist the temptation to add ALL THE COLOURS and consider using a limited palette.

    Once the ink is completely dry, erase the pencilled lines and you’ll be left with a beautiful inked black and white drawing.

    You could use a computer for this part. You could also photocopy it and paint straight on the copy. If you’re feeling really brave, you could paint straight on the original. But I’d suggest not doing this if it’s your first try at painting!

    What follows is an extremely basic guide for painting using Photoshop, but there are hundreds of brilliant articles out there and different techniques for digital painting.

    How to paint your comic using Photoshop

• Scan the drawing and open it in Photoshop. You can adjust the levels (Image > Adjustments > Levels) to make the lines darker and crisper, and the paper invisible. At this stage, you can erase any smudges or mistakes. With a Wacom tablet, you could even completely redraw parts! Computers are just amazing. Keep the line art as its own layer.
• Add a new layer on top of the lines, and set the layer’s blending mode from Normal to Multiply. This means you can paint your comic without obscuring your lines. Rename the layer something else, so you can keep track.
    • Start blocking in colour. And once you’re happy with that, experiment with adding tone and texture.

    Christmas comic challenge!

    Why not challenge yourself to make a short comic over Christmas? If you make one, share it in the comments. Or show me on Twitter — I’d love to see it.

    Credit: Many of these techniques were learned on the Royal Drawing School’s brilliant ‘Drawing the Graphic Novel’ course.

    About the author

    Rebecca Cottrell is an independent interaction designer, currently working with the Government Digital Service on GOV.UK. She lives in London, owns two adorable lovebirds, and likes drawing comics. Some of which you can see on her tumblr. She is on twitter.

    More articles by Rebecca

    Being Responsive to the Small Things

    posted: 18 Dec 2015

    Jonathan Snook considers the problem of container (or element) queries in the context of responsive web design and looks at the approach taken by current open source JavaScript solutions. Remember, no matter what size of box your Christmas gift comes in, it’s the thought that counts.


    It’s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?

    Any web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there.

    To decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It’s all so lovely.

    In years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!

    Responsive web design is pretty wonderful, isn’t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.

    Clearleft have a delightfully clean and responsive site

    Alas, it’s not all sunshine, lollipops, and rainbows.

    With complex layouts, we may have design chunks — let’s call them components — that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.

    Media queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.

@media (min-width: 800px) {
  .features > .component { }
  .sidebar > .component { }
  .grid > .component { }
}

    Each new component and each new breakpoint just makes the entire system that much more difficult to maintain.

@media (min-width: 600px) {
  .features > .component { }
  .grid > .component { }
}

@media (min-width: 800px) {
  .features > .component { }
  .sidebar > .component { }
  .grid > .component { }
}

@media (min-width: 1024px) {
  .features > .component { }
}

    Enter container queries

    Container queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within.

    With container queries, you’ll be able to consider the breakpoints of just the component you’re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)

    Awesome, right?

    There’s only one catch.

    Browsers can’t do container queries. There’s not even an official specification for them yet. The Responsive Issues (née Images) Community Group is looking into solving how such a thing would actually work.

    See, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references.

    For example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px.

    Can you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.

    I guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon.

    Our saviour this Christmas: JavaScript

    The three wise men — Tim Berners-Lee, Håkon Wium Lie, and Brendan Eich — brought us the gifts of HTML, CSS, and JavaScript.

    To date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.

    Using any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.

Each takes its own approach to how the query conditions are specified. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.

.mod-foo:before { content: "300 410 500"; }

    The script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition.

    .mod-foo[data-minwidth~="300"] { background: blue; }
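Roughly speaking, that loop might look something like the following. This is an illustrative sketch of the idea rather than Elementary’s actual source:

// read the breakpoints out of the generated content, then record the ones
// the element's current width has passed in a data attribute
function applyBreakpoints(el) {
  var content = window.getComputedStyle(el, ':before')
    .getPropertyValue('content')
    .replace(/["']/g, '');

  var passed = content.split(/\s+/).filter(function (breakpoint) {
    return breakpoint && el.offsetWidth >= parseInt(breakpoint, 10);
  });

  el.setAttribute('data-minwidth', passed.join(' '));
}

// run it over every element you've given breakpoints to
Array.prototype.forEach.call(document.querySelectorAll('.mod-foo'), applyBreakpoints);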

    To get the script to run, you’ll need to set up event handlers for when the page loads and for when it resizes.

    window.addEventListener( "load", window.elementary, false ); window.addEventListener( "resize", window.elementary, false );

    This works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.
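One workaround for the dynamic case — my own suggestion rather than anything the library offers, and it assumes window.elementary can be called with no arguments — is to re-run the script whenever the DOM changes, for example via a MutationObserver:

// re-run the element query script whenever the DOM changes
// (window.elementary is the same function the event listeners above call)
if (window.MutationObserver) {
  var observer = new MutationObserver(function () {
    window.elementary();
  });
  observer.observe(document.body, { childList: true, subtree: true });
}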

    In the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it’s in the build system, it shouldn’t ever be much of a concern unless you’re tracking down a bug.)

    Another problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won’t help.

    At Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS.

    To go responsive, the team built their own solution — one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.

    The caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript.

var elements = [
  { module: '.carousel', className: 'alpha', minWidth: 768, maxWidth: 1024 },
  { module: '.button', className: 'beta', minWidth: 768, maxWidth: 1024 },
  { module: '.grid', className: 'cappa', minWidth: 768, maxWidth: 1024 }
];
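The loop that consumes that configuration isn’t shown here; a minimal sketch of the idea (my assumption of the approach, not Shopify’s actual implementation) might look like this:

function applyElementBreakpoints() {
  elements.forEach(function (config) {
    var nodes = document.querySelectorAll(config.module);

    Array.prototype.forEach.call(nodes, function (el) {
      var width = el.offsetWidth;

      // add the class when the element is inside its min/max range,
      // remove it when it isn't
      if (width >= config.minWidth && width <= config.maxWidth) {
        el.classList.add(config.className);
      } else {
        el.classList.remove(config.className);
      }
    });
  });
}

// run on load and resize, and call it again after Ajax inserts new content
window.addEventListener('load', applyElementBreakpoints, false);
window.addEventListener('resize', applyElementBreakpoints, false);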

With that done, the script then had to be set to run during various events, such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB) – an unfortunate side effect, but one that is largely imperceptible.

    Using this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react.

Each element responds to its own breakpoints, which would amount to dozens of breakpoints using traditional media queries. This approach allows for a truly fluid and adaptive interface for all screens.

    Christmas is over

    I wish I were the bearer of greater tidings and cheer. It’s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!

    About the author

Jonathan Snook writes about tips, tricks, and bookmarks on his blog. He has also written for A List Apart and .net magazine, and has co-authored two books. He has also authored, and received worldwide acclaim for, a self-published book on CSS architecture, sharing his experience and best practices.

    Photo: Patrick H. Lauke

    More articles by Jonathan

    Cooking Up Effective Technical Writing

    posted: 17 Dec 2015

    Sally Jenkinson sets out her recipe for helpful documentation, suggesting you serve up structured, easily digestible and nutritious information for your readers. With a dusting of screencasts and GIFs, it’s a recipe for success at Christmas and beyond.

Merry Christmas! May your preparations for this festive season of gluttony be shaping up beautifully. By the time you read this I hope you will have ordered your turkey, eaten twice your weight in Roses/Quality Street (let’s not get into that argument), and that your Christmas cake has been baked and is now quietly absorbing regular doses of alcohol.

Some of you may be reading this and scoffing “Of course! I’ve also made three batches of mince pies, a seasonal chutney and enough gingerbread men to feed the whole street!” while others may be laughing “Bake? Oh no, I can’t cook to save my life.”

    For beginners, recipes are the step-by-step instructions that hand-hold us through the cooking process, but even as a seasoned expert you’re likely to refer to a recipe at some point. Recipes tell us what we need, what to do with it, in what order, and what the outcome will be. It’s the documentation behind our ideas, and allows us to take the blueprint for a tasty morsel and to share it with others so they can recreate it. In fact, this is a little like the open source documentation and tutorials that we put out there, similarly aiming to guide other developers through our creations.

    The ‘just’ification of documentation

    Lately it feels like we’re starting to consider the importance of our words, and the impact they can have on others. Brad Frost warned us of the dangers of “Just” when it comes to offering up solutions to queries:

“Just use this software/platform/toolkit/methodology…”

    “Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.

    “Just” by Brad Frost

I can really empathise with these sentiments. My relationship with code started out as many good web tales do, with good old HTML, CSS and JavaScript. University years involved some time with Perl, PHP, Java and C. In my first job I worked primarily with ColdFusion, a bit of ActionScript, some classic ASP and a pinch of Java. I’d do a bit of PHP outside work every now and again. .NET came in, but we never really got on, and eventually I started learning some Ruby, Python and Node. It was a broad set of learnings, and I enjoyed the similarities and differences that came with new languages. I don’t develop day in, day out any more, and my interests and work have evolved over the years, away from full-time development and more into architecture and strategy. But I still make things, and I still enjoy learning.

    I have often found myself bemoaning the lack of tutorials or courses that cater for the middle level – someone who may be learning a new language, but who has enough programming experience under their belt to not need to revise the concepts of how loops or objects work, and is perfectly adept at googling the syntax for getting a substring. I don’t want snippets out of context; I want an understanding of architectural principles, of the strengths and weaknesses, of the type of applications that work well with the language.

    I’m caught in the place between snoozing off when ‘Using the Instagram API with Ruby’ hand-holds me through what REST is, and feeling like I’m stupid and need to go back to dev school when I can’t get my environment and dependencies set up, let alone work out how I’m meant to get any code to run.

It seems I’m not alone in this – Erin McKean has been here too:

    “Some tutorials (especially coding tutorials) like to begin things in media res. Great for a sense of dramatic action, bad for getting to “Step 1” without tears. It can be really discouraging to fire up a fresh terminal window only to be confronted by error message after error message because there were obligatory steps 0.1.0 through 0.9.9 that you didn’t even know about.”

“Tips for Learning What You Don’t Know You Don’t Know” by Erin McKean

    I’m sure you’ve been here too. Many tutorials suffer badly from the fabled ‘how to draw an owl’-itis.

It’s the kind of feeling you can easily get when sifting through recipes as well as with code. Far from being the simple instructions that let us just follow along, they too can be a minefield. Pitch it too low and you may be skipping over an explanation of what simmering is; set your sights too high and you may get stuck at the point where you’re trying to sous vide a steak using your bathtub and a Ziploc bag.

    Don’t be a turkey, use your loaf!

My mum is a great cook in my eyes (aren’t all mums?). I love her handcrafted collection of recipes gathered over the years, including the one below, which is a great example of how something may make complete sense to the writer, but be impenetrable to a reader.

    Depending on your level of baking knowledge, you may ask: What’s SR flour? What’s a tsp? Should I use salted or unsalted butter? Do I use sticks of cinnamon or ground? Why is chopped chocolate better? How do I cream things? How big should the balls be? How well is “well spaced”? How much leeway do I have for “(ish!!)”? Does the “20” on the other cookie note mean I’ll end up with twenty? At any point, making a wrong call could lead to rubbish cookies, and lead to someone heading down the path of an I can’t cook mentality.

    You may be able to cook (or follow recipes), but you may not understand the local terms for ingredients, may not be able to acquire something and need to know what kind of substitutes you can use, or may need to actually do some prep before you jump into the main bit.

    However, if we look at good examples of recipes, I think there’s a lot we can apply when it comes to technical writing on the web. I’ve written before about the benefit of breaking documentation into small, reusable parts, and this will help us, but we can also take it a bit further. Here are my five top tips for better technical writing.

    1. Structure and standardise your information

    Think of the structure of a recipe. We very often have some common elements and they usually follow roughly the same format. We have standards and conventions that allow us to understand very quickly what a recipe is and how it should be used. 

    Great recipes help their chefs know what they need to get ready in advance, both in terms of buying ingredients and putting together their kit. They then talk through the process, using appropriate language, and without making assumptions that the person can fill in any gaps for themselves; they explain why things are done the way they are. The best recipes may also suggest how you can take what you’ve done and put your own spin on it. For instance, a good recipe for the simple act of boiling an egg will explain cooking time in relation to your preference for yolk gooiness. There are also different flavour combinations to try, accompaniments, or presentation suggestions. 

    By breaking down your technical writing into similar sections, you can help your audience understand the elements they’ll be working with, what they need to do once they have these, and how they can move on from your self-contained illustration.


Title

Ensure your title is suitably descriptive and representative of the result. Getting Started with Python perhaps isn’t as helpful as Learn Python: General Syntax and Basics.


Overview

Many recipes include a couple of lines as an overview of what you’ll end up with, and many include a photo of the finished dish. With our technical writing we can do the same:

    In this tutorial we’re going to learn how to set up our development environment, and we’ll then undertake some exercises to explore the general syntax, finishing by building a mini calculator.


Ingredients

What are the components we’ll be working with, whether in terms of versions, environment, languages or the software packages and libraries you’ll need along the way? Listing these up front gives the reader a great summary of the things they’ll be using, and any gotchas.

Being able to provide a small amount of supporting information will also help less experienced users. Ideally, explain briefly what things are and why we’re using them.


Before you begin

As we heard from Erin above, not fully understanding the prep needed can be a huge source of frustration. Attempting to run a code snippet without context will often lead to failure when the prerequisites and process aren’t clear. Be sure to include information around any environment set-up, installation or config you’ll need to have done before you start.

    Stu Robson’s Simple Sass documentation aims to do this before getting into specifics, although ideally this would also include setting up Sass itself.


Method

The body of the tutorial itself is the whole point of our writing. The next four tips will hopefully make your tutorial much more successful.


Variations

As with our ingredients section, explaining why we’re using something in this context is important, but it’s also great to explain alternatives that could be used instead, and the impact of doing so.

    Perhaps go a step further, explaining ways that people can change what you have done in your tutorial/readme for use in different situations, or to provide further reading around next steps. What happens if they want to change your static array of demo data to use JSON, for instance? By giving some thought to follow-up questions, you can better support your readers.

    While not in a separate section, the source code for GreenSock’s GSAP JS basics explains:

    We’ll use a window.onload for simplicity, but typically it is best to use either jQuery’s $(document).ready() or $(window).load() or cross-browser event listeners so that you’re not limited to one.

    Keep in mind to both:

    • Explain what variations are possible.
    • Explain why certain options may be more desirable than others in different situations.

    2. Small, reusable components

    Reusable components are for life, not just for Christmas, and they’re certainly not just for development. If you start to apply the structure above to your writing, you’re probably going to keep coming across the same elements: Do I really have to explain how to install Sass and Node.js again, Sally? The danger with more clarity is that our writing becomes bloated and overly convoluted for advanced readers, those who don’t need to be told how to beat an egg for the hundredth time. 

    Instead, by making our writing reusable and modular, and by creating smaller, central resources, we can provide context and extra detail where needed without diluting our core message. These could be references we create, or those already created well by others.

    This recipe for katsudon makes use of this concept. Rather than explaining how to make tonkatsu or dashi stock, these each have their own page. Once familiar, more advanced readers will likely skip over the instructions for the component parts.

    3. Provide context to aid accessibility

    Here I’m talking about accessibility in the broadest sense. Small, isolated snippets can be frustrating to those who don’t fully understand the wider context of how our examples work.

    Showing an exciting standalone JavaScript function is great, but giving someone the full picture of how and when this is called, and how it should be included in relation to other HTML and CSS is even better. Giving your readers the ability to view a big picture version, and ideally the ability to download a full version of the source, will help to reduce some of the frustrations of trying to get your component to work in their set-up. 

    4. Be your own tech editor

    A good editor can be invaluable to your work, and wherever possible I’d recommend that you try to get a neutral party to read over your writing. This may not always be possible, though, and you may need to rely on yourself to cast a critical eye over your work.

    There are many tips out there around general editing, including printing out your work onto paper, or changing the font size: both will force your eyes to review it in a new light. Beyond this, I’d like to encourage you to think about the following:

    • Explain what things are. For example, instead of referencing Grunt, in the first instance perhaps reference “Grunt (a JavaScript task runner that minimises repetitive activities through automation).”
    • Explain how you get things, even if this is a link to official installers and documentation. Don’t leave your readers having to search.
    • Why are you using this approach/technology over other options?
    • What happens if I use something else? What depends on this?
    • Avoid exclusionary lingo or acronyms.

    Airbnb’s JavaScript Style Guide includes useful pointers around their reasoning:

    Use computed property names when creating objects with dynamic property names.

    Why? They allow you to define all the properties of an object in one place.

    The language we use often makes assumptions, as we saw with “just”. An article titled “ES6 for Beginners” is hugely ambiguous: is this truly for beginner coders, or actually for people who have a good pre-existing understanding of JavaScript but are new to these features? Review your writing with different types of readers in mind. How might you confuse or mislead them? How can you better answer their questions?

    This doesn’t necessarily mean supporting everyone – your audience may need to have advanced skills – but even if you’re providing low-level, deep-dive, reference material, trying not to make assumptions or take shortcuts will hopefully lead to better, clearer writing.

    5. A picture is worth a thousand words…

    …or even better: use a thousand pictures, stitched together into a quick video or animated GIF. People learn in different ways. Just as recipes often provide visual references or a video to work along with, providing your technical information with alternative demonstrations can really help get your point across. Your audience will be able to see exactly what you’re doing, what they should expect as interaction responses, and what the process looks like at different points.

There are many, many options for recording your screen, including QuickTime Player on Mac OS X (File > New Screen Recording), GifGrabber, or Giffing Tool on Windows.

    Paul Swain, a UX designer, uses GIFs to provide additional context within his documentation, improving communication:

    “My colleagues (from across the organisation) love animated GIFs. Any time an interaction is referenced, it’s accompanied by a GIF and a shared understanding of what’s being designed. The humble GIF is worth so much more than a thousand words; and it’s great for cats.”

    Paul Swain

    Next time you’re cooking up some instructions for readers, think back to what we can learn from recipes to help make your writing as accessible as possible. Use structure, provide reusable bitesize morsels, give some context, edit wisely, and don’t scrimp on the GIFs. And above all, have a great Christmas!

    About the author

    Sally Jenkinson is a consultant and digital solutions architect based in the UK, who, through her company Records Sound the Same, helps businesses from big to small with their discovery, requirements, and strategic digital decisions.

    Central to this are Sally’s views of responsibly using technology to enhance experiences, improving older systems and processes through transformation work, and talking about technical things in a way that isn’t scary or boring to her clients. She has worked with people including Inghams, Nokia, Macmillan Cancer Support, and Electronic Arts, and is also a speaker, an author, and overenthusiastic jasmine tea drinker.

You can find out more about Sally’s work on the Records Sound the Same website, and she tweets as @sjenkinson when she’s not got her head stuck in a comic book or her hands wrapped around an Xbox One controller.

    More articles by Sally

    The Accessibility Mindset

    posted: 16 Dec 2015

    Eric Eggert celebrates the simplicity of making websites accessible and, when accessibility is as fundamental to a project as performance and code quality, how it can improve the experience for all users. The web, like a gleeful cheer of “Merry Christmas” at yuletide, is for everyone.

    Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.

    Indeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn’t particularly harder to learn than those other skills.

    A World Health Organization (WHO) report on disabilities states that,

    [i]ncluding children, over a billion people (or about 15% of the world’s population) were estimated to be living with disability.

    Being disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.

    Accessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users’ needs when they are built in an accessible fashion.

    Beyond the bare minimum

    In the time of table layouts, web developers could create code that passed validation rules but didn’t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can’t do is measure how well you met them.

    W3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.

    The checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:

    The label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox:

    The gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:

    You can also set the label to display:block; to further increase the clickable area:

    And while we’re at it, users might expect the whole box to be clickable anyway. Let’s apply the CSS that was on a wrapping div element to the label directly:

    The result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.

    <form action="#"> <label for="uniquecheckboxid"> <input type="checkbox" name="checkbox" id="uniquecheckboxid" /> Checkbox 4 </label> </form>
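The accompanying CSS might look roughly like this (a minimal sketch with placeholder values; the point is that the block display and the box styling now live on the label itself):

label {
  display: block;              /* the whole box becomes part of the label */
  padding: 1em;
  background-color: #ffd;      /* the highlighted hover area */
  border: 1px dotted #000;
}

label input[type="checkbox"] {
  margin-right: 1em;           /* the gutter now sits inside the clickable area */
}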

    Button Example

    The button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as “button” without any indication of what this button is for.

    A screen reader announcing a button. Contains audio.

    The code snippet shows why the button is not properly announced:

    <button> <span class="icon icon-pencil"></span> </button>

    An icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:
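For example (an illustrative snippet rather than the demo’s exact markup):

<!-- aria-label works in the same way: <button aria-label="Edit">…</button> -->
<button title="Edit">
  <span class="icon icon-pencil"></span>
</button>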

    A screen reader announcing a button with a title.

    However, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn’t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:

    The mobile GitHub view with and without external fonts.

    Even if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn’t load.

    <button> <img src="icon-pencil.svg" alt="Edit"> </button>

    Providing always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.

    <button> <span class="icon icon-pencil"></span> Edit </button>

    This also reads just fine in screen readers:

    A screen reader announcing the revised button.

    Clever usability enhancements don’t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the “captioned videos” or “audio description” categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don’t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.

    More information

W3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out “How People with Disabilities Use the Web”, and there are “Tips for Getting Started” for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.


    You can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don’t notice when those fundamental aspects of good website design and development are done right. But they’ll always know when they are implemented poorly.

    If you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn’t know they’d need it:

In this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks “What did she say?” the video jumps back fifteen seconds and captions are switched on for a brief time. The remote, voice input and captions all have their roots in assisting people with disabilities. Now they benefit everyone.

    About the author

    Eric Eggert is an accessibility advocate living in Essen, Germany, currently working for the W3C’s Web Accessibility Initiative, helping the WCAG and EO Working Groups to make accessibility information easier to find and use. He is co-editor of the W3C Web Accessibility Tutorials. He also co-owns a small agency for web development and consulting, called outline. When he is not working or giving talks about web topics, he enjoys a game of pool, a festival, or a movie.

    More articles by Eric

    Beyond the Style Guide

    posted: 15 Dec 2015

    Paul Robert Lloyd runs his finger along the seam between interface patterns and design systems, exploring how a visual design language can underpin and inform a web style guide, with judicious use of CSS preprocessing. Like a good Christmas jumper, sometimes you need to get creative with the rules.

    Much like baking a Christmas cake, designing for the web involves creating an experience in layers. Starting with a solid base that provides the core experience (the fruit cake), we can add further layers, each adding refinement (the marzipan) and delight (the icing).

    Don’t worry, this isn’t a misplaced cake recipe, but an evaluation of modular design and the role style guides can play in acknowledging these different concerns, be they presentational or programmatic.

    The auteur’s style guide

    Although trained as a graphic designer, it was only when I encountered the immediacy of the web that I felt truly empowered as a designer. Given a desire to control every aspect of the resulting experience, I slowly adopted the role of an auteur, exploring every part of the web stack: front-end to back-end, and everything in between. A few years ago, I dreaded using the command line. Today, the terminal is a permanent feature in my Dock.

In straddling the realms of graphic design and programming, it’s the point at which they meet that I find most fascinating, with each discipline valuing the creation of effective systems, be they for communication or code efficiency. Front-end style guides live at this intersection, demonstrating both the modularity of code and the application of visual design.

    Painting by numbers

    In our rush to build modular systems, design frameworks have grown in popularity. While enabling quick assembly, these come at the cost of originality and creative expression – perhaps one reason why we’re seeing the homogenisation of web design.

    In editorial design, layouts should accentuate content and present it in an engaging manner. Yet on the web we see a practice that seeks templated predictability. In ‘Design Machines’ Travis Gertz argued that (emphasis added):

    Design systems still feel like a novelty in screen-based design. We nerd out over grid systems and modular scales and obsess over style guides and pattern libraries. We’re pretty good at using them to build repeatable components and site-wide standards, but that’s sort of where it ends. […] But to stop there is to ignore the true purpose and potential of a design system.

    Unless we consider how interface patterns fully embrace the design systems they should be built upon, style guides may exacerbate this paint-by-numbers approach, encouraging conformance and suppressing creativity.

    Anatomy of a button

    Let’s take a look at that most canonical of components, the button, and consider what we might wish to document and demonstrate in a style guide.

    The different layers of our button component.


Content

The most variable aspect of any component. Content guidelines will exert the most influence here, dictating things like tone of voice (whether we should use stiff, formal language like ‘Submit form’, or adopt a more friendly tone, perhaps ‘Send us your message’) and appropriate language. For an internationalised interface, this may also impact word length and text direction or orientation.


Structure

HTML provides a limited vocabulary which we can use to structure content and add meaning. For interactive elements, the choice of element can also affect its behaviour, such as whether a button submits form data or links to another page:

    <button type="submit">Button text</button>

    <a href="/index.html">Button text</a>

Note: One of the reasons I prefer to use <button> instead of <input type="button">, besides allowing the inclusion of content other than text, is that it has a markup structure similar to links, therefore keeping implementation differences to a minimum.

    We should also think about each component within the broader scope of our particular product. For this we can employ a further vocabulary, which can be expressed by adding values to the class attribute. For a newspaper, we might use names like lede, standfirst and headline, while a social media application might see us reach for words like stream, action or avatar.


Presentation

The appearance of a component can never be considered in isolation. Informed by its relationship to other elements, style guides may document different stylistic variations of a component, even if the underlying function remains unchanged: primary and secondary button styles, for example.


State

A component can exhibit various states: blank, loading, partial, error and ideal, and a style guide should reflect that. Our button component is relatively simple, yet even here we need to consider hover, focused, active and disabled states.

    Transcending layers

    This overview reinforces Ethan’s note from earlier in this series:

    I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web.

While it’s tempting to describe a component as a series of layers, certain aspects will transcend several of these. The accessibility of a component, for example, may influence the choice of language, the legibility of text, colour contrast and which affordances are provided in different states.

    Visual design language: documenting the missing piece

    Even given this small, self-contained component, we can see several concerns at play, and in reviewing our button it seems we have most things covered. However, a few questions remain unanswered. Why does it have a blue background? Why are the borders 2px thick, with a radius of 4px? Why are we using that font, at that size and with that weight?

These questions can be answered by our visual design language. More than a set of type choices and colour palettes, a design language can dictate common measures, ratios and the resulting grid(s) these influence. Ideally governed by a set of broader design principles, it can also inform an illustration style, the type of photography sourced or commissioned, and the behaviour of any animations.

    Whereas a style guide ensures conformity, having it underpinned by an effective design language will allow for flexibility; only by knowing the rules can you know how to break them!

    Type pairings in the US Web Design Standards guide.

    For a style guide to thoroughly articulate a visual design system, the spectrum of choices it allows for should be acknowledged. A fantastic example of this can be found in the US Web Design Standards. By virtue of being a set of standards designed to apply to a number of different sites, this guide offers a range of type pairings (that take into account performance considerations) and provides primary, secondary and tertiary palette relationships, with shades and tones thereof:

    Colour palettes in the US Web Design Standards guide.

    A visual language in code form

Properly documenting our design language in a style guide is a good start, but it’s even better if that language can be expressed in code. This is where CSS preprocessors become a powerful ally.

    In Sass, methods like mixins and maps can help us represent relationships between values. Variables (and CSS variables) extend the vocabulary provided natively by CSS, meaning we can describe patterns in terms of our own visual language. These tools effectively become an interface to our design system. Furthermore, they help maintain a separation of concerns, with visual presentation remaining where it should be: in our style sheets.
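To make that concrete, here is a rough sketch of what such an interface could look like. The palette() function and the font-family and font-size mixins mirror how they are used in the example further down, but every name and value here is invented for illustration:

$baseline: 0.375rem;
$rule-height: 2px;

$palettes: (
  neutral: (dark: #332e2a, mid: #8a857f),
  primary: (purple: #7d4e8c)
);

@function palette($palette, $tone) {
  @return map-get(map-get($palettes, $palette), $tone);
}

$font-stacks: (
  display-serif: ('Clarendon', serif),
  display-sans:  ('Futura', sans-serif),
  text-serif:    (Georgia, serif),
  text-sans:     ('Helvetica Neue', sans-serif)
);

@mixin font-family($family, $weight: normal) {
  font-family: map-get($font-stacks, $family);
  font-weight: $weight;
}

$font-sizes: (
  title:   (1.75rem, 1.2),
  body:    (1rem, 1.5),
  caption: (0.8125rem, 1.4)
);

@mixin font-size($size) {
  font-size: nth(map-get($font-sizes, $size), 1);
  line-height: nth(map-get($font-sizes, $size), 2);
}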

    Take this simple example, an article summary on a website counting down the best Christmas movies:

    The design for our simple component example.

    Our markup is as follows, using appropriate semantic HTML elements and incorporating the vocabulary from our collection of design patterns (expressed using the BEM methodology):

<article class="summary">
  <h1 class="summary__title">
    <a href="scrooged.html">
      <span class="summary__position">12</span>
      Scrooged (1988)
    </a>
  </h1>
  <div class="summary__body">
    <p>It’s unlikely that Bill Murray could ever have got through his career without playing a version of Scrooge…</p>
  </div>
  <footer class="summary__meta">
    <strong>Director:</strong> Richard Donner<br/>
    <strong>Stars:</strong> Bill Murray, Buddy Hackett, Karen Allen
  </footer>
</article>

    We can then describe the presentation of this HTML by using Sass maps to define our palettes, mixins to include predefined font metrics, and variables to recall common measurements:

.summary {
  margin-bottom: ($baseline * 4);
}

.summary__title {
  @include font-family(display-serif);
  @include font-size(title);
  color: palette(neutral, dark);
  margin-bottom: ($baseline * 4);
  border-top: $rule-height solid palette(primary, purple);
  padding-top: ($baseline * 2);
}

.summary__position {
  @include font-family(display-sans, 300);
  color: palette(neutral, mid);
}

.summary__body {
  @include font-family(text-serif);
  @include font-size(body);
  margin-bottom: ($baseline * 2);
}

.summary__meta {
  @include font-family(text-sans);
  @include font-size(caption);
}

Of course, this is a simplistic example for the purposes of demonstration. However, such thinking was employed at a much larger scale at the Guardian. Using a set of Sass components, complex patterns could be described using a language familiar to everyone on the product team, be they a designer, developer or product owner:

    The design of a component on the Guardian website, described in terms of its Sass-powered design system.

    Unlocking possibility

    Alongside tools like preprocessors, newer CSS layout modules like flexbox and grid layout mean the friction we’ve long been accustomed to when creating layouts on the web is no longer present, and the full separation of presentation from markup is now possible. Now is the perfect time for graphic designers to advocate design systems that these developments empower, and ensure they’re fully represented in both documentation and code. That way, together, we can build systems that allow for greater visual expression. After all, there’s more than one way to bake a Christmas cake.

    About the author

    Paul Robert Lloyd is an independent designer, writer and speaker who helps organisations like the Guardian, UNICEF and Mozilla create purposeful digital products.

    If not indulging in his love of train travel, Paul can be found in a coffee shop, either writing for his blog, or blathering on Twitter.

    More articles by Paul

Grid, Flexbox, Box Alignment: Our New System for Layout

    posted: 14 Dec 2015

    Rachel Andrew unwraps the new paradigms of web layout, comparing their features and showing how they can free us from grid-based, div-infested frameworks. Beautifully wrapped boxes look lovely under the Christmas tree, but we need to think and break out of them.


    Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers.

    Grid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work.

Another new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, the flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported.

    Owing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks.

If there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years’ time we’ll easily be able to date a website to circa 2015 by seeing <div class="row"> or <div class="col-md-3"> in the markup.

    In this article, I’m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them.

    To see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here.


Relationship

Items only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It’s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout.

    At a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox:

    See the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen.

    And in grid layout (requires a CSS grid-supporting browser):

    See the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen.
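The essence of both demos is tiny. As a rough sketch, with my own class names rather than those used in the Pens:

/* Flexbox: direct children become flex items and, with the default
   align-items: stretch, every column matches the height of the tallest. */
.columns {
  display: flex;
}
.columns > * {
  flex: 1;
}

/* Grid: three equal-width tracks; items stretch to fill their row by default. */
.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
}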


Box alignment

Full-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you’d like to have under your tree this Christmas, then this is the box you’ll want to unwrap first.

The box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods, grid layout included. Once implemented in browsers, this specification will give us true vertical centring of all the things.

    Our examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property.
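In code, that looks something like this (the selectors are illustrative):

.container {
  display: flex;
  align-items: center;    /* vertically centre every item in the container */
}
.container .item {
  align-self: flex-end;   /* override the alignment for this one item */
}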

    The examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self.

    Take a look at an example in flexbox:

    See the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen.

    And also in grid layout (requires a CSS grid-supporting browser):

    See the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen.

    The alignment properties used with CSS grid layout.

    Fluid grids

    A cornerstone of responsive design is the concept of fluid grids.

“[…] every aspect of the grid—and the elements laid upon it—can be expressed as a proportion relative to its container.”
—Ethan Marcotte, “Fluid Grids”

    The method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element.

h1 {
  margin-left: 14.575%;  /* 144px / 988px = 0.14575 */
  width: 70.85%;         /* 700px / 988px = 0.7085 */
}

    In more recent years, we’ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple.

    The most basic of flexbox demos shows this fluidity in action. The justify-content property – another property defined in the box alignment module – can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion.

    In this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between.
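Reduced to its essentials, and with my own selectors rather than the demo’s exact code, the CSS is:

ul {
  display: flex;
  justify-content: space-between;  /* spare space is distributed between the items */
  list-style: none;
  margin: 0;
  padding: 0;
}
ul li {
  max-width: 250px;
}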

    See the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen.

    For true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion.

    The flexbox flex property is a shorthand for:

    • flex-grow
    • flex-shrink
    • flex-basis

    The flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value.

    • flex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis.
    • flex: 0 0 200px: a box that will be 200px and cannot grow or shrink.
    • flex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller.

    In this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis.

    See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.

    What I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out.

    If I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo:

    See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.
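In CSS terms the change is tiny (the class names here are invented):

.box {
  flex: 3 1 100px;   /* three shares of the spare space, from a 100px basis */
}
.box--portrait {
  flex-grow: 1;      /* the portrait box takes only one share */
}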

    Once you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes.

    In this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr.

    grid-template-columns: 1fr 2fr 2fr 2fr;

    See the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen.

    The four-track grid.

    Separation of concerns

    My younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup.

    Browsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another.

Here is a snippet from the Bootstrap grid examples, showing two columns with two nested columns:

<div class="row">
  <div class="col-md-8">
    .col-md-8
    <div class="row">
      <div class="col-md-6">
        .col-md-6
      </div>
      <div class="col-md-6">
        .col-md-6
      </div>
    </div>
  </div>
  <div class="col-md-4">
    .col-md-4
  </div>
</div>

    Not a million miles away from something I might have written in 1999.

<table>
  <tr>
    <td class="col-md-8">
      .col-md-8
      <table>
        <tr>
          <td class="col-md-6">
            .col-md-6
          </td>
          <td class="col-md-6">
            .col-md-6
          </td>
        </tr>
      </table>
    </td>
    <td class="col-md-4">
      .col-md-4
    </td>
  </tr>
</table>

    Grid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer.

    Flexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him.

    See the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen.
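The crucial part of that demo boils down to something like this (Widget’s class name is invented):

.photos {
  display: flex;
}
.photos .widget {
  order: 1;   /* other items default to order: 0, so Widget is displayed after them */
}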

    Grid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser):

    See the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen.

    Laying out our items in two dimensions using grid layout.
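One way to do this kind of two-dimensional positioning is with line-based placement, roughly as follows (the track numbers and selectors are illustrative):

.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
}
.wrapper .featured {
  grid-column: 1 / 3;   /* from column line 1 to column line 3: two tracks wide */
  grid-row: 2;          /* placed on the second row, regardless of source order */
}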

    As these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout.

    A note on accessibility and reordering

    The issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor’s draft states

    “Authors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.”
    CSS Flexible Box Layout Module Level 1, Editor’s Draft (3 December 2015)

    This is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning.

    Automatic content placement with rules

    Having control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities.

Something that is useful to be able to do when dealing with content coming out of a CMS, or content pulled from some other source, is to define a bunch of rules and then say, “Display this content, using these rules.”

    As an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever.

    The poem is displayed first in the source as a set of paragraphs. I’ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I’ve added a class of landscape to the landscape ones.

    The mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid.

    At wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order.

    I also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour.
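The relevant rules boil down to something like this rough sketch, not the full demo code (the paragraph class name is invented):

.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-auto-flow: dense;   /* let grid backfill gaps with later items */
}
.landscape {
  grid-column: span 2;     /* landscape images span two column tracks */
}
.verse-one {
  grid-column: 1 / 3;      /* explicit, line-based placement for a poem paragraph */
  grid-row: 1;
}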

    Take a look and play around with the full demo (requires a CSS grid layout-supporting browser):

    See the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen.

    The final automatic placement example.

    My wish for 2016

    I really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production — that we can start to move away from using the wrong tools for the job.

    However, I also hope that we’ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web.

    Some further reading

    About the author

    Rachel Andrew is a Director of, a UK web development consultancy and creators of the small content management system, Perch. She is the author of a number of books, and is a regular columnist for A List Apart.

    She curates a popular email newsletter on CSS Layout, and will be launching a CSS Layout online workshop in early 2016.

    When not writing about business and technology on her blog at or speaking at conferences, you will usually find Rachel running up and down one of the giant hills in Bristol.

    More articles by Rachel

    What I Learned about Product Design This Year

    posted: 13 Dec 2015

    Meagan Fisher casts a thoughtful glance back over her work this year and considers what it means to design a product rather than a marketing experience. Pour yourself a cup of eggnog, grab a mince pie or two and gather around the fireside. Are you sitting comfortably? Good, then we shall begin.

2015 was a humbling year for me. In September of 2014, I joined a tiny but established startup called SproutVideo as their third employee and first designer. The role interested me because it afforded the opportunity to see how design can grow a solid product with a loyal user base into something even better.

    The work I do now could also have a real impact on the brand and user experience of our product for years to come, which is a thrilling prospect in an industry where much of what I do feels small and temporary. I got in on the ground floor of something special: a small, dedicated, useful company that cares deeply about making video hosting effortless and rewarding for our users.

    I had (and still have) grand ideas for what thoughtful design can do for a product, and the smaller-scale product design work I’ve done or helped manage over the past few years gave me enough eager confidence to dive in head first. Readers who have experience redesigning complex existing products probably have a knowing smirk on their face right now. As I said, it’s been humbling. A year of focused product design, especially on the scale we are trying to achieve with our small team at SproutVideo, has taught me more than any projects in recent memory. I’d like to share a few of those lessons.

    Product design is very different from marketing design

    The majority of my recent work leading up to SproutVideo has been in marketing design. These projects are so fun because their aim is to communicate the value of the product in a compelling and memorable way. In order to achieve this goal, I spent a lot of time thinking about content strategy, responsive design, and how to create striking visuals that tell a story. These are all pursuits I love.

    Product design is a different beast. When designing a homepage, I can employ powerful imagery, wild gradients, and somewhat-quirky fonts. When I began redesigning the SproutVideo product, I wanted to draw on all the beautiful assets I’ve created for our marketing materials, but big gradients, textures, and display fonts made no sense in this new context.

    That’s because the product isn’t about us, and it isn’t about telling our story. Product design is about getting out of the way so people can do their job. The visual design is there to create a pleasant atmosphere for people to work in, and to help support the user experience. Learning to take “us” out of the equation took some work after years of creating gorgeous imagery and content for the sales-driven side of businesses.

    I’ve learned it’s very valuable to design both sides of the experience, because marketing and product design flex different muscles. If you’re currently in an environment where the two are separate, consider switching teams in 2016. Designing for product when you’ve mostly done marketing, or vice versa, will deepen your knowledge as a designer overall. You’ll face new unexpected challenges, which is the only way to grow.

Product design cannot start with what looks good on Dribbble

    I have an embarrassing confession: when I began the redesign, I had a secret goal of making something that would look gorgeous in my portfolio. I have a collection of product shots that I admire on Dribbble; examples of beautiful dashboards and widgets and UI elements that look good enough to frame. I wanted people to feel the same way about the final outcome of our redesign. Mistakenly, this was a factor in my initial work. I opened Photoshop and crafted pixel-perfect static buttons and form elements and color palettes that — when applied to our actual product — looked like a toddler beauty pageant. It added up to a lot of unusable shininess, noise, and silliness.

    I was disappointed; these elements seemed so lovely in isolation, but in context, they felt tacky and overblown. I realized: I’m not here to design the world’s most beautiful drop down menu. Good design has nothing to do with ego, but in my experience designers are, at least a little bit, secret divas. I’m no exception. I had to remind myself that I am not working in service of a bigger Dribbble following or to create the most Pinterest-ing work. My function is solely to serve the users — to make life a little better for the good people who keep my company in business.

    This meant letting go of pixel-level beauty to create something bigger and harder: a system of elements that work together in harmony in many contexts. The visual style exists to guide the users. When done well, it becomes a language that users understand, so when they encounter a new feature or have a new goal, they already feel comfortable navigating it. This meant stripping back my gorgeous animated menu into something that didn’t detract from important neighboring content, and could easily fit in other parts of the app. In order to know what visual style would support the users, I had to take a wider view of the product as a whole.

    Just accept that designing a great product – like many worthwhile pursuits – is initially laborious and messy

    Once I realized I couldn’t start by creating the most Dribbble-worthy thing, I knew I’d have to begin with the unglamorous, frustrating, but weirdly wonderful work of mapping out how the product’s content could better be structured. Since we’re redesigning an existing product, I assumed this would be fairly straightforward: the functionality was already in place, and my job was just to structure it in a more easily navigable way.

    I started by handing off a few wireframes of the key screens to the developer, and that’s when the questions began rolling in: “If we move this content into a modal, how will it affect this similar action here?” “What happens if they don’t add video tags, but they do add a description?” “What if the user has a title that is 500 characters long?” “What if they want their video to be private to some users, but accessible to others?”.

    How annoying (but really, fantastic) that people use our product in so many ways. Turns out, product design isn’t about laying out elements in the most ideal scenario for the user that’s most convenient for you. As product designers, we have to foresee every outcome, and anticipate every potential user need.

    Which brings me to another annoying epiphany: if you want to do it well, and account for every user, product design is so much more snarly and tangled than you’d expect going in. I began with a simple goal: to improve the experience on just one of our key product pages. However, every small change impacts every part of the product to some degree, and that impact has to be accounted for. Every decision is based on assumptions that have to be tested; I test my assumptions by observing users, talking to the team, wireframing, and prototyping. Many of my assumptions are wrong. There are days when it’s incredibly frustrating, because an elegant solution for users with one goal will complicate life for users with another goal. It’s vital to solve as many scenarios as possible, even though this is slow, sometimes mind-bending work.

    As a side bonus, wireframing and prototyping every potential state in a product is tedious, but your developers will thank you for it. It’s not their job to solve what happens when there’s an empty state, error, or edge case. Showing you’ve accounted for these scenarios will win a developer’s respect; failing to do so will frustrate them.

    When you’ve created and tested a system that supports user needs, it will be beautiful

    Remember what I said in the beginning about wanting to create a Dribbble-worthy product? When I stopped focusing on the visual details of the design (color, spacing, light and shadow, font choices) and focused instead on structuring the content to maximize usability and delight, a beautiful design began to emerge naturally.

    I began with grayscale, flat wireframes as a strategy to keep me from getting pulled into the visual style before the user experience was established. As I created a system of elements that worked in harmony, the visual style choices became obvious. Some buttons would need to be brighter and sit off the page to help the user spot important actions. Some elements would need line separators to create a hierarchy, where others could stand on their own as an emphasized piece of content. As the user experience took shape, the visual style emerged naturally to support it. The result is a product that feels beautiful to use, because I was thoughtful about the experience first.

    A big takeaway from this process has been that my assumptions will often be proven wrong. My assumptions about how to design a great product, and how users will interact with that product, have been tested and revised repeatedly. At SproutVideo we’re about to undertake the biggest test of our work; we’re going to launch a small part of the product redesign to our users. If I’ve learned anything, it’s that I will continue to be humbled by the ongoing effort of making the best product I can, which is a wonderful thing.

Next year, I hope you all get to do work that takes you out of your comfort zone. Be regularly confounded and embarrassed by your wrong assumptions, learn from them, and come back and tell us what you learned in 2016.

    About the author

    Meagan Fisher is passionate about owls, coffee, and web design. In her ongoing mission to make the web a better place, she’s partnered with some of the best designers in the industry, such as SimpleBits, Happy Cog, and Crush + Lovely. When she’s not creating interfaces, she’s speaking, tweeting, writing on Owltastic, or posting coffee art photography to Art in my Coffee.

    More articles by Meagan

    Designing with Contrast

    posted: 12 Dec 2015

    Mark Mitchell casts coarse salt upon the pale icy sheen of recent web design aesthetics to sound a warning that we may be on thin ice. The tension between low contrast tastes and high contrast needs is a story as old as the <font> tag, and yet it bears frequent retelling. For snow has fallen snow on snow.

    When an appetite for aesthetics over usability becomes the bellwether of user interface design, it’s time to reconsider who we’re designing for.

    Over the last few years, we have questioned the signifiers that gave obvious meaning to the function of interface elements. Strong textures, deep shadows, gradients — imitations of physical objects — were discarded. And many, rightfully so. Our audiences are now more comfortable with an experience that feels native to the technology, so we should respond in kind.

    Yet not all of the changes have benefitted users. Our efforts to simplify brought with them a trend of ultra-minimalism where aesthetics have taken priority over legibility, accessibility and discoverability. The trend shows no sign of losing popularity — and it is harming our experience of digital content.

    A thin veneer

    We are in a race to create the most subdued, understated interface. Visual contrast is out. In its place: the thinnest weights of a typeface and white text on bright color backgrounds. Headlines, text, borders, backgrounds, icons, form controls and inputs: all grey.

    While we can look back over the last decade and see minimalist trends emerging on the web, I think we can place a fair share of the responsibility for the recent shift in priorities on Apple. The release of iOS 7 ushered in a radical change to its user interface. It paired mobile interaction design to the simplicity and eloquence of Apple’s marketing and product design. It was a catalyst. We took what we saw, copied and consumed the aesthetics like pick-and-mix.

    New technology compounds this trend. Computer monitors and mobile devices are available with screens of unprecedented resolutions. Ultra-light type and subtle hues, difficult to view on older screens, are more legible on these devices. It would be disingenuous to say that designers have always worked on machines representative of their audience’s circumstances, but the gap has never been as large as it is now. We are running the risk of designing VIP lounges where the cost of entry is a Mac with a Retina display.

    Minimalist expectations

    Like progressive enhancement in an age of JavaScript, many good and sensible accessibility practices are being overlooked or ignored. We’re driving unilateral design decisions that threaten accessibility. We’ve approached every problem with the same solution, grasping on to the integrity of beauty, focusing on expression over users’ needs and content.

    Someone once suggested to me that a client’s website should include two states. The first state would be the ideal experience, with low color contrast, light font weights and no differentiation between links and text. It would be the default. The second state would be presented in whatever way was necessary to meet accessibility standards. Users would have to opt out of the default state via a toggle if it wasn’t meeting their needs. A sort of first-class, upper deck cabin equivalent of graceful degradation. That this would divide the user base was irrelevant, as the aesthetics of the brand were absolute.

    It may seem like an unusual anecdote, but it isn’t uncommon to see this thinking in our industry. Again and again, we place the burden of responsibility to participate in a usable experience on others. We view accessibility and good design as mutually exclusive. Taking for granted what users will tolerate is usually the forte of monopolistic services, but increasingly we apply the same arrogance to our new products and services.

    Imitation without representation

    All of us are influenced in one way or another by one another’s work. We are consciously and unconsciously affected by the visual and audible activity around us. This is important and unavoidable. We do not produce work in a vacuum. We respond to technology and culture. We channel language and geography. We absorb the sights and sounds of film, television, news. To mimic and copy is part and parcel of creating something an audience of many can comprehend and respond to. Our clients often look first to their competitors’ products to understand their success.

    However, problems arise when we focus on style without context; form without function; mimicry as method. Copied and reused without any of the ethos of the original, stripped of deliberate and informed decision-making, the so-called look and feel becomes nothing more than paint on an empty facade.

    The typographic and color choices so in vogue today with our popular digital products and services have little in common with the brands they are meant to represent.

    For want of good design, the message was lost

    The question to ask is: does the interface truly reflect the product? Is it an accurate characterization of the brand and organizational values? Does the delivery of the content match the tone of voice?

    The answer is: probably not. Because every organization, every app or service, is unique. Each with its own personality, its own values and wonderful quirks. Design is communication. We should do everything in our role as professionals to use design to give voice to the message. Our job is to clearly communicate the benefits of a service and unreservedly allow access to information and content. To do otherwise, by obscuring with fashionable styles and elusive information architecture, does a great disservice to the people who chose to engage with and trust our products.

    We can achieve hierarchy and visual rhythm without resorting to extreme reduction. We can craft a beautiful experience with fine detail and curiosity while meeting fundamental standards of accessibility (and strive to meet many more).

    Standards of excellence

    It isn’t always comfortable to step back and objectively question our design choices. We get lost in the flow of our work, using patterns and preferences we’ve tried and tested before. That our decisions often seem like second nature is a gift of experience, but sometimes it prevents us from finding our blind spots.

    I was first caught out by my own biases a few years ago, when designing an interface for the Bank of England. After deciding on the colors for the typography and interactive elements, I learned that the site had to meet AAA accessibility standards. My choices quickly fell apart. It was eye-opening. I had to start again with restrictions and use size, weight and placement instead to construct the visual hierarchy.

    Even now, I make mistakes. On a recent project, I used large photographs on an organization’s website to promote their products. Knowing that our team had control over the art direction, I felt confident that we could compose the photographs to work with text overlays. Despite our best effort, the cropped images weren’t always consistent, undermining the text’s legibility. If I had the chance to do it again, I would separate the text and image.

    So, what practical things can we consider to give our users the experience they deserve?

    Put guidelines in place

    • Think about your brand values. Write down keywords and use them as a framework when choosing a typeface. Explore colors that convey the organization’s personality and emotional appeal.
• Define a color palette that is web-ready and meets minimum accessibility standards. Note which colors are suitable for use with text. Only very dark hues of grey are consistently legible, so reserve lighter greys for non-essential text (for example, as placeholders in form inputs).
    • Find which background colors you can safely use with white text, and consider integrating contrast checks into your workflow.
    • Use roman and medium weights for body copy. Reserve lighter weights of a typeface for very large text. Thin fonts are usually the first to break down because of aliasing differences across platforms and screens.
    • Check that the size, leading and length of your type is always legible and readable. Define lower and upper limits. Small text is best left for captions and words in uppercase.
• Avoid overlaying text on images unless it’s guaranteed to be legible. If it’s necessary to optimize space in the layout, give the text a container (a rough sketch follows this list). Scrims aren’t always reliable: the text will inevitably overlap a part of the photograph without a contrasting ground.
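As a rough sketch of that container approach, with invented class names and colours:

/* The caption sits on its own solid ground rather than directly on the photograph. */
.promo {
  position: relative;
}
.promo__caption {
  position: absolute;
  bottom: 0;
  left: 0;
  padding: 0.75em 1em;
  background: #1a1a1a;   /* a solid, contrasting ground */
  color: #fff;
}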

    Test your work

    • Review legibility and contrast on different devices. It’s just as important as testing the layout of a responsive website. If you have a local device lab, pay it a visit.
    • Find a computer monitor near a window when the sun is shining. Step outside the studio and try to read your content on a mobile device with different brightness levels.
    • Ask your friends and family what they use at home and at work. It’s one way of making sure your feedback isn’t always coming from a closed loop.

    Push your limits

    • You define what the user sees. If you’ve inherited brand guidelines, question them. If you don’t agree with the choices, make the case for why they should change.
    • Experiment with size, weight and color to find contrast. Objects with low contrast appear similar to one another and undermine the visual hierarchy. Weak relationships between figure and ground diminish visual interest. A balanced level of contrast removes ambiguity and creates focal points. It captures and holds our attention.
    • If you’re lost for inspiration, look to graphic design in print. We have a wealth of history, full of examples that excel in using contrast to establish visual hierarchy.
    • Embrace limitations. Use boundaries as an opportunity to explore possibilities.

    More than just a facade

Designing with standards encourages legibility and helps to define a strong visual hierarchy. Design without exclusion (through neither negligence nor intent) gets around discussions of demographics, speaks to a larger audience and makes good business sense. Following the latest trends not only weakens usability but also hinders a cohesive and distinctive brand.

Users will make do when they need to, by increasing browser font sizes or enabling system features for accessibility. But we can do our part to take as much of that burden off the user and ask less of those who need it most.

    In architecture, it isn’t buildings that mimic what is fashionable that stand the test of time. Nor do we admire buildings that tack on separate, poorly constructed extensions to meet a bare minimum of safety regulations. We admire architecture that offers well-considered, remarkable, usable spaces with universal access.

    Perhaps we can take inspiration from these spaces. Let’s give our buildings a bold voice and make sure the doors are open to everyone.

    About the author

    Mark Mitchell is a freelance digital designer & front-end developer in London. Find him on Twitter at @withoutnations and online at

    More articles by Mark

Be Fluid with Your Design Skills: Build Your Own Sites

    posted: 11 Dec 2015

    Ros Horner rings out a Christmas message for designers far and near of peace and goodwill to all, especially if they’re developers. With a rallying cry to take back control to see your own designs realised, young or old, merry or sober, the story is clear; as you design, so should you build.

    Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled ‘Designers Vs Developers’, an infographic that showed us the differences between the men(!) who created websites.

    Designers wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus.

    Thankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools – and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work.

    So the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid!

    OK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That’s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that “Design is not just what it looks like and feels like. Design is how it works.” Sorry skinny-jean-wearing designers – this means we’re all designing something together. And it’s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE.

    Like anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they’ve never built their own site? I’m not talking the big stuff, I’m talking about your portfolio site, your mate’s business website, a website for that great idea you’ve had. I’m talking about doing it yourself to get a unique insight into how it feels.

    We all know that designers and developers alike love an <ol>, so here it is.

    Ten reasons designers should be fluid with their skills and build their own sites

    1. It’s never been easier

    Now here’s where the definition of ‘build’ is going to get a bit loose and people are going to get angry, but when I say it’s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It’s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding!

    2. You’ll understand how it feels

    How it feels to be so proud that something actually works that you momentarily don’t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you’ve redirected a URL. How it feels when you just can’t work out where that one extra space is in a line of PHP that has killed your whole site.

    3. It makes you a designer

    Not a better designer, it makes you a designer when you are designing how things look and how they work.

    4. You learn about movement

    Photoshop and Sketch just don’t cut it yet. Until you see your site in a browser or your app on a phone, it’s hard to imagine how it moves. Building your own sites shows you that it’s not just about how the content looks on the screen, but how it moves, interacts and feels.

    5. You make techie friends

    All the tutorials and forums in the world can’t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who’ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things.

    6. You will own domain names

    When something is paid for, online and searchable then it’s real and you’ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work.

    7. People will ask you to do things

    Learning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are “Make me a website for free”, but more often it’s cool things like “Come and speak at my conference”, “Write an article for my magazine” and “Collaborate with me.”

    8. The young people are coming!

    They love typography, they love print, they love layout, but they’ve known how to put a website together since they started their first blog aged five and they show me clever apps they’ve knocked together over the weekend! They’re new, they’re fluid, and they’re better than us!

    9. Your portfolio is your portfolio

    OK, it’s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who’s bothered to work out how to change the Squarespace favicon!)

    10. It keeps you fluid!

    Building your own websites is tough. You’ll never be happy with it, you’ll constantly be updating it to keep up with technology and fashion, and by the time you’ve finished it you’ll want to start all over again. Perfect for forcing you to stay up-to-date with what’s going on in the industry.


    About the author

    Ros Horner is a London based (print turned digital) Design Director and speaker, working on apps and websites for clients such as Adidas, Reebok, McLaren, and Volvo.

    More articles by Ros

    Upping Your Web Security Game

    posted: 10 Dec 2015

    Guy Podjarny sounds a sober warning during our festivities, and gathers some winter fuel to help secure your apps and users from the web’s occasionally cruel frost. So mark his footsteps good, my friend, and tread thou in them boldly. Thou shalt find the hacker’s rage freeze thy site less coldly.

    When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in.

    If the web security game was hard to win before, it’s doomed to fail now. In today’s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them.

    Security teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application’s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well.

    Where To Start?

    Embracing security isn’t something you do overnight.

    A good place to start is by reviewing things you’re already doing – and trying to make them more secure. Here are three concrete steps you can take to get going.


    HTTPS

    Threats begin when your system interacts with the outside world, which often means HTTP. As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they’re talking to, and that the information exchanged can be neither modified nor sniffed.

    HTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS site to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP/2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video.

    Using HTTPS is becoming easier and cheaper. Here are a few free tools that can help:

    Other vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows.
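
    On the application side, the change needn’t be big either. Here’s a rough sketch, assuming a Node.js/Express app (the middleware and values are illustrative rather than a drop-in config), that redirects plain HTTP to HTTPS and asks browsers to remember the preference:

    var express = require('express');
    var app = express();

    // Behind a load balancer or proxy you'd also need app.enable('trust proxy')
    // so req.secure reports the original protocol correctly.
    app.use(function (req, res, next) {
      if (!req.secure) {
        // Send plain-HTTP visitors to the HTTPS version of the same URL.
        return res.redirect(301, 'https://' + req.headers.host + req.url);
      }
      // HSTS: ask the browser to stick to HTTPS for the next year.
      res.setHeader('Strict-Transport-Security', 'max-age=31536000');
      next();
    });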

    Two-Factor Authentication

    The most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more.

    All of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents.

    The typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn’t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don’t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others.
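
    To give a feel for the speakeasy route, here’s a minimal sketch; the storage and login flow around it are illustrative only:

    var speakeasy = require('speakeasy');

    // When a user turns on 2FA, generate a secret and store secret.base32 against
    // their account. secret.otpauth_url can be rendered as a QR code for apps
    // like Google Authenticator to scan.
    var secret = speakeasy.generateSecret({ length: 20 });

    // At login, once the password checks out, verify the one-time code they typed.
    function verifyToken(storedBase32Secret, token) {
      return speakeasy.totp.verify({
        secret: storedBase32Secret,
        encoding: 'base32',
        token: token,
        window: 1 // tolerate one 30-second step of clock drift
      });
    }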

    If implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you’re sharing with them in the process.

    Tracking Known Vulnerabilities

    Most of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack.

    This is a real problem and happens on pretty much every platform. Do you develop in Java? In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, with the typical Java application containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps.

    To find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk’s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP’s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies.

    This process is still way too painful, which means most teams don’t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk’s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk’s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves.
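
    In practice that boils down to a handful of commands. A sketch for a Node.js project (how you wire the test into CI will depend on your setup):

    npm install -g snyk   # install the CLI
    snyk wizard           # interactively find, upgrade and patch known issues
    snyk test             # run in CI to fail the build on vulnerable dependencies
    snyk monitor          # snapshot your dependencies so new disclosures reach you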

    Note that newly disclosed vulnerabilities usually impact old code – the one you’re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk’s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now – you can register to get updated when we expand.

    Securing Yourself

    In addition to making your application secure, you should make the contributors to that application secure – including you. Earlier this year we saw attackers target mobile app developers with a malicious copy of Xcode. The real target, however, wasn’t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised.

    There’s no single step that will make you fully secure, but here are a few steps that can make a big impact:

    1. Use 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I’d recommend using 2FA on all your personal services too.
    2. Use a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don’t let the attackers access your other systems too.
    3. Secure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications.
    4. Be very wary of phishing. Smart attackers use ‘spear phishing’ techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. Be alert to phishy emails.
    5. Don’t install things through curl <somewhere-on-the-web> | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don’t do it on your machines, and definitely don’t do it in your CI/CD systems. Seriously.

    Staying secure should be important to you personally, but it’s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors.

    A Culture of Security

    Using HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey.

    The end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves – and our users – safe.

    About the author

    Guy is the CEO & Founder of Snyk, a Web Security company. Before that, he was the CTO of Akamai’s Web Performance business, following its acquisition of Blaze.io (which he co-founded). He has spent over a decade working on Web Application Security, and more specifically the first Web App Firewall (AppShield) and the market leading Web Application Security scanner (AppScan).

    He is also the author of Mobitest, an open-source mobile web performance testing tool, and is on the programming committee of the Velocity conference, and wrote Responsive & Fast, a short book about RWD Performance.

    More articles by Guy

    Putting My Patterns through Their Paces

    posted: 09 Dec 2015

    Ethan Marcotte dashes through the wintry landscape, his sled of flexboxed content drawn faithfully by a team of well-ordered hierarchical HTML huskies. For when it comes to structure and presentation, we must take care not to put the sleigh before the hounds.

    Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.

    The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.

    Meet the Teaser

    Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)

    In its simplest form, a teaser contains a headline, which links to an article:
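
    Something like this minimal markup (a sketch, using the same class names that appear in the fuller examples below):

    <div class="teaser">
      <h1 class="article-title"><a href="#">Article Title</a></h1>
    </div>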

    Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.

    Nearly every element visible on this page is built out of our generic “teaser” pattern.

    But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count are the most prominent elements within each teaser, appearing above the headline.

    The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.

    And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design. In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML:

    <div class="teaser">
      <p class="article-byline">By <a href="#">Author Name</a></p>
      <a class="comment-count" href="#">126 <i>comments</i></a>
      <h1 class="article-title"><a href="#">Article Title</a></h1>
      <p class="teaser-excerpt">Lorem ipsum dolor sit amet, consectetur…</p>
    </div>

    But then I caught myself, and realized this wasn’t the best approach.

    Moving Beyond Layout

    Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:

    What if someone doesn’t browse the web like I do?

    …Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.

    And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated.

    That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be?

    So I took a step back, and came up with a different approach:

    <div class="teaser">
      <h1 class="article-title"><a href="#">Article Title</a></h1>
      <h2 class="article-byline">By <a href="#">Author Name</a></h2>
      <p class="teaser-excerpt">
        Lorem ipsum dolor sit amet, consectetur…
        <a class="comment-count" href="#">126 <i>comments</i></a>
      </p>
    </div>

    Much, much better. This felt like a better match for the content I was designing: the headline – easily the most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place:

    Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we’ll bolster the markup with an extra element around our title and byline:

    <div class="teaser">
      <div class="teaser-hed">
        <h1 class="article-title"><a href="#">Article Title</a></h1>
        <h2 class="article-byline">By <a href="#">Author Name</a></h2>
      </div>
      …
    </div>

    With that in place, we can use flexbox to tweak our layout, like so:

    .teaser-hed {
      display: flex;
      flex-direction: column-reverse;
    }

    flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.

    Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser.

    Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used:

    var doc = document.body || document.documentElement;
    var style = doc.style;

    if ( style.webkitFlexWrap == '' ||
         style.msFlexWrap == '' ||
         style.flexWrap == '' ) {
      doc.className += " supports-flex";
    }

    Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design:

    .supports-flex .teaser-hed {
      display: flex;
      flex-direction: column-reverse;
    }

    .supports-flex .teaser .comment-count {
      position: absolute;
      right: 0;
      top: 1.1em;
    }

    If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design.
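
    For comparison, a sketch of the @supports route mentioned above – the CSS is essentially the same, but it would exclude the versions of IE the JavaScript test was chosen to reach:

    @supports (flex-wrap: wrap) {
      .teaser-hed {
        display: flex;
        flex-direction: column-reverse;
      }
      .teaser .comment-count {
        position: absolute;
        right: 0;
        top: 1.1em;
      }
    }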

    And with that, our teaser’s complete.

    Diving Into Device-Agnostic Design

    This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.

    Even a simple search form can be conditionally enhanced, given a little layered thinking.

    This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. As Trent Walton once wrote,

    Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

    We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future.

    (It won’t help you get enough to eat at holiday parties, though.)

    About the author

    Ethan Marcotte is an independent designer and developer, and the fellow who coined the term “responsive web design”. He is the author of two books on the topic, Responsive Web Design and Responsive Design: Patterns and Principles, and has been known to give a conference talk or two. Ethan is passionate about digital design, emerging markets, and ensuring global access, and has been known to link to the odd GIF now and again.

    More articles by Ethan

    Animation in Responsive Design

    posted: 08 Dec 2015

    Val Head squeezes more animation into responsive design’s Christmas stocking with some strategies for getting the most out of animation at any screen size. Set your robin a-rockin’, no matter what size the dance floor.

    Animation and responsive design can sometimes feel like they’re at odds with each other. Animation often needs space to do its thing, but RWD tells us that the amount of space we’ll have available is going to change a lot. Balancing that can lead to some tricky animation situations.

    Embracing the squishiness of responsive design doesn’t have to mean giving up on your creative animation ideas. There are three general techniques that can help you balance your web animation creativity with your responsive design needs. One or all of these approaches might help you sneak in something just a little extra into your next project.

    Focused art direction

    Smaller viewports mean a smaller stage for your motion to play out on, and this tends to amplify any motion in your animation. Suddenly 100 pixels is really far and multiple moving parts can start looking like they’re battling for space. An effect that looked great on big viewports can become muddled and confusing when it’s reframed in a smaller space.

    Making animated movements smaller will do the trick for simple motion like a basic move across the screen. But for more complex animation on smaller viewports, you’ll need to simplify and reduce the number of moving parts. The key to this is determining what the vital parts of the animation are, to zero in on the parts that are most important to its message. Then remove the less necessary bits to distill the motion’s message down to the essentials.

    For example, Rally Interactive’s navigation folds down into place with two triangle shapes unfolding each corner on larger viewports. If this exact motion was just scaled down for narrower spaces the two corners would overlap as they unfolded. It would look unnatural and wouldn’t make much sense.

    The main purpose of this animation is to show an unfolding action. To simplify the animation, Rally unfolds only one side for narrower viewports, with a slightly different animation. The action is still easily interpreted as unfolding and it’s done in a way that is a better fit for the available space. The message the motion was meant to convey has been preserved while the amount of motion was simplified.

    Si Digital does something similar. The main concept of the design is to portray the studio as a creative lab. On large viewports, this is accomplished primarily through an animated illustration that runs the full length of the site and triggers its animations based on your scroll position. The illustration is there to support the laboratory concept visually, but it doesn’t contain critical content.

    At first, it looks like Si Digital just turned off the animation of the illustration for smaller viewports. But they’ve actually been a little cleverer than that. They’ve also reduced the complexity of the illustration itself. Both the amount of motion (reduced down to no motion) and the illustration were simplified to create a result that is much easier to glean the concept from.

    The most interesting thing about these two examples is that they’re solved more with thoughtful art direction than complex code. Keeping the main concept of the animations at the forefront allowed each to adapt creative design solutions to viewports of varying size without losing the integrity of their design.

    Responsive choreography

    Static content gets moved around all the time in responsive design. A three-column layout might line up from left to right on wide viewports, then stack top to bottom on narrower viewports. The same approach can be used to arrange animated content for narrower views, but the animation’s choreography also needs to be adjusted for the new layout. Even with static content, just scaling it down or zooming out to fit it into the available space is rarely an ideal solution. Rearranging your animations’ choreography to change which animation starts when, or even which animations play at all, keeps your animated content readable on smaller viewports.

    In a recent project I had three small animations that played one after the other, left to right, on wider viewports but needed to be stacked on narrower viewports to be large enough to see. On wide viewports, all three animations could play one right after the other in sequence because all three were in the viewable area at the same time. But once these were stacked for the narrower viewport layouts, that sequence had to change.

    What was essentially one animation on wider viewports became three separate animations when stacked on narrower viewports. The layout change meant the choreography had to change as well. Each animation starts independently when it comes into view in the stacked layout instead of playing automatically in sequence. (I’ve put the animated parts in this demo if you want to peek under the hood.)

    I chose to use the GreenSock library, with the choreography defined in two different timelines for this particular project. But the same goals could be accomplished with other JavaScript options or even CSS keyframe animations and media queries.
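
    As a rough illustration of the CSS route (class names, timings and the breakpoint are invented for this sketch), the same animation can be staggered into a sequence on wide viewports and triggered individually on narrow ones:

    @keyframes rise {
      from { opacity: 0; transform: translateY(20px); }
      to   { opacity: 1; transform: translateY(0); }
    }

    /* Wide viewports: all three panels are visible, so play them in sequence. */
    .panel-1 { animation: rise 0.5s ease-out both; }
    .panel-2 { animation: rise 0.5s ease-out 0.5s both; }
    .panel-3 { animation: rise 0.5s ease-out 1s both; }

    /* Narrow viewports: the panels stack, so drop the sequence and let each one
       play only when it has scrolled into view (via a class you toggle yourself). */
    @media (max-width: 40em) {
      .panel-1, .panel-2, .panel-3 { animation: none; }
      .panel-1.in-view, .panel-2.in-view, .panel-3.in-view { animation: rise 0.5s ease-out both; }
    }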

    Even more complex responsive choreography can be pulled off with SVG. Media queries can be used to change CSS animations applied to SVG elements at specific breakpoints for starters. For even more responsive power, SVG’s viewBox property, and the positioning of the objects within it, can be adjusted at JavaScript-defined breakpoints. This lets you set rules to crop the viewable area and arrange your animating elements to fit any space.
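
    The core of that technique is small. A sketch, with the element id and viewBox values invented for illustration:

    var svg = document.querySelector('#scene');

    function frameScene() {
      if (window.matchMedia('(max-width: 40em)').matches) {
        // Crop in on the part of the illustration that matters most.
        svg.setAttribute('viewBox', '150 0 300 400');
      } else {
        // Show the whole scene when there's room for it.
        svg.setAttribute('viewBox', '0 0 600 400');
      }
    }

    window.addEventListener('resize', frameScene);
    frameScene();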

    Sarah Drasner has some great examples of how to use this technique with style in this responsive infographic and this responsive interactive illustration. On the other hand, if smart scalability is what you’re after, it’s also possible to make all of an SVG’s shapes and motion scale with the SVG canvas itself. Sarah covers both these clever responsive SVG techniques in detail. Creative and complex animation can easily become responsive thanks to the power of SVG!

    Bake performance into your design decisions

    It’s hard to get very far into a responsive design discussion before performance comes up. Performance goes hand in hand with responsive design and your animation decisions can have a big impact on the overall performance of your site.

    The translate3d “hack”, backface-visibility: hidden, and the will-change property are the heavy hitters of animation performance. But decisions made earlier in your animation design process can have a big impact on rendering performance and your performance budget too.
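
    For reference, those heavy hitters usually amount to a couple of declarations on the element that’s about to move (the selector here is illustrative):

    .will-animate {
      will-change: transform, opacity;   /* warn the browser before the animation starts */
      backface-visibility: hidden;
      transform: translate3d(0, 0, 0);   /* the classic "give it its own layer" hack */
    }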

    Pick a technology that matches your needs

    One of the biggest advantages of the current web animation landscape is the range of tools we have available to us. We can use CSS animations and transitions to add just a dash of interface animation to our work, go all out with webGL to create a 3D experience, or anywhere in between. All within our browsers! Having this huge range of options is amazing and wonderful but it also means you need to be cognizant of what you’re using to get the job done.

    Loading in the full weight of a robust JavaScript animation library is going to be overkill if you’re only animating a few small elements here and there. That extra overhead will have an impact on performance. Performance budgets will not be pleased.

    Always match the complexity of the technology you choose to the complexity of your animation needs to avoid unnecessary performance strain. For small amounts of animation, stick to CSS solutions since it’s the most lightweight option. As your animations grow in complexity, or start to require more robust logic, move to a JavaScript solution that can accomplish what you need.

    Animate the most performant properties

    Whether you’re animating in CSS or JavaScript, you’re affecting specific properties of the animated element. Browsers can animate some properties more efficiently than others based on how many steps need to happen behind the scenes to visually update those properties.

    Browsers are particularly efficient at animating opacity, scale, rotation, and position (when the latter three are done with transforms). This article from Paul Irish and Paul Lewis gives the full scoop on why. Conveniently, those are also the most common properties used in motion design. There aren’t many animated effects that can’t be pulled off with this list. Stick to these properties to set your animations up for the best performance results from the start. If you find yourself needing to animate a property outside of this list, check CSS Triggers… to find out how much of an additional impact it might have.

    Offset animation start times

    Offsets (the concept of having a series of similar movements execute one slightly after the other, creating a wave-like pattern) are a long-held motion graphics trick for creating more interesting and organic looking motion. Employing this trick of the trade can also be smart for performance. Animating a large number of objects all at the same time can put a strain on the browser’s rendering abilities even in the best cases. Adding short delays to offset these animations in time, so they don’t all start at once, can improve rendering performance.
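
    In CSS that can be as simple as staggering animation-delay – a sketch, where pop-in stands in for whatever keyframe animation you’re already using:

    .item              { animation: pop-in 0.4s ease-out both; }
    .item:nth-child(2) { animation-delay: 0.05s; }
    .item:nth-child(3) { animation-delay: 0.1s; }
    .item:nth-child(4) { animation-delay: 0.15s; }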

    Go explore the responsive animation possibilities for yourself!

    With smart art direction, responsive choreography, and an eye on performance you can create just about any creative web animation you can think up while still being responsive. Keep these in mind for your next project and you’ll pull off your animations with style at any viewport size!

    About the author

    Val is a designer and web animation consultant with a talent for getting designers and developers alike excited about the power of animation. She is the author of The Pocket Guide to CSS Animations and the upcoming Designing Interface Animations.

    She curates the UI Animation Newsletter, hosts the All The Right Moves screencast, and co-hosts the Motion and Meaning podcast. Val leads workshops at companies and conferences around the world on motion design for the web and loves every minute of it.

    More articles by Val

    Helping VIPs Care About Performance

    posted: 07 Dec 2015

    Lara Hogan ignites a little cognac over your web performance pudding by considering ways you can keep stakeholders invested in a site’s metrics and KPIs. More than just the budget, it’s the thought you put in that counts.

    Making a site feel super fast is the easy part of performance work. Getting people around you to care about site speed is a much bigger challenge. How do we keep the site fast beyond the initial performance work? Keeping very important people like your upper management or clients invested in performance work is critical to keeping a site fast and empowering other designers and developers to contribute.

    The work to get others to care is so meaty that I dedicated a whole chapter to the topic in my book Designing for Performance. When I speak at conferences, the majority of questions during Q&A are on this topic. When I speak to developers and designers who care about performance, getting other people at one’s organization or agency to care becomes the most pressing question.

    My primary response to folks who raise this issue is the question: “What metric(s) do your VIPs care about?” This is often met with blank stares and raised eyebrows. But it’s also our biggest clue to what we need to do to help empower others to care about performance and work on it. Every organization and executive is different. This means that three major things vary: the primary metrics VIPs care about; the language they use about measuring success; and how change is enacted. By clueing in to these nuances within your organization, you can get a huge leg up on crafting a successful pitch about performance work.

    Let’s start with the metric that we should measure. Sure, (most) everybody cares about money – but is that really the metric that your VIPs are looking at each day to measure the success or efficacy of your site? More likely, dollars are the end game, but the metrics or key performance indicators (KPIs) people focus on might be:

    • rate of new accounts created/signups
    • cost of acquiring or retaining a customer
    • visitor return rate
    • visitor bounce rate
    • favoriting or another interaction rate

    These are just a few examples, but they illustrate how wide-ranging the options are that people care about. I find that developers and designers haven’t necessarily investigated this when trying to get others to care about performance. We often reach for the obvious – money! – but if we don’t use the same kind of language our VIPs are using, we might not get too far. You need to know this before you can make the case for performance work.

    To find out these metrics or KPIs, start reading through the emails your VIPs are sending within your company. What does it say on company wikis? Are there major dashboards internally that people are looking at where you could find some good metrics? Listen intently in team meetings or thoroughly read annual reports to see what these metrics could be.

    The second key here is to pick up on language you can effectively copy and paste as you make the case for performance work. You need to be able to reflect back the metrics that people already find important in a way they’ll be able to hear. Once you know your key metrics, it’s time to figure out how to communicate with your VIPs about performance using language that will resonate with them.

    Let’s start with visit traffic as an example metric that a very important person cares about. Start to dig up research that other people and companies have done that correlates performance and your KPI. For example, cite studies:

    “When the home page of Google Maps was reduced from 100KB to 70–80KB, traffic went up 10% in the first week, and an additional 25% in the following three weeks.” (source).

    Read through websites like WPOStats, which collects the spectrum of studies on the impact of performance optimization on user experience and business metrics. Tweet and see if others have done similar research that correlates performance and your site’s main KPI.

    Once you have collected some research that touches on the same kind of language your VIPs use about the success of your site, it’s time to present it. You can start with something simple, like a qualitative description of the work you’re actively doing to improve the site that translates to improved metrics that your VIPs care about. It can be helpful to append a performance budget to any proposal so you can compare the budget to your site’s reality and how it might positively impact those KPIs folks care about.

    Words and graphs are often only half the battle when it comes to getting others to care about performance. Often, videos appeal to folks’ emotions in a way that is missed when glancing through charts and graphs. On A List Apart I recently detailed how to create videos of how fast your site loads. Let’s say that your VIPs care about how your site loads on mobile devices; it’s time to show them how your site loads on mobile networks.

    You can use these videos to make a number of different statements to your VIPs, depending on what they care about:

    • Look at how slow our site loads versus our competitor!
    • Look at how slow our site loads for users in another country!
    • Look at how slow our site loads on mobile networks!

    Again, you really need to know which metrics your VIPs care about and tune into the language they’re using. If they don’t care about the overall user experience of your site on mobile devices, then showing them how slow your site loads on 3G isn’t going to work. This will be your sales pitch; you need to practice and iterate on the language and highlights that will land best with your audience.

    To make your sales pitch as solid as possible, gut-check your ideas on how to present it with other co-workers to get their feedback. Read up on how to construct effective arguments and deliver them; do some research and see what others have done at your company when pitching to VIPs. Are slides effective? Memos or emails? Hallway conversations? Sometimes the best way to change people’s minds is by mentioning it in informal chats over coffee. Emulate the other leaders in your organization who are successful at this work.

    Every organization and very important person is different. Learn what metrics folks truly care about, study the language that they use, and apply what you’ve learned in a way that’ll land with those individuals. It may take time to craft your pitch for performance work, but it’s important work to do. If you’re able to figure out how to mirror back the language and metrics VIPs care about, and connect the dots to performance for them, you will have a huge leg up on keeping your site fast in the long run.

    About the author

    Lara Hogan is a Senior Engineering Manager at Etsy and the author of Designing for Performance and Building a device lab. She champions performance as a part of the overall user experience, striking a balance between aesthetics and speed, and building performance into company culture. She also believes it’s important to celebrate career achievements with donuts.

    More articles by Lara

    Git Rebasing: An Elfin Workshop Workflow

    posted: 06 Dec 2015

    Emma Jane Westby cracks open the door on Santa’s workshop to investigate what lessons can be learned about git rebase from the elves (or subordinate clauses if you will). Take what you can from their workflow, but don’t feed them too many mince pies; it’s bad for their ’elf.

    This year Santa’s helpers have been tasked with making a garland. It’s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they’re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn’t. It’s worse; it’s about Git.)

    For the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they’re able to work independently, but towards the common goal of making a beautiful garland.

    At first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they’re going to need a system to change the beads out when mistakes are made in the chain.

    The first common mistake is not looking to see what the latest chain is that’s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It’s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It’s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command

    git pull --rebase=preserve

    (They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they’ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.

    The next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I’ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it’s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:

    git checkout master
    git checkout -b 1234_pattern-name

    1234 represents the work order number and pattern-name describes the pattern they’re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.

    To combine their work with the main garland, the elves need to make a few decisions. If they’re making a single strand, they issue the following commands:

    git checkout master
    git merge --ff-only 1234_pattern-name

    To share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.

    Sometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order.

    To update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:

    git reset --merge ORIG_HEAD

    This works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf’s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.

    With these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf’s beads will be restrung on the tip of the main workshop garland.

    Sometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf’s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they’re ready to close out their work order.

    Once their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:

    git checkout master
    git merge --ff-only 1234_pattern-name
    git push origin master

    Generally this process works well for the elves. Sometimes, though, when they’re tired or bored or a little drunk on festive cheer, they realise there’s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.

    Let’s pretend the elf has a sequence of five beads she’s been working on. The work order says the pattern should be red-blue-red-blue-red.

    If the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.

    If she’s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.

    If one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.

    Using a tool that’s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.

    Start the garland surgery process with the command:

    git rebase --interactive HEAD~4

    A new screen comes up with the following information (the oldest bead is on top):

    pick c2e4877 Red bead
    pick 9b5555e Blue bead
    pick 7afd66b Blue bead
    pick e1f2537 Red bead

    The elf adjusts the list, changing “pick” to “edit” next to the first blue bead:

    pick c2e4877 Red bead
    edit 9b5555e Blue bead
    pick 7afd66b Blue bead
    pick e1f2537 Red bead

    She then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.

    She needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.

    Once she’s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message “Red bead – added” so she can easily find it.

    The garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.

    The new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool.

    She knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.

    With the conflict resolved, the elf saves her changes and quits the mergetool.

    Back at the command line, the elf checks the status of her work using the command git status.

    rebase in progress; onto 4a9cb9d
    You are currently rebasing branch '2_RBRBR' on '4a9cb9d'.
      (all conflicts fixed: run "git rebase --continue")
    Changes to be committed:
      (use "git reset HEAD <file>..." to unstage)

            modified:   beads.txt

    Untracked files:
      (use "git add <file>..." to include in what will be committed)

            beads.txt.orig

    She removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:

    git add beads.txt
    git commit --message "Blue bead -- resolved conflict"

    With the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.

    She incorporates the changes from the left and right column to ensure her bead sequence is correct.

    Once the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:

    git add beads.txt
    git commit --message "Red bead -- resolved conflict"

    and then runs the final verification steps in the rebase process (git rebase --continue).

    The verification process runs through to the end, and the elf checks her work using the command git log --oneline.

    9269914 Red bead -- resolved conflict
    4916353 Blue bead -- resolved conflict
    aef0d5c Red bead -- added
    9b5555e Blue bead
    c2e4877 Red bead

    She knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.

    Sometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It’s made her more confident at restringing beads when she’s found real mistakes. And she doesn’t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don’t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on.

    Our lessons from the workshop:

    • By using rebase to update your branches, you avoid merge commits and keep a clean commit history.
    • If you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter --hard.
    • If you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.
    • If you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.
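
    For reference, the commands behind these lessons look roughly like this (a sketch only; the commit counts and refs are assumptions to adjust for your own branches):

    # undo the last commit but keep the work staged
    git reset --soft HEAD~1

    # throw away the last commit and its changes entirely
    git reset --hard HEAD~1

    # back out working-branch changes merged into your local master
    git reset --merge ORIG_HEAD

    # revisit the last three commits interactively
    git rebase --interactive HEAD~3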

    About the author

    Emma Jane Westby is an author, an educator, and a part-time beekeeper. Her latest book, Git for Teams, is now available from O’Reilly. You can follow her adventures on Twitter at @emmajanehw.

    More articles by Emma Jane

    Bringing Your Code to the Streets

    posted: 05 Dec 2015

    Ruth John breaks out of the browser and projects a Christmas sound and light show with some JavaScript, the Web MIDI API and a portable A/V pack. Don’t make a spectacle of yourself at the party this year – take it outside!

    — or How to Be a Street VJ

    Our amazing world of web code is escaping out of the browser at an alarming rate and appearing in every aspect of the environment around us. Over the past few years we’ve already seen JavaScript used server-side, hardware coded with JavaScript, a rise of native style and desktop apps created with HTML, CSS and JavaScript, and even virtual reality (VR) is getting its fair share of front-end goodness.

    You can go ahead and play with JavaScript-powered hardware such as the Tessel or the Espruino to name a couple. Just check out the Tessel project page to see JavaScript in the world of coffee roasting or sleep tracking your pet. With the rise of the internet of things, JavaScript can be seen collecting information on flooding among other things. And if that’s not enough ‘outside the browser’ implementations, Node.js servers can even be found in aircraft!

    I previously mentioned VR and with three.js’s extra StereoEffect.js module it’s relatively simple to get browser 3D goodness to be Google Cardboard-ready, and thus set the stage for all things JavaScript and VR. It’s been pretty popular in the art world too, with interactive works such as Seb Lee-Delisle’s Lunar Trails installation, featuring the old arcade game Lunar Lander, which you can now play in your browser while others watch (it is the web after all). The Science Museum in London held Chrome Web Lab, an interactive exhibition featuring five experiments, showcasing the magic of the web. And it’s not even the connectivity of the web that’s being showcased; we can even take things offline and use web code for amazing things, such as fighting Ebola.

    One thing is for sure, JavaScript is awesome. Hell, if you believe those telly programs (as we all do), JavaScript can even take down the stock market, purely through the witchcraft of canvas! Go JavaScript!

    Now it’s our turn

    So I wanted to create a little project influenced by this theme, and as it’s Christmas, take it to the streets for a little bit of party fun! Something that could take code anywhere. Here’s how I made a portable visual projection pack, a piece of video mixing software and created some web-coded street art.

    Step one: The equipment

    You will need:

    • One laptop: with HDMI output and a modern browser installed, such as Google Chrome.
    • One battery-powered mini projector: I’ve used a Texas Instruments DLP; for its 120 lumens it was the best cost-to-lumens ratio I could find.
    • One MIDI controller (optional): mine is an ICON iDJ as it suits mixing visuals. However, there is more affordable hardware on the market such as an Akai LPD8 or a Korg nanoPAD2. As you’ll see in the article, this is optional as it can be emulated within the software.
    • A case to carry it all around in.

    Step two: The software

    The projected visuals, I imagined, could be anything you can create within a browser, whether that be simple HTML and CSS, images, videos, SVG or canvas. The only requirement I have is that they move or change with sound and that I can mix any one visual into another.

    You may remember a couple of years ago I created a demo on this very site, allowing audio-triggered visuals from the ambient sounds your device mic was picking up. That was a great starting point – I used that exact method to pick up the audio and thus the first requirement was complete. If you want to see some more examples of visuals I’ve put together for this, there’s a showcase on CodePen.

    The second requirement took a little more thought. I needed two screens, which could at any point show any of the visuals I had coded, but could be mixed from one into the other and back again. So let’s start with two divs, both absolutely positioned so they’re on top of each other, but at the start the second screen’s opacity is set to zero.

    Now all we need is a slider, which when moved from one side to the other slowly sets the second screen’s opacity to 1, thereby fading it in.
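
    As a rough sketch of that idea (this isn’t the exact CodePen below, and the element names are my own), the markup and fader wiring might look like this:

    <div id="screenOne" class="screen"><!-- first visual renders here --></div>
    <div id="screenTwo" class="screen" style="opacity: 0"><!-- second visual renders here --></div>
    <input id="fader" type="range" min="0" max="1" step="0.01" value="0">

    <style>
      /* stack the two screens on top of each other */
      .screen {
        position: absolute;
        top: 0;
        left: 0;
        width: 100%;
        height: 100%;
      }
    </style>

    <script>
      var screenTwo = document.getElementById('screenTwo');
      // 0 shows only screen one; 1 fades screen two fully in
      document.getElementById('fader').addEventListener('input', function () {
        screenTwo.style.opacity = this.value;
      });
    </script>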

    See the Pen Mixing Screens (Software Version) by Rumyra (@Rumyra) on CodePen.

    Mixing Screens (CodePen)

    As you saw above, I have a MIDI controller and although the software method works great, I’d quite like to make use of this nifty piece of kit. That’s easily done with the Web MIDI API. All I need to do is call it, and when I move one of the sliders on the controller (I’ve allocated the big cross fader in the middle for this), pick up on the change of value and use that to control the opacity instead.

    var midi, data;

    // start talking to MIDI controller
    if (navigator.requestMIDIAccess) {
      navigator.requestMIDIAccess({ sysex: false }).then(onMIDISuccess, onMIDIFailure);
    } else {
      alert("No MIDI support in your browser.");
    }

    // on success
    function onMIDISuccess(midiData) {
      // this is all our MIDI data
      midi = midiData;
      var allInputs = midi.inputs.values();
      // loop over all available inputs and listen for any MIDI input
      for (var input = allInputs.next(); input && !input.done; input = allInputs.next()) {
        // when a MIDI value is received call the onMIDIMessage function
        input.value.onmidimessage = onMIDIMessage;
      }
    }

    function onMIDIFailure() {
      // minimal failure handler
      console.warn("Could not access your MIDI devices.");
    }

    function onMIDIMessage(message) {
      // data comes in the form [command/channel, note, velocity]
      data = message.data;
      // Opacity change for screen. The cross fader values are [176, 8, {0-127}]
      if ((data[0] === 176) && (data[1] === 8)) {
        // this value will change as the fader is moved
        var opacity = data[2] / 127;
        // screenTwo here stands in for the second screen element from the mixing demo
        screenTwo.style.opacity = opacity;
      }
    }

    The final code was slightly more complicated than this, as I decided to switch the two screens based on the frequencies of the sound that was playing, and use the cross fader to depict the frequency threshold value. This meant they flickered in and out of each other, rather than just faded. There’s a very rough-and-ready first version of the software on GitHub.
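
    As a very rough sketch of that frequency-based switching (my own reconstruction rather than the code on GitHub; screenTwo and the threshold mapping are assumptions), it might look something like this:

    // the second screen element from the mixing demo (assumed id)
    var screenTwo = document.getElementById('screenTwo');

    // analyse the mic input and flip to screen two when most of the
    // energy sits above the threshold bin set by the cross fader
    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var analyser = audioCtx.createAnalyser();
      analyser.fftSize = 256; // gives us 128 frequency bins
      audioCtx.createMediaStreamSource(stream).connect(analyser);

      var bins = new Uint8Array(analyser.frequencyBinCount);
      var threshold = 64; // in practice, set from the cross fader value (0-127)

      function tick() {
        analyser.getByteFrequencyData(bins);
        var low = 0, high = 0;
        for (var i = 0; i < bins.length; i++) {
          if (i < threshold) { low += bins[i]; } else { high += bins[i]; }
        }
        // flicker between the screens depending on where the energy sits
        screenTwo.style.opacity = high > low ? 1 : 0;
        requestAnimationFrame(tick);
      }
      tick();
    });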

    Phew, Great! Now we need to get all this to the streets!

    Step three: Portable kit

    Did you notice how I mentioned a case to carry it all around in? I wanted the case to be morphable, so I could use the equipment from it too, a sort of bag-to-usherette-tray-type affair. Well, I had an unused laptop bag…

    I strengthened it with some MDF, so when I opened the bag it would hold like a tray where the laptop and MIDI controller would sit. The projector was Velcroed to the external pocket of the bag, so when it was a tray it would project from underneath. I added two durable straps, one for my shoulders and one round my waist, both attached to the bag itself. There was a lot of cutting and trimming. As it was a laptop bag it was pretty thick to start and sewing was tricky. However, I only broke one sewing machine needle; I’ve been known to break more working with leather, so I figured I was doing well. By the way, you can actually buy usherette trays, but I just couldn’t resist hacking my own :)

    Step four: Take to the streets

    First, make sure everything is charged – everything – a lot! The laptop has to power both the MIDI controller and the projector, and although I have a mobile phone battery booster pack, that’ll only charge the projector should it run out. I estimated I could get a good hour of visual artistry before I needed to worry, though.

    I had a couple of ideas about time of day and location. Here in the UK at this time of year, it gets dark around half past four, so I could easily head out in a city around 5pm and it would be dark enough for the projections to be seen pretty well. I chose Bristol, around the waterfront, as there were some interesting locations to try it out in. The best was Millennium Square: busy but not crowded and plenty of surfaces to try projecting on to.

    My first time out with the portable audio/visual pack (PAVP as it will now be named) was brilliant. I played music and projected visuals, like a one-woman band of A/V!

    You might be thinking what the point of this was, besides, of course, it being a bit of fun. Well, this project got me to look at canvas and SVG more closely. The Web MIDI API was really interesting; MIDI as a data format has some great practical uses. I think without our side projects we may not have all these wonderful uses for our everyday code. Not only do they remind us coding can, and should, be fun, they also help us learn and grow as makers.

    My favourite part? When I was projecting into a water feature in Millennium Square. For those who are familiar, you’ll know it’s like a wall of water so it produced a superb effect. I drew quite a crowd and a kid came to stand next to me and all I could hear him say with enthusiasm was, ‘Oh wow! That’s so cool!’

    Yes… yes, kid, it was cool. Making things with code is cool.

    Massive thanks to the lovely Drew McLellan for his incredibly well-directed photography, and also Simon Johnson who took a great hand in perfecting the kit while it was attached.

    About the author

    Ruth John wireframes, designs and codes for The Lab at O2 (Telefonica). She also tweets and blogs a bit too. You can often find her chatting about web development, building apps and how an extra div is not the answer to your styling problems. Either that or the lesser known Thundercats characters.

    More articles by Ruth

    Universal React

    posted: 04 Dec 2015

    Jack Franklin darns the holes left in our applications by exploring how our client-side JavaScript frameworks might also be run on the server to provide universal support for all types of user. How will you react when you see mommy kissing Server Claus?

    One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.

    More generally we’ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google’s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed.

    The good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows:

    • The user visits and the server executes your JavaScript to generate the HTML it needs to render the page.
    • In the background, the client-side JavaScript is executed and takes over the duty of rendering the page.
    • The next time a user clicks, rather than being sent to the server, the client-side app is in control.
    • If the user doesn’t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.

    This means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that’s clever enough to handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.

    It’s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you’d like a refresher, the ReactJS docs are a good place to start.

     Getting started

    We’re going to create a tiny ReactJS application that will work on the server and the client. First we’ll need to create a new project and install some dependencies. In a new, blank directory, run:

    npm init -y
    npm install --save ejs express react react-router react-dom

    That will create a new project and install our dependencies:

    • ejs is a templating engine that we’ll use to render our HTML on the server.
    • express is a small web framework we’ll run our server on.
    • react-router is a popular routing solution for React so our app can fully support and respect URLs.
    • react-dom is a small React library used for rendering React components.

    We’re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.

    npm install --save-dev babel-cli babel-preset-es2015 babel-preset-react

    Then, create a .babelrc file that contains the following:

    { "presets": ["es2015", "react"] }

    What we’ve done here is install Babel’s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We’ll need the React transforms once we start writing JSX while working with React.

    Creating a server

    For now, our ExpressJS server is pretty straightforward. All we’ll do is render a view that says ‘Hello World’. Here’s our server code:

    import express from 'express';
    import http from 'http';

    const app = express();

    app.use(express.static('public'));
    app.set('view engine', 'ejs');

    app.get('*', (req, res) => {
      res.render('index');
    });

    const server = http.createServer(app);

    server.listen(3003);

    server.on('listening', () => {
      console.log('Listening on 3003');
    });

    Here we’re using ES6 modules, which I wrote about on 24 ways last year, if you’d like a reminder. We tell the app to render the index view on any GET request (that’s what app.get('*') means, the wildcard matches any route).

    We now need to create the index view file, which Express expects to be defined in views/index.ejs:

    <!DOCTYPE html>
    <html>
      <head>
        <title>My App</title>
      </head>
      <body>
        Hello World
      </body>
    </html>

    Finally, we’re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:

    ./node_modules/.bin/babel-node server.js

    And you should now be able to visit http://localhost:3003 and see ‘Hello World’ right there:

    Building the React app

    Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.

    Firstly, let’s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:

    <body>
      <%- markup %>
    </body>

    Next, we’ll define the routes we want our app to have using React Router. For now we’ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it’s clearer to define them as an object. Here’s what we’re starting with:

    const routes = {
      path: '',
      component: AppComponent,
      childRoutes: [
        {
          path: '/',
          component: IndexComponent
        }
      ]
    }

    These are just placed at the top of server.js, after the import statements. Later we’ll move these into a separate file, but for now they are fine where they are.

    Notice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let’s quickly define components/app.js and components/index.js. app.js looks like so:

    import React from 'react';

    export default class AppComponent extends React.Component {
      render() {
        return (
          <div>
            <h2>Welcome to my App</h2>
            { this.props.children }
          </div>
        );
      }
    }

    When a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:

    import React from 'react';

    export default class IndexComponent extends React.Component {
      render() {
        return (
          <div>
            <p>This is the index page</p>
          </div>
        );
      }
    }

    Server-side routing with React Router

    Head back into server.js, and firstly we’ll need to add some new imports:

    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import { match, RoutingContext } from 'react-router';
    import AppComponent from './components/app';
    import IndexComponent from './components/index';

    The ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It’s this method that we’ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we’ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don’t need to concern yourself about how this component works, so don’t worry too much.

    Now for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:

    app.get('*', (req, res) => {
      // routes is our object of React routes defined above
      match({ routes, location: req.url }, (err, redirectLocation, props) => {
        if (err) {
          // something went badly wrong, so 500 with a message
          res.status(500).send(err.message);
        } else if (redirectLocation) {
          // we matched a ReactRouter redirect, so redirect from the server
          res.redirect(302, redirectLocation.pathname + redirectLocation.search);
        } else if (props) {
          // if we got props, that means we found a valid component to render
          // for the given route
          const markup = renderToString(<RoutingContext {...props} />);
          // render `index.ejs`, but pass in the markup we want it to display
          res.render('index', { markup })
        } else {
          // no route match, so 404. In a real app you might render a custom
          // 404 view here
          res.sendStatus(404);
        }
      });
    });

    We call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occurring or a redirect (React Router has built-in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we’ve made it this far, it means we found a matching component to render and we can use this code to render it:

    ...
    } else if (props) {
      // if we got props, that means we found a valid component to render
      // for the given route
      const markup = renderToString(<RoutingContext {...props} />);
      // render `index.ejs`, but pass in the markup we want it to display
      res.render('index', { markup })
    } else {
    ...

    The renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.

    Note the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. To see this better, note the two pieces of JSX code below, both of which are equivalent:

    <MyComponent a="foo" b="bar" />

    // OR:
    const props = { a: "foo", b: "bar" };
    <MyComponent {...props} />

    Running the server again

    I know that felt like a lot of work, but the good news is that once you’ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!

    Refactoring and one more route

    Before we move on to getting this code running on the client, let’s add one more route and do some tidying up. First, move our routes object out into routes.js:

    import AppComponent from './components/app';
    import IndexComponent from './components/index';

    const routes = {
      path: '',
      component: AppComponent,
      childRoutes: [
        {
          path: '/',
          component: IndexComponent
        }
      ]
    }

    export { routes };

    And then update server.js. You can remove the two component imports and replace them with:

    import { routes } from './routes';

    Finally, let’s add one more route for /about and links between them. Create components/about.js:

    import React from 'react';

    export default class AboutComponent extends React.Component {
      render() {
        return (
          <div>
            <p>A little bit about me.</p>
          </div>
        );
      }
    }

    And then you can add it to routes.js too:

    import AppComponent from './components/app';
    import IndexComponent from './components/index';
    import AboutComponent from './components/about';

    const routes = {
      path: '',
      component: AppComponent,
      childRoutes: [
        {
          path: '/',
          component: IndexComponent
        },
        {
          path: '/about',
          component: AboutComponent
        }
      ]
    }

    export { routes };

    If you now restart the server and head to http://localhost:3003/about you’ll see the about page!

    For the finishing touch we’ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:

    import React from 'react';
    import { Link } from 'react-router';

    export default class AppComponent extends React.Component {
      render() {
        return (
          <div>
            <h2>Welcome to my App</h2>
            <ul>
              <li><Link to='/'>Home</Link></li>
              <li><Link to='/about'>About</Link></li>
            </ul>
            { this.props.children }
          </div>
        );
      }
    }

    You can now click between the pages to navigate. However, every time we do so the requests hit the server. Now we’re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.

    Client-side rendering

    First, we’re going to make a small change to views/index.ejs. React doesn’t like rendering directly into the body and will give a warning when you do so. To prevent this we’ll wrap our app in a div:

    <body>
      <div id="app"><%- markup %></div>
      <script src="build.js"></script>
    </body>

    I’ve also added in a script tag to build.js, which is the file we’ll generate containing all our client-side code.

    Next, create client-render.js. This is going to be the only bit of JavaScript that’s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.

    import React from 'react';
    import ReactDOM from 'react-dom';
    import { Router } from 'react-router';
    import { routes } from './routes';
    import createBrowserHistory from 'history/lib/createBrowserHistory';

    ReactDOM.render(
      <Router routes={routes} history={createBrowserHistory()} />,
      document.getElementById('app')
    )

    The first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser’s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we’ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.

    Finally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.

    Generating build.js

    We’re actually almost there! The final thing we need to do is generate our client side bundle. For this we’re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We’ll install it and babel-loader, a webpack plugin for transforming code through Babel.

    npm install --save-dev webpack babel-loader

    To run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:

    var path = require('path');

    module.exports = {
      entry: path.join(process.cwd(), 'client-render.js'),
      output: {
        path: './public/',
        filename: 'build.js'
      },
      module: {
        loaders: [
          {
            test: /.js$/,
            loader: 'babel'
          }
        ]
      }
    }

    Note first that this file can’t be written in ES6 as it doesn’t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location – if we just gave it the string ‘client-render.js’, webpack wouldn’t be able to find it.

    Next, we tell webpack where to output our file, and here I’m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.

    Now we’re ready to generate the bundle!

    ./node_modules/.bin/webpack

    This will take a fair few seconds to run (on my machine it’s about seven or eight), but once it has finished it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you’ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!

    The first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you’re developing.


    First, if you’d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.

    Next, I want to stress that you shouldn’t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth this complexity, and you’d be right to ask. I used it as it’s an easy example to work with, but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it’s a suitable infrastructure for you.

    With that, all that’s left for me to do is wish you a very merry Christmas and best of luck with your React applications!

    About the author

    Jack Franklin is a developer, speaker and author who blogs at and has authored “Beginning jQuery”. Jack works as a developer evangelist for Pusher and is also a Google Developer Expert for the Chrome HTML 5 platform. He tweets as @jack_franklin and spends far too much time thinking about JavaScript.

    More articles by Jack

    Get Expressive with Your Typography

    posted: 03 Dec 2015

    Richard Rutter adapts an extract from his forthcoming book on web typography and encourages us to be brave with type choices. By allowing type to express a website’s intentions from even before the moment a visitor starts to read, we can help set the tone and our users’ expectations.

    In 1955 Beatrice Warde, an American communicator on typography, published a series of essays entitled The Crystal Goblet in which she wrote, “People who love ideas must have a love of words. They will take a vivid interest in the clothes that words wear.” And with that proposition Warde introduced the idea that just as we judge someone based on the clothes they are wearing, so we make judgements about text based on the typefaces in which it is set.

    Beatrice Warde. ©1970 Monotype Imaging Inc.

    Choosing the same typeface as everyone else, especially if you’re trying to make a statement, is like turning up to a party in the same dress; to a meeting in the same suit, shirt and tie; or to a craft ale dispensary in the same plaid shirt and turned-up skinny jeans.

    But there’s more to your choice of typeface than simply making an impression. In 2012 Jon Tan wrote on 24 ways about a scientific study called “The Aesthetics of Reading” which concluded that “good quality typography is responsible for greater engagement during reading and thus induces a good mood.”

    Furthermore, at this year’s Ampersand conference Sarah Hyndman, an expert in multisensory typography, discussed how typefaces can communicate with our subconscious. Sarah showed that different fonts could have an effect on how food tasted. A rounded font placed near a bowl of jellybeans would make them taste sweeter, and a jagged angular font would make them taste more sour.

    The quality of your typography can therefore affect the mood of your reader, and your font choice directly affects the senses. This means you can manipulate the way people feel. You can change their emotional state through type alone. Now that’s a real superpower!

    The effects of your body text design choices are measurable but subtle. If you really want to have an impact you need to think big. Literally. Display text and headings are your attention grabbers. They are your chance to interrupt, introduce and seduce.

    Display text and headings set the scene and draw people in. Text set large creates an image that visitors see before they read, and that’s your chance to choose a typeface that immediately expresses what the text, and indeed the entire website, stands for. What expectations of the text do you want to set up? Youthful enthusiasm? Businesslike? Cutting-edge? Hipster? Sensible and secure? Fun and informal? Authoritarian?

    Typography conveys much more than just information. It imparts feeling, emotion and sentiment, and arouses preconceived ideas of trust, tone and content. Think about taking advantage of this by introducing impactful, expressive typography to your designs on the web. You can alter the way your reader feels, so what emotion do you want to provoke?

    Maybe you want them to feel inspired like this stop smoking campaign:

    Perhaps they should be moved and intrigued, as with Makeshift magazine:

    Or calmly reassured:

    Fonts also tap into the complex library of associations that we’ve been accumulating in our brains all of our lives. You build up these associations every time you see a font from the context that you see it in. All of us associate certain letterforms with topics, times and places.

    Retiro is obviously Spanish:

    Retiro by Typofonderie

    Bodoni and Eurostile used in this menu couldn’t be much more Italian:

    Bodoni and Eurostile, both designed in Italy

    To me, Clarendon gives a sense of the 1960s and 1970s. I’m not sure if that’s what Costa was going for, but that’s what it means to me:

    Costa coffee flier

    And Knockout and Gotham really couldn’t be much more American:

    Knockout and Gotham by Hoefler & Co

    When it comes to choosing your display typeface, the type designer Christian Schwartz says there are two kinds. First are the workhorse typefaces that will do whatever you want them to do. Helvetica, Proxima Nova and Futura are good examples. These fonts can be shaped in many different ways, but this also means they are found everywhere and take great skill and practice to work with in a unique and striking manner.

    The second kind of typeface is one that does most of the work for you. Like finely tailored clothing, it’s the detail in the design that adds interest.

    Setting headings in Bree rather than Helvetica makes a big difference to the tone of the article

    Such typefaces carry much more inherent character, but are also less malleable and harder to adapt to different contexts. Good examples are Marr Sans, FS Clerkenwell, Strangelove and Bree.

    Push the boat out

    Remember, all type can have an effect on the reader. Take advantage of that and allow your type to have its own vernacular and impact. Be expressive with your type. Don’t be too reverential, dogmatic – or ordinary. Be brave and push a few boundaries.

    Adapted from Web Typography a book in progress by Richard Rutter.

    About the author

    Richard Rutter is a user experience consultant and director of Clearleft. In 2009 he cofounded the webfont service, Fontdeck. He runs an ongoing project called The Elements of Typographic Style Applied to the Web, where he extols the virtues of good web typography. Richard occasionally blogs at Clagnut, where he writes about design, accessibility and web standards issues, as well as his passion for music and mountain biking.

    More articles by Richard

    How to Do a UX Review

    posted: 02 Dec 2015

    Joe Leech offers a rundown on his UX review process, sharing tips about analysing data and creating personas, and setting out findings in a form that benefits clients. From quick wins to workshops, there are gifts here everyone will be grateful for.

    A UX review is where an expert goes through a website looking for usability and experience problems and makes recommendations on how to fix them.

    I’ve completed a number of UX reviews over my twelve years working as a user experience consultant and I thought I’d share my approach.

    I’ll be talking about reviewing websites here; you can adapt the approach for web apps, or mobile or desktop apps.

    Why conduct a review

    Typically, a client asks for a review to be undertaken by a trusted and, ideally, detached third party who either works for an agency or is a freelancer. Often they may ask a new member of the UX team to complete one, or even set it as a task for a job interview. This indicates the client is looking for an objective view, seen from the outside as a user would see the website.

    I always suggest conducting some user research rather than a review. Users know their goals and watching them make (what you might think of as) mistakes on the website is invaluable. Conducting research with six users can give you six hours’ worth of review material from six viewpoints. In short, user research can identify more problems and show how common those problems might be.

    There are three reasons, though, why a review might better suit client needs than user research:

    1. Quick results: user research and analysis takes at least three weeks.
    2. Limited budget: the £6–10,000 cost to run user research is about twice the cost of a UX review.
    3. Users are hard to reach: in the business-to-business world, reaching users is difficult, especially if your users hold senior positions in their organisations. Working with consumers is much easier as there are often more of them.

    There is some debate about the benefits of user research over UX review. In my experience you learn far more from research, but opinions differ.

    Be objective

    The number one mistake many UX reviewers make is reporting back the issues they identify as their opinion. This can cause credibility problems because you have to keep justifying why your opinion is correct.

    I’ve had the most success when giving bad news in a UX review and then finally getting things fixed when I have been as objective as possible, offering evidence for why something may be a problem.

    To be objective we need two sources of data: numbers from analytics to appeal to reason; and stories from users in the form of personas to speak to emotions. Highlighting issues with dispassionate numerical data helps show the extent of the problem. Making the problems more human using personas can make the problem feel more real.

    Numbers from analytics

    The majority of clients I work with use Google Analytics, but if you use a different analytics package the same concepts apply. I use analytics to find two sets of things.

    1. Landing pages and search terms

    Landing pages are the pages users see first when they visit a website – more often than not via a Google search. Landing pages reveal user goals. If a user landed on a page called ‘Yellow shoes’ their goal may well be to find out about or buy some yellow shoes.

    It would be great to see all the search terms bringing people to the website but in 2011 Google stopped providing search term data to (rightly!) protect users’ privacy. You can get some search term data from Google Webmaster tools, but we must rely on landing pages as a clue to our users’ goals.

    The thing to look for is high-traffic landing pages with a high bounce rate. Bounce rate is the percentage of visitors to a website who navigate away from the site after viewing only one page. A high bounce rate (over 50%) isn’t good; above 70% is bad.

    To get a list of high-traffic landing pages with a high bounce rate install this bespoke report.

    Google Analytics showing landing pages ordered by popularity and the bounce rate for each.

    These are the pages with high demand that also have real problems: the bounce rate is high. This list is the main focus of the UX review.

    2. User flows

    We have the beginnings of the user journey: search terms and initial landing pages. Now we can tap into the really useful bit of Google Analytics. Called behaviour flows, they show the most common order of pages visited.

    Behaviour flows from Google Analytics, showing the routes users took through the website.

    Here we can see the second and third (and so on) pages users visited. Importantly, we can also see the drop-outs at each step.

    If your client has it set up, you can also set goal pages (for example, a post-checkout contact us and thank you page). You can then see a similar view that tracks back from the goal pages. If your client doesn’t have this, suggest they set up goal tracking. It’s easy to do.

    We now have the remainder of the user journey.

    A user journey

    Expect the work in analytics to take up to a day.

    We may well identify more than one user journey, starting from different landing pages and going to different second- and third-level pages. That’s a good thing and shows we have different user types. Talking of user types, we need to define who our users are.

    Personas

    We have some user journeys and now we need to understand more about our users’ motivations and goals.

    I have a love-hate relationship with personas, but used properly these portraits of users can help bring a human touch to our UX review.

    I suggest using a very cut-down view of a persona. My old friends Steve Cable and Richard Caddick at cxpartners have a great free template for personas from their book Communicating the User Experience.

    The first thing to do is find a picture that represents that persona. Don’t use crappy stock photography (it’s sometimes hard to relate to perfect-looking people) – use authentic-looking people. Here’s a good collection of persona photos.

    An example persona.

    The personas have three basic attributes:

    1. Goals: we can complete these by drawing on the analytics data we have (see example).
    2. Musts: things we have to do to meet the persona’s needs.
    3. Must nots: a list of things we really shouldn’t do.

    Completing points 2 and 3 can often be done during the writing of the report.

    Let’s take an example. We know that the search term ‘yellow shoes’ takes the user to the landing page for yellow shoes. We also know this page has a high bounce rate, meaning it doesn’t provide a good experience.

    With our expert hat on we can review the page. We will find two types of problem:

    1. Usability issues: ineffective button placement or incorrect wording, links not looking like links, and so on.
    2. Experience issues: for example, if a product is out of stock we have to contact the business to ask them to restock.

    That link is very small and hard to see.

    We could identify that the contact button isn’t easy to find (a usability issue) but that’s not the real problem here. That the user has to ask the business to restock the item is a bad user experience. We add this to our personas’ must nots. The big experience problems with the site form the musts and must nots for our personas.

    We now have a story around our user journey that highlights what is going wrong.

    If we’ve identified a number of user journeys, multiple landing pages and differing second and third pages visited, we can create more personas to match. A good rule of thumb is no more than three personas. Any more and they lose impact, watering down your results.

    Expect persona creation to take up to a day to complete.

    Let’s start the review

    We take the user journeys and we follow them step by step, working through the website looking for the reasons why users drop out at each step. Using Keynote or PowerPoint, I structure the final report around the user journey with separate sections for each step.

    For each step we’ll find both usability and experience problems. Split the results into those two groups.

    Usability problems are fairly easy to fix as they’re often quick design changes. As you go along, note the usability problems in one place: we’ll call this ‘quick wins’. Simple quick fixes are a reassuring thing for a client to see and mean they can get started on stuff right away. You can mark the severity of usability issues. Use a scale from 1 to 3 (if you use 1 to 5 everything ends up being a 3!) where 1 is minor and 3 is serious.

    Review the website on the device you’d expect your persona to use. Are they using the site on a smartphone? Review it on a smartphone.

    I allow one page or slide per problem, which allows me to explain what is going wrong. For a usability problem I’ll often make a quick wireframe or sketch to explain how to address it.

    A UX review slide displaying all the elements to be addressed. These slides may be viewed from across the room on a screen so zoom into areas of discussion.

    (Quick tip: if you use Google Chrome, try Awesome Screenshot to capture screens.)

    When it comes to the more severe experience problems – things like an online shop not offering next day delivery, or a business that needs to promise to get back to new customers within a few hours – these will take more than a tweak to the UI to fix.

    Call these something different. I use terms like business challenges and customer experience issues, as they show that it will take changes to the organisation and its processes to address them. It’s often beyond the remit of a humble UX consultant to recommend how to fix organisational issues, so don’t try.

    Again, create a page within your document to collect all of the business challenges together.

    Expect the review to take between one and three days to complete.

    The final report should follow this structure:

    • The approach
    • Overview of usability quick wins
    • Overview of experience issues
    • Overview of Google Analytics findings
    • The user journeys
    • The personas
    • Detailed page-by-page review (broken down by steps on the user journey)

    There are two academic theories to help with the review.

    Heuristic evaluation is a set of criteria to organise the issues you find. They’re great for categorising the usability issues you identify but in practice they can be quite cumbersome to apply.

    I prefer the more scientific and much simpler cognitive walkthrough that is focused on goals and actions.

    A workshop to go through the findings

    The most important part of the UX review process is to talk through the issues with your client and their team.

    A document can only communicate a certain amount. Conversations about the findings will help the team understand the severity of the issues you’ve uncovered and give them a chance to discuss what to do about them.

    Expect the workshop to last around three hours.

    When presenting the report, explain the method you used to conduct the review, the data sources, personas and the reasoning behind the issues you found. Start by going through the usability issues. Often these won’t be contentious and you can build trust and improve your credibility by making simple, easy to implement changes.

    The most valuable part of the workshop is conversation around each issue, especially the experience problems. The workshop should include time to talk through each experience issue and how the team will address it.

    I collect actions on index cards throughout the workshop and make a note of who will take what action with each problem.

    Index cards showing the problem and who is responsible.

    When talking through the issues, the person who designed the site is probably in the room – they may well feel threatened. So be nice. When I talk through the report I try to have strong ideas, weakly held.

    At the end of the workshop you’ll have talked through each of the issues and identified who is responsible for addressing them. To close the workshop I hand out the cards to the relevant people, giving them a physical reminder of the next steps they have to take.

    That’s my process for conducting a review. I’d love to hear any tips you have in the comments.

    About the author

    @MrJoe, Joe to his friends, is the author of the book Psychology of Designers.

    A recovering neuroscientist, then a spell as an elementary school teacher, Joe started his UX career 12 years ago. He has worked with organisations like Disney, eBay, Glenfiddich and Marriott.

    More articles by Joe

    Being Customer Supportive

    posted: 01 Dec 2015

    Elizabeth Galle extends goodwill to all customers this Christmas (and beyond) with some insights into establishing and maintaining good relationships through effective customer support.

    Every day in customer support is an inbox, a Twitter feed, or a software forum full of new questions. Each is brimming with your customers looking for advice, reassurance, or fixes for their software problems. Each one is an opportunity to take a break from wrestling with your own troublesome tasks and assist someone else in solving theirs.

    Sometimes the questions are straightforward and can be answered in a few minutes with a short greeting, a link to a help page, or a prewritten bit of text you use regularly: how to print a receipt, reset a password, or even, sadly, close your account.

    More often, a support email requires you to spend some time unpacking the question, asking for more information, and writing a detailed personal response, tailored to help that particular user on this particular day.

    Here I offer a few of my own guidelines on how to make today’s email the best support experience for both me and my customer. And even if you don’t consider what you do to be customer support, you might still find the suggestions useful for the next time you need to communicate with a client, to solve a software problem with teammates, or even reach out and ask for help yourself.

    (All the examples appearing in this article are fictional. Any resemblance to quotes from real, software-using persons is entirely coincidental. Except for the bit about Star Wars. That happened.)

    Who’s TAHT girl

    I’ll be honest: I briefly tried making these recommendations into a clever mnemonic like FAST (facial drooping, arm weakness, speech difficulties, time) or PAD (pressure, antiseptic, dressing). But instead, you get TAHT: tone, ask, help, thank. Ah, well.

    As I work through each message in my support queue, I

    • listen to the tone of the email
    • ask clarifying questions
    • bring in extra help as needed
    • and thank the customer when the problem is solved.

    Let’s open an email and get started!

    Leave your message at the sound of the tone

    With our enthusiasm for emoji, it can be very hard to infer someone’s tone from plain text. How much time have you spent pondering why your friend responded with “Thanks.” instead of “Thanks!”? I mean, why didn’t she :grin: or :wink: too?

    Our support customers, however, are often direct about how they’re feeling:

    I’m working against a deadline. Need this fixed ASAP!!!!

    This hasn’t worked in a week and I am getting really frustrated.

    I’ve done this ten times before and it’s always worked. I must be missing something simple.

    They want us to understand the urgency of this from their point of view, just as much as we want to help them in a timely manner. How this information is conveyed gives us an instant sense of whether they are frustrated, angry, or confused—and, just as importantly, how frustrated-angry-confused they are.

    Listen to this tone before you start writing your reply. Here are two ways I might open an email:

    1. “I’m sorry that you ran into trouble with this.”
    2. “Sorry you ran into trouble with this!”

    The content is largely the same, but the tone is markedly different. The first version is a serious, staid reaction to the problem the customer is having; the second version is more relaxed, but no less sincere.

    Matching the tone to the sender’s is an important first step. Overusing exclamation points or dropping in too-casual language may further upset someone who is already having a crummy time with your product. But to a cheerful user, a formal reply or an impersonal form response can be off-putting, and damage a good relationship.

    When in doubt, I err on the side of being too formal, rather than sending a reply that may be read as flip or insincere. But whichever you choose, matching your correspondent’s tone will make for a more comfortable conversation.

    Catch the ball and throw it back

    Once you’ve got that tone on lock, it’s time to tackle the question at hand. Let’s see what our customer needs help with today:

    I tried everything in the troubleshooting page but I can’t get it to work again. I am on a Mac. Please help.

    Hmm, not much information here. Now, if I got this short email after helping five other people with the same problem on Mac OS X, I would be sorely tempted to send this customer that common solution in my first reply. I’ve found it’s important to resist the urge to assume this sixth person needs the same answer as the other five, though: there isn’t enough to connect this email to the ones that came before hers.

    Instead, ask a few questions to start. Invest some time to see if there are other symptoms in common, like so:

    I’m sorry that you ran into trouble with this! I’ll need a little more information to see what’s happening here.


    Thank you for your help.

    Those questions are customized for the customer’s issue as much as possible, and can be fairly wide-ranging. They may include asking for log files, getting some screenshots, or simply checking the browser and operating system version she’s using. I’ll ask anything that might make a connection to the previous cases I’ve answered—or, just as importantly, confirm that there isn’t a connection. What’s more, a few well-placed questions may save us both from pursuing the wrong path and building additional frustration.

    (A note on that closing: “Thank you for your help”–I often end an email this way when I’ve asked for a significant amount of follow up information. After all, I’m imposing on my customer’s time to run any number of tests. It’s a necessary step, but I feel that thanking them is a nice acknowledgment we’re in this together.)

    Having said that, though, let’s bring tone back into the mix:

    I tried everything in the troubleshooting but I can’t get it to work again. I am on a Mac. I’m working against a deadline. Need this fixed ASAP!!!!

    This customer wants answers now. I’ll still ask for more details, but would consider including the solution to the previous problem in my initial reply as well. (But only if doing so can’t make the situation worse!)

    I’m sorry that you ran into trouble with this! I’ll need a little more information to see what’s happening here.


    If you’d like to try something in the meantime, delete the file named xyz.txt. (If this isn’t the cause of the problem, deleting the file won’t hurt anything.) Here’s how to find that file on your computer: [steps]

    Let me know how it goes!

    In the best case, the suggestion works and the customer is on her way. If it doesn’t solve the problem, you will get more information in answer to your questions and can explore other options. And you’ve given the customer an opportunity to be involved in fixing the issue, and some new tools which might come in handy again in the future.

    Bring in help

    The support software I use counts how many emails the customer and I have exchanged, and reports it in a summary line in my inbox. It’s an easy, passive reminder of how long the customer and I have been working together on a problem, especially first thing in the morning when I’m reacquainting myself with my open support cases.

    Three is the smallest number I’ll see there: the customer sends the initial question (1 email); I reply with an answer (2 emails); the customer confirms the problem is solved (3 emails). But the most complicated, stickiest tickets climb into double-digit replies, and anything that stretches beyond a dozen is worthy of a cheer in Slack when we finally get to the root of the problem and get it fixed.

    While an extra round of questions and answers will nudge that number higher, it gives me the chance to feel out the technical comfort level of the person I’m helping. If I ask the customer to send some screenshots or log files and he isn’t sure how to do that, I will use that information to adjust my instructions on next steps. I may still ask him to try running a traceroute on his computer, but I’ll break down the steps into a concise, numbered list, and attach screenshots of each step to illustrate it.

    If the issue at hand is getting complicated, take note if the customer starts to feel out of their depth technically—either because they tell you so directly or because you sense a shift in tone. If that happens, propose bringing some outside help into the conversation:

    Do you have a network firewall or do you use any antivirus software? One of those might be blocking a connection that the software needs to work properly; here’s a list of the required connections [link]. If you have an IT department in-house, they should be able to help confirm that none of those are being blocked.


    This error message means you don’t have permission to install the software on your own computer. Is there a systems administrator in the office that may be able to help with this?

    For email-based support cases, I’ll even offer to add someone from their IT department to the thread, so we can discuss the problem together rather than have the customer relay questions and answers back and forth.

    Similarly, there are occasionally times when my way of describing things doesn’t fit how the customer understands them. Rather than bang our heads against our keyboards, I will ask one of my support colleagues to join the conversation from our side, and see if he can explain things more clearly than I’ve been able to do.

    We appreciate your business. Please call again

    And then, o frabjous day, you get your reward: the reply which says the problem has been solved.

    That worked!! Thank you so much for saving my day!

    I wish I could send you some cookies!

    If you were here, I would give you my tickets to Star Wars.

    [Reply is an animated gif.]

    Sometimes the reply is a bit more understated:

    That fixed it. Thanks.

    Whether the customer is elated, satisfied, or frankly happy to be done with emailing support, I like to close longer email threads or short, complicated issues with a final thanks and reminder that we’re here to help:

    Thank you for the update; I’m glad to hear that solved the problem for you! I hope everything goes smoothly for you now, but feel free to email us again if you run into any other questions or problems. Best,

    Then mark that support case closed, and move on to the next question. Because even with the most thoughtfully designed software product, there will always be customers with questions for your capable support team to answer.

    Tone, ask, help, thank

    So there you have it: TAHT. Pay attention to tone; ask questions; bring in help; thank your customer.

    (Lack of) catchy mnemonics aside, good customer support is about listening, paying attention, and taking care in your replies. I think it can be summed up beautifully by this quote from Pamela Marie (as tweeted by Chris Coyier):

    Golden rule asking a question: imagine trying to answer it

    Golden rule in answering: imagine getting your answer

    You and your teammates are applying a variation of this golden rule in every email you write. You’re the software ambassadors to your customers and clients. You get the brunt of the problems and complaints, but you also get to help fix them. You write the apologies, but you also have the chance to make each person’s experience with your company or product a little bit better for next time.

    I hope that your holidays are merry and bright, and may all your support inboxes be light.

    About the author

    Elizabeth Galle got her start in customer support behind the counter of her grandfather’s delicatessen. She’s since moved from sandwiches to software, but the principles are about the same. Liz talks about her cat & quotes West Wing episodes as @drinkerthinker, and supports Adobe Typekit with the team at @typekit.

    More articles by Elizabeth

    Animating Your Brand

    posted: 30 Nov 2015

    Donovan Hutchinson stamps his snow-caked boots, unwinds his scarf and eases [See what we did there? Did you?] us into December with some tips and resources on integrating animation into our website style guides. A warm animated welcome to 24 ways 2015!

    Brought to you by MightyDeals. Get 10% off all deals with code ‘24ways’. The best deals for web designers and developers.

    Let’s talk about how we add animation to our designs, in a way that’s consistent with other aspects of our brand, such as fonts, colours, layouts and everything else.

    Animating is fun. Adding animation to our designs can bring them to life and make our designs stand out. Animations can show how the pieces of our designs fit together. They provide context and help people use our products.

    All too often animation is something we tack on at the end. We put a transition on a modal window or sliding menu and we often don’t think about whether that animation is consistent with our overall design.

    Style guides to the rescue

    A style guide is a document that establishes and enforces style to improve communication. It can cover anything from typography and writing style to ethics and other, broader goals. It might be a static visual document showing every kind of UI, like in the redesign shown below.

    UI toolkit from “Reimagining” by @mslima

    It might be a technical reference with code examples. CodePen’s new design patterns and style guide is a great example of this, showing all the components used throughout the website as live code.

    CodePen’s design patterns and style guide

    A style guide gives a wide view of your project and maintains consistency as new content is added, and we can use it to present our animations too.

    Living documents

    Style guides don’t need to be static. We can use them to show movement. We can share CSS keyframe animations or transitions that can then go into production. We can also explain why animation is there in the first place.
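    To make that concrete, here’s a minimal sketch of the kind of reusable transition and keyframe animation a style guide might document alongside its explanation. The class names and values are illustrative rather than taken from any particular project.

        /* Shared show/hide transition documented in the style guide */
        .panel {
          opacity: 0;
          transform: translateY(10px);
          transition: opacity 0.3s ease-out, transform 0.3s ease-out;
        }

        .panel.is-visible {
          opacity: 1;
          transform: translateY(0);
        }

        /* A named keyframe animation that components can reuse */
        @keyframes fade-in-up {
          from { opacity: 0; transform: translateY(10px); }
          to   { opacity: 1; transform: translateY(0); }
        }

        .toast {
          animation: fade-in-up 0.3s ease-out both;
        }

    Because the snippet is ordinary production CSS, it can be copied straight from the guide into a component.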

    Just as a style guide might explain why we chose a certain font or layout, we can use it to explain the intent behind our animation. This means that if someone else wants to create a new component, they will know how and why animation should be applied to it.

    If you haven’t yet set up a style guide, you might want to take a look at Pattern Lab. It’s a great tool for setting up your own style guide and includes loads of design patterns to get started.

    There are many style guide articles linked from the excellent, open-sourced Website Style Guide Resources, and Anna Debenham has a handy pocket book on the subject.

    Adding animation

    Before you begin throwing animation at all the things, establish the character you want to convey.

    Andrex Puppy (British TV ad from 1994)

    List some words that describe the character you’re aiming for. If it was the Andrex brand, they might have gone for: fun, playful, soft, comforting.

    Perhaps you’re aiming for something more serious, credible and authoritative. Or maybe exciting and intense, or relaxing and meditative. For each scenario, the animations that best represent these words will be different.

    In the example below, two animations both take the same length of time, but use different timing functions. One eases, and the other bounces around. Either might be good, depending on your needs.

    Timing functions (CodePen)
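    As a rough sketch of that comparison, the two rules below take the same half second but feel very different: one eases smoothly, the other overshoots and springs back. The selectors and cubic-bezier values here are only illustrative.

        /* Same duration, different character */
        .slide--eased {
          transition: transform 0.5s ease-in-out;
        }

        .slide--bouncy {
          /* Control-point y values outside 0–1 let the movement
             pull back and overshoot before settling */
          transition: transform 0.5s cubic-bezier(0.68, -0.55, 0.27, 1.55);
        }

        .slide--eased.is-open,
        .slide--bouncy.is-open {
          transform: translateX(200px);
        }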

    Example: Kitman Labs

    Working with Kitman Labs, we spent a little time deciding which words best reflected the brand and came up with the following:

    • Scientific
    • Precise
    • Fast
    • Solid
    • Dependable
    • Helpful
    • Consistent
    • Clear

    With such a list of words in hand, we design animation that fits. We might prefer a tween that moves quickly to its destination over one that drifts slowly or bounces.
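    For example, a short, decisive entrance suits words like “fast”, “precise” and “solid” better than a slow, floaty one. The snippet below is only a sketch, with made-up class names and timings:

        /* Preferred: quick and settled */
        .notification {
          animation: slide-in 0.2s ease-out both;
        }

        /* Avoided: slow drift that undermines “fast” and “precise” */
        .notification--drifting {
          animation: slide-in 1.2s ease-in-out both;
        }

        @keyframes slide-in {
          from { opacity: 0; transform: translateY(-8px); }
          to   { opacity: 1; transform: translateY(0); }
        }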

    We can use the list to justify our use of animation, such as when it helps customers understand the context of data on the page. Or we may choose not to animate at all, when doing so would make the message inconsistent.

    Create guidelines

    If you already have a style guide, adding animation could begin with creating an overview section.

    One approach is to create a local website and share it within your organisation. We recently set up a local site for this purpose.

    A recent project’s introduction to the topic of animation

    This document becomes a reference when adding animation to components. Include links to related resources or examples of animation to help demonstrate the animation style you want.


    You can explain the intent of your animation style guide with live animations. This doesn’t just mean waving our hands around. We can show animation through prototypes.

    There are so many prototype tools right now. You could use Invision, Principle, Floid, or even HTML and CSS as embedded CodePens.

    A login flow prototype created in Principle

    These tools help when trying out ideas and working through several approaches. Create videos, animated GIFs or online demos to share with others. Experiment. Find what works for you and work with whatever lets you get the most ideas out of your head fastest. Iterate and refine an animation before it gets anywhere near production.

    Build up a collection

    Build up your guide, one animation at a time.

    Some people prefer to loosely structure a guide with places to put things as they are discovered or invented; others might build it one page at a time – it doesn’t matter. The main thing is that you collect animations like you would trading cards. Or Pokemon. Keep them ready to play and deliver that explosive result.

    You could include animated GIFs, or link to videos or even live webpages as examples of animation. The use of animation to help user experience is also covered nicely in Val Head’s UI animation and UX article on A List Apart.

    What matters is that you create an organised place for them to be found. Here are some ideas to get started.

    Logos and brandmarks

    Many sites include some subtle form of animation in their logos. This can draw the eye, add some character, or bring a little liveliness to an otherwise static page. Yahoo and Google have been experimenting with animation on their logos. Even a simple bouncing animation, such as the logo on, can add character.

    The CSS-animated bouncer from
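    A gentle bounce like that needs very little CSS. Here’s a minimal sketch, with an illustrative selector and values rather than the actual animation used on the site:

        .logo {
          animation: bounce 1.2s ease-in-out infinite;
        }

        @keyframes bounce {
          0%, 100% { transform: translateY(0); }
          50%      { transform: translateY(-12px); }
        }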

    Content transitions

    Adding content, removing content, showing and hiding messages are all opportunities to use animation. Careful and deliberate use of animation helps convey what’s changing on screen.

    Animating list items with CSS

    For more detail on this, I also recommend “Transitional Interfaces” by Pasquale D’Silva.
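    One common pattern is to fade and slide new list items into place, with a small stagger between them. This is a rough sketch of that idea; the selectors, delays and distances are illustrative.

        /* Newly added items fade and slide in, slightly staggered */
        .feed li {
          animation: item-enter 0.3s ease-out both;
        }

        .feed li:nth-child(2) { animation-delay: 0.05s; }
        .feed li:nth-child(3) { animation-delay: 0.1s; }

        @keyframes item-enter {
          from { opacity: 0; transform: translateY(8px); }
          to   { opacity: 1; transform: translateY(0); }
        }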

    Page transitions

    On a larger scale than the changes to content, full-page transitions can smooth the flow between sections of a site. Medium’s article transitions are a good example of this.

    Medium-style page transition

    Preparing a layout before the content arrives

    We can use animation to draw a page before the content is ready, such as when a page calls a server for data before showing it.

    Optimistic loading grid (CodePen)

    Sometimes it’s good to show something to let the user know that everything’s going well. A short animation could cover just enough time to load the initial content and make the loading transition feel seamless.
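    A common way to do this is a pulsing placeholder that holds the layout until the real content arrives. The sketch below uses made-up class names and colours:

        /* Placeholder blocks gently pulse while data loads */
        .placeholder-card {
          background: #e6e6e6;
          border-radius: 4px;
          animation: pulse 1.5s ease-in-out infinite;
        }

        @keyframes pulse {
          0%, 100% { opacity: 1; }
          50%      { opacity: 0.4; }
        }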


    Hover effects, dropdown menus, slide-in menus and active states on buttons and forms are all opportunities. Look for ways you can remove the sudden changes and help make the experience of using your UI feel smoother.

    Form placeholder animation (Studio MDS)
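    As a small sketch of the idea, the rules below soften a button press and a slide-in menu rather than letting them snap instantly. The selectors and values are illustrative.

        /* Soften the press of a button */
        .button {
          transition: background-color 0.2s ease, transform 0.1s ease;
        }

        .button:active {
          transform: scale(0.97);
        }

        /* Slide the menu in instead of snapping it open */
        .menu {
          transform: translateX(-100%);
          transition: transform 0.25s ease-out;
        }

        .menu.is-open {
          transform: translateX(0);
        }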

    Keep animation visible

    It takes continuous effort to maintain a style guide and keep it up to date, but it’s worth it. Make it easy to include animation and related design decisions in your documentation and you’ll be more likely to do so. If you can make it fun, and be proud of the result, better still.

    When updating your style guide, be sure to show the animations at the same time. This might mean animated GIFs, videos or live embedded examples of your components.

    By doing this you can make animation integral to your design process and make sure it stays relevant.

    Inspiration and resources

    There are loads of great resources online to help you get started. One of my favourites is IBM’s design language site.

    IBM’s design language: animation design guidelines

    IBM describes how animation principles apply to its UI work and components. The guide breaks the animations down into five categories and explains how they apply to each example.

    The site also includes an animation library with example videos of animations and links to source code.

    Example component from IBM’s component library

    The way IBM sets out its aims and methods is helpful not only for its existing designers and developers, but also for new hires. It’s also a good way to show the world that IBM cares about these details.

    Another popular animation resource is Google’s material design.

    Google’s material design documentation

    Google’s guidelines cover everything from understanding easing through to creating engaging and useful mobile UI.

    This approach is visible across many of Google’s apps and software, and has influenced design across much of the web. The site is helpful both for learning about animation and as a showcase of how to illustrate examples.


    If you don’t want to create everything from scratch, there are resources that can help you start adding animation to your UI. One such resource is Salesforce’s Lightning design system.

    The system goes further than most guides. It includes a downloadable framework for adding animation to your projects. It has some interesting concepts, such as elevation settings to handle positioning on the z-axis.

    Example of elevation from Salesforce’s Lightning design system

    You should also check out Animate.css.

    “Just add water” — Animate.css

    Animate.css gives you a set of predesigned animations you can apply to page elements using classes. If you use JavaScript to add or remove classes, you can then trigger complex animations. It also plays well with scroll-triggering, and tools such as WOW.js.

    Learn, evolve and make it your own

    There’s a wealth of information and guides online that we can use to better understand animation. They can inspire and kick-start our own visual and animation styles. So let’s think about the design of animations just as we do fonts, colours and layouts. Let’s choose animation deliberately and make it part of our style guides.

    Many thanks to Val Head for taking the time to proofread and offer great suggestions for this article.

    About the author

    Donovan Hutchinson is a front-end designer who loves making fun stuff for the web. He also writes tutorials on Find him on Twitter at @donovanh.

    More articles by Donovan

    Cohesive UX

    posted: 24 Dec 2014

    Cameron Moll brings the tenth 24 ways to a close with a look at the increasing need for common experiences across devices. Despite our differences, there are more things we share than divide us. Merry Christmas!

    Brought to you by Beanstalk. Celebrate shipping code again! With Beanstalk, your entire team can ship code as simple as git push.

    Taglines and Truisms

    posted: 22 Dec 2014

    Andrew Clarke poses the question: if we’re all telling prospective clients that we’re crafting and designing delightful, beautiful and remarkable digital experiences, what marks any of us out?

    Integrating Contrast Checks in Your Web Workflow

    posted: 21 Dec 2014

    Geri Coady bends low the Christmas tree branches with bright and contrasting baubles. This isn’t some gaudy seasonal distraction, though. It’s responsible accessibility advice you can work with throughout the year.

    Naming Things

    posted: 20 Dec 2014

    Paul Lloyd perches his partridge in the CSS pear tree to discuss naming methodologies, ontologies and semantics. What’s in a name? That which we call a cherub by any other name would smell as sweet.

    Meet for Learning

    posted: 19 Dec 2014

    Dr. Leslie Jensen-Inman manages to turn something that makes most of us cringe – meetings – into learning experiences that benefit everyone in the room. The new year is the perfect time to make a lifelong resolution to always be learning.

    Why You Should Design for Open Source

    posted: 18 Dec 2014

    Jina Bolton lets the Christmas spirit of generosity and goodwill flow freely into open source projects in need of good design. It’s work free of charge, perhaps, but not for nothing.

    A Holiday Wish

    posted: 18 Dec 2014

    Jeffrey Zeldman beckons us cosily closer to his warm websmith’s hearth to spin a winter’s tale of hope born of (user) experience. Regardless of job title or discipline, we’re all designers. It takes all the reindeers to pull the sleigh.

    Brought to you by HelpSpot. Move beyond mere efficiency, easily manage meaningful support interactions.

    Content Production Planning

    posted: 17 Dec 2014

    Sophie Dennis understands the necessity of planning carefully how and when website content will be produced – and who’s responsible. Content’s like Christmas: best not left to chance.

    Brought to you by GatherContent. GatherContent helps you plan, organise and collaborate on web content before it hits the CMS.