Design tools are holding us back

Tom Johnson
Published in UX Collective
11 min read · Jul 16, 2019


They’ve never been faster.

They’ve never had more UI design-centric features.

They’ve never enabled more effective collaboration.

But design tools are still holding us back. Our tools are still using methods, workflows, and features from graphic and visual design.

I get it though. A lot of UI designers come from that background. But we still talk about UI design like we’re making art. We use terms like “canvas” and “artboard” like what we’re doing will hang on the wall of a museum some day behind a red velvet rope.

But it won’t.

Heck, we’re lucky if anyone will even want to look at the UIs we’re making in 6 months. We need to stop treating UI design like art. Yes, it can be pretty. Yes, it is a visual medium. Yes, there is an art to making digital products. But it’s not artwork. I’m wading into territory that would merit a separate article, but the aesthetics of your UI should not come before its usability. We need our tools to stop using manipulation methods that emphasize looks over structure. We need better ways to make real products, not pretty mockups.

Increased collaboration helps with that. Features that help us explore new concepts help with that. But as long as tools keep ignoring the digital medium, implementation will continue to conflict with our mockups.

For starters:

We need the box model

The box model is the basic building block for UI. It’s everywhere. Right-click on this sentence (if you’re on a desktop computer), select “Inspect”, and start hovering your mouse around in the Elements panel. You’ll see a bunch of rectangles appear on the page. That’s the box model.

It provides the structural rules for this page, and every other website. In it, every element has a definition for its size, and how it behaves on the page. It’s the air to the proverbial UI balloon that you’re now gazing upon.

Things push each other. Things have a size. Things share space. With it, websites and apps are a big ol’ game of Tetris and the box model is what determines the size and shape of the blocks.
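For anyone who hasn’t poked at it, here’s a minimal sketch of the idea in CSS (the class name is made up for illustration):

```css
/* Every element on a page is a box: content, wrapped in padding,
   wrapped in a border, wrapped in margin. */
.card {
  width: 320px;            /* the content area is 320px wide */
  padding: 16px;           /* space between content and border */
  border: 1px solid #ddd;  /* the box's visible edge */
  margin-bottom: 24px;     /* pushes the next sibling down the page */
}
```

Two cards with these rules stack like Tetris pieces: each one’s margin determines exactly where the next one starts.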

BUT DESIGN TOOLS DON’T HAVE IT.

WHY.

FOR THE LOVE OF GOD WHY.

We drraaaaaaag things around. We nuuuuuuuudge this and that. We align vertical, center, left right up down, in out by tippity tap tap tapping our trackpads to our heart’s content.

But, as far as our tools are concerned, we may as well be sliding sheets of paper around a table. Stacking them on top of each other until, from the bird’s eye view, everything looks juuuuuuuuuust right.

But it’s not how things actually are.

In code, almost everything that takes up space pushes something else out of that space. As I write this sentence, the paragraph gets bigger. As this paragraph grows, the section that contains it gets taller. As that section increases in height, the other sections push further down the page. As they push down the page, the page itself gets taller. If I were to do this in a design tool, my frame/artboard would have no idea that the text got longer. It would stay the same size until I drag its bounding box. This is asinine.

If our tools used the box model, a box sitting below a paragraph would get pushed down as the text grows, and the parent frame would get bigger with it. Word documents have been able to do this for DECADES. Why do all elements in our design tools behave independently of each other? Independence is great for photo editing, print layouts, and illustrations, but not for UI design.
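To make that concrete, here’s roughly what document flow looks like (hypothetical markup; nothing is positioned or sized by hand):

```html
<!-- Normal document flow: no fixed positions, no fixed heights.
     Add a sentence to the paragraph and everything after it moves. -->
<section>
  <p>As this paragraph grows taller…</p>
  <div class="black-box">…this box gets pushed down…</div>
</section>
<footer>…and the footer below the section moves down with it.</footer>
```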

To add insult to injury, this lack of functionality has led to a pervasive ignorance of what the box model even is… which is a bit mind-blowing. I hear the same phrase from designers: “I don’t understand why my designs aren’t developed like the mockups.” One of the main reasons is the total separation of our tooling from the medium we’re designing for. HTML and other UI constructs are not a freeform canvas. But that’s all we’re used to…

Would an architect be able to design without knowing the limitations of construction materials?

Would a car designer be able to make cars if they stopped with the clay model?

Other design industries take the time to understand their medium and end product. In their work, there is a clear distinction between concepts and comps. It’s time that digital product design tools helped create comps that emulate the digital medium. It’s time for them to stop using manipulation methods that were born in the print industry.

Inheritance and relationships

No element exists in a vacuum. On the web, elements have relationships to one another. Some act as containers for other elements. These are parents. This page is a parent, this paragraph’s wrapper is a parent, and it is a sibling to the paragraphs next to it.

Let’s say this Medium article lived in a design tool, and I wanted to change, say, the font size and color. What would I do? Would I update a shared style? Hmmm, maybe, that’s a start. Would I select this paragraph and change its attributes? Yeah, that would do it. But neither of those is how it should (or could) be done if our tools had inheritance principles.

If you inspect this page right now (right click, select Inspect), you’ll see that there’s an element called <body> at the very top of the HTML tree. If you click on it, you’ll see something like this:

[Screenshot of the <body> element’s styles in DevTools. That’s a lot of font fallbacks…]

The body, which is the parent of the whole page, functions like the artboard/frame in a design file. But it’s very different from those, because it has its own attributes. It defines the font, font color, and font weight for the whole page.

Now, click on the black box next to “color” and you’ll get a color picker. If you change the color, you’ll see that this entire post changes with it.

This is because none of the text on this page actually has a color of its own; it inherits it from the page itself. Pretty easy, right?
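In CSS, that whole relationship boils down to a couple of lines (the font stack here is a stand-in, not Medium’s actual styles):

```css
/* The parent declares it once… */
body {
  color: rgba(0, 0, 0, 0.84);
  font-family: Georgia, serif;
}

/* …and every <p>, <li>, and <blockquote> inside it inherits
   both values without a single rule of their own. */
```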

Design tools don’t do this. They make you define it at the element level. Why does this matter? Well, imagine that you want to give your site a dark mode. If design tools worked like this website does, you could select the artboard/frame on the page, change the background color to something dark, and change the text color to white. If the child text didn’t have its own attributes, you could be looking at dark mode in only a few clicks. You could duplicate the page and see what different configurations do to identical content.

Here’s how that might work:

[Gif made in Webflow: changing the parent’s colors flips the whole page to dark mode]

The same trick could work for seeing how your UI looks if the user had a large font size. If the text size were inherited from the artboard/frame, all you’d have to do is make it larger in one place, as the Webflow example above shows.

The beauty of this method is that you can still define things at the element level. If you try to change the font weight or size on <body> here, nothing will happen, because Medium’s text elements declare their own; a style set directly on an element always supersedes the one it would inherit from a parent. If you want to be granular, you can be granular. If you want to be global, you can be global.
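A quick sketch of that granular-vs-global split:

```css
body {
  font-size: 18px;   /* the global default everything inherits */
}

.caption {
  font-size: 14px;   /* set directly on the element, so it wins;
                        the inherited 18px never applies here */
}
```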

I dream of a day when I can open my design file and define attributes at the frame, canvas, project, and even the global team layer.

Inaccurate handoff

Gotta admit, I struggled coming up with an illustration for this one…

I’ll come out and say it. The CSS that design tools generate is useless.

I used to think it was cool. I’d think things like “Hey, my devs will love this. It’s actually making CSS. Woooooow!!”.

But it’s not.

I mean, yes, it is technically making CSS, and it’s not that it’s got syntax errors or things like that, but let’s be real. If you don’t have a box model, don’t have parent <-> sibling relationships, and you don’t have element inheritance… you’ve got ignorant code.

Why is it even there? Sure, maybe it helps someone figure out the weird round corner-ness or copy a hex code quickly, but that’s about it.

It’s lulling designers into a false sense of security and contribution.

Here’s an example design: [mockup of a simple card-list UI: a header with a stack of cards below it]

Imagine that I handed off this 👆 glorious and totally original mockup to a developer. It’s straightforward to build, but here’s 👇 the CSS I’d have to write on the left, and the CSS from Figma that I actually used on the right:

I didn’t test this, so if it’s wrong, don’t judge me. It’s close enough for this article, jeesh.
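Since that screenshot won’t translate to text, here’s a paraphrase of the contrast (a hypothetical card list; the “generated” half mimics the absolute-positioning style these exports tend to produce):

```css
/* What I'd actually write: the parent owns the layout logic. */
.card-list {
  display: flex;
  flex-direction: column;
  gap: 16px;
  padding: 24px;
}
.card {
  min-height: 72px;   /* grows with its content */
}

/* What the tool exports: every layer pinned to fixed coordinates,
   with no idea what happens when the content changes. */
.rectangle-4 {
  position: absolute;
  left: 24px;
  top: 112px;
  width: 327px;
  height: 72px;
}
```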

I made the UI, so I knew how it should behave, but even I had to guess at its layout.

Did I want the padding on the card, or on the row with text?

Did I want the header to stick when the page scrolls or scroll with the rest of the content?

Did I want to use flexbox for the card stack?

Do the cards have a maximum or minimum height?

None of these details are in the design file. As a result, as a “dev” I had to guess at all of them. FOR MY OWN DESIGN.

The redlines in this file were helpful, but didn’t tell me enough about why. Saying how far apart things are doesn’t tell you WHY they’re that far apart. UI is not some giant X/Y matrix. Some things have padding, some things have margins, some things grow, some shrink but not beneath a certain size, some things only grow with their siblings, some things have sizes that change with different device heights and widths. We cannot rely on redlines for an accurate implementation.

Also, “dev” views don’t allow developers to see how the design works. Sites and apps are not static. Comps that don’t show how something behaves limit the information we can convey. Are we saying that if a designer wants to explain a responsive layout they have to show every screen 4 times? What if I have 100 screens? How do those stay in sync when things change? The overhead of this handoff results in making a few examples, at best, and leaving the rest up to interpretation.

Design tools should allow devs to see a UI on many different viewports without all the work from a designer.

To fix this, devs should be able to make those different viewports on their own. They should be able to see comps with the keyboard expanded, inside of a browser frame, with a zoomed font, with no wifi connection, or with a slow API. Our tools should remove the pristine little “designed” world and show what will happen when things break. If devs don’t see it, chances are it’ll never get built.
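This is another place where the medium already has an answer: one media query describes behavior that a static comp has to show as four separate screens (hypothetical class names again):

```css
/* One definition, every viewport. */
.card-list {
  display: grid;
  grid-template-columns: 1fr;   /* phones: a single column */
  gap: 16px;
}

@media (min-width: 768px) {
  .card-list {
    grid-template-columns: repeat(3, 1fr);   /* wider screens: three */
  }
}
```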

Database driven design

That’s not a keg, it’s a database…

What does your data model look like?

How are your DB’s tables structured?

Do you know if you can manipulate the data in the way you show it in your mockup?

How many entries are blank?

How long does your API take to return that data?

If the API times out, what error code does it return?

What are all the errors it returns?

Tools that generate mock data, or let you see a JSON structure in your design, are going in the right direction. The problem is that we designers don’t have a good way of knowing what we don’t know about our own databases. Design tools should treat back-end developers as users too, and allow them to inform the designer’s toolkit with how the data is structured. We have tools for triangles, shadows, rectangles, and SVGs… but no tools for using and understanding the data model? Seriously?

Want to make the UI super easy to read? Do you know what format the data is stored in? Do you know what it will take to convert some data into a different format? Wouldn’t it be nice if you could show your developer how you mutated the data so that they can implement it in the UI?

Want to show a list of songs, or houses, or images? We should be able to make our lists conditional on the number of entries in the database. We should be able to define our own for() loops so that we can see how long/short/blank/slow that list will be for the user.
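As a sketch of why that matters, imagine the design tool could loop over a response like this made-up one; the blank and overlong entries would show up in the comp before a developer ever hits them:

```json
{
  "songs": [
    { "title": "An unusually long song title that will definitely wrap", "artist": "Somebody" },
    { "title": "Short", "artist": "" },
    { "title": "", "artist": "Unknown" }
  ]
}
```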

The data that our apps and sites use is just as important as, if not more important than, the colors and pattern libraries that we craft. We should know it, understand it, and be able to manipulate it.

I would make my own version of what this could look like, but I’ll refer you to Josh Puckett’s article from 2015 (?!?): https://medium.com/@joshpuckett/modern-design-tools-using-real-data-62d499e97482.

Unrealistic interactions and motion

Design tools need to ground us in reality. Interaction design tools should not let designers make whatever they want. Sure, let’s do conceptual designs with state-based animation tools, but we need tools that use the same behaviors and interaction limitations as our platforms. Prototyping tools and code implementations need to align.

Don’t give me 900 different ways to move a rectangle around a screen. Instead, give me a few ways to make things work the way a developer could actually implement them, so my time isn’t wasted once it’s time to build.

Also, allow me to see the performance impact of my interaction designs. Alert me when the implementation could slow the user’s device down to a screeching halt. Give me the ability to explore ideas, but make sure that they can be built.
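A tool could borrow the heuristics browsers already expose. For example, the difference between these two transitions is invisible in a prototype but enormous on a low-end phone:

```css
/* Cheap: transform and opacity can run on the compositor,
   so the animation stays smooth even while the page is busy. */
.sheet {
  transition: transform 300ms ease-out;
}

/* Expensive: animating height forces the browser to recalculate
   layout on every frame, exactly the kind of thing a design
   tool could flag before handoff. */
.sheet-slow {
  transition: height 300ms ease-out;
}
```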

Current prototyping tools are great at making concepts and gifs, but poor at making reality.

Wrapping up

Don’t get me wrong, I’m super excited about the direction of tools over the last few years. I’ll give a special shout out to Figma and Webflow for totally changing my own career and how we design at Asurion. We’re able to make things in ways we could never have dreamed of before at speeds we never imagined. The upsides of the industry are much bigger than the downsides and things will keep getting even better.

I do think, though, that it’s very much time for UI tools to stop emulating the models that print design gave us. Now that we’ve nailed making rectangles, it’s time to shift our gaze towards making products that work as well as the ideas in our heads.

Tools that are starting to solve these problems:

Webflow
Modulz
SwiftUI
Framer X
Hadron
Shift Studio (new)
Relate App (new)
Handoff (new)
Mason (RIP)

I’m a Product Designer in Nashville. I work at Asurion where I help make our design systems, teach design tools, and work on the Anywhere Expert Platform. Check out my personal and past work on my website. I also tweet about design things, but mostly just spend time with my wife and son. Also, Hondo, our Bernese mountain dog is pretty cool too.
