[pcomp week 2] The design of everyday objects & physical computing’s greatest hits

[image: ramen shop ticket machine]

This week we read two works by Donald A. Norman: the first chapter of his book, The Design of Everyday Things, and his essay, Emotional Design: Attractive Things Work Better. The first rails against everyday objects that are poorly designed, by which he mostly means difficult to understand and confusing to use. He cites numerous examples: doors that don't make it clear whether you should push or pull, the thermostat in his refrigerator, and the now-almost-obsolete landline telephone.

Scissors, Norman says, are an example of a well-designed everyday object because their affordances, constraints, and mappings let you easily form a conceptual model of how they should be used, even if you've never picked up a pair before.

His essay responds to criticism of his book that made it seem as though he valued usability above all else in design, beauty in particular. He clarifies that this was not what he meant, and that designing with people's emotions in mind is just as important.

These readings make me wonder about the cultural influences on what we consider easy to use, or beautiful. I was recently in Japan, a country well known for the design and usability of its everyday objects. As a non-Japanese speaker, I found some things easy to understand: the basket under your restaurant chair for holding your purse, for example.

Others were not. Many ramen restaurants have you order via a ticket machine (pictured above) rather than through the waitstaff. The idea is great, but I unfortunately lacked the cultural knowledge and reading ability to figure parts of it out: that you have to insert your money before pushing the buttons for your order, and that you have to hit the change button at the end to get your change at all.

You only have to hand any modern three-year-old an iPad to see how much culture determines whether or not something is easy to use, so I wonder what cultural assumptions sit in the background when a person figures out something as seemingly straightforward as Norman's scissors.

The final reading this week was Tom's blog post, Physical Computing's Greatest Hits (and Misses). It's intimidating and inspiring at the same time to see the range of projects that can be made with physical computing. What I like in particular is the sense of playfulness in most of them. We don't necessarily have to create world peace with our designs; making someone smile can be reason enough to make something.

[pcomp week 1] What is physical interaction?

After reading the first two chapters of Chris Crawford’s The Art of Interactive Design and Bret Victor’s A Brief Rant on the Future of Interaction Design, the question “what is physical interaction?” reminds me of another question I’ve been trying to answer a lot recently, which is “what is ITP?” With both, it seems that the more you think about it and the more you try to come up with a solid answer, the more inadequate your definition feels.

Crawford acknowledges this subjectivity, but nonetheless puts forth a definition of interactivity as a conversation "in which two actors alternately listen, think, and speak." He describes interactivity as something that exists on a continuum rather than in absolutes, and also defines it by what it is not: reaction and participation, for example. At first thought, a conversation makes sense to me as a starting point for defining interaction. A conversation isn't static or predictable; it changes and adapts according to what each participant says at each turn. Sounds interactive to me!

But does it still count as interactive if there are no humans in the conversation? The video above, showing two AI chatbots talking to each other, is certainly a conversation (as well as a pretty cool piece of digital technology), but I wouldn't classify it as interactive because people are not part of the actual interaction. At least until we consider robots to be people, which, as far as I know, hasn't happened yet.

Victor’s rant, similarly, encourages us to consider people when designing for interactivity. This is where the physical part kicks in. His blog post rages against the prevailing vision of the future that’s entirely screen-based, or as he calls it, “Pictures Under Glass.”

“We live in a three-dimensional world,” he writes. “I believe that our hands are the future.”

Physical interaction necessarily involves the body. As Victor argues, hands are under-considered as tools in design, but we should also think about how we can use other parts of the body to create physical interaction. And considering this in terms of the senses, what else can we use besides touch? It would be interesting to design for interaction through sound, sight, smell, and taste too.

What makes for good physical interaction? Maybe it's what McLuhan considers "cool media": something that requires more active participation on the part of the person, or user, to get something out of it. Or maybe it's the other way around: something that gives you a wider array of outputs depending on how you interact with it. A light switch that turns the lights on and off is less of an interactive experience than a dial that lets you change your lights to all the colors of the rainbow.
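To make that contrast concrete, here's a toy Python sketch (my own illustration, not from any of the readings, and the 0–1023 dial range is just an assumption borrowed from a typical 10-bit potentiometer): a switch can only ever produce two outputs, while a dial mapped onto a hue wheel yields a distinct color for every position.

```python
import colorsys

def switch_output(is_on: bool) -> tuple:
    # A switch has exactly two possible outputs: full white or off.
    return (255, 255, 255) if is_on else (0, 0, 0)

def dial_output(reading: int) -> tuple:
    # A dial reading (hypothetically 0-1023, like a 10-bit potentiometer)
    # maps onto the full hue wheel, so every position is a different color.
    hue = reading / 1023  # normalize to 0.0-1.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))
```

The switch gives the same two answers no matter how you touch it; the dial's output space grows with the richness of your input, which is one way of reading "more interactive."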

But does more interaction mean good interaction? Is it a better interaction if you end up with stronger feelings about the experience? Or does that just make it better art? Maybe the best physical interaction is one where the output is an experience tailored uniquely to your input, like a conversation. (Between humans.)