
Monthly Archives: April 2011

pitching vs. thinking about what it is like to live with a system

One of the other things to come out of my visit to Malcolm’s class is an awareness of a certain difference in styles between School of Information, HCI/user-centered-design project presentations and architecture project presentations. Basically, teams of SI students or mostly-SI students presented projects as a bit of a “pitch” — this is why our idea is great and should be pursued — while teams with students from architecture tended to present project ideas in a less rosy light, with at least one presentation framing an idea as leading to a very dystopian world. One of the other visitors, an architect, reflected at the end on how strange it felt to have had three hours of mostly back-to-back pitches rather than discussions about what it would be like to “live with” a system.

This prompted some reactions from the SI folks, some of which got posted to Twitter, e.g., “shouldn’t a concept have a compelling use – or who would use it, and why?” and “there was an arch guy who was shocked by the idea of considering users.” I can understand these reactions, but as someone with less skin in the game (no project to pitch), I think I had a more moderate reaction. After reflecting a bit, I want to write down some thoughts about this difference and start a discussion.

The difference that I saw was that the “pitches” showed use cases with no downsides, or only technical obstacles, while the “live with” presentations showed a more rounded vision, with pros as well as some very serious cons, particularly for people who would be affected by the system but may not choose to use it. Next to the “live with” presentations, the pitches seemed a bit naïve or even dishonest; next to the pitches, the “live with” presentations felt incomplete: given such obvious problems, why not change the idea?

So, where does this difference come from? Of course architects consider the people who will be affected by their creations — and not just the “users” — so it’s not that. And of course things should have a compelling use. Something that has a compelling use for some people, though, may still create a less than pleasant experience for others whom it affects. This is particularly true for architecture projects — everyone in a neighborhood has to live with a building, not just its occupants — so I can see how that would lead to a certain style of presentation or discussion of a proposal. This is not, however, unique to buildings: groupware and social software certainly affect people who may not opt in, and some persuasive technology is designed specifically to influence people who do not opt in, so maybe it would be good for some HCI presentations to take a bit more of a humble tone that acknowledges potential downsides.

On the other hand, it’s also often fairly easy to prototype and even do large-scale test deployments of software (i.e., to try living with it) in a way that simply isn’t possible with large buildings or urban development projects. These prototypes and field tests often let designers learn about many of the unintended consequences. (Of course, you only learn about the group you test the app with.)

This availability of early feedback on software products, as well as the ability to iterate rapidly after deployment to correct problems or take advantage of newfound opportunities, makes many software presentations more about why something is worth starting this process of building, releasing, and refining, rather than discussions about building and living with a fairly immutable and durable creation, and I think that motivates a lot of the difference in styles. I’m not completely sure that software designers can keep up this attitude as software becomes more social and hooking up system A to system B can lead to information disclosures with long-lasting effects.

Other thoughts?

Mindful Technology vs. Persuasive Technology

On Monday, I had the pleasure of visiting Malcolm McCullough’s Architecture 531 – Networked Cities for final presentations. Many of the students in the class are from SI, where we talk a lot about incentive-centered design, choice architecture, and persuasive technology, which seems to have resulted in many of the projects having a persuasive technology angle. As projects were pitched as “extracting behavior” or “compelling” people to do things, it was interesting to watch the discomfort in the reactions from students and faculty who don’t frame problems in this way. [1]

Thinking about this afterwards brought me back to a series of conversations at Persuasive this past summer. A prominent persuasive technology researcher said something along the lines of “I’m really only focusing on people who already want to change their behavior.” This caused a lot of discussion, with the major themes being: Is this a cop-out? Shouldn’t we be worried about the people who aren’t trying? Is this just a neat way of skirting the ethical issues of persuasive (read: “manipulative”) technology?

I’m starting to think that there is an important distinction that may help address these questions: between technology that pushes people to do something without them knowing it and technology that supports people in achieving a behavior change they desire. The first category might be persuasive technology, and for now, I’ll call the second category mindful technology.

Persuasive Technology

I’ll call systems that push people who interact with them to behave in certain ways, without those people choosing the behavior change as an explicit goal, Persuasive Technology. This is a big category, and I believe that most systems are persuasive systems in that their design and defaults will favor certain behaviors over others (this is a Nudge-inspired argument: whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive).

Mindful Technology

For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Last.fm and Dopplr in this category, as well as a lot of tools that might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology steer users toward a goal that the designers have in mind, designers of mindful technology give users the ability to better know their own behavior, supporting reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.

Others working in the broad persuasive tech space have also been struggling with the issue of persuasion versus support for behaviors an individual chooses, and I’m far from the first to start thinking of this work as being more about mindfulness. Mindfulness is, however, a somewhat loaded term with its own meaning, and that may or may not be helpful. If I were to go with the tradition of “support systems” naming, I might call applications in this category “reflection support systems,” “goal support systems,” or “self-regulation support systems.”

Where I try to do my work

I don’t quite think that this is the right distinction yet, but it’s a start, and I think these are two different types of problems (that may happen to share many characteristics) with different sets of ethical considerations.

Even though my thinking is still a bit rough, I’m finding this idea useful in thinking through some of the current projects in our lab. For example, among the team members on AffectCheck, a tool to help people see the emotional content of their tweets, we’ve been having a healthy debate about how prescriptive the system should be. Some team members prefer something more prescriptive – guiding people to tweet more positively, for example, or to tweet in ways that are likely to increase their follower and reply counts – while I lean toward something more reflective – some information about the tweet currently being authored, how the user’s tweets have changed over time, and how they stack up against the tweets of the user’s followers or the rest of Twitter. While even comparisons with friends or others offer evidence of a norm and can be incredibly persuasive, the latter design still seems to be more about mindfulness than about persuasion.
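To make that contrast concrete, here is a minimal sketch of the two flavors of feedback. This is hypothetical illustration code, not AffectCheck itself; the function names are mine, and the sentiment score is assumed to come from whatever classifier the tool uses.

    # Hypothetical sketch, not AffectCheck's actual code. Assumes a sentiment
    # score in [-1, 1] has already been computed for each tweet by some classifier.

    def reflective_feedback(draft_score, recent_scores):
        """Mindful/reflective style: describe the draft relative to the user's own history."""
        avg = sum(recent_scores) / len(recent_scores)
        direction = "more positive" if draft_score > avg else "more negative"
        return (f"This draft reads {direction} than your recent average "
                f"({draft_score:+.2f} vs. {avg:+.2f}).")

    def prescriptive_feedback(draft_score, target=0.3):
        """Persuasive/prescriptive style: push the user toward a designer-chosen target."""
        if draft_score < target:
            return "Consider rephrasing more positively; upbeat tweets tend to get more replies."
        return "Nice and positive. Post it!"

    # The reflective version only holds up a mirror; the prescriptive version
    # embeds the designers' idea of what a "good" tweet looks like.
    print(reflective_feedback(-0.2, [0.1, 0.3, 0.0]))
    print(prescriptive_feedback(-0.2))
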

This is also more of a spectrum than a dichotomy, and, as I said above, all systems, by nature of being designed, constrained environments, will have persuasive elements. (Sorry, there’s no way of dodging the related ethical issues!) For example, users of Steps, our Facebook application to promote walking (and other activity that registers on a pedometer), have opted in to the app to maintain or increase their current activity level. They can set their own daily goals, but the app’s goal recommender will push them toward the fairly widely accepted recommendation of 10,000 steps per day. Other tools such as Adidas’s miCoach or Nike+ have both tracking and coaching features. Even when people are opting into specific goals, the limited menu of available coaching programs is itself a bit persuasive, as it constrains people’s choices.
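As a toy illustration of how even a gentle default does some persuading, here is a sketch of a goal recommender along the lines described above. This is hypothetical code, not the actual Steps implementation; the function name and structure are mine.

    # Hypothetical sketch, not the actual Steps code: a goal recommender whose
    # default embodies the 10,000-steps-per-day recommendation while still
    # deferring to a goal the user has chosen for themselves.
    DEFAULT_DAILY_STEPS = 10000

    def recommend_daily_goal(user_goal=None):
        """Return the user's own goal if they set one; otherwise the 10,000-step default."""
        return user_goal if user_goal is not None else DEFAULT_DAILY_STEPS

    # A user who never touches the setting is steered toward 10,000 steps a day;
    # a user who sets their own goal keeps it.
    print(recommend_daily_goal())      # 10000
    print(recommend_daily_goal(6500))  # 6500
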

Overall, my preference when designing is to focus on helping people reflect on their behavior, set their own goals, and track progress toward them, rather than to nudge people toward goals that I have in mind. This is partly because I’m a data junkie, and I love systems that help me learn more about what my behavior is without telling me what it should be. It is also partly because I don’t trust myself to persuade people toward the right goal at all times. Systems have a long history of handling exceptions quite poorly. I don’t want to build the system that makes someone feel bad or publicly shames them for using hotter water or a second rinse after a kid throws up in bed, or that takes someone to task for driving more after an injury.

I also often eschew gamification (for many reasons), and to the extent that my apps show rankings or leaderboards, I often like to leave it to the viewer to decide whether it is good to be at the top of the leaderboard or the bottom. To see how too much gamification can interfere with people working toward their own goals, consider the leaderboards on TripIt and similar sites. One person may want to have the fewest trips or miles, because they are trying to reduce their environmental impact or because they are trying to spend more time at home with family and friends, while another may be trying to maximize their trips. Designs that simply reveal data can support both goals, while designs that use terms like “winning” or that award trophies or badges to the person with the most trips start to shout: this is what you should do.

Thoughts?

What do you think? Useful distinction? A cluttering of terms? Have I missed an existing, better framework for thinking about this?


[1] Some of the discomfort was related to some of the projects’ use of punishment (a “worst wasters” leaderboard or similar). This would be a good time to repeat Sunny Consolvo’s guideline that feedback from persuasive technology should range from neutral to positive (Consolvo 2009), especially, in my opinion, in discretionary use situations – because otherwise people will probably just opt out.

@display

For those interested in the software that drives the SIDisplay, SI master’s student Morgan Keys has been working to make a generalized and improved version available. You can find it, under the name “@display,” at this GitHub repository.

SIDisplay is a Twitter-based public display described in a CSCW paper with Paul Resnick and Emily Rosengren. We built it for the School of Information community, where it replaced a number of previous displays, including a Thank You Board (which we compare it to in the paper), a photo collage (based on the Context, Content & Community Collage), and a version of the Plasma Poster Network. Unlike many other Twitter-based displays, SIDisplay and @display do not follow a hashtag, but instead follow @-replies to the display’s Twitter account. They also include private tweets, so long as the Twitter user has given the display’s Twitter account permission to follow them.
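For those curious about the selection rule, here is a rough sketch of the logic as described above. This is hypothetical illustration code, not the actual @display source; the handle and field names are made up, and the tweets are assumed to have already been fetched from the Twitter API.

    # Hypothetical sketch of the tweet-selection rule; not the actual @display source.
    # Assumes tweets were already fetched via the Twitter API and that we know which
    # accounts the display's account follows.

    DISPLAY_HANDLE = "@sidisplay"  # made-up handle for illustration

    def should_show(tweet, display_follows):
        """Show a tweet only if it @-mentions the display's account, and show
        protected tweets only when their author lets the display's account follow them."""
        if DISPLAY_HANDLE not in tweet["mentions"]:
            return False  # the display follows @-replies, not a hashtag
        if tweet["author_protected"]:
            return tweet["author"] in display_follows
        return True

    # Example: a public @-reply is shown; a protected one only if the author
    # has accepted the display account's follow request.
    print(should_show({"mentions": ["@sidisplay"], "author_protected": False, "author": "alice"}, set()))  # True
    print(should_show({"mentions": ["@sidisplay"], "author_protected": True, "author": "bob"}, {"bob"}))   # True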