
Reflection and Persuasion in Personal Informatics

After a variety of conversations, I’ve expanded on my earlier thoughts on reflection, mindfulness, persuasion, and coercion in systems for this year’s Personal Informatics in Practice workshop at CHI. The expanded article and a blog post introducing it are over on the Quantified Self blog.

CHI Highlights: General

Okay, the third and final (?) set of CHI Highlights, consisting of brief notes on some other papers and panels that caught my attention. My notes here will, overall, be briefer than in the other posts.

More papers

  • We’re in It Together: Interpersonal Management of Disclosure in Social Network Services
    Airi Lampinen, Vilma Lehtinen, Asko Lehmuskallio, Sakari Tamminen

    I possibly should have linked to this one in my post about social networks for health, as my work in that area is why this paper caught my attention. Through a qualitative study, the authors explore how people manage their privacy and disclosures on social network sites.

    People tend to apply their own expectations about what they’d like posted about themselves to what they post about others, but sometimes negotiate and ask others to take posts down, and this can lead to new implicit or explicit rules about what gets posted in the future. They also sometimes stay out of conversations when they know that they are not as close to the original poster as the other participants (even if they have the same “status” on the social network site). Even offline behavior is affected: people might make sure that embarrassing photos can’t be taken so that they cannot be posted.

    To regulate boundaries, some people use different services targeted at different audiences. While many participants believed that it would be useful to create friend lists within a service and to target updates to those lists, many had not done so (quite similar to my findings with health sharing: people say they want the feature and that it is useful, but just aren’t using it. I’d love to see Facebook data on what percent of people are actively using lists.) People also commonly worded posts so that those “in the know” would get more information than others, even if all saw the same post.

    Once aversive content had been posted, however, it was sometimes better for participants to try to repurpose it to be funny or a joke, rather than to delete it. Deletions say “this was important,” while adding smilies can decrease its impact and say “oh, that wasn’t serious.”

  • Utility of human-computer interactions: toward a science of preference measurement
    Michael Toomim, Travis Kriplean, Claus Pörtner, James Landay

    Many short-duration user studies rely on self-report data of satisfaction with an interface or tool, even though we know that self-report data is often quite problematic. To measure the relative utility of design alternatives, the authors place them on Mechanical Turk and measure how many tasks people complete on each alternative under differing pay conditions. A design that gets more work for the same or less pay implies more utility. Because of things like the small pay effect and its ability to crowd out intrinsic rewards, I’m curious about whether this approach will work better for systems meant for work rather than for fun, as well as just how far it can go – but I really do like the direction of measuring what people actually do rather than just what they say.

Perhaps it is my love of maps or my senior capstone project at Olin, but I have a soft spot for location-based services work, even if I’m not currently doing it.

  • Opportunities exist: continuous discovery of places to perform activities
    David Dearman, Timothy Sohn, Khai N. Truong

  • In the best families: tracking and relationships
    Clara Mancini, Yvonne Rogers, Keerthi Thomas, Adam N. Joinson, Blaine A. Price, Arosha K. Bandara, Lukasz Jedrzejczyk, Bashar Nuseibeh

    I’m wondering more and more if there’s an appropriate social distance for location trackers: with people who are already very close, it can feel smothering, while with people who are too distant, it can feel creepy. Thinking about my preferences for Latitude, I wouldn’t want my family or socially distant acquaintances on there, but I do want friends who I don’t see often enough on there.

The session on Affective Computing had lots of good stuff.

  • Identifying emotional states using keystroke dynamics
    Clayton Epp, Michael Lippold, Regan L. Mandryk

    Fairly reliable classifier for emotions, including confidence, hesitance, nervousness, relaxation, sadness, and tiredness, based on analysis of typing rhythms on a standard keyboard. One thing I like about this paper is it opens up a variety of systems ideas, ranging from fairly simple to quite sophisticated. I’m also curious if this can be extended to touch screens, which seems like a much more difficult environment.

  • Affective computational priming and creativity
    Sheena Lewis, Mira Dontcheva, Elizabeth Gerber

    In a Mechanical Turk-based experiment, showing people a picture that induced positive affect increased the quality of ideas generated — measured by originality and creativity — in a creativity task. Negative priming reduced idea quality compared to positive or neutral priming. I’m very curious to see if this result is sustainable over time, with the same image or with different images, or in group settings (particularly considering the next paper in this list!).

  • Upset now?: emotion contagion in distributed groups
    Jamie Guillory, Jason Spiegel, Molly Drislane, Benjamin Weiss, Walter Donner, Jeffrey Hancock

    An experiment on emotional contagion. Introducing negative emotion led others to be more negative, but it also improved the group’s performance.

  • Emotion regulation for frustrating driving contexts
    Helen Harris, Clifford Nass
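
The keystroke dynamics paper doesn’t reproduce its feature set here, but classifiers like it typically build on per-key timing features such as dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). A toy sketch of that idea — my own construction for illustration, not code from the paper:

```python
def keystroke_features(events):
    """events: list of (key, press_time_ms, release_time_ms) tuples."""
    # Dwell time: how long each key was held down.
    dwells = [release - press for _, press, release in events]
    # Flight time: next key's press minus this key's release.
    flights = [
        events[i + 1][1] - events[i][2]
        for i in range(len(events) - 1)
    ]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"mean_dwell": mean(dwells), "mean_flight": mean(flights)}

# Three keystrokes with hypothetical timestamps in milliseconds.
features = keystroke_features([
    ("h", 0, 90), ("i", 150, 230), ("!", 400, 470),
])
# features["mean_dwell"] is 80.0; features["mean_flight"] is 115.0
```

A real classifier would compute many more such statistics (per-digraph timings, variances, and so on) and feed them to a standard learner, but the raw material is this simple.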

We’ve seen a lot of research on priming in interfaces lately, most often in lab or mturk based studies. I think it’ll start to get very interesting when we start testing to see if that also works in long-term field deployments or when people are using a system at their own discretion for their own needs, something that has been harder to do in many past studies of priming.

I didn’t make it to these next few presentations, but having previously seen Steven talk about this work, it’s definitely worth a pointer. The titles nicely capture the main points.

The same is true for Moira’s work:

Navel-Gazy Stuff

  • RepliCHI: issues of replication in the HCI community
    Max L. Wilson, Wendy Mackay, Dan Russell, Ed Chi, Michael Bernstein, Harold Thimbleby

    Discussion about the balance between reproducing other studies in different contexts or prior to making “incremental” advances vs. a focus on novelty and innovation. Nice summary here. I think the panelists and audience were generally leaning toward increasing the use of replication + extension in the training of HCI PhD students. I think this would be beneficial, in that it can encourage students to learn how to write papers that are reproducible, introduce basic research methods by doing, and may often lead to some surprising and interesting results. There was some discussion of whether there should be a repli.chi track alongside an alt.chi track. I’m a lot less enthusiastic about that – if there’s a research contribution, the main technical program should probably be sufficient, and if not, why is it there? I do understand that there’s an argument to be made that it’s worth doing as an incentive, but I don’t think that is a sufficient reason. Less addressed by the panel was that a lot of HCI research isn’t of a style that lends itself to replication, though Dan Russell pointed out that some studies must also be taken on faith, since we don’t all have our own Google or LHC.

  • The trouble with social computing systems research
    Michael Bernstein, Mark S. Ackerman, Ed Chi, Robert C. Miller

    An alt.chi entry into the debate over perceived issues with the underrepresentation of systems work in CHI submissions and with how CHI reviewers treat systems work. As someone who doesn’t do “real” systems work — the systems I build are usually intended as research probes rather than contributions in themselves — I’ve been reluctant to say much on this issue for fear that I would talk more than I know. That said, I can’t completely resist. While I agree that there are often issues with how systems work is presented and reviewed, I’m not completely sympathetic to the argument in this paper.

    Part of my skepticism is that I’ve yet to be shown an example of a good systems paper that was rejected. This is not to say that these do not exist; the authors of the paper are speaking from experience and do great work. The majority of systems rejections I have seen have come through reviewing, and the decisions have mostly seemed reasonable. Most common are papers that make a modest (or even very nice) systems contribution, tack on a poorly executed evaluation, and then make claims based on the evaluation that it just doesn’t support. I believe at least one rejection would have been accepted had the authors just left out the evaluation altogether, and I think a bad evaluation and unsupported claims should doom a paper unless they are excised (which may be possible with the new CSCW review process).

    I was a little bit frustrated because Michael’s presentation seemed to gloss over the authors’ responsibilities to explain the merits of their work to the broader audience of the conference and to discuss biases introduced by snowball samples. The last point is better addressed in the paper, but I still feel that the paper still deemphasizes authors’ responsibility in favor of reviewers’ responsibility.

    The format for this presentation was also too constrained to have a particularly good discussion (something that was unfortunately true in most sessions with the new CHI time limits). The longer discussion about systems research in the CSCW and CHI communities that followed one of the CSCW Horizons sessions this year was more constructive and more balanced, perhaps because the discussion was anchored at least partially on the systems that had just been presented.

  • When the Implication Is Not to Design (Technology)
    Eric P.S. Baumer, M. Six Silberman

    A note on how HCI can improve the way we conduct our work, particularly the view that there are problems and technical solutions to solve them. The authors argue that it may be better to think of these as conditions and interventions. Some of the arguments they make for practice: value the implication not to design technology (i.e., in some situations computing technology may be inappropriate), explicate unpursued avenues (explain alternative interventions and why they were not pursued), consider technological extravention (are there times when technology should be removed?), report more than negative results (why and in what context did the system fail, and what does that failure mean?), and don’t stop building – just be more reflective about why that building is occurring.

CHI Highlights: Diversity, Discussion, and Information Consumption

For my second post on highlights from CHI this year, I’m focusing on papers related to opinion diversity and discourse quality.

  • Normative influences on thoughtful online participation
    Abhay Sukumaran, Stephanie Vezich, Melanie McHugh, Clifford Nass

    Two lab experiments on whether it is possible to foster more thoughtful commenting and participation in online discussion forums by priming thoughtful norms. The first tested the effects of the behavior of other participants in the forum. The dependent variables were comment length, time taken to write the comments, and number of issue-relevant thoughts. Not surprisingly, being exposed to other thoughtful comments led people to make more thoughtful comments themselves. One audience member asked whether this would break down with just one negative or less thoughtful comment (much as a single piece of litter seems to break down anti-littering norms).

    The second study tested effects of visual, textual, and interaction design features on the same dependent variables. The manipulations included a more subdued vs. more playful visual design, differing CAPTCHAs (words positively correlated with thoughtfulness in the thoughtful condition and words negatively correlated with thoughtfulness in the unthoughtful condition), and different labels for the comment box. The design intended to provoke thoughtfulness did correspond to more thoughtful comments, suggesting that it is possible, at least in the lab, to design sites to prompt more thoughtful comments. For this second study in particular, I’m curious if these measures only work in the short term or if they would work in the long term and about the effects of each of the specific design features.

  • I will do it, but I don’t like it: User Reactions to Preference-Inconsistent Recommendations
    Christina Schwind, Jürgen Buder, Friedrich W. Hesse

    This paper actually appeared in a health session, but I found that it spoke much more to the issues my colleagues and I are confronting in the BALANCE project. The authors begin with the observation that most recommender systems are intended to produce content that their users will like, but that this can be problematic. In the health and wellness domain, people sometimes need to hear information that might disagree with their perspective or currently held beliefs, and so it can be valuable to recommend disagreeable information. In this Mechanical Turk-based study, subjects were equally likely to follow preference-consistent and preference-inconsistent recommendations. Following preference-inconsistent recommendations did reduce confirmation bias, but people were happier to see preference-consistent recommendations. This raises an important question: subjects may have followed the recommendation the first time, but now that they know this system gives recommendations they might not like, will they follow the recommendations less often in the future, or switch to another system altogether?

  • ConsiderIt: improving structured public deliberation
    Travis Kriplean, Jonathan T. Morgan, Deen Freelon, Alan Borning, Lance Bennett

    I really like the work Travis is doing with Reflect and ConsiderIt (which powers the Living Voters Guide) to promote more thoughtful listening and discussion online, so I was happy to see this WiP and am looking forward to seeing more!

  • Computing political preference among twitter followers
    Jennifer Golbeck, Derek L. Hansen

    This work uses follow data for Congresspeople and others on Twitter to assess the “political preference” (see comment below) of Twitter users and the media sources they follow. This approach and the information it yields has some similarities to Daniel Zhou’s upcoming ICWSM paper and, to a lesser extent, Souneil Park’s paper from CSCW this year.

    One critique: despite ample selective exposure research, I’m not quite comfortable with this paper’s assumption that political preference maps so neatly onto political information preference, partly because I think this may be an interesting research question: do people who lean slightly one way or the other prefer information sources that are more biased than they are? (or something along those lines)

In addition to these papers, Ethan Zuckerman’s closing plenary, Desperately Seeking Serendipity, touched on the topics of serendipity and homophily extensively. Zuckerman starts by suggesting the reason that people like to move to cities – even at times when cities were really quite horrible places – is, yes, for more opportunities and choices, but also “to encounter the people you couldn’t encounter in your rural, disconnected lifestyle… to become a cosmopolitan, a citizen of the world.” He goes on, “if you wanted to encounter a set of ideas that were radically different than your own, your best bet in an era before telecommunications was to move to a city.” There are reasons to question this idea of cities as a “serendipity engine,” though: even people in urban environments have extremely predictable routines and just don’t go all that many places. Encounters with diverse others may not be as common as idealized.

He then shifts gears to discuss what people encounter online. He walks through the argument that the idea of a Freshman Fishwrap or Daily Me is possibly quite harmful as it allows people to filter to the news that they want. Adding in social filters or getting news through our friends can make this worse. While Sunstein is concerned about this leading to polarization within the US, Zuckerman is more concerned that it leads people to see only news about where they are and less news about other places or from outside perspectives. This trend might lead people to miss important stories.

I tend to agree with the argument that surfacing coincidences or manufacturing serendipity is an incredibly powerful capability of current technology. Many of the approaches that the design community has taken to achieve this are probably not the kind of serendipity Zuckerman is looking for. I love Dopplr’s notifications that friends are also in town, but the time I spend with them or being shown around by them is time in which I’m less likely to have a chance encounter with someone local or a traveler from elsewhere. The ability to filter reviews by friends may make for more accurate recommendations, but I’m also less likely to end up somewhere a bit different. Even serendipity has been repurposed to support homophily.

Now, it might be that the definition of serendipity that some of the design community has been using isn’t quite right. As Zuckerman notes, serendipity usually means “happy accident” now – it’s become a synonym for coincidence – and the sagacity part of the definition has been lost. Zuckerman returns to the city metaphor, arguing for a pedestrian-level view. Rather than building tools for only efficiency and convenience, build tools and spaces that maximize the chances to interact and mix. Don’t make filters hidden. Make favorites of other communities visible, not just the user’s friends. Zuckerman elegantly compares this last feature to the traces in a city: one does not see traces left just by one’s friends, no, but traces left by other users of the space, and this gives people a chance to wander from the path they were already on. One might also overlay a game on a city, to encourage people to explore more deeply or venture to new areas.

While I like these ideas, I’m a little concerned that they will lead to somewhat superficial exposure to the other. People see different others on YouTube, Flickr, or in the news, and yes, some stop and reflect, others leave comments that make fun of them, and many others just move on to the next one. A location-based game might get people to go to new places, but are they thinking about what it is like to be there, or are they thinking about the points they are earning? This superficiality is something I worry about in my own work to expose people to more diverse political news – they may see it, but are they really considering the perspective or gawking at the other side’s insanity? Serendipity may be necessary, but I question whether it is sufficient. We also need empathy: technology that can help people listen and see others’ perspectives and situations. Maybe empathy is part of the lost idea of sagacity that Zuckerman discusses — a sort of emotional sagacity — but whatever it is, I need to better know how to design for it.

For SI and UM students who really engage with this discussion and the interweaving of cities, technology, and flows, I strongly, strongly recommend ARCH 531 (Networked Cities).

Mindful Technology vs. Persuasive Technology

On Monday, I had the pleasure of visiting Malcolm McCullough’s Architecture 531 – Networked Cities for final presentations. Many of the students in the class are from SI, where we talk a lot about incentive-centered design, choice architecture, and persuasive technology, which seems to have resulted in many of the projects having a persuasive technology angle. As projects were pitched as “extracting behavior” or “compelling” people to do things, it was interesting to watch the discomfort in the reactions from students and faculty who don’t frame problems in this way.1

Thinking about this afterwards brought me back to a series of conversations at Persuasive this past summer. A prominent persuasive technology researcher said something along the lines of “I’m really only focusing on people who already want to change their behavior.” This caused a lot of discussion, with major themes being: Is this a cop-out, shouldn’t we be worried about the people who aren’t trying? Is this just a neat way of skirting the ethical issues of persuasive (read: “manipulative”) technology?

I’m starting to think that there may be an important distinction that may help address these questions, one between technology that pushes people to do something without them knowing it and technology that supports people in achieving a behavior change they desire. The first category might be persuasive technology, and for now, I’ll call the second category mindful technology.

Persuasive Technology

I’ll call systems that push people who interact with them to behave in certain ways, without those people choosing the behavior change as an explicit goal, Persuasive Technology. This is a big category, and I believe that most systems are persuasive systems in that their design and defaults will favor certain behaviors over others (this is a Nudge inspired argument: whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive).

Mindful Technology

For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Dopplr in this category, as well as a lot of tools that might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology are steering users toward a goal that the designers have in mind, the designers of mindful technology give users the ability to better know their own behavior, supporting reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.

Others working in the broad persuasive tech space have also been struggling with the issue of persuasion versus support for behaviors an individual chooses, and I’m far from the first to start thinking of this work as being more about mindfulness. Mindfulness is, however, a somewhat loaded term with its own meaning, and that may or may not be helpful. If I were to go with the tradition of “support systems” naming, I might call applications in this category “reflection support systems,” “goal support systems,” or “self-regulation support systems.”

Where I try to do my work

I don’t quite think that this is the right distinction yet, but it’s a start, and I think these are two different types of problems (that may happen to share many characteristics) with different sets of ethical considerations.

Even though my thinking is still a bit rough, I’m finding this idea useful in thinking through some of the current projects in our lab. For example, among the team members on AffectCheck, a tool to help people see the emotional content of their tweets, we’ve been having a healthy debate about how prescriptive the system should be. Some team members prefer something more prescriptive – guiding people to tweet more positively, for example, or tweeting in ways that are likely to increase their follower and reply counts – while I lean toward something more reflective – some information about the tweet currently being authored, how the user’s tweets have changed over time, and how they stack up against the user’s followers’ tweets or the rest of Twitter. While even comparisons with friends or others offer evidence of a norm and can be incredibly persuasive, the latter design still seems to be more about mindfulness than about persuasion.

This is also more of a spectrum than a dichotomy, and, as I said above, all systems, by nature of being a designed, constrained environment, will have persuasive elements. (Sorry, there’s no way of dodging the related ethical issues!) For example, users of Steps, our Facebook application to promote walking (and other activity that registers on a pedometer), have opted in to the app to maintain or increase their current activity level. They can set their own daily goals, but the app’s goal recommender will push them to the fairly widely accepted recommendation of 10,000 steps per day. Other tools such as Adidas’s MiCoach or Nike+ have both tracking and coaching features. Even if people are opting into specific goals, the limited menu of available coaching programs is itself a bit persuasive, as it constrains people’s choices.

Overall, my preference when designing is to focus on helping people reflect on their behavior, set their own goals, and track progress toward them, rather than to nudge people toward goals that I have in mind. This is partly because I’m a data junkie, and I love systems that help me learn more about what my behavior is without telling me what it should be. It is also partly because I don’t trust myself to persuade people toward the right goal at all times. Systems have a long history of handling exceptions quite poorly. I don’t want to build the system that makes someone feel bad or publicly shames them for using hotter water or a second rinse after a kid throws up in bed, or that takes someone to task for driving more after an injury.

I also often eschew gamification (for many reasons), and to the extent that my apps show rankings or leaderboards, I often like to leave it to the viewer to decide whether it is good to be at the top of the leaderboard or the bottom. To see how too much gamification can interfere with people working toward their own goals, consider the leaderboards on TripIt and similar sites. One person may want to have the fewest trips or miles, because they are trying to reduce their environmental impact or because they are trying to spend more time at home with family and friends, while another may be trying to maximize their trips. Designs that simply reveal data can support both goals, while designs that use terms like “winning” or that award trophies or badges to the person with the most trips start to shout: this is what you should do.


What do you think? Useful distinction? Cluttering of terms? Have I missed an existing, better framework for thinking about this?

1Some of the discomfort was related to some of the projects’ use of punishment (a “worst wasters” leaderboard or similar). This would be a good time to repeat Sunny Consolvo’s guideline that feedback from persuasive technology should range from neutral to positive (Consolvo 2009), especially, in my opinion, in discretionary-use situations – because otherwise people will probably just opt out.

Word clouds to support reflection

When preparing our Persuasive 2010 paper on Three Good Things, we ended up cutting a section on using word clouds to support reflection. The section wasn’t central to this paper, but it highlights one of the design challenges we encountered, and so I want to share it and take advantage of any feedback.

Our Three Good Things application (3GT) is based on a positive psychology exercise that encourages people to record three good things that happen to them, as well as the reasons why they happened. By focusing on the positive, rather than dwelling on the negative, it is believed that people can train themselves to be happier.

Example 3GT tag clouds

When moving the application onto a computer (and out of written diaries), I wanted to find a way to leverage a computer’s ability to analyze a user’s previous good things and reasons to help them identify trends. If people are more aware of what makes them happy, or why these things happen, they might make decisions that cause these good things to happen more often. In 3GT, I made a simple attempt to support this trend detection by generating word clouds from a participant’s good things and reasons, using simple stop-wording and lowercasing, with no stemming.
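The pipeline really was about that simple. A minimal sketch of that kind of processing — the stop list below is illustrative, not the actual 3GT list or code:

```python
from collections import Counter
import re

# Illustrative stop list; a real one would be much longer.
STOPWORDS = {"a", "an", "and", "the", "to", "of", "in", "on", "my", "all", "with"}

def cloud_weights(entries):
    """Lowercase, tokenize, drop stopwords, no stemming; counts become font sizes."""
    words = []
    for text in entries:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOPWORDS:
                words.append(token)
    return Counter(words)

weights = cloud_weights([
    "Had a great morning walk with the cat",
    "The cat slept on my lap all morning",
])
# "cat" and "morning" each appear twice, so they would render largest.
```

Note what the lack of stemming means in practice: “enjoy” and “enjoying” count separately, which is exactly the kind of dilution one participant complained about below.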

Limited success for Word Clouds

When we interviewed 3GT users, we expected to find that the participants believed the word clouds helped them notice and reinforce trends in their good things. Results here were mixed. Only one participant we interviewed described how the combination of listing reasons and seeing them summarized in the word clouds had helped her own reflection:

“You’ve got tags that show up, like tag clouds on the side, and it kind of pulls out the themes… as I was putting the reasoning behind why certain [good] things would happen, I started to see another aspect of a particular individual in my life. And so I found it very fascinating that I had pulled out that information… it’s made me more receptive to that person, and to that relationship.”

A second participant liked the word cloud but was not completely convinced of its utility:

I like having the word cloud. I noticed that the biggest thing in my reason words is “cat”. (Laughs). And the top good words isn’t quite as helpful, because I’ve written a lot of things like ‘great’ and ‘enjoying’ – evidently I’ve written these things a lot of times. So it’s not quite as helpful. But it’s got ‘cat’ pretty good there, and ‘morning’, and I’m not sure if that’s because I’ve had a lot of good mornings, or I tend to write about things in the morning.

Another participant who had examined the word cloud noticed that “people” was the largest tag in his good things cloud and “liked that… [his] happiness comes from interaction with people,” but that he did not think that this realization had any influence over his behavior outside of the application.

One participant reported looking at the word clouds shortly after beginning to post. The words selected did not feel representative of the good things or reasons he had posted, and feeling that they were “useless,” he stopped looking at them. He did say that he could imagine it “maybe” being useful as the words evolved over time, and later in the interview revisited one of the items in the word cloud: “you know the fact that it says ‘I’m’ as the biggest word is probably good – it shows that I’m giving myself some credit for these good things happening, and that’s good,” but this level of reflection was prompted by the interview, not day-to-day use of 3GT.

Another participant did not understand that word size in the word cloud was determined by frequency of usage and was even more negative:

It was like you had taken random words that I’ve typed, and some of them have gotten bigger. But I couldn’t see any reason why some of them would be bigger than the other ones. I couldn’t see a pattern to it. It was sort of weird… Some of the words are odd words… And then under the Reason words, it’s like they’ve put together some random words that make no sense.

Word clouds did sometimes help in ways that we had not anticipated. Though participants did not find that they helped them identify trends that would influence future decisions, looking at the word cloud from her good things helped at least one participant’s mood.

I remember ‘dissertation’ was a big thing, because for a while I was really gunning on my dissertation, and it was going so well, the proposal was going well with a first draft and everything. So that was really cool, to be able to document that and see… I can see how that would be really useful for when I get into a funk about not being able to be as productive as I was during that time… I like the ‘good’ words. They make me feel, I feel very good about them.

More work?

The importance of supporting reflection was discussed in the original work on Three Good Things, as well as in other work showing that systems that support effective self-reflection can improve users' ability to adopt positive behaviors and increase their feelings of self-efficacy. While some users found the word clouds useful for reflection, a larger portion did not notice them or found them unhelpful. More explanation should be provided about how the word clouds are generated, to avoid confusion, and they should perhaps not be shown until a participant has entered a sufficient amount of data. To help participants better notice trends, we could improve stop-wording, detect n-grams (e.g., "didn't smoke" versus "smoke"), and group similar terms (e.g., combining "bread" and "pork" into "food"). Alternatively, a different kind of reflection exercise might be more effective, such as asking participants to review their three good things posts and write a longer summary of the trends they have noticed.
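To give a sense of what the stop-wording and n-gram improvements might look like, here is a minimal sketch in Python. The word lists and the negation-joining rule are my own illustrative assumptions, not part of the deployed 3GT code:

```python
from collections import Counter

# Hypothetical preprocessing for the 3GT word clouds: custom stop-wording,
# plus a simple pass that keeps negated phrases like "didn't smoke" together
# as a single token so they are not conflated with "smoke".
STOPWORDS = {"i'm", "i", "the", "a", "and", "to", "of", "was", "it"}
NEGATIONS = {"didn't", "not", "no", "never"}

def cloud_counts(posts):
    counts = Counter()
    for post in posts:
        words = post.lower().split()
        i = 0
        while i < len(words):
            word = words[i]
            if word in NEGATIONS and i + 1 < len(words):
                # Join a negation with the word that follows it.
                counts[word + " " + words[i + 1]] += 1
                i += 2
            elif word not in STOPWORDS:
                counts[word] += 1
                i += 1
            else:
                i += 1
    return counts
```

With posts like "I didn't smoke today", this counts "didn't smoke" as one phrase rather than crediting "smoke" on its own; grouping semantically similar terms (e.g., "bread" and "pork" into "food") would additionally require a lexical resource such as WordNet.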

some thoughts on Facebook’s recent changes, from the perspective of an application designer

There’s a lot to like about the recent changes to Facebook, but, as an application developer, many of the changes are a mixed bag. Changes to navigation and to interaction points between Facebook and applications are problematic, while new application privacy features are a good start but seem incomplete.

Navigation to Apps
Formerly, the application dock made it easy to access an application from anywhere in Facebook: one click to get to a bookmarked application, and two clicks (without waiting for page loads) to get to any other. The new homepage removes it.

Now, if an application doesn't have a tab on your profile page, the only way to reach it is from the home page. From my profile or someone else's, that means: click to the Facebook home page, wait for the page to load, then click the app icon (or, if the application is not one of your top three bookmarks, click "more" and then the app icon). Yes, this is only one more click, but it requires a full page load, and it's worse for non-bookmarked apps: one click to the home page, wait for it to load, one click to the application dashboard, wait for it to load, one click for "see all of your applications," wait for that to load, and finally a click on the application.

One possible remedy might be to add an “applications” drop-down next to the new notifications, requests, and messages icons.

At the end of the month, Facebook will turn off the ability of applications to send notifications. This is a method I’ve been using to send reminders in Three Good Things, for both automatically generated reminders and user-to-user reminders.

3GT Notifications

3GT Notifications. Left: system generated reminder. Right: user-to-user reminder.

I like that the notifications are less invasive than email reminders. Some 3GT users appreciated their subtlety, though they may have been a little too subtle, at least when they appeared at the bottom of the screen: many of the 3GT users we interviewed never noticed the notifications they received. More importantly, notifications arrived at the right time. Rather than sending someone a reminder to post — a reminder that might interrupt their other activity or would at least require them to visit the website — the notifications appeared when a 3GT participant was already logged into Facebook, when it was likely convenient for them to post a "good thing" in our application. B.J. Fogg, a champion of persuasive technology, calls this right-time, right-place delivery kairos.

I understand that notifications have become a lot more intrusive since the addition of push notifications to the iPhone app, and that some app developers have used them more than some Facebook users would prefer. Facebook has also added other integration points. On balance, though, I think notifications are an unfortunate integration point to lose.

Application Privacy
Along with some others building health and wellness applications for the Facebook platform, I’ve felt fairly strongly that Facebook needs to give users and developers enhanced privacy controls for applications. At a minimum, this should include the ability to hide one’s use of an application from friends (i.e., not appearing under “friends using this application” in the application’s profile page).

With the recent updates Facebook’s designers and developers appear to have recognized some of these concerns. Application developers are able to set an application as “private,” causing one’s use to not appear in the new application dashboard. This is a good start, but it feels incomplete for a number of reasons. First, users, not developers, should have control of privacy. What’s to stop a developer from later reverting to a more public setting, instantly and completely changing what user activity is revealed? Currently, users do not have any way to remove this information once it appears, either.

Second, this level of privacy does not extend far enough. Friends who use private applications still appear on the application’s profile page under “friends using this application.” Furthermore, the model of application use and content being either private or public is insufficient. In health and wellness applications, for example, participants may benefit from sharing and interacting with other participants in the intervention as well as their friends or family members on Facebook, while also wanting to keep their activity private from coworkers.

This is something that Facebook has already discovered and addressed for newsfeed (now stream) content, but application content does not enjoy the same privacy controls. To share with only a subset of one's friends within an application, application developers must implement their own social graph features, and users must build a second network within the application. Enabling privacy controls for application content and use, similar to those for the newsfeed, could help people feel more comfortable using health and wellness applications on Facebook while creating more possibilities for designers. A user could allow an application to be aware of relationships in only one or more friend lists or networks, or could exclude some lists from the application while leaving their other connections visible. Assuming a user had created the necessary friend lists, or that their privacy preferences mapped to their networks, this would let someone filter out their coworkers or allow only close friends to see their participation.
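The list-scoped model described above can be sketched in a few lines. This is a hypothetical illustration of the proposed control, not an existing Facebook API; the function name and data shapes are mine:

```python
# Hypothetical model of list-scoped application privacy: the application
# only "sees" connections in friend lists the user has opted in, and never
# sees connections in excluded lists, so (for example) coworkers never
# appear in the app's view of the user's social graph.
def visible_connections(friends, friend_lists, allowed, excluded=()):
    if allowed:
        allowed_ids = set().union(*(friend_lists[name] for name in allowed))
    else:
        allowed_ids = set(friends)  # no allow-list: start from everyone
    if excluded:
        excluded_ids = set().union(*(friend_lists[name] for name in excluded))
    else:
        excluded_ids = set()
    return [f for f in friends if f in allowed_ids and f not in excluded_ids]
```

Either mode supports the scenarios in the text: an allow-list of "close friends" for a sensitive health application, or an exclude-list of "coworkers" while everyone else remains visible.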

Update: As of 17 February, Facebook has added a privacy argument to calls for publishing methods (e.g. Stream.publish) that gives applications much more control over shared content.

social sites repurposing contacts

A month or so ago, Cory Doctorow wrote a column about how your "creepy ex-co-workers will kill Facebook," and introduced what he calls "boyd's law":

Adding more users to a social network increases the probability that it will put you in an awkward social circumstance.

I think there’s an important corollary: adding more features and content types to a social network increases the probability that it will put you in an awkward social circumstance.

Recent concerns about Beacon are one example. Yes, the privacy issues of an opt-out tool that follows you from site to site recording your behavior are huge. But there's also the issue of having this content added to Facebook at all. Even among my close friends, I don't want a list of their recent purchases. It's not something we do in person, and it's not something I want to do online. A site can cause the same problem by adding content that I share with some people, but not necessarily my current friends. Facebook users presumably friend each other based on the norms for sharing the content that existed on Facebook at the time; adding more content, or just changing how Facebook shares the content already there, can cause problems. Some of the content Beacon so forcefully tried to share isn't much different than if LinkedIn suddenly started sharing relationship status: you don't want software deciding to repurpose one set of social ties into another. For now, Facebook is handling this challenge with extremely fine-grained privacy controls, but that's a lot of overhead.

The de-placing of facebook
When Facebook was smaller and the bounds were clearer, users had less need for privacy settings. Two years ago, I had a pretty clear distinction in my head. Facebook was for some social communication and sharing among my college friends and some friends from high school. It had a clear identity, and felt either like a place or very connected to my school as a place. I knew who I would "run into" on Facebook, and I knew that the content would be related to college students' self-expression, communication, and socialization. Within those bounds, it was possible to identify a fairly consistent set of behaviors and information that members were willing to share with each other. Not so anymore. As Facebook adds users and features, it undermines this sense of place. Anyone, including the creepy ex-coworker, might show up. With new features and new applications, I am also less able to anticipate Facebook's content.

I’m not necessarily criticizing Facebook’s decision to reduce their placiness. Its leadership has decided to trade some of the sense of place for growth, instead becoming an application platform and contact/identity management system. That’s their gamble to take, but I am critical that they seem to be moving in this direction without clearly thinking through some of the consequences for their members.

Other examples of repurposing contacts
Facebook isn’t the only company that has recently re-purposed existing social network information to share additional content. This December, if you used Google Reader and GTalk, Google decided to share all of your shared RSS feed items with all of your GTalk contacts. Your GTalk contact list was already being populated automatically from the people you email, so for many users, this exposed their shared items to many people they’d emailed only a few times. This decision seems to be based on the incredibly naïve assumption that if you share content with some people, you want to share it with everyone you email. One user reported that this “ruined Christmas.”

Unfortunately, as Google and Yahoo increasingly leverage our inboxes to compete with Facebook, we can probably look forward to more such missteps.

Pursuit of places
I do think it’s possible to grow while keeping a distinct sense of place. After purchasing Flickr and Upcoming, Yahoo! kept their contact lists separate and retained the identity of each property. Some would probably criticize Yahoo! for not integrating their brand, but I think that time will show they’ve made the right choice. It’s also true that managing the separate contact lists is very similar to the overhead of Facebook’s privacy settings, but there are some key differences: managing your Flickr contacts does not interfere with the sense of Flickr as a bounded place, and you can (at least currently) be reasonably comfortable that Flickr is not going to repurpose your Flickr contacts outside of the social norms for Flickr users.

This also makes me believe that social startups like Dopplr and others can succeed by creating a clear identity as a place. Even if Facebook offered better features (and perhaps more convenience) for sharing my travel status and tips with others, I’d still seek out Dopplr for its characteristics as a place — it’s a much more pleasant experience.

citizen-centered design and regulation in cabin design

This is a quick and very late heads up about Ken Erickson’s participation in a panel organized by Dori Tunstall at AAA this morning. The below description is cribbed from Dori’s blog:

Anthropologist Ken Erickson explores the world of FAA and Americans With Disabilities Act (ADA) regulations in the design of Boeing airplanes accessible to people with physical disabilities. He addresses how interdisciplinary teams handle the conflicts between the ethos of citizen-centered designing and formal government regulation.

Ken’s company, Pacific Ethnography, did some work with my group on universal cabin design.

On a mostly unrelated note, a profile of my workgroup appeared in this month’s Frontiers (pdf).

walkon – a networked cities project

For my final project in Network Cities (ARCH531), I worked with Garin Fons and Amy Grude to explore urban flows. We propose a system that enables sidewalks to respond to you and the people who came before you. As you walk through a city, the ground underfoot glows. Intense, extended glows show the direction people at your location most often took next, while weaker illuminations indicate less popular directions.

Specific numbers of people, dates, and times are never shown. These features would increase the cognitive load on pedestrians, while we intend this service – once people become accustomed to it – to blend into the background and become a moving, changing part of the cityscape. Our goal is not to guide people to a specific path, but to highlight flows at a pedestrian’s given location. In doing so, we restore the idea of “the beaten path” to urban landscapes – something that has largely been lost with permanent, paved pedestrian ways. It is up to you to decide whether to stick with the crowds or see what lies in less frequented areas.

Explore the WalkOn presentation (Flash). The other projects in the class are worth a look too.

just for fun: people markup

For one of our Networked Cities projects, we were asked to explore urban markup. While looking at existing projects, my teammate David Hutchful and I got the feeling that tagging spaces is a pretty crowded space. Tagging or otherwise marking people with the intent of learning more about them or feeling more connected to them appeared similarly crowded.

Inspired loosely by Steven Johnson’s work Everything Bad is Good for You, we began thinking about intermixing the ideas of place and people markup with play. This led to imagining a game in which you tag other people. If tags from two strangers match, aside from some stopwords, within a certain range of time and place, each player might get points.
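The matching rule could look something like the following sketch. The stopword list and the distance and time thresholds are placeholder assumptions for illustration:

```python
from math import hypot

STOPWORDS = {"a", "the", "very", "really"}

def normalize(tag):
    # Reduce a tag to its content words so "the red scarf" matches "red scarf".
    return {w for w in tag.lower().split() if w not in STOPWORDS}

def tags_match(tag_a, tag_b, loc_a, loc_b, t_a, t_b,
               max_dist=1.0, max_seconds=3600):
    # Award points when two strangers' tags agree (ignoring stopwords)
    # and were placed close together in space and time.
    words = normalize(tag_a)
    same_words = words and words == normalize(tag_b)
    near = hypot(loc_a[0] - loc_b[0], loc_a[1] - loc_b[1]) <= max_dist
    close_in_time = abs(t_a - t_b) <= max_seconds
    return bool(same_words and near and close_in_time)
```

Requiring agreement from two independent strangers is the interesting design choice: it filters out one-off or malicious tags, since points only accrue when descriptions corroborate each other.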

The idea of being tagged by strangers ultimately feeds into people’s curiosity about what others think of them. This became our focus for the project, which we are calling Mirror. We built in anticipation (you can only check how you’ve been tagged once per day) and ambiguity (tags, for you, are only localized to the resolution of a cell tower). You can only be tagged by people who are not in your social network.

These tags also build identities for places. Imagine a space that displays the way people currently in it have been tagged — reflecting its current occupants. Browse a map that shows the way people have been tagged in a neighborhood. We also imagine games, such as scavenger hunts in which the goal is to go out and get tagged in certain ways.


We show some of the possibilities in the above storyboard. There is another write-up on the project’s page, as well as (an admittedly hand-wavy) tech/design explanation (pdf).

We actually believe that such a product could be bad for both you and community in general, but that doesn’t stop it from being fun to think about.