
{ Category Archives } social software

CHI Gamification: We should know better.

Thank a presenter, get swag, sell your soul.

This year, CHI added gamification in the form of “missions” and prizes. Unfortunately, for a community that really ought to know something about good vs. bad gamification, the game implemented represented much of the worst of gamification — not just shallow and meaningless, but potentially destructive.

Consider the enticement, to the right, which appeared outside of all of the talk rooms. So, once I get past the flyer yelling at me, I learn that I can earn my way toward useless swag by thanking a presenter. There were several like this — thank a committee member, or “find an entry in a student competition and talk to the students about their experiences” — things that are good to do as a member of the community, and for which there are (or should be) sufficient intrinsic motivators.

But now? Now your thanks are just a tool to get stuff for yourself. And as presenter or committee member, you’re left wondering if a thanks is sincere or just because someone needs to check off another badge on their way to status or a tchotchke. Sigh.

About the only good thing I can say about it is that it didn’t appear to catch on. If you’re organizing CHI 2013 or another conference: please, let’s not have any more of this BS.

Google News adds Badges

Via Nick Diakopoulos, I see that Google News has added badges, awarded for reading a number of articles on certain topics. You earn them privately and can then share them. Nick has some thoughts on his blog.

I agree with a lot of Nick’s thoughts. Having validated reading behavior is useful – though it’s also interesting to get the difference between what topics people read and what topics people want others to know they read. As Nick points out, it might be a way for people to communicate to others that they are an expert on a topic – or at least an informed reader, as I suspect that experts may have other channels for following the topics about which they care most.

Though BunchBall sort of looks down on the quantified-self aspect, I do think it’s useful to give people feedback on what they are actually reading (sort of like Last.fm) for news topics, rather than just what they think they read, though badges probably aren’t quite as data-rich as I’d want. At Michigan, we’ll shortly be trying a similar experiment as part of the BALANCE project, to assess whether feedback on past reading behavior affects the balance of political articles that subjects read.

If people do care about earning the badges, either to learn about their reading behavior or to share with others as a sign of their expertise or interests, then they’ll probably read more of their news through Google News – so that it is tracked in more detail. Thus, a win for Google, who gets the pageviews and data.

Google, why do you want me to earn a Kindle badge?

When I first visited, I was encouraged to earn a Kindle badge. I couldn’t figure this out. Yeah, it’s an interesting product, but I don’t want to read a lot of news about it, and a review of my Google News history showed that I never had read about it through the site. So why, of all the >500 badges that Google could suggest to me (many for topics I read lots about), is it suggesting Kindle and only Kindle? It left me wondering if it was a random recommendation, if whatever Google used to suggest a badge was not very good for me, or if it was a sponsored badge intended to get me to read more about Kindles (speaking of potential wins for Google…).

Whatever the case, this highlights a way that badges could push reading behavior – assuming that people want to earn, or want to avoid earning, badges. This can run both ways. Maybe someone is motivated by gadget badges and so reads more about Kindles; maybe someone doesn’t think of themselves as interested in celebrities or media and is thus pushed to read fewer articles about those topics than they were before. I’m not saying this is bad, per se, as feedback is an important part of self-regulation, but if badges matter to people, the simple design choice of which badges to offer (and promote) will be influential, just as the selection and presentation of articles are.

Sunlight Labs’ Inbox Influence: Sunlight or Sunburn?

Last week, Sunlight Labs released Inbox Influence, a set of browser extensions (Chrome, Firefox) and bookmarklets that annotate senders and entities in the body of emails with who has contributed to them and to whom they have contributed.

I really like the idea of using browser plugins to annotate information people encounter in their regular online interactions. This is something we’re doing on a variety of projects here, including AffectCheck, BALANCE, and Rumors. I think that tools that combine personal data, in situ, with more depth can teach people more about with whom and with what they are interacting, and this just-in-time presentation of information is an excellent opportunity to persuade and possibly to prompt reflection. Technically, it’s also a pretty nice implementation.

There are some reasons why this tool may not be so great, however. With Daniel Avrahami, Sunny Consolvo, James Fogarty, Batya Friedman, and Ian Smith, I recently published a paper about people’s attitudes toward the online availability of US public records, including campaign contribution records such as the ones on which Inbox Influence draws. Many respondents to our survey (mailed, no compensation, likely biased toward people who care more about this issue) expressed discomfort with these records being so easily accessible, and less than half (as of 2008) even knew that campaign contribution records were available online before they received the survey. Nearly half said that they wanted some sort of change, and a third said that this availability would alter their future behavior, i.e., they’d contribute less (take this with a grain of salt, since it is about hypothetical future behavior).

Unless awareness and attitudes have changed quite a bit since 2008, tools such as Inbox Influence create privacy violations. The data is being used and presented in ways that people did not anticipate at the time when they made the decision to donate, and at least some people are “horrified” or at least uncomfortable with this information being so easily accessible. Perhaps we just need to do better at educating potential donors about the ways in which campaign contribution data may be used (and anticipate future mashups), though it is also possible that tools like this do not need to be made, or could benefit from being a bit more nuanced in when and about whom they load information.

Speaking personally, I’m not sure how I feel. On the one hand, I think that campaign contributions and other actions should be open to scrutiny and should have consequences. If you take the money you earn from your business and donate it to support Prop 8, I want the opportunity to boycott your business. If you support a politician who wants to eviscerate the NSF, I might want to engage you in conversation about that. On the other hand, I don’t like the idea that my campaign contribution history (anything above the reporting limit) might be loaded automatically when I email a professional colleague or a student. That’s just not relevant—or even appropriate—to the context. And there are some friendships among politically diverse individuals that may survive, in part, because those differences are not always made salient. So it also seems like Inbox Influence, or tools that let you load, with a click, your Facebook friends’ contribution history, could sometimes cause harm.

CHI Highlights: Persuasive Tech and Social Software for Health and Wellness

I want to take a few minutes to highlight a few papers from CHI 2011, spread across a couple of posts. There was lots of good work at this conference. This post will focus on papers in the persuasive technology and social software for health and wellness space, which is the aspect of my work that I was thinking about most during this conference.

  • Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight
    Stephen Purpura, Victoria Schwanda, Kaiton Williams, William Stubler, Phoebe Sengers

    Fit4life is a hypothetical system that monitors users’ behavior using a variety of tactics in the Persuasive Systems Design model. After describing the system (in a way that, one person in the room commented, made the audience “look horrified”), the authors transition to a reflection on persuasive technology research and design, and how such a design can “spiral out of control.” As someone working in this space, the authors hit on some of the aspects that leave me a bit unsettled: persuasion vs. coercion, individual good vs. societal good, whether people choose their own viewpoints or are pushed to adopt those of the system designers, measurement and control vs. personal experiences and responsibility, and increased sensing and monitoring vs. privacy and surveillance and the potential to eliminate boundaries between front-stage and back-stage spaces. The authors also discuss how persuasive systems with very strong coaching features can reduce the opportunity for mindfulness and for their users to reflect on their own situation: people can simply follow the suggestions rather than weigh the input and decide among the options.

    This is a nice paper and a good starting point for lots of discussions. I’m a bit frustrated that it was presented in a session that ran concurrently with the session on persuasive technology for health. As such, it probably did not (immediately) reach the audience that would have led to the most interesting discussion about the paper. In many ways, it argued for a “think about what it is like to live with” rather than “pitch” approach to thinking about systems. I agree with many of the potential tensions the authors highlight, but I think they are a bit harder on the persuasive tech community than is appropriate: in general, persuasive tech folks are aware we are building systems intended to change behavior and that this is fraught with ethical considerations, while people outside of the community often do not think of their systems as persuasive or coercive, even when they are (again, I mean this in a Nudge, choice-environments sense). On the other hand, one presentation at Persuasive last year did begin with the statement “for the sake of this paper, set aside ethical concerns” (paraphrased), so clearly there is still room for improvement.

  • Designing for peer involvement in weight management
    Julie Maitland, Matthew Chalmers

    Based on interviews with nineteen individuals, the authors present an overview of approaches for involving peers in technology for weight management. These approaches fall into passive involvement (norms and comparisons) and five types of active involvement (table 1 in the paper): obstructive (“don’t do it”), inductive (“you should do it”), proactive (“do it with me”), supportive (“I’ll do it too”), and cooperative (“let’s do it together”). The last category includes competition, though there was some disagreement during the Q&A about whether that is the right alignment. The authors also find gender- and role-based differences in the perceived usefulness of peer-based interventions, such as differences in attitudes about competition.

    Designers could use these types of engagement to think about what their applications are supporting well or not so well. Here, I wish the authors had gone a bit further in linking the types of involvement to the technical mechanisms or features of applications and contexts, as I think that would be a better jumping-off point for designers. For those thinking about how to design social support into online wellness interventions, I think this paper, combined with Skeels et al., “Catalyzing Social Support for Breast Cancer Patients,” CHI 2010, and our own paper from CSCW 2011 (“‘It’s not that I don’t have problems, I’m just not putting them on Facebook’: Challenges and Opportunities in Using Online Social Networks for Health”), offers a nice high-level overview of some of the challenges and opportunities for doing so.

  • Mining behavioral economics to design persuasive technology for healthy choices
    Min Kyung Lee, Sara Kiesler, Jodi Forlizzi

    A nice paper that evaluates different persuasive approaches for workplace snack selection. These include:

    • default choice: a robot showing all snack choices with equal convenience or the healthy one more visibly, or a website that showed all snack choices (in random order) or that paginated them, with healthy choices shown on the first page.
    • planning: asking people to order a snack for tomorrow rather than select at the time of consumption.
    • information strategy: showing calorie counts for each snack.

    As one would expect, the default choice strategy was highly effective in increasing the number of people who chose the healthy snack (apples) rather than the unhealthy snack (cookies). The planning strategy was effective among people who had a healthy snacking lifestyle, while those who snacked unhealthily continued to choose cookies. Interestingly, the information strategy had no effect on unhealthy snackers and actually led healthy snackers to choose cookies more than they otherwise would have. The authors speculate that this is either because the healthy snackers overestimate the caloric value of cookies in the absence of information (and thus avoid them more), or because considering the healthy apple was sufficiently fulfilling even if they ultimately chose the cookie.

    Some questions the study leaves open are: Would people behave the same if they had to pay for the snacks? What would happen in a longer-term deployment? What would have happened if the cookies were made the default, particularly for otherwise healthy snackers?

  • Side effects and “gateway” tools: advocating a broader look at evaluating persuasive systems
    Victoria Schwanda, Steven Ibara, Lindsay Reynolds, Dan Cosley

    Interviews with 20 Wii Fit users reveal side effects of this use: some stopped using it because it did not work, while others stopped because they went on to other, preferred fitness activities (abandonment as success); a tension between whether the Fit is viewed as a game or an exercise tool (people rarely view it as both); and negative emotional impacts (particularly frustration when the system misinterpreted some data, such as weight gains). One suggestion the authors propose is that behavior change systems might start with activities that better resemble games but gradually transition users to activities with fewer game-like elements, and eventually wean users off of the system altogether. In practice, I’m not sure how this would work, but I like this direction because it gets at one of my main critiques of gamification: take away the game and its incentives (which may distract from the real benefits of changing one’s behavior) and the behavior reverts quite quickly.

  • Means based adaptive persuasive systems
    Maurits Kaptein, Steven Duplinsky, Panos Markopoulos

    Lab experiment evaluating the effects of using multiple sources of advice (single expert or consensus of similar others) at the same time, disclosing that advice is intended to persuade, and allowing users to select their source of advice. (This is framed more generally as about persuasive systems, but I think the framing is too broad: it’s really a study about advice.) Results: people are more likely to follow advice when they choose the source, people are less likely to follow advice when they are told that it is intended to persuade, and when shown expert advice and consensus advice from similar others, subjects were less likely to follow the advice than when they were only shown expert advice — regardless of whether the expert and consensus advice concurred with each other. This last finding is surprising to me and to the authors, who suggest that it may be a consequence of the higher cognitive load of processing multiple sources of advice; I’d love to see further work on this.

  • Opportunities for computing technologies to support healthy sleep behaviors
    Eun Kyoung Choe, Sunny Consolvo, Nathaniel F. Watson, Julie A. Kientz

    This paper aggregates a literature review, interviews with sleep experts, a survey of 230 individuals, and interviews with 16 potential users to learn about opportunities and challenges for designing sleep technologies. The work leads to a design framework that considers the goal of the individual using the system, the system’s features, the source of the information supporting the design choices made, the technology used, the stakeholders involved, and the input mechanism. During the presentation, I found myself thinking a lot about two things: (1) the value of design frameworks and how to construct a useful one (I’m unsure of both) and (2) how this stacks up against Julie’s recent blog post that is somewhat more down on the opportunities of tech for health.

  • How to evaluate technologies for health behavior change in HCI research
    Predrag Klasnja, Sunny Consolvo, Wanda Pratt

    The authors argue that evaluating behavior change systems based solely on whether they changed the behavior is not sufficient, and often infeasible. Instead, they argue, HCI should focus on whether systems or features effectively implement or support particular strategies, such as self-monitoring or conditioning, which can be measured in shorter-term evaluations.

    I agree with much of this. I think that the more useful HCI contributions in this area speak to which particular mechanisms or features worked, why and how they worked, and in what context one might expect them to work. Contributions that throw the kitchen sink of features at a problem and do not get into the details of how people reacted to the specific features and what the features accomplished may tell us that technology can help with a condition, but do not, in general, do a lot to inform the designers of other systems. I also agree that shorter-term evaluations are often able to show that a particular feature is or is not working as intended, though longer-term evaluations are appropriate to understand whether it continues to work. I am also reminded of the gap between the HCI community and the sustainability community pointed out by Froehlich, Findlater, and Landay at CHI last year, and fear that deemphasizing efficacy studies and RCTs will limit the ability of the HCI community to speak to the health community. Someone is going to have to do the efficacy studies, and the HCI community may have to carry some of this weight in order for our work to be taken seriously elsewhere. Research can make a contribution without showing health improvements, but if we ignore the importance of efficacy studies, we imperil the relevance of our work to other communities.

  • Reflecting on pills and phone use: supporting awareness of functional abilities for older adults
    Matthew L. Lee, Anind K. Dey

    A four-month deployment of a system for monitoring medication taking and phone use in the homes of two older adults. The participants sought out anomalies in the recorded data; when they found them, they generally trusted the system and focused on explaining why the anomaly might have happened, turning first to their memory of the event and then to going over their routines or other records such as calendars and diaries. I am curious whether this trust would extend to a purchased product rather than one provided by the researchers (if so, this could be hazardous with an unreliable system); I could see arguments for it going either way.

    The authors found that these systems can help older adults remain aware of their functional abilities and helped them better adapt to changes in those abilities. Similar to what researchers have recommended for fitness journals or sensors, the authors suggest that people be able to annotate or explain discrepancies in their data and be able to view it jointly. They also suggest highlighting anomalies and showing them with other available contextual information about that date or time.

  • Power ballads: deploying aversive energy feedback in social media
    Derek Foster, Conor Linehan, Shaun Lawson, Ben Kirman

    I generally agree with Sunny Consolvo that feedback and consequences in persuasive systems should generally range from neutral to positive, and I have been reluctant (colleagues might even say “obstinate”) about including negative feedback in GoalPost or Steps. Julie Kientz’s work, however, finds that certain personalities think they would respond well to negative feedback. This work in progress tests negative (“aversive”) feedback: Facebook posts about songs, with the statement that the participant was using lots of energy, in a pilot with five participants. The participants seemed to respond okay to the posts, which are, in my opinion, pretty mild and not all that negative, and often commented on them. The authors interpret this as aversive feedback not leading to disengagement, but I think that’s a bit too strong of a claim to make on this data: participants, though unpaid, had been recruited to the study and likely felt some obligation to follow through to its end in a way that they would not for a commercially or publicly available system, and, with that feeling, may have commented out of a need to publicly explain or justify the usage shown in the posts. The last point isn’t particularly problematic, as such reflection may be useful. Still, this WiP and the existence of tools like Blackmail Yourself (which *really* hits at the shame element) do suggest that there is more work needed on the efficacy of public, aversive feedback.

  • Descriptive analysis of physical activity conversations on Twitter
    Logan Kendall, Andrea Civan Hartzler, Predrag Klasnja, Wanda Pratt

    In my work, I’ve heard a lot of concern about posting health-related status updates and about seeing similar status updates from others, but I haven’t taken a detailed look at the status updates that people are currently making, which this WiP makes a start on for physical activity posts on Twitter. By analyzing the results of queries for “weight lifting”, “Pilates”, and “elliptical”, the authors find posts that show evidence of exercise, plans for exercise, attitudes about exercise, requests for help, and advertisements. As the authors note, the limited search terms probably lead to a lot of selection bias, and I’d like to see more information about posts coming from automated sources (e.g., FitBit), as well as how people reply to the different genres of fitness tweets.

  • HappinessCounter: smile-encouraging appliance to increase positive mood
    Hitomi Tsujita, Jun Rekimoto

    Fun yet concerning alt.chi work on pushing people to smile in order to increase positive mood. With features such as requiring a smile to open the refrigerator, positive feedback (lights, music) in exchange for smiles, automatic sharing of photos of facial expressions with friends or family members, and automatic posting of whether or not someone is smiling enough, this paper hits many of the points about which the Fit4life authors raise concerns.

The panel I co-organized with Margaret E. Morris and Sunny Consolvo, “Facebook for health: opportunities and challenges for driving behavior change,” and featuring Adam D. I. Kramer, Janice Tsai, and Aaron Coleman, went pretty well. It was good to hear what everyone, both familiar and new faces, is up to and working on these days. Thanks to my fellow panelists and everyone who showed up!

There was a lot of interesting work — I came home with 41 papers in my “to read” folder — so I’m sure that I’m missing some great work in the above list. If I’m missing something you think I should be reading, let me know!

Mindful Technology vs. Persuasive Technology

On Monday, I had the pleasure of visiting Malcolm McCullough’s Architecture 531 – Networked Cities for final presentations. Many of the students in the class are from SI, where we talk a lot about incentive-centered design, choice architecture, and persuasive technology, which seems to have resulted in many of the projects having a persuasive technology angle. As projects were pitched as “extracting behavior” or “compelling” people to do things, it was interesting to watch the discomfort in the reactions from students and faculty who don’t frame problems in this way.1

Thinking about this afterwards brought me back to a series of conversations at Persuasive this past summer. A prominent persuasive technology researcher said something along the lines of “I’m really only focusing on people who already want to change their behavior.” This caused a lot of discussion, with major themes being: Is this a cop-out, shouldn’t we be worried about the people who aren’t trying? Is this just a neat way of skirting the ethical issues of persuasive (read: “manipulative”) technology?

I’m starting to think that there may be an important distinction that may help address these questions, one between technology that pushes people to do something without them knowing it and technology that supports people in achieving a behavior change they desire. The first category might be persuasive technology, and for now, I’ll call the second category mindful technology.

Persuasive Technology

I’ll call systems that push people who interact with them to behave in certain ways, without those people choosing the behavior change as an explicit goal, Persuasive Technology. This is a big category, and I believe that most systems are persuasive systems in that their design and defaults will favor certain behaviors over others (this is a Nudge inspired argument: whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive).

Mindful Technology

For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Last.fm and Dopplr in this category, as well as a lot of tools that might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology are steering users toward a goal that the designers have in mind, the designers of mindful technology give users the ability to better know their own behavior to support reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.

Others working in the broad persuasive tech space have also been struggling with the issue of persuasion versus support for behaviors an individual chooses, and I’m far from the first to start thinking of this work as being more about mindfulness. Mindfulness is, however, a somewhat loaded term with its own meaning, and that may or may not be helpful. If I were to go with the tradition of “support systems” naming, I might call applications in this category “reflection support systems,” “goal support systems,” or “self-regulation support systems.”

Where I try to do my work

I don’t quite think that this is the right distinction yet, but it’s a start, and I think these are two different types of problems (that may happen to share many characteristics) with different sets of ethical considerations.

Even though my thinking is still a bit rough, I’m finding this idea useful in thinking through some of the current projects in our lab. For example, among the team members on AffectCheck, a tool to help people see the emotional content of their tweets, we’ve been having a healthy debate about how prescriptive the system should be. Some team members prefer something more prescriptive – guiding people to tweet more positively, for example, or tweeting in ways that are likely to increase their follower and reply counts – while I lean toward something more reflective – showing some information about the tweet currently being authored, how the user’s tweets have changed over time, and how they stack up against the user’s followers’ tweets or the rest of Twitter. While even comparisons with friends or others offer evidence of a norm and can be incredibly persuasive, the latter design still seems to be more about mindfulness than about persuasion.

This is also more of a spectrum than a dichotomy, and, as I said above, all systems, by nature of being a designed, constrained environment, will have persuasive elements. (Sorry, there’s no way of dodging the related ethical issues!) For example, users of Steps, our Facebook application to promote walking (and other activity that registers on a pedometer), have opted in to the app to maintain or increase their current activity level. They can set their own daily goals, but the app’s goal recommender will push them toward the fairly widely accepted recommendation of 10,000 steps per day. Other tools such as Adidas’s MiCoach or Nike+ have both tracking and coaching features. Even if people are opting into specific goals, the limited menu of available coaching programs is itself a bit persuasive, as it constrains people’s choices.
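To make the goal-recommender example concrete, here is a rough sketch of how a default-pulling recommender might behave. This is purely illustrative: the function name and thresholds are hypothetical, not the actual Steps code.

```python
# Illustrative sketch of a nudging goal recommender (hypothetical; not the
# actual Steps implementation). The user's own goal always wins, but in its
# absence the suggestion pulls toward the widely cited 10,000 steps/day.
DEFAULT_GOAL = 10_000  # steps per day

def recommend_goal(recent_daily_steps, user_goal=None):
    """Return a suggested daily step goal.

    If the user has set a goal, respect it; otherwise nudge toward the
    default, without suggesting a big jump from recent behavior.
    """
    if user_goal is not None:
        return user_goal  # user choice wins
    if not recent_daily_steps:
        return DEFAULT_GOAL
    baseline = sum(recent_daily_steps) / len(recent_daily_steps)
    # Suggest at most a 20% increase over the recent average,
    # capped at the default recommendation.
    return min(DEFAULT_GOAL, int(baseline * 1.2))
```

Even a sketch like this shows where the persuasion lives: the cap, the 20% increment, and the 10,000-step anchor are all designer choices, not the user’s.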

Overall, my preference when designing is to focus on helping people reflect on their behavior, set their own goals, and track progress toward them, rather than to nudge people toward goals that I have in mind. This is partly because I’m a data junkie, and I love systems that help me learn more about what my behavior is without telling me what it should be. It is also partly because I don’t trust myself to persuade people toward the right goal at all times. Systems have a long history of handling exceptions quite poorly. I don’t want to build the system that makes someone feel bad or publicly shames them for using hotter water or a second rinse after a kid throws up in bed, or that takes someone to task for driving more after an injury.

I also often eschew gamification (for many reasons), and to the extent that my apps show rankings or leaderboards, I often like to leave it to the viewer to decide whether it is good to be at the top of the leaderboard or the bottom. To see how too much gamification can interfere with people working toward their own goals, consider the leaderboards on TripIt and similar sites. One person may want to have the fewest trips or miles, because they are trying to reduce their environmental impact or because they are trying to spend more time at home with family and friends, while another may be trying to maximize their trips. Designs that simply reveal data can support both goals, while designs that use terms like “winning” or that award trophies or badges to the person with the most trips start to shout: this is what you should do.
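A data-revealing leaderboard in this spirit might simply sort and show the numbers, with no “winner” framing; the viewer decides which end of the list is good. A minimal sketch (hypothetical, not TripIt’s actual implementation):

```python
def render_leaderboard(trips_by_person):
    """Render trip counts sorted by count, with no 'winner' framing.

    The same display supports someone maximizing trips and someone
    minimizing them; the value judgment is left to the viewer.
    """
    rows = sorted(trips_by_person.items(), key=lambda kv: kv[1], reverse=True)
    return "\n".join(f"{name}: {count} trips" for name, count in rows)

print(render_leaderboard({"alice": 12, "bob": 3, "carol": 7}))
# alice: 12 trips
# carol: 7 trips
# bob: 3 trips
```

The design choice is in what the function leaves out: no trophies, no “top traveler” label, just the data.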

Thoughts?

What do you think? Useful distinction? Cluttering of terms? Have I missed an existing, better framework for thinking about this?


1Some of the discomfort was related to some of the projects’ use of punishment (a “worst wasters” leaderboard or similar). This would be a good time to repeat Sunny Consolvo’s guideline that feedback in persuasive technology range from neutral to positive (Consolvo 2009), especially, in my opinion, in discretionary-use situations – because otherwise people will probably just opt out.

@display

For those interested in the software that drives the SIDisplay, SI master’s student Morgan Keys has been working to make a generalized and improved version available. You can find it, under the name “@display,” at this GitHub repository.

SIDisplay is a Twitter-based public display described in a CSCW paper with Paul Resnick and Emily Rosengren. We built it for the School of Information community, where it replaced a number of previous displays, including a Thank You Board (which we compare it to in the paper), a photo collage (based on the context, content & community collage), and a version of the plasma poster network. Unlike many other Twitter-based displays, SIDisplay and @display do not follow a hashtag, but instead follow @-replies to the display’s Twitter account. They also include private tweets, so long as the Twitter user has given the display’s Twitter account permission to follow them.
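The core filtering rule (show @-replies to the display’s account rather than a hashtag search, and include protected tweets only when the author has approved the display account as a follower) can be sketched independently of any particular Twitter client library. The field names below loosely follow the Twitter REST API’s tweet JSON; the helper itself is hypothetical, not the actual @display code:

```python
DISPLAY_ACCOUNT = "si_display"  # hypothetical screen name for the display

def should_display(tweet, following):
    """Decide whether a tweet belongs on the public display.

    tweet: dict with 'text' and a 'user' dict containing 'screen_name'
    and 'protected'. `following` is the set of screen names whose
    protected accounts the display account has been approved to follow.
    """
    author = tweet["user"]["screen_name"]
    # Only @-replies/mentions directed at the display account; a hashtag
    # alone is not enough.
    if f"@{DISPLAY_ACCOUNT}" not in tweet["text"].lower():
        return False
    # Protected tweets appear only if the author approved the display account.
    if tweet["user"]["protected"] and author not in following:
        return False
    return True
```

Keying on mentions rather than a hashtag means posting to the display is an explicit, addressed act, which matters for the privacy expectations discussed above.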

some thoughts on Facebook’s recent changes, from the perspective of an application designer

There’s a lot to like about the recent changes to Facebook, but, as an application developer, I find many of the changes a mixed bag. Changes to navigation and to the interaction points between Facebook and applications are problematic, while the new application privacy features are a good start but seem incomplete.

Navigation to Apps
Formerly, the application dock made it easy to access an application from anywhere in Facebook: one click to get to a bookmarked application, or two clicks (without waiting for page loads) to get to any other application.

Not so with the new design. If an application doesn’t have a tab on your profile page, the only way to access it is from the home page. From my profile or someone else’s profile, this means: click to the Facebook home page, wait for the page to load, then click the app icon (or, if the application is not one of your top three bookmarks, click “more” and then the app icon). Yes, this is only one more click, but it requires waiting for an entire page load, and it’s worse for non-bookmarked apps: one click to the home page, wait for it to load, one click to the application dashboard, wait for it to load, one click for “see all of your applications,” wait for that to load, and finally a click on the application.

One possible remedy might be to add an “applications” drop-down next to the new notifications, requests, and messages icons.

Notifications
At the end of the month, Facebook will turn off the ability of applications to send notifications. This is a method I’ve been using to send reminders in Three Good Things, for both automatically generated reminders and user-to-user reminders.

3GT Notifications. Left: system generated reminder. Right: user-to-user reminder.

I like that the notifications are less invasive than email reminders. Some 3GT users appreciated their subtlety, though they may have been a little too subtle, at least when they appeared at the bottom of the screen: many of the 3GT users we interviewed never noticed the notifications they received. More importantly, notifications arrived at the right time. Rather than sending someone a reminder to post (a reminder that might interrupt their other activity, or would at least require them to visit the website), the notifications appeared when a 3GT participant was already logged into Facebook, when it was likely convenient for them to post a “good thing” in our application. B.J. Fogg, a champion of persuasive technology, calls this right-time, right-place notification kairos.

I understand that notifications have become a lot more intrusive with the addition of push notifications to the iPhone app, and that some app developers have used them more than some Facebook users would prefer. Facebook has also added other integration points. On balance, though, I think notifications will be an unfortunate integration point to lose.

Application Privacy
Along with some others building health and wellness applications for the Facebook platform, I’ve felt fairly strongly that Facebook needs to give users and developers enhanced privacy controls for applications. At a minimum, this should include the ability to hide one’s use of an application from friends (i.e., not appearing under “friends using this application” in the application’s profile page).

With the recent updates, Facebook’s designers and developers appear to have recognized some of these concerns. Application developers are able to set an application as “private,” so that one’s use does not appear in the new application dashboard. This is a good start, but it feels incomplete for a number of reasons. First, users, not developers, should control privacy. What’s to stop a developer from later reverting to a more public setting, instantly and completely changing what user activity is revealed? Currently, users also have no way to remove this information once it appears.

Second, this level of privacy does not extend far enough. Friends who use private applications still appear on the application’s profile page under “friends using this application.” Furthermore, the model of application use and content being either private or public is insufficient. In health and wellness applications, for example, participants may benefit from sharing and interacting with other participants in the intervention as well as their friends or family members on Facebook, while also wanting to keep their activity private from coworkers.

This is something that Facebook has already discovered and addressed for newsfeed (now stream) content, but application content does not enjoy the same privacy controls. To share with only a subset of one’s friends within an application, application developers must implement their own social graph features and users must build a second network within the application. Enabling privacy controls for application content and use, similar to those for the newsfeed, could help people feel more comfortable using health and wellness applications on Facebook while creating more possibilities for designers. A user could allow an application to be aware of relationships in only one or more friend lists or networks, or select some connections to exclude from the application while the rest remain visible. Assuming a user had created the necessary friend lists, or that their privacy preferences mapped to their networks, this would allow someone to filter out their coworkers or to allow only close friends to see their participation.

Update: As of 17 February, Facebook has added a privacy argument to calls for publishing methods (e.g., Stream.publish) that gives applications much more control over shared content.
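To make that concrete, here is a hedged sketch of how an application might build that privacy argument before passing it to a publishing call. The field names (“value”, “friends”, “allow”) and values (“ALL_FRIENDS”, “CUSTOM”, “SOME_FRIENDS”) follow my reading of the REST API documentation at the time and are assumptions; check the current platform docs before relying on them.

```python
import json

# Hypothetical helper: build the JSON string passed as the `privacy`
# parameter to a publishing method such as stream.publish.
# Field names and values are assumptions based on the old Facebook
# REST API docs, not a verified current interface.

def build_privacy(value: str = "ALL_FRIENDS", allow_ids=None) -> str:
    """Return a privacy JSON string for a stream publishing call.

    With no allow list, the post is shared per `value` (e.g. all friends).
    With an allow list, a "CUSTOM" setting limits it to the chosen friends.
    """
    privacy = {"value": value}
    if allow_ids:
        privacy["value"] = "CUSTOM"
        privacy["friends"] = "SOME_FRIENDS"
        privacy["allow"] = ",".join(str(i) for i in allow_ids)
    return json.dumps(privacy)
```

An application would then include this string alongside the message in its publishing request, which is exactly the kind of per-post control that the application-level privacy settings above lack.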

three good things

Three Good Things, the first of my social software for wellness applications, is available on Facebook (info page).

Three Good Things supports a positive psychology exercise in which participants record three good things, and why these things happened. When completed daily – even on the bad days – over time, participants report increased happiness and decreased symptoms of depression. The good things don’t have to be major events – a good meal, a phone call with a friend or a family member, or a relaxing walk are all good examples.

I’m interested in identifying best practices for deploying these interventions on new or existing social websites, where adding social features may make the intervention more or less effective for participants, or may just make some participants more likely to complete the exercise on a regular basis. Anyway, feel free to give the app a try – you’ll be helping my research and you may end up a bit happier.

privacy on twitter vs. privacy on facebook

In a post describing some teens’ use of Twitter and Facebook (Twitter is for friends; Facebook is everybody; some teens are using private Twitter accounts for communication with friends because Twitter is too public), danah boyd poses the following question:

My guess is that if Twitter does take off among teens and Dylan’s friends feel pressured to let peers and parents and everyone else follow them, the same problem will arise and Twitter will become public in the same sense as Facebook. This of course raises a critical question: will teens continue to be passionate about systems that become “public” (to all that matter) simply because there’s social pressure to connect to “everyone”?

I believe that Twitter may actually be much more resistant than Facebook to both this pressure and the subsequent switch to less “public” platforms, for two reasons: account norms and Twitter clients.

Account Norms, Privacy, and Collapsed Contexts
On Facebook, everyone pretty much gets one account.1 This leaves me with a choice between collapsed contexts (the same profile for everyone) and only friending people from a particular context or set of contexts. There are many fine-grained privacy controls, but it all adds up to a more-is-less experience, at least for me. There are so many controls that I don’t particularly remember what I’ve set to be visible to whom. When I comment on something in a friend’s profile (or am tagged in one of their photos), I don’t know who can see that.

With Twitter, people can have multiple accounts, and for private accounts, they know exactly who can see their posts: only the people they have given permission to follow them. This is not to say that Twitter is free of privacy pitfalls – e.g., plenty of private tweets get retweeted or replied to on others’ public accounts – but at the time of posting, I have a much clearer idea of who can see a status update or reply on Twitter than I do of who can see similar content on Facebook. I suspect that many users of private Twitter accounts use them just to avoid the “what if so-and-so sees this?” question. So it seems reasonable that people could have different accounts for their work, family, friends, etc. personas, though there’s a point at which it would probably be too many.

Twitter Clients
Having multiple accounts wouldn’t work well without an appropriate interface, and here Twitter benefits hugely from its API and the many, many Twitter clients available. Using more than one Facebook account, especially simultaneously, is an ordeal – multiple web browsers, no aggregation. With the right client, reading from and posting to multiple Twitter accounts is a breeze.

So while there may eventually be an exit from a more public Twitter, I think there is more room to move within the same service, by diversifying accounts, than there might be on Facebook. This will only work, though, if people are willing to set and accept boundaries – and probably not if mom and dad insist on following the Twitter account their kids use to communicate with friends from school, or if colleagues regularly feel insulted when a coworker-acquaintance declines their request to follow an account used to communicate with close friends.

1I believe this used to be part of the terms of service, but I don’t see it anymore and can’t be sure that it was ever there.

wikis in organizations

Antero Aunesluoma presents at WikiFest

In early September, I attended WikiSym 08 in Porto, Portugal, so this post is nearly two months overdue. In addition to presenting a short paper on the use of a wiki to enhance organizational memory and sharing in a Boeing workgroup, I participated on the WikiFest panel organized by Stewart Mader.

Since then, a couple of people have asked me to post the outline of my presentation for the WikiFest panel. These notes are reflections from the MedShelf, CSS-D, SI, and Boeing workgroup wiki projects and are meant for those thinking about or getting started with deploying a wiki in a team. For those who have been working with wikis and other collaborative tools for a while, there probably aren’t many surprises here.

  1. Consider the wiki within your ecosystem of tools. For CSS-D and MedShelf, the wikis were able to offload many of the frequently asked questions (and, to an even greater extent, the frequent responses) from the corresponding email lists. This helped to increase the signal-to-noise ratio on the lists for members who have been around for a while, increasing their satisfaction with the lists and perhaps making them more likely to stick around.

    Another major benefit of moving some of this content from the mailing lists to the wiki is that new readers had less to read to get an answer. If you’ve ever searched for the answer to a problem and found part of the solution in a message board or mailing list archive, you may be familiar with the experience of having to read through several proposed, partial solutions, synthesizing as you go, before arriving at the information you need. If all of that information is consolidated as users add it to the wiki, the burden of synthesizing shifts from each time the information is accessed to just each time someone adds new information to the wiki.

    In addition to considering how a wiki (or really, any other new tool) will complement your existing tools, consider what it can replace. At Boeing, the wiki meant that workgroup members could stop using another tool they didn’t like. If there had been a directive to use the wiki in addition to the other tool, it probably wouldn’t have been as enthusiastically adopted. One of the reasons the SI Wiki has floundered a bit is that there are at least three other digital places where this sort of information is stored: two CTools sites and an intranet site. When people don’t know where to put things, sometimes they just don’t put them anywhere.

  2. Sometimes value comes from aggregation rather than synthesis. In the previous point, I made a big deal out of the value of using the wiki to synthesize information from threaded discussions and various other sources. When we started the MedShelf project, I was expecting all wikis to be used this way, but I was very wrong. With MedShelf, a lot of the value comes from individuals’ stories about coping with the illness. Trying to synthesize those into a single narrative or neutral article would have meant losing these individual voices, and for content like this, aggregation – putting it all in the same place – can be the best approach.

    The importance of these individual voices also meant that many more pages than I expected were single-authored.

  3. Don’t underestimate the value of a searchable & browsable collection. Using the workgroup wiki, team members have found the information they needed because they knew about one project and were able to browse links to documentation of other, related projects. Browsing between a project page and a team member’s profile has also helped people identify experts on a given topic. The previous tools for documenting projects didn’t allow connections between different project repositories and made it hard to browse to the most helpful information. But this only works if you are adding links between related content on the wiki, or if your wiki engine automatically adds related links.

    For the wikis tied to mailing lists (CSS-D and MedShelf), some people who arrived at the wiki through a search engine, looking for a solution to a particular problem, have browsed to the list information and eventually joined the list. This certainly happens with mailing list archives too, but which makes a better front door – the typical mailing list archive or a wiki?

  4. Have new users arrive in parallel rather than serially (after seeding the wiki with content). The Boeing workgroup wiki stagnated when it was initially launched, and did not really take off until the wiki evangelist organized a “wiki party” (snacks provided) where people could come and get started on documenting their past projects. Others call this a Barn Raising. This sort of event can give potential users both a bit of peer (or management) pressure and the technical support necessary to start adding content. It also serves the valuable additional role of giving community members a chance to express their opinions about how the tool can and should be used, and to negotiate group norms and expectations for the new wiki.

    Even if you can’t physically get people together – for the mailing list wikis, this was not practical – it’s good to have them arrive at the same time, and to have both some existing content and suggestions for future additions ready and waiting for them.

  5. Make your contributors feel appreciated. Wikis typically don’t offer the same affordances for showing gratitude as a threaded discussion, where it is usually easy to add a quick “thank you” reply or to acknowledge someone else’s contribution while adding new information. With wikis, thanks are sometimes rare, and users may see revisions to content they added as a sign that they did something wrong, rather than as a sign that they provided a good starting point to which others added. It can make a big difference to acknowledge particularly good writeups publicly in a staff meeting or on the mailing list, or to privately express thanks or give a compliment.

Continue reading ›