

CHI Gamification: We should know better.

Thank a presenter, get swag, sell your soul. This year, CHI added gamification in the form of “missions” and prizes. Unfortunately, for a community that really ought to know something about good vs. bad gamification, the game implemented represented much of the worst of gamification — not just shallow and meaningless, but potentially destructive.

Consider the enticement, to the right, which appeared outside of all of the talk rooms. So, once I get past the flyer yelling at me, I learn that I can earn my way toward useless swag by thanking a presenter. There were several like this — thank a committee member, or “find an entry in a student competition and talk to the students about their experiences” — things that are good to do as a member of the community, and for which there are (or should be) sufficient intrinsic motivators.

But now? Now your thanks are just a tool to get stuff for yourself. And as presenter or committee member, you’re left wondering if a thanks is sincere or just because someone needs to check off another badge on their way to status or a tchotchke. Sigh.

About the only good thing I can say about it is that it didn’t appear to catch on. If you’re organizing CHI 2013 or another conference: please, let’s not have any more of this BS.

CHI Highlights: General

Okay, the third and final (?) set of CHI Highlights, consisting of brief notes on some other papers and panels that caught my attention. My notes here will, overall, be briefer than in the other posts.

More papers

  • We’re in It Together: Interpersonal Management of Disclosure in Social Network Services
    Airi Lampinen, Vilma Lehtinen, Asko Lehmuskallio, Sakari Tamminen

    I possibly should have linked to this one in my post about social networks for health, as my work in that area is why this paper caught my attention. Through a qualitative study, the authors explore how people manage their privacy and disclosures on social network sites.

    People tend to apply their own expectations about what they’d like posted about themselves to what they post about others, but sometimes negotiate and ask others to take posts down, and this can lead to new implicit or explicit rules about what gets posted in the future. They also sometimes stay out of conversations when they know that they are not as close to the original poster as the other participants (even if they have the same “status” on the social network site). Even offline behavior is affected: people might make sure that embarrassing photos can’t be taken so that they cannot be posted.

    To regulate boundaries, some people use different services targeted at different audiences. While many participants believed that it would be useful to create friend lists within a service and to target updates to those lists, many had not done so (quite similar to my findings with health sharing: people say they want the feature and that it is useful, but just aren’t using it. I’d love to see Facebook data on what percent of people are actively using lists.) People also commonly worded posts so that those “in the know” would get more information than others, even if all saw the same post.

    Once aversive content had been posted, however, it was sometimes better for participants to try to repurpose it to be funny or a joke, rather than to delete it. Deletions say “this was important,” while adding smilies can decrease its impact and say “oh, that wasn’t serious.”

  • Utility of human-computer interactions: toward a science of preference measurement
    Michael Toomim, Travis Kriplean, Claus Pörtner, James Landay

    Many short-duration user studies rely on self-report data of satisfaction with an interface or tool, even though we know that self-report data is often quite problematic. To measure the relative utility of design alternatives, the authors place them on Mechanical Turk and measure how many tasks people complete on each alternative under differing pay conditions. A design that gets more work for the same or less pay implies more utility. Because of things like the small pay effect and its ability to crowd out intrinsic rewards, I’m curious about whether this approach will work better for systems meant for work rather than for fun, as well as just how far it can go – but I really do like the direction of measuring what people actually do rather than just what they say.
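
Toomim et al.’s measurement idea reduces, at its simplest, to a dominance check: a design that elicits at least as much work at every pay level, and more at some, has higher revealed utility. Here is a minimal sketch of that comparison logic with fabricated numbers (the paper’s actual analysis is more sophisticated than this):

```python
# Toy comparison of two interface designs via paid task completion, in
# the spirit of Toomim et al. All numbers are fabricated:
# tasks[design][pay_in_cents] = mean tasks completed per worker.
tasks = {
    "design_a": {1: 12.0, 2: 19.5, 4: 31.0},
    "design_b": {1: 17.5, 2: 24.0, 4: 33.5},
}

def dominates(a, b):
    """True if design `a` elicits at least as much work as `b` at every
    pay level and strictly more at some level; more work for the same
    pay is read as higher utility for the people using `a`."""
    assert a.keys() == b.keys()
    return all(a[p] >= b[p] for p in a) and any(a[p] > b[p] for p in a)

if dominates(tasks["design_b"], tasks["design_a"]):
    print("design_b has higher revealed utility than design_a")
```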

Perhaps it is my love of maps, or perhaps it is my senior capstone project at Olin, but I have a soft spot for location-based services work, even if I’m not currently doing it.

  • Opportunities exist: continuous discovery of places to perform activities
    David Dearman, Timothy Sohn, Khai N. Truong

  • In the best families: tracking and relationships
    Clara Mancini, Yvonne Rogers, Keerthi Thomas, Adam N. Joinson, Blaine A. Price, Arosha K. Bandara, Lukasz Jedrzejczyk, Bashar Nuseibeh

    I’m wondering more and more if there’s an appropriate social distance for location trackers: with people who are already very close, it is smothering, while with people who are too distant, it is creepy. Thinking about my preferences for Latitude, I wouldn’t want my family or socially distant acquaintances on there, but I do want friends who I don’t see often enough on there.

The session on Affective Computing had lots of good stuff.

  • Identifying emotional states using keystroke dynamics
    Clayton Epp, Michael Lippold, Regan L. Mandryk

    A fairly reliable classifier for emotions, including confidence, hesitance, nervousness, relaxation, sadness, and tiredness, based on analysis of typing rhythms on a standard keyboard. One thing I like about this paper is that it opens up a variety of systems ideas, ranging from fairly simple to quite sophisticated. I’m also curious whether this can be extended to touch screens, which seem like a much more difficult environment. (A toy sketch of this style of classifier appears after this list.)

  • Affective computational priming and creativity
    Sheena Lewis, Mira Dontcheva, Elizabeth Gerber

    In a Mechanical Turk-based experiment, showing people a picture that induced positive affect increased the quality of ideas generated — measured by originality and creativity — in a creativity task. Negative priming reduced quality relative to positive or neutral priming. I’m very curious to see if this result is sustainable over time, with the same image or with different images, or in group settings (particularly considering the next paper in this list!).

  • Upset now?: emotion contagion in distributed groups
    Jamie Guillory, Jason Spiegel, Molly Drislane, Benjamin Weiss, Walter Donner, Jeffrey Hancock

    An experiment on emotional contagion. Introducing negative emotion led others to be more negative, but it also improved the group’s performance.

  • Emotion regulation for frustrating driving contexts
    Helen Harris, Clifford Nass
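
Going back to the keystroke dynamics paper: as a sketch of the general technique, one can turn a window of key press/release timestamps into rhythm features and train an off-the-shelf classifier. The features below (key hold times and pauses between consecutive keys) are standard keystroke-dynamics features rather than necessarily Epp et al.’s exact set, and all data and labels are fabricated:

```python
# Toy emotion-from-typing-rhythm classifier, loosely in the spirit of
# Epp et al.; real training data would pair keystroke logs with
# self-reported emotional states.
from statistics import mean
from sklearn.ensemble import RandomForestClassifier

def rhythm_features(events):
    """events: list of (key, press_ms, release_ms) for one typing window."""
    holds = [r - p for _, p, r in events]          # key dwell times
    pauses = [events[i + 1][1] - events[i][2]      # gaps between
              for i in range(len(events) - 1)]     # consecutive keys
    return [mean(holds), max(holds), mean(pauses), max(pauses)]

# Two fabricated windows: a steady "relaxed" typist and a jerkier
# "nervous" one.
relaxed = [("h", 0, 95), ("i", 180, 270), ("!", 360, 450)]
nervous = [("h", 0, 60), ("i", 400, 455), ("!", 470, 530)]
X = [rhythm_features(relaxed), rhythm_features(nervous)]
y = ["relaxed", "nervous"]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([rhythm_features(relaxed)]))  # ['relaxed']
```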

We’ve seen a lot of research on priming in interfaces lately, most often in lab- or Mechanical Turk-based studies. I think it’ll start to get very interesting when we test whether priming also works in long-term field deployments, or when people are using a system at their own discretion for their own needs — something that has been harder to do in many past studies of priming.

I didn’t make it to these next few presentations, but having previously seen Steven talk about this work, it’s definitely worth a pointer. The titles nicely capture the main points.

The same is true for Moira’s work:

Navel-Gazy Stuff

  • RepliCHI: issues of replication in the HCI community
    Max L. Wilson, Wendy Mackay, Dan Russell, Ed Chi, Michael Bernstein, Harold Thimbleby

    Discussion about the balance between reproducing other studies in different contexts or prior to making “incremental” advances vs. a focus on novelty and innovation. Nice summary here. I think the panelists and audience were generally leaning toward increasing the use of replication + extension in the training of HCI PhD students. I think this would be beneficial, in that it can encourage students to learn how to write papers that are reproducible, introduce basic research methods by doing, and may often lead to some surprising and interesting results. There was some discussion of whether there should be a repli.chi track alongside an alt.chi track. I’m a lot less enthusiastic about that – if there’s a research contribution, the main technical program should probably be sufficient, and if not, why is it there? I do understand that there’s an argument to be made that it’s worth doing as an incentive, but I don’t think that is a sufficient reason. Less addressed by the panel was that a lot of HCI research isn’t of a style that lends itself to replication, though Dan Russell pointed out that some studies must also be taken on faith, since we don’t all have our own Google or LHC.

  • The trouble with social computing systems research
    Michael Bernstein, Mark S. Ackerman, Ed Chi, Robert C. Miller

    An alt.chi entry into the debate about perceived issues with underrepresentation of systems work in CHI submissions and with how CHI reviewers treat systems work. As someone who doesn’t do “real” systems work — the systems I build are usually intended as research probes rather than contributions in themselves — I’ve been reluctant to say much on this issue for fear that I would talk more than I know. That said, I can’t completely resist. While I agree that there are often issues with how systems work is presented and reviewed, I’m not completely sympathetic to the argument in this paper.

    Part of my skepticism is that I’ve yet to be shown an example of a good systems paper that was rejected. This is not to say that these do not exist; the authors of the paper are speaking from experience and do great work. But the majority of systems rejections I have seen have come through reviewing, and the decisions have mostly seemed reasonable. Most common are papers that make a modest (or even very nice) systems contribution, tack on a poorly executed evaluation, and then make claims based on the evaluation that it just doesn’t support. I believe at least one rejection would have been accepted had the authors just left out the evaluation altogether, and I think a bad evaluation and unsupported claims should doom a paper unless they are excised (which may be possible with the new CSCW review process).

    I was a little bit frustrated because Michael’s presentation seemed to gloss over the authors’ responsibilities to explain the merits of their work to the broader audience of the conference and to discuss biases introduced by snowball samples. The latter point is better addressed in the paper, but I still feel that the paper deemphasizes authors’ responsibility in favor of reviewers’ responsibility.

    The format for this presentation was also too constrained to have a particularly good discussion (something that was unfortunately true in most sessions with the new CHI time limits). The longer discussion about systems research in the CSCW and CHI communities that followed one of the CSCW Horizons sessions this year was more constructive and more balanced, perhaps because the discussion was anchored at least partially on the systems that had just been presented.

  • When the Implication Is Not to Design (Technology)
    Eric P.S. Baumer, M. Six Silberman

    A note on how HCI can improve the way we conduct our work, particularly the view that there are problems out there and technical solutions to solve them. The authors argue that it may be better to think of these as conditions and interventions. Some of the recommendations they make for practice: value the implication not to design technology (in some situations computing technology may be inappropriate), explicate unpursued avenues (explain alternative interventions and why they were not pursued), consider technological extravention (are there times when technology should be removed?), report more than negative results (why and in what context did the system fail, and what does that failure mean?), and don’t stop building – just be more reflective about why that building is occurring.

CHI Highlights: Diversity, Discussion, and Information Consumption

For my second post on highlights from CHI this year, I’m focusing on papers related to opinion diversity and discourse quality.

  • Normative influences on thoughtful online participation
    Abhay Sukumaran, Stephanie Vezich, Melanie McHugh, Clifford Nass

    Two lab experiments on whether it is possible to foster more thoughtful commenting and participation in online discussion forums by priming thoughtful norms. The first tested the effects of the behavior of other participants in the forum. The dependent variables were comment length, time taken to write the comments, and number of issue-relevant thoughts. Not surprisingly, being exposed to other thoughtful comments led people to make more thoughtful comments themselves. One audience member asked whether this would break down with just one negative or less thoughtful comment (much as merely one piece of litter seems to break down antilittering norms).

    The second study tested effects of visual, textual, and interaction design features on the same dependent variables. The manipulations included a more subdued vs. more playful visual design, differing CAPTCHAs (words positively correlated with thoughtfulness in the thoughtful condition and words negatively correlated with thoughtfulness in the unthoughtful condition), and different labels for the comment box. The design intended to provoke thoughtfulness did correspond to more thoughtful comments, suggesting that it is possible, at least in the lab, to design sites to prompt more thoughtful comments. For this second study in particular, I’m curious if these measures only work in the short term or if they would work in the long term and about the effects of each of the specific design features.

  • I will do it, but I don’t like it: User Reactions to Preference-Inconsistent Recommendations
    Christina Schwind, Jürgen Buder, Friedrich W. Hesse

    This paper actually appeared in a health session, but I found that it spoke much more to the issues my colleagues and I are confronting in the BALANCE project. The authors begin with the observation that most recommender systems are intended to produce content that their users will like, but that this can be problematic. In the health and wellness domain, people sometimes need to hear information that might disagree with their perspective or currently held beliefs, and so it can be valuable to recommend disagreeable information. In this Mechanical Turk-based study, subjects were equally likely to follow preference-consistent and preference-inconsistent recommendations. Following preference-inconsistent recommendations did reduce confirmation bias, but people were happier to see preference-consistent recommendations. This raises an important question: subjects may have followed the recommendation the first time, but now that they know this system gives recommendations they might not like, will they follow the recommendations less often in the future, or switch to another system altogether?

  • ConsiderIt: improving structured public deliberation
    Travis Kriplean, Jonathan T. Morgan, Deen Freelon, Alan Borning, Lance Bennett

    I really like the work Travis is doing with Reflect and ConsiderIt (which powers the Living Voters Guide) to promote more thoughtful listening and discussion online, so I was happy to see this WiP and am looking forward to seeing more!

  • Computing political preference among twitter followers
    Jennifer Golbeck, Derek L. Hansen

    This work uses follow data for Congresspeople and others on Twitter to assess the “political preference” (see comment below) of Twitter users and the media sources they follow. This approach and the information it yields has some similarities to Daniel Zhou’s upcoming ICWSM paper and, to a lesser extent, Souneil Park’s paper from CSCW this year.

    One critique: despite ample selective exposure research, I’m not quite comfortable with this paper’s assumption that political preference maps so neatly to political information preference, partly because I think this may be an interesting research question: do people who lean slightly one way or the other prefer information sources that may be more biased than they are? (or something along those lines)
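
To make the follow-based scoring idea concrete, here is a toy version that assumes we already have ideology scores for a seed set of accounts (e.g., Congresspeople scored from their voting records); a user is scored by the seed accounts they follow, and a media account by its followers. Golbeck and Hansen’s actual method differs in its details, and all data here are fabricated:

```python
# Toy follow-based political preference scoring. Scores run from
# -1 (left) to +1 (right); all accounts and scores are made up.
seed_scores = {"@rep_left": -0.8, "@rep_center": 0.1, "@rep_right": 0.9}

follows = {  # user -> accounts they follow
    "alice": {"@rep_left", "@rep_center", "@local_news"},
    "bob": {"@rep_right", "@rep_center", "@local_news"},
}

def user_score(user):
    """Average the known scores of the seed accounts a user follows."""
    known = [seed_scores[a] for a in follows[user] if a in seed_scores]
    return sum(known) / len(known) if known else None

def audience_score(outlet):
    """Score a media account by the mean score of its followers."""
    scores = [user_score(u) for u, fs in follows.items() if outlet in fs]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else None

print(user_score("alice"))            # -0.35: leans left
print(audience_score("@local_news"))  # 0.075: mixed audience
```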

In addition to these papers, Ethan Zuckerman’s closing plenary, Desperately Seeking Serendipity, touched on the topics of serendipity and homophily extensively. Zuckerman starts by suggesting that the reason people like to move to cities – even at times when cities were really quite horrible places – is, yes, for more opportunities and choices, but also “to encounter the people you couldn’t encounter in your rural, disconnected lifestyle… to become a cosmopolitan, a citizen of the world.” He goes on, “if you wanted to encounter a set of ideas that were radically different than your own, your best bet in an era before telecommunications was to move to a city.” There are reasons to question this idea of cities as a “serendipity engine,” though: even people in urban environments have extremely predictable routines and just don’t go all that many places. Encounters with diverse others may not be as common as idealized.

He then shifts gears to discuss what people encounter online. He walks through the argument that the idea of a Freshman Fishwrap or Daily Me is possibly quite harmful as it allows people to filter to the news that they want. Adding in social filters or getting news through our friends can make this worse. While Sunstein is concerned about this leading to polarization within the US, Zuckerman is more concerned that it leads people to see only news about where they are and less news about other places or from outside perspectives. This trend might lead people to miss important stories.

I tend to agree with the argument that surfacing coincidences or manufacturing serendipity is an incredibly powerful capability of current technology. Many of the approaches that the design community has taken to achieve this, though, are probably not the kind of serendipity Zuckerman is looking for. I love Dopplr’s notifications that friends are also in town, but the time I spend with them or being shown around by them is time in which I’m less likely to have a chance encounter with someone local or a traveler from elsewhere. The ability to filter reviews by friends may make for more accurate recommendations, but I’m also less likely to end up somewhere a bit different. Even serendipity has been repurposed to support homophily.

Now, it might be that the definition of serendipity some of the design community has been using isn’t quite right. As Zuckerman notes, serendipity usually means “happy accident” now – it has become a synonym for coincidence – and the sagacity part of the definition has been lost. Zuckerman returns to the city metaphor, arguing for a pedestrian-level view. Rather than building tools for only efficiency and convenience, build tools and spaces that maximize the chances to interact and mix. Don’t make filters hidden. Make favorites of other communities visible, not just the user’s friends. Zuckerman elegantly compares this last feature to the traces in a city: one does not see traces left just by one’s friends, no, but traces left by other users of the space, and this gives people a chance to wander from the path they were already on. One might also overlay a game on a city, to encourage people to explore more deeply or venture to new areas.

While I like these ideas, I’m a little concerned that they will lead to somewhat superficial exposure to the other. People see different others on YouTube, Flickr, or in the news, and yes, some stop and reflect, others leave comments that make fun of them, and many others just move on to the next one. A location-based game might get people to go to new places, but are they thinking about what it is like to be there, or are they thinking about the points they are earning? This superficiality is something I worry about in my own work to expose people to more diverse political news – they may see it, but are they really considering the perspective or gawking at the other side’s insanity? Serendipity may be necessary, but I question whether it is sufficient. We also need empathy: technology that can help people listen and see others’ perspectives and situations. Maybe empathy is part of the lost idea of sagacity that Zuckerman discusses — a sort of emotional sagacity — but whatever it is, I need to better know how to design for it.

For SI and UM students who really engage with this discussion and the interweaving of cities, technology, and flows, I strongly, strongly recommend ARCH 531 (Networked Cities).

CHI Highlights: Persuasive Tech and Social Software for Health and Wellness

I want to take a few minutes to highlight a few papers from CHI 2011, spread across a couple of posts. There was lots of good work at this conference. This post will focus on papers in the persuasive technology and social software for health and wellness space, which is the aspect of my work that I was thinking about most during this conference.

  • Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight
    Stephen Purpura, Victoria Schwanda, Kaiton Williams, William Stubler, Phoebe Sengers

    Fit4life is a hypothetical system that monitors users’ behavior using a variety of tactics in the Persuasive Systems Design model. After describing the system (in such a way that, as someone in the room commented, the audience “looked horrified”), the authors transition to a reflection on persuasive technology research and design, and how such a design can “spiral out of control.” As someone working in this space, I found the authors hit on some of the aspects that leave me a bit unsettled: persuasion vs. coercion, individual good vs. societal good, whether people choose their own viewpoints or are pushed to adopt those of the system designers, measurement and control vs. personal experiences and responsibility, and increased sensing and monitoring vs. privacy and surveillance and the potential to eliminate boundaries between front stage and back stage spaces. The authors also discuss how persuasive systems with very strong coaching features can reduce the opportunity for mindfulness and for their users to reflect on their own situation: people can simply follow the suggestions rather than weigh the input and decide among the options.

    This is a nice paper and a good starting point for lots of discussions. I’m a bit frustrated that it was presented in a session concurrent with the session on persuasive technology for health. As such, it probably did not (immediately) reach the audience that would have led to the most interesting discussion about the paper. In many ways, it argued for a “think about what it is like to live with” rather than “pitch” approach to thinking about systems. I agree with a good bit of the potential tensions the authors highlight, but I think they are a bit harder on the persuasive tech community than is appropriate: in general, persuasive tech folks are aware that we are building systems intended to change behavior and that this is fraught with ethical considerations, while people outside of the community often do not think of their systems as persuasive or coercive, even when they are (again, I mean this in a Nudge, choice-environments sense). On the other hand, one presentation at Persuasive last year did begin with the statement “for the sake of this paper, set aside ethical concerns” (paraphrased), so clearly there is still room for improvement.

  • Designing for peer involvement in weight management
    Julie Maitland, Matthew Chalmers

    Based on interviews with nineteen individuals, the authors present an overview of approaches for involving peers in technology for weight management. These approaches fall into passive involvement (norms and comparisons) and five types of active involvement (table 1 in the paper): obstructive (“don’t do it”), inductive (“you should do it”), proactive (“do it with me”), supportive (“I’ll do it too”), and cooperative (“let’s do it together”). The last category includes competition, though there was some disagreement during the Q&A about whether that is the right alignment. The authors also find gender- and role-based differences in the perceived usefulness of peer-based interventions, such as differences in attitudes about competition.

    Designers could use these types of engagement to think about what their applications are supporting well or not so well. Here, I wish the authors had gone a bit further in linking the types of involvement to the technical mechanisms or features of applications and contexts, as I think that would be a better jumping-off point for designers. For those thinking about how to design social support into online wellness interventions, I think this paper, combined with Skeels et al., “Catalyzing Social Support for Breast Cancer Patients”, CHI 2010, and our own paper from CSCW 2011 (“‘It’s not that I don’t have problems, I’m just not putting them on Facebook’: Challenges and Opportunities in Using Online Social Networks for Health”), offers a nice high-level overview of some of the challenges and opportunities for doing so.

  • Mining behavioral economics to design persuasive technology for healthy choices
    Min Kyung Lee, Sara Kiesler, Jodi Forlizzi

    A nice paper that evaluates different persuasive approaches for workplace snack selection. These include:

    • default choice: a robot showing all snack choices with equal convenience or the healthy one more visibly, or a website that showed all snack choices (in random order) or that paginated them, with healthy choices shown on the first page.
    • planning: asking people to order a snack for tomorrow rather than select at the time of consumption.
    • information strategy: showing calorie counts for each snack.

    As one would expect, the default choice strategy was highly effective in increasing the number of people who chose the healthy snack (apples) rather than the unhealthy snack (cookies). The planning strategy was effective among people who had a healthy snacking lifestyle, while those who snacked unhealthily continued to choose cookies. Interestingly, the information strategy had no effect on unhealthy snackers and actually led healthy snackers to choose cookies more than they otherwise would have. The authors speculate that this is either because the healthy snackers overestimate the caloric value of cookies in the absence of information (and thus avoid them more), or because considering the healthy apple was sufficiently fulfilling even if they ultimately chose the cookie.

    Some questions the study leaves open: Would people behave the same if they had to pay for the snacks? What would happen in a longer-term deployment? What would have happened if the cookies were made the default, particularly for otherwise healthy snackers?

  • Side effects and “gateway” tools: advocating a broader look at evaluating persuasive systems
    Victoria Schwanda, Steven Ibara, Lindsay Reynolds, Dan Cosley

    Interviews with 20 Wii Fit users reveal side effects of its use: some stop using it because it did not work, while others stop because they move on to other, preferred fitness activities (abandonment as success); a tension between whether the Fit is viewed as a game or an exercise tool (people rarely view it as both); and negative emotional impacts (particularly frustration when the system misinterpreted some data, such as weight gains). One suggestion the authors propose is that behavior change systems might start with activities that better resemble games but gradually transition users to activities with fewer game-like elements, eventually weaning users off of the system altogether. In practice, I’m not sure how this would work, but I like this direction because it gets at one of my main critiques of gamification: take away the game and its incentives (which may distract from the real benefits of changing one’s behavior) and the behavior reverts quite quickly.

  • Means based adaptive persuasive systems
    Maurits Kaptein, Steven Duplinsky, Panos Markopoulos

    Lab experiment evaluating the effects of using multiple sources of advice (single expert or consensus of similar others) at the same time, disclosing that advice is intended to persuade, and allowing users to select their source of advice. (This is framed more generally as about persuasive systems, but I think the framing is too broad: it’s really a study about advice.) Results: people are more likely to follow advice when they choose the source, people are less likely to follow advice when they are told that it is intended to persuade, and when shown expert advice and consensus advice from similar others, subjects were less likely to follow the advice than when they were only shown expert advice — regardless of whether the expert and consensus advice concurred with each other. This last finding is surprising to me and to the authors, who suggest that it may be a consequence of the higher cognitive load of processing multiple sources of advice; I’d love to see further work on this.

  • Opportunities for computing technologies to support healthy sleep behaviors
    Eun Kyoung Choe, Sunny Consolvo, Nathaniel F. Watson, Julie A. Kientz

    An aggregation of a literature review, interviews with sleep experts, a survey of 230 individuals, and interviews with 16 potential users, to learn about opportunities and challenges for designing sleep technologies. The work leads to a design framework that considers the goal of the individual using the system, the system’s features, the source of the information supporting the design choices made, the technology used, the stakeholders involved, and the input mechanism. During the presentation, I found myself thinking a lot about two things: (1) the value of design frameworks and how to construct a useful one (I’m unsure of both) and (2) how this stacks up against Julie’s recent blog post that is somewhat more down on the opportunities of tech for health.

  • How to evaluate technologies for health behavior change in HCI research
    Predrag Klasnja, Sunny Consolvo, Wanda Pratt

    The authors argue that evaluating behavior change systems based solely on whether they changed the behavior is not sufficient, and often infeasible. Instead, they argue, HCI should focus on whether systems or features effectively implement or support particular strategies, such as self-monitoring or conditioning, which can be measured in shorter-term evaluations.

    I agree with much of this. I think that the more useful HCI contributions in this area speak to which particular mechanisms or features worked, why and how they worked, and in what context one might expect them to work. Contributions that throw the kitchen sink of features at a problem and do not get into the details of how people reacted to the specific features and what the features accomplished may tell us that technology can help with a condition, but do not, in general, do a lot to inform the designers of other systems. I also agree that shorter-term evaluations are often able to show that a particular feature is or is not working as intended, though longer-term evaluations are appropriate to understand whether it continues to work. I am also reminded of the gap between the HCI community and the sustainability community pointed out by Froehlich, Findlater, and Landay at CHI last year, and fear that deemphasizing efficacy studies and RCTs will limit the ability of the HCI community to speak to the health community. Someone is going to have to do the efficacy studies, and the HCI community may have to carry some of this weight in order for our work to be taken seriously elsewhere. Research can make a contribution without showing health improvements, but if we ignore the importance of efficacy studies, we imperil the relevance of our work to other communities.

  • Reflecting on pills and phone use: supporting awareness of functional abilities for older adults
    Matthew L. Lee, Anind K. Dey

    A four-month deployment of a system for monitoring medication taking and phone use in the homes of two older adults. The participants sought out anomalies in the recorded data; when they found them, they generally trusted the system and focused on explaining why the anomaly might have happened, turning first to their memory of the event and then to going over their routines or other records such as calendars and diaries. I am curious whether this trust would extend to a purchased product rather than one provided by the researchers (if so, this could be hazardous with an unreliable system); I could see arguments for it going either way.

    The authors found that these systems can help older adults remain aware of their functional abilities and helped them better make adaptations to those abilities. Similar to what researchers have recommended for fitness journals or sensors, the authors suggest that people be able to annotate or explain discrepancies in their data and be able to view it jointly. They also suggest highlighting anomalies and showing them with other available contextual information about that date or time. (A toy sketch of this kind of anomaly highlighting appears after this list.)

  • Power ballads: deploying aversive energy feedback in social media
    Derek Foster, Conor Linehan, Shaun Lawson, Ben Kirman

    I generally agree with Sunny Consolvo: feedback and consequences in persuasive systems should generally range from neutral to positive, and I have been reluctant (colleagues might even say “obstinate”) about including negative feedback in GoalPost or Steps. Julie Kientz’s work, however, finds that certain personalities think they would respond well to negative feedback. This work in progress tests negative (“aversive”) feedback: Facebook posts pairing songs with the statement that the participant was using lots of energy, in a pilot with five participants. The participants seemed to respond okay to the posts — which are, in my opinion, pretty mild and not all that negative — and often commented on them. The authors interpret this as aversive feedback not leading to disengagement, but I think that’s a bit too strong of a claim to make on this data: participants, though unpaid, had been recruited to the study and likely felt some obligation to follow through to its end in a way that they would not for a commercially or publicly available system, and, with that feeling, may have commented out of a need to publicly explain or justify the usage shown in the posts. The last point isn’t particularly problematic, as such reflection may be useful. Still, this WiP and the existence of tools like Blackmail Yourself (which *really* hits at the shame element) do suggest that there is more work needed on the efficacy of public, aversive feedback.

  • Descriptive analysis of physical activity conversations on Twitter
    Logan Kendall, Andrea Civan Hartzler, Predrag Klasnja, Wanda Pratt

    In my work, I’ve heard a lot of concern about posting health-related status updates and about seeing similar status updates from others, but I haven’t taken a detailed look at the status updates that people are currently making, which this WiP makes a start on for physical activity posts on Twitter. By analyzing the results of queries for “weight lifting”, “Pilates”, and “elliptical”, the authors find posts that show evidence of exercise, plans for exercise, attitudes about exercise, requests for help, and advertisements. As the authors note, the limited search terms probably lead to a lot of selection bias, and I’d like to see more information about posts coming from automated sources (e.g., FitBit), as well as how people reply to the different genres of fitness tweets.

  • HappinessCounter: smile-encouraging appliance to increase positive mood
    Hitomi Tsujita, Jun Rekimoto

    Fun yet concerning alt.chi work on pushing people to smile in order to increase positive mood. With features such as requiring a smile to open the refrigerator, positive feedback (lights, music) in exchange for smiles, automatic sharing of photos of facial expressions with friends or family members, and automatic posting of whether or not someone is smiling enough, this paper hits many of the points about which the Fit4life authors raise concerns.
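
Returning to Lee and Dey’s suggestion about highlighting anomalies: here is a toy sketch of what that could look like for medication-taking data. The routine model (median time plus a fixed window) and the data are my own invention, not the paper’s:

```python
# Toy anomaly highlighting for medication-taking logs: flag missed
# doses and doses taken far from the person's typical time, so they
# can be reviewed against memory, calendars, or diaries.
from statistics import median

def flag_anomalies(times_by_day, window_h=2.0):
    """times_by_day: {day: hour dose was taken, or None if missed}."""
    taken = [h for h in times_by_day.values() if h is not None]
    typical = median(taken)  # robust to the outliers we want to catch
    return [day for day, h in times_by_day.items()
            if h is None or abs(h - typical) > window_h]

log = {"Mon": 8.0, "Tue": 8.5, "Wed": 8.25, "Thu": 13.0, "Fri": None}
print(flag_anomalies(log))  # ['Thu', 'Fri']
```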

The panel I co-organized with Margaret E. Morris and Sunny Consolvo, “Facebook for health: opportunities and challenges for driving behavior change,” and featuring Adam D. I. Kramer, Janice Tsai, and Aaron Coleman, went pretty well. It was good to hear what everyone, both familiar and new faces, is up to and working on these days. Thanks to my fellow panelists and everyone who showed up!

There was a lot of interesting work — I came home with 41 papers in my “to read” folder — so I’m sure that I’m missing some great work in the above list. If I’m missing something you think I should be reading, let me know!

news

I’ve had a busy couple of months. Among the highlights:

  • My team’s submission for the CHI student design competition was accepted. Like a lot of good news, this begets more work, but it’s fun and I’m really looking forward to the conference.
  • I’ve been accepted to the PhD program at SI. I’m pretty excited; among the programs at various schools, I haven’t seen a better fit for my interests. My interests are broad, though, and it is going to take some work and reaching beyond SI to make sure I get what I want out of the PhD. Talking that through may be a future post.
  • I went back to Boston for a short weekend to interview candidates for Olin. Going back was strange. I couldn’t help but feel like I should be picking classes and settling into a dorm. It was a good time, though, and great to see people.

Boeing work continues to be a good complement to my SI activities. I’m having a lot of fun with my current portfolio of projects. I’ll admit, though, that after two trips in the last month, I’m feeling a bit spread thin.