Tag Archives: wellness

iOS 8: quick thoughts on Health and HealthKit

Since Apple unveiled the Health and HealthKit features as part of iOS 8 yesterday, I’ve had a few people ask my thoughts. For those working with personal informatics for health and wellness, I think there’s a lot of reason to be excited, but some parts of the announcement make me skeptical about the impact of this application on its own.

One aspect of the announcement that discourages me a bit is the expectation that a dashboard of calories burned, sleep, physical activity, cholesterol, etc. is a “really accurate answer” to “how’s your health?”, and that it adds up to a “clear and current overview of your health.” It’s not — at least not for the typical self-tracker. Though scales aren’t exactly new technology, people misinterpret or read too much into small changes in measurements. Even for many enthusiastic quantified selfers, this data is difficult to interpret and turn into something actionable.

Yesterday’s announcement gave scant mention to this challenge of data interpretation. To the extent that it was discussed, the focus was on sharing data with health providers, with a strong emphasis on Apple’s partnerships with the Mayo Clinic and electronic health record juggernaut EPIC.

For some conditions, such as diabetes, health providers have already developed a strong practice of using the sort of data that Apple proposes people store in HealthKit. Remote monitoring also has the potential to reduce re-admittance rates for patients following heart failure. In these cases and others where medical teams already have experience using patient collected data to improve care, HealthKit and similar tools have the potential to immediately improve the patient experience and reduce user burdens, while also potentially reducing the costs and barriers to integrating data from new devices into care.

Yet for some of the other most prevalent chronic conditions, such as weight management, there is not yet good practice around integrating the data that self-trackers collect into medical care. Weight, sleep, diet, and physical activity are some of the most commonly tracked health outcomes and behaviors. In my research, colleagues and I have talked with both patients and providers who want to, and some who try to, use this data to provide better care, but face many non-technical barriers to doing so. Providers describe feeling pressed for time, doubting the reliability or completeness of the data, feeling overwhelmed by the quantity, or lacking the expertise to suggest specific lifestyle changes based on what they see. Patients describe bringing data to providers but only being more frustrated that the health providers are unable to use it.

For these potential uses, HealthKit or other data sharing platforms seem unlikely to improve care in the short term. What they will do, however, is reduce some of the technical barriers to building systems that help researchers, designers, and health providers learn how patient-collected data can best be used in practice, and to experiment with variations on them. As a researcher working on these questions, this is an exciting development.

Individual trackers, their support networks, and the applications they use also will continue to have an important role in making sense of health data. Self-tracking tools collect more types of data, with greater precision and frequency, than ever before. Your phone can now tell you how many steps you took each minute, but unless it helps you figure out how to get what you want out of that data, this added data is just added burden. Many people I’ve talked with have given up on wearing complicated fitness trackers because they get no value from being able to know their heart rate, step count, and assorted other readings minute-by-minute.

What we need, then, is applications that can help people make sense of their data, through self-experimentation or exploration from which they can draw actionable inferences. Frank Bentley and colleagues have done some great work on this with Health Mashups, and my colleagues and I have been working to design more actionable summaries of data available in lifelog applications such as Moves. Jawbone’s UP Coffee app is a great commercial example of giving people more actionable recommendations.

For these, HealthKit is again exciting. It means that application designers can draw on more data streams without requiring users to manually enter the data, and without iOS developers having to implement support for importing from the myriad of health trackers out there. For the end user, it means that one can switch the interpretation application or tracking technology one uses without having to start over from a blank slate of data (changing from HealthKit to another data integration platform, however, may be another story). So, here too, HealthKit has the potential to enable a lot of innovation, even if the app itself isn’t going to help anyone be healthier.
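To make the decoupling concrete, here is a minimal Python sketch of the architectural idea — not the HealthKit API itself; the class and method names are invented for illustration. Trackers write samples into a shared store, and any interpretation app can read the accumulated history, so switching apps does not mean starting over:

```python
from collections import defaultdict
from datetime import date

class HealthStore:
    """Toy shared health-data store: trackers write, any app reads."""
    def __init__(self):
        self._samples = defaultdict(list)  # metric name -> list of (day, value)

    def write(self, metric, day, value):
        """Called by whichever tracking app collected the sample."""
        self._samples[metric].append((day, value))

    def read(self, metric):
        """Any interpretation app sees the full history, sorted by day."""
        return sorted(self._samples[metric])

# Two hypothetical trackers write to the same store...
store = HealthStore()
store.write("steps", date(2014, 6, 2), 8450)   # a pedometer app
store.write("steps", date(2014, 6, 3), 10120)  # a different wearable's app

# ...and a newly installed interpretation app can immediately
# compute over the combined history.
total = sum(value for _, value in store.read("steps"))
```

The point is the indirection: readers and writers only share the store, so neither side needs to know the other exists.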

So, HealthKit is exciting — but most users are still a long way from getting a lot of value from it. The best aspect of HealthKit may be that it reduces barriers to aggregating data about health factors and outcomes, and that it does so in a way that appears to enable people to retain reasonable control and ownership of their data.

Pervasive, Persuasive Health Challenges: Designing for Cessation of Use of the Intervention

A second area that has received too little attention is whether we, as designers, intend for people to stop using everyday health and wellness systems, and if so, what the optimal process for that is. In my own work (e.g., [1, 2]), I have focused largely on systems that people might use indefinitely, potentially for the rest of their lives. In doing so, I have focused on making applications that are simple and fast to use, so that people would have an easier time starting and continuing to use them. Given common issues and challenges with adoption and initial adherence, as well as reduced use after the novelty effect wears off, it is no wonder that this particular challenge has thus far received little attention. More cynically, another barrier to this issue receiving much attention is the competing interest of the individual and commercial application/system providers: an individual may prefer to some day no longer need an application, but it is potentially much more lucrative for companies to have a customer for life.

It is, nevertheless, important. First, there may be times when designing systems to support temporary use may actually help some of the initial adoption and adherence problems: people might be willing to put up with a tedious process or a somewhat intrusive device if an application promises to teach them new skills and then be gone from their lives. Second, if we consider what it is like to live with persuasive systems, how many of us would want people to have lives that are carefully regulated and nudged by a myriad of systems, until the day we die [3]? And finally, might some persuasive health systems create an effect of learned helplessness in which applications, assuming the role of determining and recommending the most appropriate choices, actually reduce individuals’ competency to make these decisions in the absence of that support?

Anecdotally, many researchers have described high recidivism rates after the conclusion of an intervention, when the fitness sensor or diary, or the calorie counting tool, is no longer available to the former subjects (this has been observed with other types of interventions as well [4]). Why are these applications not helping individuals to develop good, robust fitness habits or competencies for healthy eating and at least keeping approximate track of calories? Would a study actually find worse post-intervention health habits among some participants?

To help imagine what we might build if we had a better understanding of how to create temporary health and wellness interventions, consider Schwanda et al.’s study of the Wii Fit [5]. Some participants stopped using the system when it no longer fit into their household arrangement or routine, others when they had unlocked all of the content and its activities became boring or repetitive, and others stopped because they switched to another, often more serious, fitness routine. From a fitness perspective, the first two reasons might be considered failures: the system was not robust to changes in life priorities or in living space, or it suffered a novelty effect. The third, though, is a fitness success (though possibly not a success for Nintendo, if the hope is that users would go on to buy the latest/greatest gaming product): participants “graduated” to other activities that potentially were more fulfilling or had still better health and wellness effects. Imagine if the design of the system had helped more users to graduate to these other activities before they became bored with it or before it no longer fit into their daily lives.

Returning to the examples of exercise and calorie diaries, what changes might make them better at instilling healthy habits? In the case of a pedometer application, could it start hiding activity data until participants guessed how many steps they had taken that day? Would such an interface change help people become more aware of their activity level without a device’s constant feedback? What if, after some period of use, users of calorie counters did not get feedback on the calories they had consumed per food until the end of the day? Would such activities support development of individuals’ health competencies better than tools that offer both ubiquitous sensing and feedback? How would such changes affect the locus of control and sense of self-efficacy of applications’ users?
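As a sketch of the “guess before you see” idea, here is the core feedback step of a hypothetical pedometer app. The function name and the 10% accuracy threshold are my own inventions, not from any deployed system; the point is only that the app scores the user’s estimate against the withheld value, so users practice estimating their own activity:

```python
def guess_feedback(actual_steps, guessed_steps):
    """Score a user's estimate of their own step count.

    The app withholds the measured count until the user guesses,
    then reveals the signed error and whether the guess fell within
    10% of the actual value (an arbitrary accuracy threshold).
    """
    error = guessed_steps - actual_steps
    within_10_percent = abs(error) <= 0.1 * actual_steps
    return error, within_10_percent
```

Tracking how the error shrinks (or doesn’t) over weeks of use would itself be a measure of whether the design is building awareness rather than dependence.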

These are some rough ideas – the medical community, perhaps because of a focus on controlling costs and/or a lower ability to integrate the interventions they design into daily life, has more history of evaluating interventions for post-intervention efficacy (e.g., [6], [7]). Other communities have a deeper understanding of what it takes to develop habits (e.g., [8], [9]) or to promote development. What does the HCI community stand to learn from these studies, and to what extent should our community conduct them as well?

  1. Munson SA, Consolvo S. 2012. Exploring Goal-setting, Rewards, Self-monitoring, and Sharing to Motivate Physical Activity, Pervasive Health 2012. [pdf]
  2. Munson SA, Lauterbach D, Newman MW, Resnick P. 2010. Happier Together: Integrating a Wellness Application Into a Social Network Site, Persuasive 2010. [pdf]
  3. Purpura S, Schwanda V, Williams K, Stubler W, Sengers P. 2011. Fit4Life: The Design of a Persuasive Technology Promoting Healthy Behavior and Ideal Weight, Proceedings of CHI 2011. [pdf]
  4. Harland J, White M, Drinkwater C, Chinn D, Farr L, Howel D. 1999. The Newcastle exercise project: a randomised controlled trial of methods to promote physical activity in primary care, BMJ 319: 829-832. [pubmed]
  5. Schwanda V, Ibara S, Reynolds L, Cosley D. 2011. Side effects and ‘gateway’ tools: advocating a broader look at evaluating persuasive systems, Proceedings of CHI 2011. [pdf]
  6. Bock BC, Marcus BH, Pinto BM, Forsyth LH. 2001. Maintenance of physical activity following an individualized motivationally tailored intervention, Annals of Behavioral Medicine 23(2): 79-87. [pubmed]
  7. Moore SM, Charvat JM, Gordon NH, Roberts BL, Pashkow F, Ribisl P, Rocco M. 2006. Effects of a CHANGE intervention to increase exercise maintenance following cardiac events, Annals of Behavioral Medicine 31(1): 53-62. [pubmed]
  8. Rothman AJ, Sheeran P, Wood W. 2009. Reflective and Automatic Processes in the Initiation and Maintenance of Dietary Change, Annals of Behavioral Medicine 38(S1): S4-S17. [pdf]
  9. Verplanken B. 2006. Beyond Frequency: Habit as Mental Construct, British Journal of Social Psychology 45(3): 639-656.

Pervasive, Persuasive Health Challenges: One-Time Behaviors

Our field has made great strides in addressing recurring, day-to-day behaviors and challenges: exercising more, regular medication adherence, applications for mood tracking and improvement, smoking cessation, and managing diet. The same might generally be said for persuasive technology, where the focus has often been on starting and then maintaining behaviors on a regular basis, such as helping people make day-to-day greener living choices through eco-feedback technology.

Are the lessons we have learned up to, or appropriate for, the challenge of motivating or promoting one-time, infrequent, or rare behaviors? Is a focus on reflection, regular monitoring, and objective feedback going to teach us lessons that help us make the best use (or non-use [1]) of technology to promote behaviors such as health screenings or immunization? Indeed, with affordances such as ubiquitous, context-aware objective monitoring and the ability to deliver rich, tailored feedback at the right time and place, mobile computing may have much more to offer for everyday behavior change and maintenance.

The answer may be mixed; many of the lessons and affordances may apply. Mobile and context aware systems can still help us deliver tailored messaging, at the right time and right place (kairos) [2]. Various forms of monitoring may identify people who would most benefit from a screening or from a vaccination. Knowledge of social networks and social messaging can help messages carry greater weight with the recipients.

But these problems may present unique challenges for which we, as a research and professional community, have developed less expertise. What are the right engagement points for one-time messaging, when people are not installing applications and interacting with them on a day-to-day basis?

Just as the public health community prefers some health behavior change models for day-to-day behavior change (e.g., Theory of Reasoned Action [3] & Theory of Planned Behavior [4]) and others for screening or other infrequent behaviors (e.g., the Health Belief Model [5]), the pervasive health and persuasive technology communities may benefit from developing a different set of guidelines and best practices for this different category of behaviors.

This difference is recognized in models and frameworks such as the Fogg Behavior Grid, which treats trying to do a new or familiar behavior one time as a distinct behavior change challenge. The recommended strategies, however, seem to assume that all behavior change of this type is hindered by the same set of barriers. For one-time, new behaviors (“green dot behaviors“), the guide argues:

“the main challenge that we face while triggering a Green Dot behavior is a lack of ability. Since Dot behaviors occur only once, the subject must have enough knowledge to successfully complete the action on the first attempt. Otherwise, frustration, and quitting, may occur.”

before moving on to note that motivation and triggers also matter. And for one time, familiar behaviors (“blue dot behaviors“), the recommendation is:

Blue Dot Behaviors are among the easiest to achieve. That’s because the person, by definition, is already familiar with the behavior. They know how to perform it (such as exercise, plant a tree, buy a book). In addition, they already have a sense of the costs and benefits for the behavior… With Blue Dot Behaviors, people do not require reassurance (enhancing motivation) or step-by-step instructions (increasing ability). Instead, the challenge is on timing: One must find a way to deliver a Trigger at a moment when the person is already Motivated and Able. This timing issue is well known: ‘Timing is everything.’

These recommendations and guidelines strike me as overly simplistic. It seems incorrect to assume that someone familiar with exercise necessarily sees it as beneficial or is able to exercise properly. Someone might be very able to start a new behavior – a doctor might recommend a brief screening that is fully covered by an individual’s insurance – but if the individual feels there may be discomfort associated with it, or does not understand or believe in the benefits, he or she may still opt out. If these suggestions accurately represent the sum of what we know about persuasive technology for getting people to do one-time behaviors, we have considerably more work to do.

Consider, for example, the challenge of adult immunization. Timing certainly is a barrier, as might be some aspects of ability (having adequate medical insurance or finances to cover it, or knowledge of how to get it for free, for example). But at least some studies find that these are not the most common barriers, with common barriers including misconceptions about vaccines’ costs and benefits — including the belief that because they are healthy, vaccination is unnecessary, or that vaccination has common and negative side effects [6]. Even if the person has received vaccinations before, they may have misconceptions that leave them unmotivated. Or they may have once been able but had their circumstances change — such as by losing access to insurance or a shift in their social network to one that disapproves of vaccinations, or a doctor that is less inclined to remind patients about them.

A framework, then, that errantly or over-generally assumes and emphasizes certain barriers and not others may miss more effective opportunities for intervention, producing interventions that only work for people whose barriers it has accurately described. For the vaccination challenge, focusing on changing social norms, and making pro-vaccination norms visible, may be more effective in some communities.

There are also questions about how to deliver technical interventions for one-off activities (or if/when technical interventions are even well-suited). When the challenge is a trigger, one approach is getting a patient to install a reminder application that will trigger at an appropriate time (when the seasonal flu shot is available, for example) and context (when in a pharmacy that accepts their insurance). Even then, an individual might not see the benefit of keeping a single-purpose application around and delete it, or may switch phones in the meantime, making the reminder less effective. Would bundling many one-time behavioral interventions into a single application, perhaps with day-to-day interventions as well, work? For vaccinations, an application to manage a patient’s interactions with a caregiver (including scheduling, billing, suggested vaccinations and screenings, access to health records, and so on) might be optimal. Text4Baby bundles many one-time health tips into a stream of health advice that is timed with expectant and new mothers’ needs; are there other such opportunities?

Furthermore, for health conditions that are more stigmatizing, some traditional techniques to increase motivation may be problematic. Despite the effectiveness of seeing celebrities or friends pursuing a health behavior (e.g., the “Katie Couric effect” for colonoscopies [7]), social messages about who in your network has received a screening or vaccine may sometimes disclose more than is appropriate. I applaud efforts, such as Hansen and Johnson’s work on “veiled viral marketing” [8], to develop social triggers that work but are also appropriate for sensitive health behaviors. In their test of veiled viral marketing, individuals could send a social message to someone in their network recommending that they learn about the HPV vaccine – and the recipient would learn that a friend recommended this content, but not which friend, thus preserving the salience of a social message while still affording some privacy to the sender.

Social proof, on Facebook, that others, including your friends, voted.

For other situations, technology — such as social network data, precise knowledge about communities and attitudes, and electronic health records — might be better used to tailor messages that are delivered through various media, rather than delivering specific triggers. A Facebook app indicating “I was vaccinated,” with counts and friends’ names (possibly just a count in the case of stigmatizing conditions) – much like the experiment conducted during the 2008 US Presidential Election and 2010 midterm elections (pictured above) – might add social proof and pressure, while messaging that highlights people in one’s network whom you could be protecting by getting vaccinated might increase perceived benefits or feelings of responsibility.
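The trade-off between social proof and privacy can be sketched as a small display policy. This is my own illustration, with invented function names and an arbitrary rule: name friends for non-sensitive behaviors, but fall back to an aggregate count for stigmatizing ones:

```python
def social_proof_message(behavior, friend_names, sensitive, max_named=5):
    """Compose a social-proof message, withholding identities when needed.

    For sensitive (stigmatizing) behaviors, show only an aggregate count
    of friends; otherwise, name up to `max_named` friends directly.
    """
    n = len(friend_names)
    if n == 0:
        return None  # no social proof available; show nothing
    if sensitive:
        # Count-only message preserves the social cue without disclosure.
        return f"{n} of your friends have done this: {behavior}"
    named = ", ".join(friend_names[:max_named])
    return f"{named} have done this: {behavior}"
```

A veiled-viral-marketing variant would sit between the two branches: reveal *that* a friend acted, but not *which* one.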

It is also possible that some techniques will be better suited for one-time use than ongoing, day-to-day use. Social comparison data has been shown to be effective in yielding higher contributions to public radio [9], reducing energy use (particularly when combined with injunctive norms [10]), and increasing ratings in an online movie community [11]. I would speculate, though, that in at least some long-term, discretionary use applications, some individuals would prefer to avoid sites that regularly present them with aversive comparisons.

From a paper for the Wellness Interventions and HCI workshop at Pervasive Health 2012.

  1. Baumer EPS & Silberman MS. 2011. When the Implication is Not to Design (Technology), Proceedings of CHI 2011. [pdf]
  2. Fogg BJ. 2002. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann. [Amazon]
  3. Fishbein M. and Ajzen I. 1975. Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. [online]
  4. Ajzen I. 1991. The theory of planned behavior, Organizational Behavior and Human Decision Processes 50(2): 179–211. [pdf]
  5. Rosenstock IM. 1966. Why people use health services, Milbank Memorial Fund Quarterly 44 (3): 94–127. [pdf]
  6. Johnson DR, Nichol KL, Lipczynski K. 2008. Barriers to Adult Immunization, American Journal of Medicine 121(7B): 528-535. [pdf]
  7. Cram P, Fenrick AM, Inadomi J, Cowen ME, Vijan S. 2003. The impact of a celebrity promotional campaign on the use of colon cancer screening: the Katie Couric effect, Archives of Internal Medicine 163(13):1601-5. [pubmed]
  8. Hansen D and Johnson C. 2012. Veiled Viral Marketing: Disseminating Information on Stigmatized Illnesses via Social Networking Sites, Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium. [slideshare, acm]
  9. Sheng J, Croson R. 2009. A Field Experiment in Charitable Contribution: The Impact of Social Information on the Voluntary Provision of Public Goods, The Economic Journal 119(540): 1422-1439. [pdf]
  10. Schultz PW, Nolan JM, Cialdini RB, Goldstein NJ, Griskevicius V. 2007. The Constructive, Destructive, and Reconstructive Power of Social Norms, Psychological Science 18(5): 429-434. [pdf]
  11. Chen Y, Harper F Maxwell, Konstan J, Li, Sheri Xin. 2010. Social Comparisons and Contributions to Online Communities: A Field Experiment on MovieLens. American Economic Review 100(4): 1358-98. [pdf]

Pervasive, Persuasive Health: Some Challenges

From a paper for the Wellness Interventions and HCI workshop at Pervasive Health 2012.

As a community, or perhaps more accurately, as communities, health and persuasive technology researchers have made considerable progress on understanding the opportunities, challenges, and some best practices for designing technology to support health and wellness. There is an incredibly rich stream of current and past research, as well as commercially available applications to support a variety of health behaviors.

I think that there remain some under-researched challenges, and I question whether our existing knowledge and research directions can sufficiently address them. If not, what else should we be including in our research discussions and plans? In particular, are we doing enough to study one-time interventions and the process for tapering, weaning, or graduating people off of the interventions we build and deploy?

Over the next few days, I’ll be posting my thoughts on the challenges of one-time use and designing for tapered use. I’d love your feedback and your thoughts on other areas that are under-explored or studied in the persuasive health system communities.

CHI Highlights: Persuasive Tech and Social Software for Health and Wellness

I want to take a few minutes to highlight a few papers from CHI 2011, spread across a couple of posts. There was lots of good work at this conference. This post focuses on papers in the persuasive technology and social software for health and wellness space, which is the aspect of my work that I was thinking about most during this conference.

  • Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight
    Stephen Purpura, Victoria Schwanda, Kaiton Williams, William Stubler, Phoebe Sengers

    Fit4life is a hypothetical system that monitors users’ behavior using a variety of tactics in the Persuasive Systems Design model. After describing the system (in a way that, as someone in the room commented, made the audience “look horrified”), the authors transition to a reflection on persuasive technology research and design, and how such a design can “spiral out of control.” As someone working in this space, the authors hit on some of the aspects that leave me a bit unsettled: persuasion vs. coercion, individual good vs. societal good, whether people choose their own viewpoints or are pushed to adopt those of the system designers, measurement and control vs. personal experiences and responsibility, and increased sensing and monitoring vs. privacy and surveillance and the potential to eliminate boundaries between front stage and back stage spaces. The authors also discuss how persuasive systems with very strong coaching features can reduce the opportunity for mindfulness and for their users to reflect on their own situation: people can simply follow the suggestions rather than weigh the input and decide among the options.

    This is a nice paper and a good starting point for lots of discussions. I’m a bit frustrated that it was presented in a session concurrent with the session on persuasive technology for health. As such, it probably did not (immediately) reach the audience that would have led to the most interesting discussion about the paper. In many ways, it argued for a “think about what it is like to live with” rather than a “pitch” approach to thinking about systems. I agree with a good bit of the potential tensions the authors highlight, but I think they are a bit harder on the persuasive tech community than is appropriate: in general, persuasive tech folks are aware we are building systems intended to change behavior and that this is fraught with ethical considerations, while people outside of the community often do not think of their systems as persuasive or coercive, even when they are (again, I mean this in a Nudge, choice-environments sense). On the other hand, one presentation at Persuasive last year did begin with the statement “for the sake of this paper, set aside ethical concerns” (paraphrased), so clearly there is still room for improvement.

  • Designing for peer involvement in weight management
    Julie Maitland, Matthew Chalmers

    Based on interviews with nineteen individuals, the authors present an overview of approaches for involving peers in technology for weight management. These approaches fall into passive involvement (norms and comparisons) and five types of active involvement (table 1 in the paper): obstructive (“don’t do it”), inductive (“you should do it”), proactive (“do it with me”), supportive (“I’ll do it too”), and cooperative (“let’s do it together”). The last category includes competition, though there was some disagreement during the Q&A about whether that is the right alignment. The authors also find gender- and role-based differences in perceived usefulness of peer-based interventions, such as differences in attitudes about competition.

    Designers could use these types of engagement to think about what their applications are supporting well or not so well. Here, I wish the authors had gone a bit further in linking the types of involvement to the technical mechanisms or features of applications and contexts, as I think that would be a better jumping-off point for designers. For those thinking about how to design social support into online wellness interventions, I think this paper, combined with Skeels et al, “Catalyzing Social Support for Breast Cancer Patients”, CHI 2010 and our own paper from CSCW 2011 (“‘It’s not that I don’t have problems, I’m just not putting them on Facebook’: Challenges and Opportunities in Using Online Social Networks for Health”), offer a nice high-level overview of some of the challenges and opportunities for doing so.

  • Mining behavioral economics to design persuasive technology for healthy choices
    Min Kyung Lee, Sara Kiesler, Jodi Forlizzi

    A nice paper that evaluates different persuasive approaches for workplace snack selection. These include:

    • default choice: a robot showing all snack choices with equal convenience or the healthy one more visibly, or a website that showed all snack choices (in random order) or that paginated them, with healthy choices shown on the first page.
    • planning: asking people to order a snack for tomorrow rather than select at the time of consumption.
    • information strategy: showing calorie counts for each snack.

    As one would expect, the default choice strategy was highly effective in increasing the number of people who chose the healthy snack (apples) rather than the unhealthy snack (cookies). The planning strategy was effective among people who had a healthy snacking lifestyle, while those who snacked unhealthily continued to choose cookies. Interestingly, the information strategy had no effect on unhealthy snackers and actually led healthy snackers to choose cookies more than they otherwise would have. The authors speculate that this is either because the healthy snackers overestimate the caloric value of cookies in the absence of information (and thus avoid them more), or because considering the healthy apple was sufficiently fulfilling even if they ultimately chose the cookie.

    Some questions the study leaves open: would people behave the same if they had to pay for the snacks? What would happen in a longer-term deployment? What would have happened if the cookies were made the default, particularly for otherwise healthy snackers?

  • Side effects and “gateway” tools: advocating a broader look at evaluating persuasive systems
    Victoria Schwanda, Steven Ibara, Lindsay Reynolds, Dan Cosley

    Interviews with 20 Wii Fit users reveal side effects of use: some stopped using it because it did not work, while others stopped because they went on to other, preferred fitness activities (abandonment as success); a tension between whether the Fit is viewed as a game or an exercise tool (people rarely view it as both); and negative emotional impacts (particularly frustration when the system misinterpreted some data, such as weight gains). One suggestion the authors propose is that behavior change systems might start with activities that better resemble games but gradually transition users to activities with fewer game-like elements, eventually weaning users off of the system altogether. In practice, I’m not sure how this would work, but I like this direction because it gets at one of my main critiques of gamification: take away the game and its incentives (which may distract from the real benefits of changing one’s behavior) and the behavior reverts quite quickly.

  • Means based adaptive persuasive systems
    Maurits Kaptein, Steven Duplinsky, Panos Markopoulos

    Lab experiment evaluating the effects of using multiple sources of advice (single expert or consensus of similar others) at the same time, disclosing that advice is intended to persuade, and allowing users to select their source of advice. (This is framed more generally as about persuasive systems, but I think the framing is too broad: it’s really a study about advice.) Results: people are more likely to follow advice when they choose the source, people are less likely to follow advice when they are told that it is intended to persuade, and when shown expert advice and consensus advice from similar others, subjects were less likely to follow the advice than when they were only shown expert advice — regardless of whether the expert and consensus advice concurred with each other. This last finding is surprising to me and to the authors, who suggest that it may be a consequence of the higher cognitive load of processing multiple sources of advice; I’d love to see further work on this.

  • Opportunities for computing technologies to support healthy sleep behaviors
    Eun Kyoung Choe, Sunny Consolvo, Nathaniel F. Watson, Julie A. Kientz

    Aggregation of a literature review, interviews with sleep experts, a survey of 230 individuals, and interviews with 16 potential users to learn about opportunities and challenges for designing sleep technologies. The work leads to a design framework that considers the goal of the individual using the system, the system’s features, the source of the information supporting the design choices made, the technology used, the stakeholders involved, and the input mechanism. During the presentation, I found myself thinking a lot about two things: (1) the value of design frameworks and how to construct a useful one (I’m unsure of both) and (2) how this stacks up against Julie’s recent blog post, which is somewhat more down on the opportunities of tech for health.

  • How to evaluate technologies for health behavior change in HCI research
    Predrag Klasnja, Sunny Consolvo, Wanda Pratt

    The authors argue that evaluating behavior change systems based solely on whether they changed the behavior is not sufficient, and often infeasible. Instead, they argue, HCI research should focus on whether systems or features effectively implement or support particular strategies, such as self-monitoring or conditioning, which can be measured in shorter-term evaluations.

    I agree with much of this. I think that the more useful HCI contributions in this area speak to which particular mechanisms or features worked, why and how they worked, and in what context one might expect them to work. Contributions that throw the kitchen sink of features at a problem and do not get into the details of how people reacted to specific features and what those features accomplished may tell us that technology can help with a condition, but they do not, in general, do much to inform the designers of other systems. I also agree that shorter-term evaluations are often able to show that a particular feature is or is not working as intended, though longer-term evaluations are appropriate for understanding whether it continues to work. I am also reminded of the gap between the HCI community and the sustainability community pointed out by Froehlich, Findlater, and Landay at CHI last year, and fear that deemphasizing efficacy studies and RCTs will limit the ability of the HCI community to speak to the health community. Someone is going to have to do the efficacy studies, and the HCI community may have to carry some of this weight in order for our work to be taken seriously elsewhere. Research can make a contribution without showing health improvements, but if we ignore the importance of efficacy studies, we imperil the relevance of our work to other communities.

  • Reflecting on pills and phone use: supporting awareness of functional abilities for older adults
    Matthew L. Lee, Anind K. Dey

    Four-month deployment of a system for monitoring medication taking and phone use in the homes of two older adults. The participants sought out anomalies in the recorded data; when they found them, they generally trusted the system and focused on explaining why the anomaly might have happened, turning first to their memory of the event and then to going over their routines or other records, such as calendars and diaries. I am curious whether this trust would extend to a purchased product rather than one provided by the researchers (if so, this could be hazardous with an unreliable system); I could see arguments for it going each way.

    The authors found that these systems can help older adults remain aware of their functional abilities and can help them better adapt to changes in those abilities. Similar to what researchers have recommended for fitness journals and sensors, the authors suggest that people be able to annotate or explain discrepancies in their data and to view the data and annotations jointly. They also suggest highlighting anomalies and showing them alongside other available contextual information about that date or time.

  • Power ballads: deploying aversive energy feedback in social media
    Derek Foster, Conor Linehan, Shaun Lawson, Ben Kirman

    I generally agree with Sunny Consolvo that feedback and consequences in persuasive systems should generally range from neutral to positive, and I have been reluctant (colleagues might even say “obstinate”) about including negative feedback in GoalPost or Steps. Julie Kientz’s work, however, finds that people with certain personalities think they would respond well to negative feedback. This work in progress tests negative (“aversive”) feedback — Facebook posts about songs, along with statements that participants were using lots of energy — in a pilot with five participants. The participants seemed to respond okay to the posts — which are, in my opinion, pretty mild and not all that negative — and often commented on them. The authors interpret this as aversive feedback not leading to disengagement, but I think that’s too strong a claim to make from this data: participants, though unpaid, had been recruited to the study and likely felt some obligation to follow through to its end in a way they would not for a commercially or publicly available system, and, with that feeling, may have commented out of a need to publicly explain or justify the usage shown in the posts. That last point isn’t particularly problematic, as such reflection may be useful. Still, this WiP and the existence of tools like Blackmail Yourself (which *really* hits at the shame element) do suggest that more work is needed on the efficacy of public, aversive feedback.

  • Descriptive analysis of physical activity conversations on Twitter
    Logan Kendall, Andrea Civan Hartzler, Predrag Klasnja, Wanda Pratt

    In my work, I’ve heard a lot of concern about posting health-related status updates and about seeing similar status updates from others, but I haven’t taken a detailed look at the status updates that people are currently making, which this WiP makes a start on for physical activity posts on Twitter. By analyzing the results of queries for “weight lifting”, “Pilates”, and “elliptical”, the authors find posts that show evidence of exercise, plans for exercise, attitudes about exercise, requests for help, and advertisements. As the authors note, the limited search terms probably lead to a lot of selection bias, and I’d like to see more information about posts coming from automated sources (e.g., FitBit), as well as how people reply to the different genres of fitness tweets.

  • HappinessCounter: smile-encouraging appliance to increase positive mood
    Hitomi Tsujita, Jun Rekimoto

    Fun yet concerning alt.chi work on pushing people to smile in order to increase positive mood. With features such as requiring a smile to open the refrigerator, positive feedback (lights, music) in exchange for smiles, automatic sharing of photos of facial expressions with friends or family members, and automatic posting of whether or not someone is smiling enough, this paper hits many of the points about which the Fit4life authors raise concerns.

The panel I co-organized with Margaret E. Morris and Sunny Consolvo, “Facebook for health: opportunities and challenges for driving behavior change,” and featuring Adam D. I. Kramer, Janice Tsai, and Aaron Coleman, went pretty well. It was good to hear what everyone, both familiar and new faces, is up to and working on these days. Thanks to my fellow panelists and everyone who showed up!

There was a lot of interesting work — I came home with 41 papers in my “to read” folder — so I’m sure that I’m missing some great work in the above list. If I’m missing something you think I should be reading, let me know!

three good things

The first of my social software for wellness applications, Three Good Things, is available on Facebook (info page).

Three Good Things supports a positive psychology exercise in which participants record three good things that happened each day and why those things happened. When completed daily – even on the bad days – over time, participants report increased happiness and decreased symptoms of depression. The good things don’t have to be major events – a good meal, a phone call with a friend or family member, or a relaxing walk are all good examples.

I’m interested in identifying best practices for deploying these interventions on new or existing social websites, where adding social features may make the intervention more or less effective for participants, or may just make some participants more likely to complete the exercise on a regular basis. Anyway, feel free to give the app a try – you’ll be helping my research and you may end up a bit happier.