

CHI Highlights: General

Okay, the third and final (?) set of CHI Highlights, consisting of brief notes on some other papers and panels that caught my attention. My notes here will, overall, be briefer than in the other posts.

More papers

  • We’re in It Together: Interpersonal Management of Disclosure in Social Network Services
    Airi Lampinen, Vilma Lehtinen, Asko Lehmuskallio, Sakari Tamminen

    I possibly should have linked to this one in my post about social networks for health, as my work in that area is why this paper caught my attention. Through a qualitative study, the authors explore how people manage their privacy and disclosures on social network sites.

    People tend to apply their own expectations about what they’d like posted about themselves to what they post about others, but sometimes negotiate and ask others to take posts down, and this can lead to new implicit or explicit rules about what gets posted in the future. They also sometimes stay out of conversations when they know that they are not as close to the original poster as the other participants (even if they have the same “status” on the social network site). Even offline behavior is affected: people might make sure that embarrassing photos can’t be taken so that they cannot be posted.

    To regulate boundaries, some people use different services targeted at different audiences. While many participants believed that it would be useful to create friend lists within a service and to target updates to those lists, many had not done so (quite similar to my findings with health sharing: people say they want the feature and that it is useful, but just aren’t using it. I’d love to see Facebook data on what percent of people are actively using lists.) People also commonly worded posts so that those “in the know” would get more information than others, even if all saw the same post.

    Once aversive content had been posted, however, it was sometimes better for participants to try to repurpose it as funny or as a joke, rather than to delete it. Deletions say “this was important,” while adding smilies can decrease its impact and say “oh, that wasn’t serious.”

  • Utility of human-computer interactions: toward a science of preference measurement
    Michael Toomim, Travis Kriplean, Claus Pörtner, James Landay

    Many short-duration user studies rely on self-report data of satisfaction with an interface or tool, even though we know that self-report data is often quite problematic. To measure the relative utility of design alternatives, the authors place them on Mechanical Turk and measure how many tasks people complete on each alternative under differing pay conditions. A design that gets more work for the same or less pay implies more utility. Because of things like the small pay effect and its ability to crowd out intrinsic rewards, I’m curious about whether this approach will work better for systems meant for work rather than for fun, as well as just how far it can go – but I really do like the direction of measuring what people actually do rather than just what they say.
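
As a toy illustration of the underlying idea, the sketch below compares how much work two hypothetical design alternatives elicit at different pay levels; the names, numbers, and the crude tasks-per-cent summary are mine, not the authors’ actual analysis.

```python
# Hypothetical illustration of a demand-curve-style utility comparison:
# for each design alternative posted to Mechanical Turk, record how many
# tasks workers completed at each pay rate, then compare how much work
# each design elicits for the pay offered. (Data and names are invented.)

from statistics import mean

# (pay in cents per task, tasks completed at that pay)
observations = {
    "design_a": [(1, 40), (2, 90), (4, 150)],
    "design_b": [(1, 70), (2, 130), (4, 180)],
}

def tasks_per_cent(points):
    """Average work elicited per cent of pay, a crude utility proxy."""
    return mean(completed / pay for pay, completed in points)

for design, points in observations.items():
    print(design, round(tasks_per_cent(points), 1))

# A design that yields more completed tasks for the same or less pay
# (here, design_b) is read as having higher utility.
```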

Perhaps it is my love of maps or my senior capstone project at Olin, but I have a soft spot for location-based services work, even if I’m not currently doing it.

  • Opportunities exist: continuous discovery of places to perform activities
    David Dearman, Timothy Sohn, Khai N. Truong

  • In the best families: tracking and relationships
    Clara Mancini, Yvonne Rogers, Keerthi Thomas, Adam N. Joinson, Blaine A. Price, Arosha K. Bandara, Lukasz Jedrzejczyk, Bashar Nuseibeh

    I’m wondering more and more if there is an appropriate social distance for location trackers: with people who are already very close, it feels smothering, while with people who are too distant, it feels creepy. Thinking about my preferences for Latitude, I wouldn’t want my family or socially distant acquaintances on there, but I do want friends who I don’t see often enough on there.

The session on Affective Computing had lots of good stuff.

  • Identifying emotional states using keystroke dynamics
    Clayton Epp, Michael Lippold, Regan L. Mandryk

    A fairly reliable classifier for emotions, including confidence, hesitance, nervousness, relaxation, sadness, and tiredness, based on analysis of typing rhythms on a standard keyboard. One thing I like about this paper is that it opens up a variety of systems ideas, ranging from fairly simple to quite sophisticated (a rough sketch of the kind of timing features involved appears after this list). I’m also curious whether this can be extended to touch screens, which seem like a much more difficult environment.

  • Affective computational priming and creativity
    Sheena Lewis, Mira Dontcheva, Elizabeth Gerber

    In a Mechanical Turk-based experiment, showing people a picture that induced positive affect increased the quality of ideas generated — measured by originality and creativity — in a creativity task. Negative priming reduced it compared to positive or neutral priming. I’m very curious to see if this result is sustainable over time, with the same image or with different images, or in group settings (particularly considering the next paper in this list!).

  • Upset now?: emotion contagion in distributed groups
    Jamie Guillory, Jason Spiegel, Molly Drislane, Benjamin Weiss, Walter Donner, Jeffrey Hancock

    An experiment on emotional contagion. Introducing negative emotion led others to be more negative, but it also improved the group’s performance.

  • Emotion regulation for frustrating driving contexts
    Helen Harris, Clifford Nass
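
Looping back to the keystroke dynamics paper above: the basic ingredients of that kind of classifier are timing features computed from key press and release events. The sketch below is a hypothetical illustration of such feature extraction, not the authors’ actual feature set or model.

```python
# Hypothetical sketch of keystroke-dynamics feature extraction: compute
# per-key dwell times (press to release) and flight times (release to next
# press) from timestamped key events, then summarize them as a feature
# vector that a standard classifier could be trained on. The events and
# features here are invented for illustration.

from statistics import mean, pstdev

# (key, press_time_ms, release_time_ms) for a short typing sample
events = [
    ("h", 0, 95), ("e", 160, 240), ("l", 310, 400),
    ("l", 470, 555), ("o", 640, 730),
]

def keystroke_features(events):
    dwells = [release - press for _, press, release in events]
    flights = [
        events[i + 1][1] - events[i][2]  # next press minus this release
        for i in range(len(events) - 1)
    ]
    return {
        "dwell_mean": mean(dwells),
        "dwell_std": pstdev(dwells),
        "flight_mean": mean(flights),
        "flight_std": pstdev(flights),
    }

print(keystroke_features(events))
# Feature vectors like this, labeled with self-reported emotional state,
# could then be fed to any off-the-shelf classifier.
```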

We’ve seen a lot of research on priming in interfaces lately, most often in lab or mturk based studies. I think it’ll start to get very interesting when we start testing to see if that also works in long-term field deployments or when people are using a system at their own discretion for their own needs, something that has been harder to do in many past studies of priming.

I didn’t make it to these next few presentations, but having previously seen Steven talk about this work, it’s definitely worth a pointer. The titles nicely capture the main points.

The same is true for Moira’s work:

Navel-Gazy Stuff

  • RepliCHI: issues of replication in the HCI community
    Max L. Wilson, Wendy Mackay, Dan Russell, Ed Chi, Michael Bernstein, Harold Thimbleby

    Discussion about the balance between reproducing other studies in different contexts or prior to making “incremental” advances vs. a focus on novelty and innovation. Nice summary here. I think the panelists and audience were generally leaning toward increasing the use of replication + extension in the training of HCI PhD students. I think this would be beneficial, in that it can encourage students to learn how to write papers that are reproducible, introduce basic research methods by doing, and may often lead to some surprising and interesting results. There was some discussion of whether there should be a repli.chi track alongside an alt.chi track. I’m a lot less enthusiastic about that – if there’s a research contribution, the main technical program should probably be sufficient, and if not, why is it there? I do understand that there’s an argument to be made that it’s worth doing as an incentive, but I don’t think that is a sufficient reason. Less addressed by the panel was that a lot of HCI research isn’t of a style that lends itself to replication, though Dan Russell pointed out that some studies must also be taken on faith, since we don’t all have our own Google or LHC.

  • The trouble with social computing systems research
    Michael Bernstein, Mark S. Ackerman, Ed Chi, Robert C. Miller

    An alt.chi entry into the debate about perceived issues with underrepresentation of systems work in CHI submissions and with how CHI reviewers treat systems work. As someone who doesn’t do “real” systems work (the systems I build are usually intended as research probes rather than contributions in themselves), I’ve been reluctant to say much on this issue for fear that I would talk more than I know. That said, I can’t completely resist. While I agree that there are often issues with how systems work is presented and reviewed, I’m not completely sympathetic to the argument in this paper.

    Part of my skepticism is that I’ve yet to be shown an example of a good systems paper that was rejected. This is not to say that these do not exist; the authors of the paper are speaking from experience and do great work. The majority of systems rejections I have seen have come through reviewing, and the decisions have mostly seemed reasonable. Most common are papers that make a modest (or even very nice) systems contribution, tack on a poorly executed evaluation, and then make claims based on the evaluation that it just doesn’t support. I believe at least one rejected paper would have been accepted had the authors just left out the evaluation altogether, and I think a bad evaluation and unsupported claims should doom a paper unless they are excised (which may be possible with the new CSCW review process).

    I was a little bit frustrated because Michael’s presentation seemed to gloss over the authors’ responsibilities to explain the merits of their work to the broader audience of the conference and to discuss biases introduced by snowball samples. The last point is better addressed in the paper, but I still feel that the paper deemphasizes authors’ responsibility in favor of reviewers’ responsibility.

    The format for this presentation was also too constrained to have a particularly good discussion (something that was unfortunately true in most sessions with the new CHI time limits). The longer discussion about systems research in the CSCW and CHI communities that followed one of the CSCW Horizons sessions this year was more constructive and more balanced, perhaps because the discussion was anchored at least partially on the systems that had just been presented.

  • When the Implication Is Not to Design (Technology)
    Eric P.S. Baumer, M. Six Silberman

    A note on how HCI can improve the way we conduct our work, particularly the view that there are problems and technical solutions to solve them. The authors argue that it may be better to think of these as conditions and interventions. Some of the recommendations they make for practice are: value the implication not to design technology (i.e., that in some situations computing technology may be inappropriate), explicate unpursued avenues (explain alternative interventions and why they were not pursued), consider technological extravention (are there times when technology should be removed?), report more than negative results (why and in what context did the system fail, and what does that failure mean?), and do not stop building – just be more reflective about why that building is occurring.

CHI Highlights: Diversity, Discussion, and Information Consumption

For my second post on highlights from CHI this year, I’m focusing on papers related to opinion diversity and discourse quality.

  • Normative influences on thoughtful online participation
    Abhay Sukumaran, Stephanie Vezich, Melanie McHugh, Clifford Nass

    Two lab experiments on whether it is possible to foster more thoughtful commenting and participation in online discussion forums by priming thoughtful norms. The first tested the effects of the behavior of other participants in the forum. The dependent variables were comment length, time taken to write the comments, and number of issue-relevant thoughts. Not surprisingly, being exposed to other thoughtful comments led people to make more thoughtful comments themselves. One audience member asked whether this would break down with just one negative or less thoughtful comment (much as a single piece of litter seems to break down anti-littering norms).

    The second study tested effects of visual, textual, and interaction design features on the same dependent variables. The manipulations included a more subdued vs. more playful visual design, differing CAPTCHAs (words positively correlated with thoughtfulness in the thoughtful condition and words negatively correlated with thoughtfulness in the unthoughtful condition), and different labels for the comment box. The design intended to provoke thoughtfulness did correspond to more thoughtful comments, suggesting that it is possible, at least in the lab, to design sites to prompt more thoughtful comments. For this second study in particular, I’m curious if these measures only work in the short term or if they would work in the long term and about the effects of each of the specific design features.

  • I will do it, but I don’t like it: User Reactions to Preference-Inconsistent Recommendations
    Christina Schwind, Jürgen Buder, Friedrich W. Hesse

    This paper actually appeared in a health session, but I found that it spoke much more to the issues my colleagues and I are confronting in the BALANCE project. The authors begin with the observation that most recommender systems are intended to produce content that their users will like, but that this can be problematic. In the health and wellness domain, people sometimes need to hear information that might disagree with their perspective or currently held beliefs, and so it can be valuable to recommend disagreeable information. In this Mechanical Turk-based study, subjects were equally likely to follow preference-consistent and preference-inconsistent recommendations. Following preference-inconsistent recommendations did reduce confirmation bias, but people were happier to see preference-consistent recommendations. This raises an important question: subjects may have followed the recommendation the first time, but now that they know this system gives recommendations they might not like, will they follow the recommendations less often in the future, or switch to another system altogether?

  • ConsiderIt: improving structured public deliberation
    Travis Kriplean, Jonathan T. Morgan, Deen Freelon, Alan Borning, Lance Bennett

    I really like the work Travis is doing with Reflect and ConsiderIt (which powers the Living Voters Guide) to promote more thoughtful listening and discussion online, so I was happy to see this WiP and am looking forward to seeing more!

  • Computing political preference among twitter followers
    Jennifer Golbeck, Derek L. Hansen

    This work uses follow data for Congresspeople and others on Twitter to assess the “political preference” (see comment below) of Twitter users and the media sources they follow (a toy sketch of this kind of follow-based scoring appears after this list). This approach and the information it yields have some similarities to Daniel Zhou’s upcoming ICWSM paper and, to a lesser extent, Souneil Park’s paper from CSCW this year.

    One critique: despite ample selective exposure research, I’m not quite comfortable with this paper’s assumption that political preference maps so neatly to political information preference, partly because I think this may be an interesting research question: do people who lean slightly one way or the other prefer information sources that may be more biased than they are? (or something along those lines)
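
As mentioned above, here is a toy sketch of follow-based preference scoring: average the known ideology scores of the Congresspeople a user follows. The handles, scores, and simple averaging are invented for illustration and are not necessarily the paper’s actual method.

```python
# Hypothetical sketch of inferring "political preference" from follow data:
# average the known ideology scores of the members of Congress a user
# follows. The scores, handles, and averaging scheme are invented for
# illustration; the paper's actual method may differ.

# Ideology scores for Congresspeople, e.g. -1 (liberal) to +1 (conservative)
congress_scores = {
    "@member_a": -0.8,
    "@member_b": -0.3,
    "@member_c": 0.6,
    "@member_d": 0.9,
}

def preference_score(followed_accounts):
    """Mean ideology of followed Congresspeople; None if they follow none."""
    scores = [congress_scores[a] for a in followed_accounts if a in congress_scores]
    return sum(scores) / len(scores) if scores else None

user_follows = ["@member_a", "@member_b", "@some_celebrity"]
print(preference_score(user_follows))  # -0.55, a left-leaning estimate

# The same scoring could be aggregated over a media outlet's followers to
# place the outlet's audience on the scale.
```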

In addition to these papers, Ethan Zuckerman’s closing plenary, Desperately Seeking Serendipity, touched on the topics of serendipity and homophily extensively. Zuckerman starts by suggesting that the reason people like to move to cities – even at times when cities were really quite horrible places – is, yes, for more opportunities and choices, but also “to encounter the people you couldn’t encounter in your rural, disconnected lifestyle… to become a cosmopolitan, a citizen of the world.” He goes on, “if you wanted to encounter a set of ideas that were radically different than your own, your best bet in an era before telecommunications was to move to a city.” There are reasons to question this idea of cities as a “serendipity engine,” though: even people in urban environments have extremely predictable routines and just don’t go all that many places. Encounters with diverse others may not be as common as idealized.

He then shifts gears to discuss what people encounter online. He walks through the argument that the idea of a Freshman Fishwrap or Daily Me is possibly quite harmful as it allows people to filter to the news that they want. Adding in social filters or getting news through our friends can make this worse. While Sunstein is concerned about this leading to polarization within the US, Zuckerman is more concerned that it leads people to see only news about where they are and less news about other places or from outside perspectives. This trend might lead people to miss important stories.

I tend to agree with the argument that surfacing coincidences or manufacturing serendipity is an incredibly powerful capability of current technology. Many of the approaches that the design community has taken to achieve this are probably not the kind of serendipity Zuckerman is looking for. I love Dopplr’s notifications that friends are also in town, but the time I spend with them or being shown around by them is time when I’m less likely to have a chance encounter with someone local or a traveler from elsewhere. The ability to filter reviews by friends may make for more accurate recommendations, but I’m also less likely to end up somewhere a bit different. Even serendipity has been repurposed to support homophily.

Now, it might be that the definition of serendipity some of the design community has been working with isn’t quite right. As Zuckerman notes, serendipity usually means “happy accident” now – it’s become a synonym for coincidence – and the sagacity part of the definition has been lost. Zuckerman returns to the city metaphor, arguing for a pedestrian-level view. Rather than building tools for only efficiency and convenience, build tools and spaces that maximize the chances to interact and mix. Don’t make filters hidden. Make favorites of other communities visible, not just the user’s friends. Zuckerman elegantly compares this last feature to the traces in a city: one sees not just the traces left by one’s friends but traces left by other users of the space, and this gives people a chance to wander from the path they were already on. One might also overlay a game on a city, to encourage people to explore more deeply or venture to new areas.

While I like these ideas, I’m a little concerned that they will lead to somewhat superficial exposure to the other. People see different others on YouTube, Flickr, or in the news, and yes, some stop and reflect, others leave comments that make fun of them, and many others just move on to the next one. A location-based game might get people to go to new places, but are they thinking about what it is like to be there, or are they thinking about the points they are earning? This superficiality is something I worry about in my own work to expose people to more diverse political news – they may see it, but are they really considering the perspective or gawking at the other side’s insanity? Serendipity may be necessary, but I question whether it is sufficient. We also need empathy: technology that can help people listen and see others’ perspectives and situations. Maybe empathy is part of the lost idea of sagacity that Zuckerman discusses — a sort of emotional sagacity — but whatever it is, I need to better know how to design for it.

For SI and UM students who really engage with this discussion and the interweaving of cities, technology, and flows, I strongly, strongly recommend ARCH 531 (Networked Cities).

Sidelines at ICWSM

Last week I presented our first Sidelines paper (with Daniel Zhou and Paul Resnick) at ICWSM in San Jose. Slides (hosted on slideshare) are embedded below, or you can watch a video of most of the talk on VideoLectures.

Opinion and topic diversity in the output sets can provide individual and societal benefits. News aggregators rely on votes and links to select subsets of the large quantity of news and opinion items generated each day, but simply selecting the most popular items may not yield as much diversity as is present in the overall pool of votes and links.

To help measure how well any given approach does at achieving these goals, we developed three diversity metrics that address different dimensions of diversity: inclusion/exclusion, nonalienation, and proportional representation (based on KL divergence).
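
To make these a bit more concrete, here is a rough sketch of what measures in this spirit could look like; the function names, exact formulas, and toy data are my own simplifications rather than the definitions used in the paper.

```python
# Rough sketches (simplifications, not the paper's exact formulas) of
# diversity measures in the spirit described above, computed over a
# selected result set and the votes that produced it.

from math import log

def inclusion(result_set, votes):
    """Fraction of voters with at least one of their preferred items selected."""
    voters = set().union(*votes.values())
    happy = {v for item in result_set for v in votes.get(item, set())}
    return len(happy) / len(voters)

def alienation(result_set, votes):
    """Fraction of voters with none of their preferred items selected."""
    return 1.0 - inclusion(result_set, votes)

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the same groups, smoothed to avoid log(0)."""
    return sum(p[g] * log((p[g] + eps) / (q[g] + eps)) for g in p)

# votes: item -> set of voters who voted for it (invented data)
votes = {
    "item1": {"a", "b", "c", "d"},
    "item2": {"a", "b", "c"},
    "item3": {"e", "f"},
}
result_set = ["item1", "item2"]
print(inclusion(result_set, votes), alienation(result_set, votes))

# Proportional representation: compare each opinion group's share of the
# overall vote pool with its share of the selected set; lower divergence
# means the selection is closer to proportional.
vote_pool_share = {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1}
result_set_share = {"group_a": 0.8, "group_b": 0.15, "group_c": 0.05}
print(kl_divergence(vote_pool_share, result_set_share))
```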

To increase diversity in result sets chosen based on user votes (or things like votes), we developed the Sidelines algorithm. This algorithm temporarily suppresses a voter’s preferences after a preferred item has been selected. In comparisons using user votes on Digg.com and links from a panel of political blogs, the Sidelines algorithm increased inclusion and decreased alienation relative to collections of the most popular items. For the blog links, a set with known political preferences, we also found that Sidelines improved proportional representation.
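
A minimal sketch of that selection loop appears below, assuming supporters of a selected item sit out a fixed number of subsequent rounds; the parameter names, sideline length, and tie-breaking are illustrative choices rather than necessarily those of the paper.

```python
# Minimal sketch of the Sidelines idea: repeatedly select the item with the
# most votes from voters who are not currently sidelined, then sideline the
# supporters of the selected item for the next few rounds. The fixed
# sideline length and tie-breaking here are illustrative assumptions.

from collections import defaultdict

def sidelines(votes, k, sideline_rounds=2):
    """votes: dict item -> set of voter ids; k: number of items to select."""
    sidelined_until = defaultdict(int)  # voter id -> round when they count again
    selected = []
    remaining = dict(votes)

    for round_no in range(k):
        if not remaining:
            break

        def active_support(item):
            # Count only votes from voters who are not currently sidelined.
            return sum(1 for v in remaining[item] if sidelined_until[v] <= round_no)

        winner = max(remaining, key=active_support)
        selected.append(winner)
        # Supporters of the winner sit out the next few selection rounds.
        for voter in remaining[winner]:
            sidelined_until[voter] = round_no + 1 + sideline_rounds
        del remaining[winner]
    return selected

votes = {
    "item1": {"a", "b", "c", "d"},
    "item2": {"a", "b", "c"},
    "item3": {"e", "f"},
}
print(sidelines(votes, k=2))  # ['item1', 'item3']: item2's supporters are sidelined
```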

Our approach differs from, and is complementary to, work that selects for diversity or identifies bias based on classifying content (e.g., Park et al.’s NewsCube) or by classifying referring blogs or voters (e.g., Gamon et al.’s BLEWS). While Sidelines requires votes (or something like votes), it doesn’t require any information about content, voters, or long-term voting histories. This is particularly useful for emerging topics and opinion groups, as well as for non-textual items.

wikis in organizations

Antero Aunesluoma presents at WikiFest

In early September, I attended WikiSym 08 in Porto, Portugal, so this post is nearly two months overdue. In addition to presenting a short paper on the use of a wiki to enhance organizational memory and sharing in a Boeing workgroup, I participated on the WikiFest panel organized by Stewart Mader.

Since then, a couple of people have asked me to post the outline of my presentation for the WikiFest panel. These notes are reflections from the Medshelf, CSS-D, SI, and Boeing workgroup wiki projects and are meant for those thinking about or getting started with deploying a wiki in a team. For those that have been working with wikis and other collaborative tools for a while, there probably aren’t many surprises here.

  1. Consider the wiki within your ecosystem of tools. For CSS-D and MedShelf, the wikis were able to offload many of the frequently asked questions (and, to an even greater extent, the frequent responses) from the corresponding email lists. This helps to increase the signal-to-noise ratio on the lists for list members who have been around for a while, increasing their satisfaction with the lists and perhaps making them more likely to stick around.

    Another major benefit of moving some of this content from the mailing lists to the wiki is that new readers had less to read to get an answer. If you’ve ever searched for the answer to a problem and found part of the solution in a message board or mailing list archive, you may be familiar with the experience of having to read through several proposed, partial solutions, synthesizing as you go, before arriving at the information you need. If all of that information is consolidated as users add it to the wiki, the burden of synthesizing shifts from each time the information is accessed to just each time someone adds new information to the wiki.

    In addition to considering how a wiki (or really, any other new tool) will complement your existing tools, consider what it can replace. At Boeing, the wiki meant that workgroup members could stop using another tool they didn’t like. If there had been a directive to use the wiki in addition to the other tool, it probably wouldn’t have been as enthusiastically adopted. One of the reasons that the SI Wiki has floundered a bit is that there are at least three other digital places this sort of information is stored: two CTools sites and an intranet site. When people don’t know where to put things, sometimes they just don’t put them anywhere at all.

  2. Sometimes value comes from aggregation rather than synthesis. In the previous point, I made a big deal out of the value of using the wiki to synthesize information from threaded discussions and various other sources. When we started the MedShelf project, I was expecting all wikis to be used this way, but I was very wrong. With MedShelf, a lot of the value comes from individuals’ stories about coping with the illness. Trying to synthesize that into a single narrative or neutral article would have meant losing these individual voices, and for content like this, aggregation — putting it all in the same place — can be the best approach.

    The importance of these individual voices also meant that many more pages than I expected were single-authored.

  3. Don’t underestimate the value of a searchable & browsable collection. Using the workgroup wiki, team members have found the information they needed because they knew about one project and were then able to browse links to documentation of other, related projects that held that information. Browsing between a project page and a team member’s profile has also helped people to identify experts on a given topic. The previous tools for documenting projects didn’t allow for connections between different project repositories and made it hard to browse to the most helpful information. But this only works if you are adding links between related content on the wiki, or if your wiki engine automatically adds related links.

    For the wikis tied to mailing lists (CSS-D and Medshelf), some people arriving at the wiki through a search engine, looking for a solution to a particular problem, have browsed to the list information and eventually joined the list. This is certainly something that happens with mailing list archives, but which makes a better front door — the typical mailing list archive or a wiki?

  4. Have new users arrive in parallel rather than serial (after seeding the wiki with content). The Boeing workgroup wiki stagnated when it was initially launched, and did not really take off until the wiki evangelist organized a “wiki party” (snacks provided) where people could come and get started on documenting their past projects. Others call this a Barn Raising. This sort of event can give potential users both a bit of peer (or management) pressure and necessary technical support to get started adding content. It also serves the valuable additional role of giving community members a chance to express their opinions about how the tool can/should be used, and to negotiate group norms and expectations for the new wiki.

    Even if you can’t physically get people together — for the mailing list wikis, this was not practical — it’s good to have them arrive at the same time, and to have both some existing content and suggestions for future additions ready and waiting for them.

  5. Make your contributors feel appreciated. Wikis typically don’t offer the same affordances for showing gratitude as a threaded discussion, where it is usually easy to add a quick “thank you” reply or to acknowledge someone else’s contribution while adding new information. With wikis, thanks are sometimes rare, and users may see revisions to content they added as a sign that they did something wrong, rather than that they provided a good starting point to which others added. It can make a big difference to acknowledge particularly good writeups publicly in a staff meeting or on the mailing list, or to privately express thanks or give a compliment.


Training, Integration, and Identity: A Roundtable Discussion of Undergraduate and Professional Master’s Programs in iSchools

Libby Hemphill and I are hosting a roundtable discussion at the 2008 iConference, hosted by UCLA, at the end of February.

Professional students, whether undergraduates or master’s students, represent a significant portion of the iSchool community. How do iSchools effectively educate those students while continuing to develop successful research programs? This roundtable discussion will focus on how iSchools educate their professional students and engage them in the research aspect of their programs. Innovative approaches to training and integration will be the central theme of this discussion. In an iSchool – where students training for professions including librarianship, information policy, human-centered computing, and preservation, and researchers exploring such topics as incentive-centered design, forensic informatics, computational linguistics, and digital libraries, have both competing and complementary goals – the potential for collaboration, innovation, misunderstanding, and disharmony is high.

The annual iConference provides a unique opportunity for us, as a community, to discuss the roles our professional students have in shaping our identity and our practices. The proposed roundtable will invite participants to discuss questions such as:

  • What should the role of research in training information professionals be?
  • How can we best engage professional students in our research?
  • How do iSchools address the unique curricular challenges we face in preparing students for a very wide variety of careers?
  • What do we want an Information degree to signal in the marketplace?
  • What are some successes in which research and professional training have benefited one another?

Participants will share innovative approaches to professional education, best practices in engaging professional students in research programs, and remaining challenges. We intend roundtable participation to represent the diversity of iSchools’ current programs.

We’ve set up a wiki for pre-conference sharing of exemplary programs, questions, and thoughts. It’s pretty sparse right now, but we’ll be adding some of our thoughts before the conference, and we welcome your contributions!

This is a topic that I started giving more thought around the time of the 2006 iConference, and I am looking forward to the discussion in February.