Archive for the ‘Claims’ Category

What Comes After CHI? The Systems of Truth Workshop

March 5, 2018

The Center for Human-Computer Interaction (CHCI) at Virginia Tech just wrapped up its third workshop in the “What Comes After CHI?” series, this one focused on the theme “Socio-technical Systems of Truth”.  Kurt Luther was the primary organizer, and more information is available on the workshop site.  The workshop is described as such:

This two-day workshop, held March 1–2, 2018 … will explore interdisciplinary perspectives on designing socio-technical systems of truth. We advocate for human-centered systems of truth that acknowledge the role of belief, testing, and trust in the accretion of knowledge. We focus on processes of questioning and accountability that enable a deeper understanding of the world through the careful, comprehensive gathering and analysis of evidence. We will consider the entire investigative pipeline, from ethical information gathering and archiving, to balanced and insightful analysis, to responsible content authoring and dissemination, to productive reflection and discourse about its implications.

This post lists some of my own observations about the things of interest to me and is not meant to be at all comprehensive. Look to the workshop site for more comprehensive summaries.

The workshop kicked off with faculty lightning talks, featuring 8 faculty from 4 different departments and centers around campus.  I talked about how core HCI topics—particularly things that I care about, like claims and personas—connect with the themes of this workshop.  I included results from surveying my 105-person introductory HCI class. I used Shuo Niu’s AwareTable system to mine the student answers for occurrence and frequency, revealing workshop-relevant terms (e.g., social (40), media (14), bias (15), ethical (20)), key course concepts (e.g., claim (4), persona (6), artifact (6), scenario (6), constraint (3)), and topics mentioned in the invited guest bios and abstracts like dead (3), nude (4), and the scary gap between academia and industry (3).  You’ll have to read up on the invited guests to learn the relevance of those last few terms!
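The counting step behind that kind of analysis is simple enough to sketch. Below is a toy term-frequency pass of my own; AwareTable itself does much more, and the function name and sample answers are invented for illustration:

```python
from collections import Counter
import re

def term_frequencies(answers, terms):
    """Count occurrences of selected terms across free-text answers."""
    counts = Counter()
    for answer in answers:
        # Lowercase and split into alphabetic tokens before matching.
        for word in re.findall(r"[a-z]+", answer.lower()):
            if word in terms:
                counts[word] += 1
    return counts

answers = [
    "Social media bias is an ethical concern.",
    "A persona and scenario guide the artifact.",
    "Social platforms amplify bias.",
]
print(term_frequencies(answers, {"social", "bias", "persona"}))
```

A real pass would also want stemming (so "personas" counts toward "persona") and phrase handling, but even this level of counting surfaces which themes dominate a class's answers.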

The big highlight of the workshop was to have four invited fellows in attendance: Mor Naaman, Alice Marwick, Travis Kriplean, and Jay Aronson. Each gave a talk, followed by discussant comments and open discussion.  There were also several breakout groups that explored relevant topics, and a reception and dinner.  Here’s a quick take on each of the talks and the other events.

Mor Naaman spun off the notion of “systems of trust”, where trust is the result of truth.  He focused on his research into Airbnb, showing (among other things) that longer profiles, and profiles that cover more topics, correlate with higher trustworthiness ratings.  So what’s the right thing to say in your Airbnb profile? Things like “We look forward to hosting you.”  And the wrong thing? Providing a life motto.

So what about fake news? Mor noted that there’s not a good reliability/credibility signal.  Possible solutions? Local news, where familiarity and relevance are high.  Proof that statements are true (but how to do that?).  Discussant Tanu Mitra pushed on that notion, seeking to identify ways to encourage people to call out fake news, with the danger of risking (or helping?) their own reputation.

Alice Marwick talked about fake news: how it is created, why it is shared, how it works its way into our consciousness, and how it is (and can be) debunked.

Are people who share fake news “dupes”?  That’s been proven false multiple times over.  They share stories that support pre-existing beliefs and signal identity to like-minded others.  Algorithmic visibility and social sharing contribute to this.  What to do? Understand where fake news resides in the media ecosystem, take polarization and partisanship into account in fact checking, and scrutinize algorithms and ad systems.

During the Q&A led by Carlos Evia (and afterward), Alice noted that it’s difficult for the average citizen to know what to do when someone you know (someone you’re related to) puts forth information that’s clearly false.  It’s hard to foster dialog when people are repeating stories that mirror a deeply-felt belief. The many fact-checking sites out there (Snopes, Politifact) do not seem to influence behavior, and corrections often lead to more repetition of misinformation.

Travis Kriplean put forth 3 categories of systems of truth, with examples of systems he has crafted that fall into each category.  The categories (and example systems) include:

  • empirical (e.g., fact-checking tools),
  • intersubjective (e.g., ConsiderIt, Reflect),
  • reflective (e.g., Cheeseburger Therapy).

Andrea Kavanaugh took the lead on the discussion. One statement by Travis that resonated with me was that people have to be part of the loop—though it was unclear how that could happen with a web site.

Travis used the notion of claims a lot. But not in the Toulmin or Carroll or Sutcliffe or McCrickard sense of the word. He seemed interested in claims as hypotheses, to be debated with the help of systems.

Jay Aronson talked about methods to organize and analyze event-based video. The early part of Jay’s talk addressed how technology is a double-edged sword: it can be used for “good”, but also for harm. He emphasized the need for a trusted human in the loop, which I read as a need for an “Uncle Walt” (i.e., a Walter Cronkite) or a Billy Graham to work the controls.

The bulk of Jay’s talk featured an examination of a video he created to show a murder that took place at a Ukraine protest.  He stitched together a collection of mobile phone videos that were taken at the protest.  There are often tons of videos of disasters, so how can you sync them?  The obvious way seems to be to look for similar (visual) objects in the videos, but that’s hard. Audio proved to be easier: by identifying and connecting similar loud sounds, Jay could connect videos taken from different locations.
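To illustrate the audio-alignment idea (this is my own toy sketch, not Jay's actual pipeline; the signals and numbers below are invented), cross-correlating two tracks recovers the time offset between recordings that share a loud event:

```python
import numpy as np

def estimate_offset(audio_a, audio_b):
    """Estimate the sample offset that best aligns two audio tracks
    by cross-correlating them (shared loud events dominate the match)."""
    corr = np.correlate(audio_a, audio_b, mode="full")
    # Re-center the peak index so 0 means the tracks are already aligned.
    return int(np.argmax(corr)) - (len(audio_b) - 1)

# Two synthetic "recordings" of the same loud bang, captured by phones
# that started recording at different times.
rng = np.random.default_rng(0)
base = rng.normal(0, 0.01, 1000)   # quiet background noise
event = np.zeros(1000)
event[300] = 1.0                   # the loud bang at sample 300
track_a = base + event
track_b = np.roll(track_a, 120)    # second phone started 120 samples earlier
print(estimate_offset(track_b, track_a))  # prints 120
```

With real footage one would correlate onset envelopes rather than raw samples, but the principle is the same: the loud event acts as a shared anchor across every phone that heard it.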

Jay hired animators to connect the videos, which made him somewhat uncomfortable. These sketch-based animations make assumptions that aren’t present in the video, though they stitch together a compelling argument. Jay cautions against de-coupling the video from the people; they need to be coupled to maintain “truth”.

Deborah Tatar, in her discussion, noted that the ability to query video is very important–YES!  But it took around a year to produce the video, so a query system that doesn’t take 6 months to answer anything more than a trivial question seems far away.

Breakout groups were each centered on a series of questions. A common theme was the effort to define terms like “system” and “truth”, and efforts to define people’s role in systems of truth. This section details my perspective on some of the discussions in breakout groups.

So who do we need?  Is it Walter Cronkite or Billy Graham? Mor’s work suggests that someone local may help to turn the tide, like the “News 2 at 5pm” anchor. Were/are any of these people more trustworthy than Rachel Maddow, Bill O’Reilly, and the like?  Or just less criticized?  Or is there some different sort of person we need?

How do we determine what’s true?  And how do we do so while avoiding provocative phrases like “pants on fire” (and “lying”, per the Wall Street Journal controversy from 2017)?  So is there a set of words that are provocative, that should be avoided?  And if a system helps with that, could it avoid such words?  From Snopes:

I realize this is quite possibly a novel idea to the Daily Caller writer, but here at Snopes.com we employ fact-checkers and editors who review and amend (as necessary) everything we publish to ensure its fairness and accuracy rather than just allowing our writers to pass off biased, opinionated, slanted, skewed, and unethically partisan work as genuine news reporting.

Perhaps some Daily Caller writers could give that approach a try sometime.

I realize that it may be fun for an organization like Snopes to put the smackdown on an outlet like the Daily Caller that puts forth factually inaccurate articles. But is the closing snark helpful for advancing the argument, particularly to those who wish to think positively about the Daily Caller?

The systems that Travis developed helped to prompt a lot of discussion in one breakout group on systems that help with decision-making.  IBIS-based systems were a big part of that, including gIBIS, QOC, and Compendium.  Steve talked about his thesis work, which was related to IBIS.  And I interjected about claims as a hypothesis investigation technique.

The reception and dinner provided a great venue for further discussion.  Students presented posters at the reception, held in the Moss Arts Center lobby area.  Big thanks to my students, Shuo Niu and Taha Hasan, for putting together posters about their work for the event. The dinner was upstairs in the private room at 622 North.

Next steps seemed to start with a writeup that would appeal to a broad population, including a VT News posting and possibly an interactions article. Some sort of literature review might fit well into someone’s Ph.D. dissertation.  Design fictions, part of one breakout session, might help spur thoughtful discussion.  And follow-up workshops at CHI and elsewhere seem like a good next step.

I suggested putting forth a series of videos, perhaps as a class project for students in the Department of Communication at VT–they’ve put together other compelling video collections.  The videos could be made available on YouTube for use in classes and other meetings.

It was great to see the different perspectives at the workshop, and I’m particularly grateful to the invited speakers for taking the time to connect with us.  Looking forward to the next steps!


HCI, STS, CAT and other three letter words

April 9, 2014

My CS 6724 Applied Theories in HCI class has attracted an interesting and diverse group of people, who have brought to the table some new ways of looking at HCI as a discipline.  I thought there would be a close connection among the themes, but the class has really highlighted how far we have to go…and that perhaps we’re headed in the wrong direction, at least in terms of unifying ideas.

One class session focused on STS–Science and Technology Studies, or Science, Technology, and Society (or maybe it stands for other things).  Its roots are in history, sociology, and philosophy, and it has been emerging as an independent discipline since the 1970s.  I was struck by the diversity of people in the STS department at Virginia Tech–from 17th century literature to politics and technology in Russia to technology and education–to the point that it became difficult to identify the core themes of STS.

Another class session looked at themes from our new Institute for Creative Arts and Technologies, which describes itself as “at the nexus of the arts, design, science, and technology”.  Wow, what a broad endeavor, seeking to find a nexus among all of those fields!  It will be interesting to see what events the institute attracts, and whether there’s a core group of people from all four nexus disciplines that become active in the institute.  The class session looked at affect and visualization, a combination of two (three?) of the areas.  So maybe it’s not necessary to include all areas?

As an academic, I’m interested in how a curriculum for these areas will develop and evolve.  STS has two core classes that every grad student takes.  As yet, there’s no ICAT curriculum (no broad degree program, though there are sub-programs).  Yet having 0–2 “core” courses doesn’t seem that far off base, as there’s not a single required course to get the HCI Certificate from the Center for HCI, and the only required grad course in Computer Science is a seminar (though an “Intro to Grad Studies” course is strongly recommended).

So is that a problem?  Can a discipline survive, and thrive, without even pretending there’s core knowledge that’s important to the field?  It seems at some point we’ll have to answer that question, or die trying (or die not trying).

Celebrating Toulmin

March 25, 2013

Stephen Toulmin's Wikipedia and USC photo

The late Stephen Toulmin would have turned 91 today—and he came pretty close, making it to 87.  And as an inspiration to us all, he remained very active through most of his life, releasing an updated version of his seminal book The Uses of Argument in 2003.  The book has never been out of print, and its ideas have influenced researchers in areas from rhetoric and communication to computer science and engineering.

At the heart of his argumentation methods is the notion of a claim, a statement that you are seeking to argue is correct.  The subtle but important part of that definition is that a claim is falsifiable, in that one can argue successfully for or against a claim.  And, the “truthiness” of a claim may vary as we learn more things—consider, for example, claims about the age of the universe or the intelligence of dinosaurs. I provide an extended look at how the notion of claims evolved in human-computer interaction in a previous post.  Or, you can read my Making Claims book for the long story about claims in HCI!

But on his birthday, we should celebrate not only his work but his life. Toulmin was born and raised in England, and he released his seminal book in 1958, when he was still a young researcher.  But when his ideas were not well received in England, he moved to the United States.  He spent time on the faculty at Brandeis, Michigan State, Northwestern, the University of Chicago, and the University of Southern California. In the paperback version of his book, released in 1963, he was defensive of his ideas.  He certainly didn’t rest on his accomplishments though—in many ways his 1992 book Cosmopolis provides a more historically-grounded view of his philosophy (and he comes across as much more comfortable with his ideas).  The updated version of his book came out in 2003, and it, like much of his work of that time, reflected both a more confident and grounded philosophy while embracing his life position as a dissenter.

In many ways it would be hard to emulate his career track, as much of his highly-cited work was books and not papers, reflecting a different era in research.  But his career focus and ability to evolve ideas is worth studying.  And our current era has its own advantages–I can instantly post a blog entry on his birthday to initiate a small celebration and reflection!


DIS 2012: Designing for Cognitive Limitations

August 22, 2012

Together with Clayton Lewis, I hosted a workshop at the DIS 2012 conference titled Designing for Cognitive Limitations. We pulled together a great group of eight researchers and practitioners with interests in design and cognitive limitations (mostly cognitive disabilities). A full description of the workshop, including the call for participation and the position papers, is on the workshop web page. My thoughts on the workshop are summarized here.

First off, I was thrilled to get an impressive group of participants for this workshop. I’m not sure if a design-centered conference like DIS particularly attracts people who care about diverse populations such as those with cognitive limitations, or if that’s true of any HCI-related conference. And, despite the overlaps in communities and research areas, there were fewer connections between the people than I expected—not a lot of overlap in prior experiences together (though there certainly was some). An overview of the participants:

  • Joshua Hailpern is at HP Labs and has interests in aphasia emulation, empathy impact, and linguistics. His recently completed Ph.D. dissertation and a long list of papers at CHI, DIS, and ASSETS focus on these topics. His HP job is about modeling people and language, and will hopefully include aspects of accessibility.
  • Doris Hausen is a grad student at LMU in Germany. Her research looks at peripheral interaction as a sub form of multitasking, with a focus on lessons learned from usage, learnability, and modalities.
  • Young Seok Lee is one of nine people at the Motorola Mobility Research Center in a group that focuses on television and the TV experience, trends in TV with respect to sociality, transmedia, participatory experiences, and similar topics. As an example of the area, he pointed to Dan Olsen’s TOCHI 2011 work on the sports viewing experience.
  • Justin Brockle of Therap Services has been exploring methods of knowledge capture and sharing, privacy issues, and maintaining data centers. His company has been working in electronic documentation for cognitive disabilities for a while, and they are interested in possible partnerships with universities and other groups (e.g., on NSF grants and such).
  • Margot Brereton is a Professor at Queensland University of Technology. She takes a child-focused approach to supporting speech for diverse groups, including children with cognitive development issues, people living well with HIV, and others. Her research approach is rooted in participatory design (profiling kids and designing interactions with them; e.g., encouraging kids to take pictures through the day and reflect on them at home with dad by creating a photo album of experiences).
  • Mathew Kipling is a grad student at Newcastle University. He was helping out with the workshop, but he also acted as a participant. His interests are in photo recording and annotation, looking at ways to automate some recording using RFID tags and prototype devices.

As is often the case, a good part of the workshop gravitated to the introductions, but we did have two activities: a cognitive walkthrough and a claims-based prototype creation. The cognitive walkthrough (led by Clayton Lewis, who pioneered the technique) asked workshop participants to explore the ways that people with cognitive limitations would use (and have difficulties with) cameras on mobile devices like phones and tablets. Cognitive walkthroughs encourage people to use their expertise in an area (e.g., with respect to a cognitive disability) when using an interface, thereby experiencing the interface as a person with a cognitive limitation would experience it. People seemed to really get into the activity, discovering at each step of taking a picture what the user (not the workshop participant) would try to do and how they would try to do it. And importantly, the workshop participants got in a mindset of thinking like the target users—important for the second activity.
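For readers unfamiliar with the technique, a walkthrough asks a small set of standard questions at every step of a task. A minimal sketch of how one might structure the notes (the camera steps and function below are my own invention; the questions follow the commonly taught form of the method):

```python
# Questions asked at every step of a cognitive walkthrough.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the desired effect?",
    "Will the user see that progress is being made?",
]

camera_steps = ["Open the camera app", "Frame the subject", "Tap the shutter"]

def walkthrough(steps):
    """Pair each interface step with the questions an expert answers
    from the perspective of the target user."""
    return [(step, q) for step in steps for q in QUESTIONS]

for step, question in walkthrough(camera_steps):
    print(f"{step}: {question}")
```

The value of the exercise is in the answers, of course: each "no" answer, given honestly from the target user's perspective, is a candidate usability problem.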

The prototyping activity asked the workshop participants to create tools for people with cognitive limitations. Each group used one of three different card sets as a prototyping aid: the PIC-UP card set for notification systems, a Cognitive Claims card set as identified by conference participants, and the Context Cards from HaptiMap. Participants divided into three groups, each with a different card set and each with a different target system.

One group prototyped a tablet-based aphasic support tool that presented a tailored word list based on where the user was and who the user was with (e.g., if the user was in the kitchen, show kitchen words and pictures; if with friends, give cues related to common interests). A person with aphasia could use this tailored set of words to help with recall. This group used the PIC-UP card set, though mainly just at the beginning to consider possible ideas for the system.

Another group built an interface for a non-residential senior center, to help seniors identify programs of interest and to raise awareness of future programs. They created a design for mobile phones and large screens that would detect people and highlight friends’ activities. They reported that the Cognitive Claims cards were helpful as a checklist to remind them about needed functionality for the system (to make sure they weren’t forgetting anything).

The final group built a system to help conference attendees find their way around a university campus (where the conference was taking place). The group considered a typical day at the conference, helping users find the important campus venues. This group used the Context Cards, primarily as a checklist near the end of the design process (though the group did flip through them at the beginning of the prototyping activity).

It was somewhat surprising that none of the card sets were heavily used—mainly just to gain some early inspiration or to serve as a checklist late in the prototyping process. It reminded me of some of Christa Chewar’s early findings in building a claims repository, in that providing knowledge without explicit guidance does not result in significant usage of the knowledge. In retrospect, it seems essential to provide much more explicit activities to a design team on how to use the design cards; e.g., a card sorting task, or card-based storyboarding, or by using cards as heuristics.

The workshop closed with group reflection on future directions for designing with cognitive disabilities. Clayton Lewis shared opportunities and directions with NIDRR and RERC, delving into topics like profile-based interaction design and the very great need for mobile phone evaluation and (perhaps?) standardization. It was pointed out that everyone wants a simpler phone but nobody will buy one–thus the shift to an app-based model where you can extend your phone’s capabilities.

An underlying analogy that I took away from the workshop: design is experiencing a shift much like medical treatment is experiencing a shift—away from a symptom-based model toward a behavior-based model. That is, rather than stating that all people with aphasia require some technique, designers are looking at the techniques with promise for people with aphasia. Often that might lead a designer to look at techniques useful for people with autism, or people with brain injuries, or people with some other cognitive disability. And that’s where a claims library (and appropriate accompanying tools) can connect communities of designers and practitioners in diverse fields—allowing them to get new ideas, to share their own ideas, and to create products that are far better because of their connections.

Thinking visually, engaging deeply

January 27, 2012

Imagery provides opportunities to encourage thinking by enabling people to identify key aspects of an image and relate their own expertise to it. A well-chosen image can inspire new ideas, spark memories of prior experiences, highlight potential issues and drawbacks, and provide a point for conversation and debate. Eli Blevis has an interactions article, CHI workshop, and regular course at Indiana University that explores the impacts of digital imagery in HCI and design. In his article, he describes digital imagery as a form of visual thinking, where visual forms are used to create content and make sense of the world.

We turned to imagery as a way to inspire groups of designers to think broadly and engage meaningfully with each other during the design process. We looked for ways that images could serve as a starting point for group design activities, and as a gateway to other design knowledge. Specifically, we are interested in how imagery can be used to enhance claims during early-stage design. Claims, conceptualized by the classic Toulmin (1958) book and introduced to HCI by Carroll and Kellogg (1989), present a design artifact together with observed or hypothesized upsides (+) and downsides (-); e.g., a public display of information (+) can notify large groups of people about things of shared concern, BUT (-) often becomes an unattractive, densely-packed discordance of data. Claims are accessible when compared to much denser knowledge capture mechanisms like papers, patterns, and cases. But it is still a daunting task for designers to look through long lists of textual claims toward finding the right ideas.

Our approach to mitigate this problem is to use imagery as a bridge to each claim. We chose to represent each claim with an image, selected not just because it captured a key aspect of the claim but also because it allowed designers who viewed it to include their own interpretation of the technology and the context.
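The claim structure being discussed (an artifact plus (+) upsides and (-) downsides, with an image serving as a bridge) can be sketched as a small record. This encoding is purely illustrative, not a format from any of our papers:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A design artifact with hypothesized upsides (+) and downsides (-)."""
    artifact: str
    upsides: list = field(default_factory=list)
    downsides: list = field(default_factory=list)
    image: str = ""  # path to the bridging image, if one has been chosen

    def __str__(self):
        ups = "".join(f"\n  (+) {u}" for u in self.upsides)
        downs = "".join(f"\n  (-) {d}" for d in self.downsides)
        return f"{self.artifact}{ups}{downs}"

# The public-display example from the paragraph above.
public_display = Claim(
    artifact="A public display of information",
    upsides=["can notify large groups of people about things of shared concern"],
    downsides=["often becomes an unattractive, densely-packed discordance of data"],
)
print(public_display)
```

Even this tiny structure makes the tradeoff explicit: every artifact carries both its promise and its cost, which is what distinguishes a claim from a simple design guideline.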

Information exhibit image used in design sessions

We have used a set of around 30 image-claim cards in design activities (e.g., brainstorming, storyboarding), using the image cards both in printed and digital form. The benefits of the images-first approach were numerous. It allowed designers to process large numbers of claims quickly, connecting the ideas to their own experiences and expertise toward solving a design problem. It supported collaboration among designers through the shared understanding centered around the images. It encouraged broad speculation down paths not captured by the claims, sometimes resulting in new and different directions. A set of papers led by Wahid at Interact, DIS, and CHI capture the lessons and tradeoffs.

All of this is in keeping with the nature of a claim, whose original intent was as a falsifiable hypothesis (Toulmin, 1958; Carroll & Kellogg, 1989). However, a purely textual claim risks narrowing the associations of the reader to the words in the claim, and thus limiting the design considerations and even alienating designers unfamiliar with the text of a claim. It is through imagery, and specifically through images as the initial shared view in a design session, that designers can make sense of a problem and create meaningful and informed content.

The evolution of claims

January 18, 2012

This post seeks to trace the evolution of the claim in human-computer interaction (HCI), from its introduction in the Carroll and Kellogg (1989) paper through the appearance of three books, Carroll’s Making Use (2000), Sutcliffe’s The Domain Theory (2002), and Rosson and Carroll’s Usability Engineering (2002). (A chronological list of key papers is provided at the end of this post.) The definition and role of “claims” shifted significantly during that time period; I’m seeking to identify some of the evolutionary shifts from 1989 to 2002. This list isn’t meant to be complete, but rather it seeks to highlight the most important evolutionary points in the conceptualization of the claim.

Three phases highlight the progress in this evolution:
– Carroll and his colleagues at IBM T.J. Watson in the late 1980s and early 1990s. They were seeking ways to design not just toward creating a single design, but toward crafting a theory-based approach to design to enable designers to build on each others’ work in a meaningful, scientific way. This work continued until Carroll left for Virginia Tech, at which time his focus largely shifted to collaborative computing (save for a few papers that seemed to draw on his IBM work).
– Sutcliffe and Carroll’s collaboration, highlighted by Sutcliffe’s sabbatical time at Virginia Tech. Sutcliffe had been working for many years on knowledge abstraction in software design, and, like Carroll and his group, he was inspired by potential roles for theory in HCI.
– Three summative works led by Carroll, Sutcliffe, and Rosson. Each presented a different view of the role of claims—in the fields of design, engineering, and education, respectively.

Claims were introduced to the field of HCI in Carroll and Kellogg’s “Artifact as theory nexus” paper at CHI 1989. They seemed to base their definition on Toulmin’s 1958 use of the term, in which he established claims as a hypothesis-centered approach to crafting arguments. The Carroll and Kellogg paper seeks to move beyond the narrow focus of cognitive-based theories that were prominent in the 1980s (that focused on low-level phenomena like keystrokes) by introducing a hermeneutic approach based on psychological claims—the effects on people of both natural and designed artifacts. Claims were the central part of a task-analysis framework, an attempt to position the design and interpretation of HCI artifacts as a central component of HCI research. This approach was intended to bridge the gap from research to innovation—reconciling the “hermeneutics vs theory-based design” conflict in the title. Several examples in the paper showed how developing an understanding of a claim—the artifact and its possible effects—can point out how much we have to learn and can encourage us to draw broader conclusions. Many of these issues, in particular the connection of claims and claims analysis to the task-artifact cycle, are elaborated in a 1991 Carroll, Kellogg, and Rosson paper, but the ideas were first presented in the 1989 paper.

A 1992 BIT paper by Carroll, Singley, and Rosson provided the first in-depth view of the tech transfer of UE results (though see the Moran and Carroll 1991 special issue and 1996 book described below). It connected the Scriven view of mediated evaluation to claims’ upsides and downsides, positioning claims as a contributor in the field of design rationale. In so doing, it expounded upon claims as a way to reuse knowledge, by encouraging designer consideration of specialized vs abstract claims. The expectation was that designers could use claims to “avoid throwing away thoughtful empirical work”. They avoided Grudin’s paradox, stating outright that design rationale (including claims-centric design rationale) was not an automatic mechanism, but required additional human thought to yield a reusable knowledge unit.

A 1992 TOIS article by Carroll and Rosson opined that HCI should be an action-science “that produces ‘knowledge-in-implementation’ and views design practice as inquiry”. The paper argues that the task-artifact cycle is an action-science because designers must respond to user requirements by building artifacts with upsides and downsides—i.e., claims. This paper distinguishes the scenario/claim roles as such: “Where scenarios are a narrative account, claims are a causal account.” It argues that scenarios provided a situation narrative, but they are too rich, hard to classify, and hard to reuse (arguments brought up again and addressed to varying degrees by Sutcliffe, Chewar, and others). It is the claim that establishes the link to action-science by facilitating design analysis, providing a mechanism for generalization and hypothesis, and explicitly recognizing potential tradeoffs.

A 1994 IJHCS paper by Carroll, Mack, Robertson, and Rosson provided a software-centric scenario-based design approach, with Point-of-View (POV) scenarios drawing parallels to object-centric/object-oriented development. This paper represents the most process-based, engineering-focused, and software-generative view of scenario-based design—both until this time and thereafter. Although claims play a fairly minor role in this paper (only appearing in step 4, leveraging the upsides and downsides in analysis and hillclimbing), there seemed to be opportunity for a much larger role: identifying objects, specifying interactions between objects, supporting inheritance, etc. There was also initial discussion of an education focus for POV scenarios, SBD, claims, and such—but it was not elaborated, and the 2002 Rosson and Carroll textbook described a more simplified approach to teaching design. This paper seemed to put forth hypothesized starting points that were not fully pursued by the authors—rich for mining by Sutcliffe, Chewar, and others in the years to come.

Moran and Carroll’s 1996 Design Rationale book (elaborated from their 1991 special issue of the HCI Journal) is pointed to as a landmark in the field of design rationale. It draws together contributions from Jintae Lee, Allan MacLean, Clayton Lewis, Simon Buckingham Shum, Gary Olson, Gerhard Fischer, Colin Potts, Jeff Conklin, Jonathan Grudin, and many others. Of relevance to the topic of claims is the introduction (by co-editors Tom Moran and Jack Carroll) and a Carroll and Rosson chapter. These chapters exhibit connections in their work to Horst Rittel (wicked problems, IBIS), Francis Bacon (deliberated evaluation), Herb Simon (environment and behavior), and Donald Schön (contexts of experience)—putting forth the most synthesized view of the position of claims within the design community. Some of the psychological themes, particularly those of Simon, are elaborated in Carroll’s 1997 journal paper in Annual Reviews of Psychology.

A 1999 Sutcliffe and Carroll IJHCS paper summarizes the joint efforts of the two authors on the use of claims as a knowledge capture and reuse mechanism. It delved into the possibility of using claims for reuse, a concept touched upon in previous work but never described in sufficient detail. The paper introduced a formatting and classification scheme for claims (and scenarios) to enable their reuse, including a process and alternate pathways for claim evolution. Among the augmentations was the first explicit connection of a claim to its derivation history and background theory (i.e., where it came from), leading to the first claim map, which can reflect parentage, original and evolving context, motivation, evidence, and possibilities for reuse. Also of great importance was the acknowledgement of work left to do: methods for indexing, tool support (hypertext links, structure matching), and the need for buy-in (and stay-in) incentives.
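To make the reuse idea concrete, here is a minimal sketch in Python of a claim record that carries its upsides, downsides, context, and parentage, plus a helper that walks the resulting claim map back to a claim's origin. The field names are entirely hypothetical illustrations; this is not Sutcliffe and Carroll's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical claim record; field names are illustrative only, not
# Sutcliffe and Carroll's published classification scheme.
@dataclass
class Claim:
    name: str
    feature: str                                  # design feature the claim is about
    upsides: list = field(default_factory=list)   # positive effects
    downsides: list = field(default_factory=list) # negative effects
    scenario: str = ""                            # usage context it arose from
    theory: str = ""                              # background theory it derives from
    evidence: list = field(default_factory=list)  # supporting studies/observations
    parent: "Claim | None" = None                 # derivation history for the claim map

def lineage(claim):
    """Walk the claim map from a claim back to its root ancestor."""
    chain = [claim]
    while chain[-1].parent is not None:
        chain.append(chain[-1].parent)
    return [c.name for c in chain]

root = Claim("color-coded-status", "color coding",
             upsides=["draws attention to state changes"],
             downsides=["inaccessible to color-blind users"])
child = Claim("redundant-encoding", "color plus icon coding", parent=root)
print(lineage(child))  # ['redundant-encoding', 'color-coded-status']
```

A richer version would add the indexing and structure-matching support the paper flags as open work; the point here is only that parentage makes a claim's evolution traversable.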

Sutcliffe’s 2000 TOCHI paper seeks to address the perceived irrelevance of HCI in industry, particularly with regard to a theory-based engineering approach. The paper seeks to identify ways to deliver HCI knowledge in a tractable form that is reusable across applications, and, more importantly, across application areas. The paper argues that claims could provide a bridge if their reuse scope were improved; specifically, if there were generic versions of claims and artifacts, and if there were mechanisms for matching claims to new application contexts. The bulk of the paper provides a three-step process to accomplish this: steps for creating more generic claims, mechanisms for cross-domain reuse, and approaches to recognize broader implications. Parts of these are elaborated in Sutcliffe’s book (described later) and in the dissertations of Christa Chewar and Shahtab Wahid. Other important products of this work are the notion of claim families, a claims-patterns comparison, and an explicit recognition of the importance of claims as “designer-digestible” knowledge (one of my favorite phrases).

This series of papers culminated with three books that offered very different visions of design, with very different roles for claims. I plan to elaborate on these books in a future post, but here’s a brief summary of each. Carroll’s 2000 Making Use book pulled together his vision for scenario-based design for scientists, with an eye toward the discovery process. Claims are used to augment the scenario-based design process, highlighting key aspects of the design (and leaving the generalization of claims as an exercise for the designer). Sutcliffe’s 2002 The Domain Theory provides a reuse-centric view of software engineering, extending the vision of Rittel and the design rationale literature and approaches. The role of claims is to make concrete the Domain Theory’s high level of abstraction (too high, according to critics) by leveraging the high utility (but low flexibility and poor reuse) of claims. Finally, Rosson and Carroll’s 2002 Usability Engineering textbook advocates scenario-based development as a teaching tool, with claims and claims analysis a complementary and guiding technique to scenario development during each stage of design. It presents claims in a simplified, stripped-down manner (for better and worse) meant to be highly accessible to students. These books kicked off a period of scientific application, engineering refinement, and creative design that has continued in the years since they appeared.

Chronological bibliography:
== S. E. Toulmin (1958). The Uses of Argument. Cambridge University Press.
== J. M. Carroll and W. A. Kellogg (1989). “Artifact as theory-nexus: Hermeneutics meets theory-based design.” In Proceedings of CHI, pp. 7-14.
== J. M. Carroll, M. K. Singley, M. B. Rosson (1992). “Integrating theory development with design evaluation.” Behaviour and Information Technology 11, pp. 247-255.
== J. M. Carroll, M. B. Rosson (1992). “Getting around the task-artifact cycle: How to make claims and design by scenario.” ACM Transactions on Information Systems 10, 2, pp. 181-212.
== J. M. Carroll, R. L. Mack, S. P. Robertson, M. B. Rosson (1994). “Binding objects to scenarios of use.” International Journal of Human-Computer Studies 41, pp. 243-276.
== J. M. Carroll (1997). “Human-computer interaction: Psychology as a science of design.” Annual Review of Psychology 48, pp. 61-83.
== A. G. Sutcliffe and J. M. Carroll (1999). “Designing claims for reuse in interactive systems design.” International Journal of Human-Computer Studies 50, pp. 213-241.
== A. G. Sutcliffe (2000). “On the effective use and reuse of HCI knowledge.” ACM Transactions on Computer-Human Interaction 7, 2, pp. 197-221.
== J. M. Carroll (2000). Making Use: Scenario-based Design of Human-Computer Interactions. MIT Press.
== A. G. Sutcliffe (2002). The Domain Theory: Patterns for Knowledge and Software Reuse. Lawrence Erlbaum Associates.
== M. B. Rosson and J. M. Carroll (2002). Usability Engineering: Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann.

20+ years on from gIBIS and QOC

December 2, 2011 8 comments

The Issue-Based Information System (IBIS) approach to capturing and using design rationale is one of the leading design theories that addresses how groups identify, structure, and make decisions during the problem-solving process. IBIS was conceived by Horst Rittel in the 1970s as a way to deal with what he called wicked problems, unique and novel problems with no stopping rule or “right” answer. IBIS was the first of many argumentation-based solutions—spawning or directly influencing instantiations that include PHI, QOC, DRL, gIBIS, and Compendium—with a common trait that outlining the problem space is equivalent to outlining the solution space. This post outlines the evolution of these design approaches, briefly exploring some key questions about when these approaches are (and aren’t) well-suited, particularly for the field of human-computer interaction.

Two writings were central to this historical review, connecting to all the other papers referenced here: Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC by Buckingham Shum, Selvin, Sierhuis, Conklin, Haley, and Nuseibeh (the title of which inspired this post’s title); and Rationale Management in Software Engineering: Concepts and Techniques by Dutoit, McCall, Mistrik, and Paech. And a great many more readings were highly influential and enlightening: the classic “Design Rationale” book by Moran and Carroll that captured the 1996 state-of-the-art for the design rationale field, the recent pair of special issues of the Human Technology journal on Creativity and Rationale in Software Design, and ongoing practitioners’ views on wicked problems and IBIS via blog posts and white papers by Paul Culmsee, Kailash Awati, Jeff Conklin, and others I’m surely forgetting.

Rittel is often pointed to as the initiator of design rationale due to a series of papers from the early 1970s to the early 1980s. With Melvin Webber, he laid out an extensive definition of wicked problems, featuring ten distinguishing properties, in the 1973 paper Dilemmas in a General Theory of Planning (though Rittel and his colleagues had been discussing the issue and postulating approaches for at least five years prior to the paper). Other Rittel papers suggested that the right approach to addressing wicked problems was through issues, situationally-dependent questions that are “raised, argued, settled, ‘dodged’, or substituted” during a design session. Rittel’s concept of an issue is core to his Issue-Based Information Systems (IBIS) approach to group design and decision making, in which issues start a questioning process that links each issue with facts, positions, arguments, and other structures through knowledge relationships. The result is a knowledge space that doesn’t solve the issue, but rather creates an environment of “support” and “planning” where people better understand each others’ points of view.
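As a rough illustration of that structure, an IBIS-style knowledge space can be sketched as typed nodes joined by named relationships. The node kinds and link names below follow the general IBIS vocabulary, but the specifics are my own simplification, not any one tool's schema.

```python
# Minimal sketch of IBIS-style nodes and links; a simplification for
# illustration, not the schema of gIBIS, Compendium, or any other tool.
class Node:
    def __init__(self, kind, text):
        assert kind in {"issue", "position", "argument"}
        self.kind, self.text, self.links = kind, text, []

    def link(self, relation, other):
        # e.g., a position "responds-to" an issue; an argument "supports"
        # or "objects-to" a position; issues can link to other issues,
        # so the space is a general graph (cycles allowed), not a tree.
        self.links.append((relation, other))
        return other

issue = Node("issue", "How should design rationale be captured?")
pos = Node("position", "Record it as a graph of issues and arguments.")
pro = Node("argument", "Makes points of view explicit and comparable.")
con = Node("argument", "Capture overhead intrudes on the design work.")
pos.link("responds-to", issue)
pro.link("supports", pos)
con.link("objects-to", pos)

# The knowledge space doesn't "solve" the issue; it records the debate.
print([(n.kind, rel) for n in (pos, pro, con) for rel, _ in n.links])
```

Note that nothing here marks the issue as resolved; in Rittel's framing, the value is the recorded web of positions and arguments, not an answer.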

IBIS was refined and simplified in subsequent years, and the IBIS tree-like structure led to many automated tools. Rittel’s student Ray McCall created a Procedural Hierarchy of Issues (PHI) refinement to IBIS that underpinned many of the early tools: PROTOCOL, MIKROPLIS, PHIDIAS, and JANUS. Other approaches to design rationale management drew inspiration from this early IBIS/PHI work: QOC by MacLean, Young, Bellotti, and Moran; and DRL from the work of Potts and Bruns and of Lee. Perhaps the first widely used tool was gIBIS, a graphical IBIS tool developed in the late 1980s by Jeff Conklin and his collaborators and popularized in the 1990s, along with its follow-up tool QuestMap. Much of the reflective literature on design rationale groups these techniques together, with Dutoit et al. noting that “there are so few significant differences in the schemas of IBIS, QOC, and DRL” (though there’s an excellent detailing of the differences in that paper). In general, these tools tended to be less intrusive than the original IBIS approach (e.g., less formality, resulting in simpler structures) and to produce more prescriptive outcomes (with specific solutions). This simpler model offered more immediate value to the participants, for whom a clear return on the tool was a prerequisite for any time investment.

Conklin joined with numerous other researchers and practitioners, including Simon Buckingham Shum and Al Selvin, to create the most current and widely used instantiation of the IBIS ideas: the Compendium dialog mapping tool. I discussed Compendium in a previous tool review post, though I didn’t use it to its fullest capacity, in a collaborative situation in which divergent opinions need to be drawn together toward a common understanding. The big issues that I had with Compendium were with scalability and history: it’s hard to see more than a dozen nodes at once, and support for rolling back to previous views is limited; neither seems to be a focus of the tool. But it’s much more usable than gIBIS, and it seems to have attracted a fairly sizable following among usability consultants. In fact, it seems that the biggest contribution of Compendium is not in how knowledge is represented (which had been done before) or in how it is manipulated (simplified, or in some cases ignored or deferred to a future version of the tool), but in the social processes around how the tool is used: an expert in knowledge management and the IBIS/Compendium approach provides real-time guidance during the analysis process, toward helping the participants debate directions moving forward.

In summary, two related trends that I notice in these IBIS-based tools are that (1) the “hard” stuff is left for experts; and (2) the approaches seek more immediate value to designers. Perhaps this is a response to a shift from academia to consultant environments—consultants certainly need to carve out an “expert” role for themselves, and they’d better make sure there’s value to the participants at the end of the day.

Another trend from IBIS to QOC to gIBIS to Compendium is that the approaches seem to be increasingly question-driven, as opposed to issue-driven, with progressively fewer structuring options for the knowledge that is generated. Does this path of simplification and certainty in IBIS tools violate the original wicked-problem mandate that problems don’t have solutions, merely different states of being? Or does the simplification actually match the vision of wickedness that Rittel initially posed? I worry that the increasingly tree-like structure of many of these graphs draws designers further away from the initial problem and doesn’t encourage revisiting issues (though I’ll acknowledge again that Rittel’s original non-tree IBIS graph structure, with its many loops and cycles, is far harder to understand).

One big drawback I see with all of these approaches is their inability to deal with the changing truth that occurs in most design efforts and is prominent in the field of human-computer interaction. I think that’s central to my issues with Compendium and other tools, regarding scalability and history, in which problem spaces become more complex over time. Lots of factors change over time: the state of technology, the skill sets of the designers, the knowledge, skills, and acceptance levels of the target user population. Decisions that were made at any single point may not apply later. Recall two other key features of wicked problems: that solutions (or problem states) aren’t right or wrong, and that there’s no stopping rule. Popper, as paraphrased in Rittel and Webber’s 1973 paper, suggested that solutions to problems should only be posited as “hypotheses offered for refutation”; otherwise, you can end up pursuing tame solutions to wicked problems.

Finally, we must be careful that we’re not reducing the wickedness of a problem to the creation of a claims map, or the mapping of a dialog, or the removal of storms from brains—in effect, turning the wicked problem into a tame one. Or, if we choose to do that, we must ensure that, when a design team goes back to look at a DR representation, each element in it is appropriately questioned. Sometimes computer tools can hurt in that regard—they help designers violate some tenet of wickedness by providing a “memory” that captures truths that don’t exist, or by encouraging the capture of knowledge at the wrong granularity.