Posts Tagged ‘Compendium’

What Comes After CHI? The Systems of Truth Workshop

March 5, 2018

The Center for Human-Computer Interaction (CHCI) at Virginia Tech just wrapped up its third workshop in the “What Comes After CHI?” series, this one focused on the theme “Socio-technical Systems of Truth”.  Kurt Luther was the primary organizer, and information about the workshop is at https://systemsoftruth.wordpress.com/.  The workshop is described as follows:

This two-day workshop, held March 1–2, 2018 … will explore interdisciplinary perspectives on designing socio-technical systems of truth. We advocate for human-centered systems of truth that acknowledge the role of belief, testing, and trust in the accretion of knowledge. We focus on processes of questioning and accountability that enable a deeper understanding of the world through the careful, comprehensive gathering and analysis of evidence. We will consider the entire investigative pipeline, from ethical information gathering and archiving, to balanced and insightful analysis, to responsible content authoring and dissemination, to productive reflection and discourse about its implications.

This post collects some of my own observations about what interested me and is not meant to be comprehensive; look to the workshop site for fuller summaries.

The workshop kicked off with faculty lightning talks, featuring 8 faculty from 4 different departments and centers around campus.  I talked about how core HCI topics—particularly things that I care about, like claims and personas—connect with the themes of this workshop.  I included results from surveying my 105-person introductory HCI class. I used Shuo Niu’s AwareTable system to mine the student answers for term occurrence and frequency, revealing workshop-relevant terms (e.g., social (40), media (14), bias (15), ethical (20)), key course concepts (e.g., claim (4), persona (6), artifact (6), scenario (6), constraint (3)), and topics mentioned in the invited guest bios and abstracts like dead (3), nude (4), and the scary gap between academia and industry (3).  You’ll have to read up on the invited guests to learn the relevance of those last few terms!
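As an aside, that kind of occurrence-and-frequency pass is simple to approximate. Here is a minimal sketch in Python, assuming the survey answers have been gathered into a list of strings; the answers and term list below are made up for illustration, and this mimics the idea rather than AwareTable’s actual implementation:

    # A toy term-frequency pass over free-text survey answers (hypothetical
    # data; this mimics the idea, not AwareTable itself).
    import re
    from collections import Counter

    answers = [
        "Social media bias is an ethical problem for designers.",
        "We used personas and scenarios to frame ethical claims.",
    ]
    counts = Counter(
        word
        for answer in answers
        for word in re.findall(r"[a-z]+", answer.lower())
    )
    for term in ("social", "media", "bias", "ethical", "persona", "claim"):
        # No stemming here, so "personas" will not match "persona".
        print(term, counts[term])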

The big highlight of the workshop was having four invited fellows in attendance: Mor Naaman, Alice Marwick, Travis Kriplean, and Jay Aronson. Each gave a talk, followed by discussant comments and open discussion.  There were also several breakout groups that explored relevant topics, and a reception and dinner.  Here’s a quick take on each of the talks and the other events.

Mor Naaman spun off the notion of “systems of trust”, where trust is the result of truth.  He focused on his research into Airbnb, showing (among other things) that longer profiles, and profiles that cover more topics, correlate with higher trustworthiness ratings.  So what’s the right thing to say in your Airbnb profile? Things like “We look forward to hosting you.”  And the wrong thing? Providing a life motto.

So what about fake news? Mor noted that there’s no good reliability/credibility signal.  Possible solutions? Local news, where familiarity and relevance are high.  Proof that statements are true (but how to do that?).  Discussant Tanu Mitra pushed on that notion, seeking to identify ways to encourage people to call out fake news, with the danger of risking (or helping?) their own reputation.

Alice Marwick talked about fake news: how it is created, why it is shared, how it works its way into our consciousness, and how it is (and can be) debunked.

Are people who share fake news “dupes”?  That’s been proven false multiple times over.  They share stories that support pre-existing beliefs and signal identity to like-minded others.  Algorithmic visibility and social sharing contribute to this.  What to do? Understand where fake news resides in the media ecosystem, take polarization and partisanship into account in fact checking, and scrutinize algorithms and ad systems.

During the Q&A led by Carlos Evia (and afterward), Alice noted that it’s difficult for average citizens to know what to do when someone they know (someone they’re related to) puts forth information that’s clearly false.  It’s hard to foster dialog when people are repeating stories that mirror a deeply-felt belief. The many fact-checking sites out there (Snopes, Politifact) do not seem to influence behavior, and corrections often lead to more repetition of the misinformation.

Travis Kriplean put forth three categories of systems of truth, each illustrated with systems he has crafted.  The categories (and systems) include:

  • empirical (fact$, consider.it)
  • intersubjective (slider.chat, reflect, deslider.com, consider.it)
  • reflective (cheeseburgertherapy, dellider, consider.it)

Andrea Kavanaugh took the lead on the discussion. One statement by Travis that resonated with me was that people have to be part of the loop—though it was unclear how that could happen with a web site.

Travis used the notion of claims a lot, but not in the Toulmin or Carroll or Sutcliffe or McCrickard sense of the word: he seemed interested in claims as hypotheses, to be debated with the help of systems.

Jay Aronson talked about methods to organize and analyze event-based video. The early part of Jay’s talk addressed how technology is a double-edged sword: it can be used for “good”, but also for harm. He emphasized the need for a trusted human in the loop, which I read as an “Uncle Walt” (a Walter Cronkite, or a Billy Graham) to work the controls.

The bulk of Jay’s talk featured an examination of a video he created to document a murder that took place at a protest in Ukraine.  He stitched together a collection of mobile phone videos that were taken at the protest.  There are often tons of videos of disasters, so how can you sync them?  The obvious way seems to be to look for similar (visual) objects in the videos, but that’s hard. Audio proved to be easier: by identifying similar loud sounds, Jay could align videos taken from different locations.
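To make the audio approach concrete, here is a minimal sketch of that kind of alignment, assuming each clip’s audio has already been extracted to a mono sample array. This is my illustration of the general cross-correlation technique, not Jay’s actual pipeline:

    # Cross-correlate two audio tracks to estimate their relative offset
    # (a sketch of the general technique, not Jay's actual pipeline).
    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_offset_seconds(audio_a, audio_b, sample_rate):
        # Cross-correlation computed as convolution with one signal reversed;
        # a shared loud event (a gunshot, a chant) yields a strong peak.
        corr = fftconvolve(audio_a, audio_b[::-1])
        lag = np.argmax(corr) - (len(audio_b) - 1)
        # Positive lag: clip B's content begins `lag` samples into clip A.
        return lag / sample_rate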

Jay hired animators to connect the videos, which made him somewhat uncomfortable. These sketch-based animations make assumptions that aren’t present in the video, though they stitch together a compelling argument. Jay cautions against decoupling the video from the people; they need to be coupled to maintain “truth”.

Deborah Tatar, in her discussion, noted that the ability to query video is very important–YES!  But it took around a year to produce this one video, so a query system that can answer anything more than a trivial question in less than six months seems far away.

Breakout groups were each centered on a series of questions. A common theme was the effort to define terms like “system” and “truth”, and efforts to define people’s role in systems of truth. This section details my perspective on some of the discussions in breakout groups.

So who do we need?  Is it Walter Cronkite or Billy Graham? Mor’s work suggests that someone local may help to turn the tide, like the “News 2 at 5pm” anchor. Were/are any of these people more trustworthy than Rachel Maddow, Bill O’Reilly, and the like?  Or just less criticized?  Or is there some different sort of person we need?

How do we determine what’s true?  And can we do so while avoiding provocative phrases like “pants on fire” (and “lying”, per the Wall Street Journal controversy from 2017)?  So is there a set of words that are provocative, that should be avoided?  And if a system helps with that, could it avoid such words?  From Snopes:

I realize this is quite possibly a novel idea to the Daily Caller writer, but here at snopes.com we employ fact-checkers and editors who review and amend (as necessary) everything we publish to ensure its fairness and accuracy rather than just allowing our writers to pass off biased, opinionated, slanted, skewed, and unethically partisan work as genuine news reporting.

Perhaps some Daily Caller writers could give that approach a try sometime.

I realize that it may be fun for an organization like Snopes to put the smackdown on an outlet like Daily Caller that puts forth factually inaccurate articles. But is the closing snark helpful for advancing the argument, particularly to those who wish to think positively about Daily Caller?

The systems that Travis developed prompted a lot of discussion in one breakout group about systems that help with decision-making.  IBIS-based systems were a big part of that, including gIBIS, QOC, and Compendium.  Steve talked about his thesis work, which was related to IBIS.  And I interjected about claims as a hypothesis investigation technique.

The reception and dinner provided a great venue for further discussion.  Students presented posters at the reception, held in the Moss Arts Center lobby area.  Big thanks to my students, Shuo Niu and Taha Hasan, for putting together posters about their work for the event. The dinner was upstairs in the private room at 622 North.

Next steps seemed to start with a writeup that would appeal to a broad population, including a VT News posting and possibly an interactions article. Some sort of literature review might fit well into someone’s Ph.D. dissertation.  Design fictions, part of one breakout session, might help spur thoughtful discussion.  And follow-up workshops at CHI and elsewhere seem like a good next step.

I suggested putting forth a series of videos, perhaps as a class project for students in the Department of Communication at VT–they’ve put together other compelling video collections.  The videos could be made available on YouTube for use in classes and other meetings.

It was great to see the different perspectives at the workshop, and I’m particularly grateful to the invited speakers for taking the time to connect with us.  Looking forward to the next steps!

Book review: The Heretic’s Guide to Best Practices

January 23, 2012

The Heretic’s Guide to Best Practices: The Reality of Managing Complex Problems in Organisations, by Paul Culmsee and Kailash Awati, examines how groups of people can work to define a complex problem and to identify possible solutions. The book is divided into three sections: the first argues why “best practices” often fail in the face of wicked problems; the second examines how people can work together (with a focus on dialog mapping, issue-based information systems (IBIS), and Compendium); and the third provides case studies illustrating successes and lessons learned from the authors’ work experiences. I found the middle section to be the most interesting and enlightening: it included motivation and history behind dialog mapping, with lots of illustrative examples and key citations balanced by alternative approaches. Much of the book centers on uses and examples of Compendium, the free IBIS-based dialog mapping tool I discussed in a previous post. In case you worry that the authors don’t eat their own dog food, a great many of the figures were generated by Compendium—reflecting intermediate steps of how a manager can address wicked problems using the tool.

The book represents an interesting pairing of authors. Paul Culmsee is a consultant who probably knows more about dialog mapping and Compendium than anyone (except maybe Jeff Conklin of gIBIS fame, who wrote a glowing foreword to the book with high praise for Culmsee). Kailash Awati is an information systems manager, with a couple of Ph.D. degrees and experience at several levels in academia. Both Culmsee and Awati blog prolifically, and many of their blog posts fed nicely into this book (a trick I’m using to prepare my own book). People familiar with their blogs will find their signature style throughout this book: irreverent humor, pop-culture references, and in-depth examples. (At times, though, I feel their pop-culture irreverence would be better if rooted in fact; e.g., the real Clippy story is interesting and perhaps relevant, and the people and stories behind its development are still out there.)

There were a few major weaknesses of the book (though in the spirit of “wicked-ness”, many of these drawbacks to me may be neutral (or advantages!) to you, so take them as such). The index is very weak (fewer than three pages for a book approaching 400 pages). I’d love to look up what they have to say about strong reciprocity, or whose views of claims they discuss, or their view of McCall’s PHI approach to wicked problems, or their thoughts on positions in IBIS, or numerous other topics—but such a short index just doesn’t provide adequate support for a lot of important queries. In addition, I often find that books suffer from a certain myopia when it comes to the authors’ favored approaches, though there’s somewhat less fandom in this book than is seen in many books of this type. They certainly show a favoritism to IBIS and Compendium, but it’s the authors’ prerogative in writing a book to choose which approaches to focus on and how much to talk about the weaknesses of a favored approach. More generally, they took the “depth over breadth” approach in this book, with heavy details about a few approaches rather than touching on a more inclusive set. It’s great to see examples, but not at the exclusion of alternatives. Somewhat tellingly, the references list contains only 122 entries—there’s no mention of the work of Schön, Toulmin, McCall, Moran, Carroll, or others who have had important (nay, foundational) things to say about the topics in this book.

So who should get this book? The book targets technology managers who are looking for a way to address complex problems, and plenty of software professionals (e.g., ones who want to “deprogram” their managers) could benefit from it as well. Certainly anyone who uses Compendium or, more generally, embraces IBIS as a design approach or wicked problems as a problem classification should read it. If you like Jeff Conklin’s book, then (dare I say it?) I bet you will like this one even more. To grossly oversimplify, this is like Conklin’s book but more so: more motivation and framing of the problem type, lots more examples, five more years of experiences and Compendium advances, more history of where these ideas came from, and more positive and negative examples of Compendium’s utility. If that sounds appealing, you should get a copy of this book.

20+ years on from gIBIS and QOC

December 2, 2011

The Issue-Based Information System (IBIS) approach to capturing and using design rationale is one of the leading design theories that addresses how groups identify, structure, and make decisions during the problem-solving process. IBIS was conceived by Horst Rittel in the 1970s as a way to deal with what he called wicked problems, unique and novel problems with no stopping rule or “right” answer. IBIS was the first of many argumentation-based solutions—spawning or directly influencing instantiations that include PHI, QOC, DRL, gIBIS, and Compendium—with a common trait that outlining the problem space is equivalent to outlining the solution space. This post outlines the evolution of these design approaches, briefly exploring some key questions about when these approaches are (and aren’t) well-suited, particularly for the field of human-computer interaction.

Two writings were central to this historical review, connecting to all the other papers referenced here: Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC by Buckingham Shum, Selvin, Sierhuis, Conklin, Haley, and Nuseibeh (the title of which inspired this post’s title); and Rationale Management in Software Engineering: Concepts and Techniques by Dutoit, McCall, Mistrik, and Paech. And a great many more readings were highly influential and enlightening: the classic “Design Rationale” book by Moran and Carroll that captured the 1996 state-of-the-art for the design rationale field, the recent pair of special issues of the Human Technology journal on Creativity and Rationale in Software Design, and ongoing practitioners’ views on wicked problems and IBIS via blog posts and white papers by Paul Culmsee, Kailash Awati, Jeff Conklin, and others I’m surely forgetting.

Rittel is often pointed to as the initiator of design rationale due to a series of papers from the early 1970s to the early 1980s. He laid out an extensive definition for wicked problems, featuring ten distinguishing properties, with Melvin Webber in a 1973 paper Dilemmas in a General Theory of Planning (though Rittel and his colleagues had been discussing the issue and postulating approaches for at least five years prior to the paper). Other Rittel papers suggested that the right approach to address wicked problems was through issues, situationally-dependent questions that are “raised, argued, settled, ‘dodged’, or substituted” during a design session. Rittel’s concept of an issue is core to his Issue-Based Information Systems (IBIS) approach to group design and decision making, in which issues start a questioning process that links each issue with facts, positions, arguments, and other structures through knowledge relationships. The result is a knowledge space that doesn’t solve the issue, but rather creates an environment of “support” and “planning” where people better understand each others’ points of view.
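For readers who haven’t seen IBIS, its knowledge space is easy to picture as a typed node-link graph. Here is a minimal sketch, where the node and link vocabularies are my own simplified reduction of Rittel’s model rather than his full schema:

    # A minimal sketch of an IBIS-style knowledge space (the node and link
    # vocabularies are my simplified reduction of Rittel's model).
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str   # "issue", "position", "argument", or "fact"
        text: str

    @dataclass
    class IbisMap:
        nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)  # (source, relation, target)

        def add(self, kind, text):
            node = Node(kind, text)
            self.nodes.append(node)
            return node

        def link(self, source, relation, target):
            self.links.append((source, relation, target))

    # An issue is raised, a position responds, and arguments attach to it;
    # the map supports understanding rather than "solving" the issue.
    m = IbisMap()
    issue = m.add("issue", "How should a news feed signal credibility?")
    pos = m.add("position", "Highlight local sources, where familiarity is high")
    m.link(pos, "responds-to", issue)
    con = m.add("argument", "Local outlets can also spread misinformation")
    m.link(con, "objects-to", pos)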

IBIS was refined and simplified in subsequent years, and the IBIS tree-like structure led to many automated tools. Rittel’s student Ray McCall created a Procedural Hierarchy of Issues (PHI) refinement to IBIS that underpinned many of the early tools: PROTOCOL, MIKROPLIS, PHIDIAS, and JANUS. Other approaches to design rationale management drew inspiration from this early IBIS/PHI work: QOC by MacLean, Young, Bellotti, and Moran; and DRL from the work of Potts and Bruns and of Lee. Perhaps the first widely-used tool was gIBIS, a graphical IBIS tool developed by Jeff Conklin and his collaborators and popularized in the 1990s, along with its follow-up tool QuestMap. Much of the reflective literature on design rationale groups these techniques together, with Dutoit et al. noting that “there are so few significant differences in the schemas of IBIS, QOC, and DRL” (though there’s an excellent detailing of the differences in that paper). In general, these tools tended to be less intrusive than the original IBIS approach (e.g., less formality, resulting in simpler structures) and to produce more prescriptive outcomes (with specific solutions). This simpler model enabled more immediate value for the participants, for whom value from the tool was imperative for any time investment.

Conklin joined with numerous other researchers and practitioners—Simon Buckingham Shum, Al Selvin, and others—to create the most current and widely-used instantiation of the IBIS ideas in the Compendium dialog mapping tool. I discussed Compendium in a previous tool review post, though I didn’t use it to its fullest capacity—in a collaborative situation in which divergent opinions need to be drawn together toward a common understanding. The big issues that I had with Compendium were with scalability and history: it’s hard to see more than a dozen nodes at once, and support for rolling back to previous views was limited. But it’s much more usable than gIBIS, and it seems to have attracted a fairly sizable following among usability consultants. Features like scalability and history don’t seem to be a focus of the Compendium tool. In fact, it seems that the biggest contribution of Compendium is not in how knowledge is represented (which had been done before) or in how it is manipulated (simplified…or in some cases ignored or deferred to a future version of the tool), but in the social processes around how the tool is used: an expert in knowledge management and the IBIS/Compendium approach provides real-time guidance during the analysis process, toward helping the participants debate directions moving forward.

In summary, two related trends that I notice in these IBIS-based tools are that (1) the “hard” stuff is left for experts; and (2) the approaches seek more immediate value to designers. Perhaps this is a response to a shift from academia to consultant environments—consultants certainly need to carve out an “expert” role for themselves, and they’d better make sure there’s value to the participants at the end of the day.

Another trend from IBIS to QOC to gIBIS to Compendium is that the approaches seem to be increasingly question-driven—as opposed to issue-driven—with progressively fewer structuring options for the knowledge that is generated. Does the path of simplification and certainty of IBIS tools violate the original wicked problem mandate that problems don’t have solutions, merely different states of being? Or does the simplification actually match the vision of wickedness that Rittel initially posed? I worry that the increasingly tree-like structure of many of these graphs draws designers further away from the initial problem and doesn’t encourage revisiting issues (though I’ll acknowledge again that Rittel’s original IBIS graph (non-tree) structure with its many loops and cycles is far harder to understand).

One big drawback I see with all of these approaches is their inability to deal with the changing truth that occurs in most design efforts and is prominent in the field of human-computer interaction. I think that’s central to my issues with Compendium and other tools—regarding scalability and history—in which problem spaces become more complex over time. Lots of factors—the state of technology, the skill sets of the designers, the knowledge, skills, and acceptance levels of the target user population—change over time, and decisions that were made at any single point may not apply later. Recall two other key features of wicked problems: that solutions (or problem states) aren’t right or wrong, and that there’s no stopping rule. Popper, as paraphrased in Rittel and Webber’s 1973 paper, suggested that solutions to problems should only be posited as “hypotheses offered for refutation”; otherwise, you can end up pursuing tame solutions to wicked problems.

Finally, we must be careful that we’re not reducing the wickedness of a problem to the creation of a claims map, or the mapping of a dialog, or the removal of storms from brains—in effect, turning the wicked problem into a tame one. Or, if we choose to do that, we must ensure that, when a design team goes back to look at a DR representation, each element in it is appropriately questioned. Sometimes computer tools can hurt in that regard—they help designers violate some tenet of wickedness by providing a “memory” that captures truths that don’t exist, or by encouraging the capture of knowledge at the wrong granularity.

Compendium review

October 7, 2011

Compendium is a hypermedia mapping tool created by a consortium of universities and research labs in Europe and the US. It is rooted in Rittel’s wicked problem conceptualization and IBIS approach to design and design rationale capture, building on combined efforts of Al Selvin, Simon Buckingham Shum, Jeff Conklin, and many others. Compendium allows designers to create a node-link graph of interrelated concepts, including questions, ideas, pros and cons, references, and decisions. It’s similar to a lot of the mind map tools that are out there, though its scientific basis sets it apart from most of them.

I downloaded Compendium (version 2.0b1) and used it as part of a personal brainstorming session. I wanted to exercise the things that could be done with Compendium, then share my results with a remote group of colleagues–which seems to be the great strength of Compendium. Specifically, I wanted to explore how an app or set of apps on mobile interfaces could be used to encourage physical activity among K-12 students (mainly middle schoolers)–assuming each has a phone with the apps installed. We’ve come up with a small handful of games and activities ourselves, and we’ve been inspired by the process and products from the SICS-Microsoft “Inspirational Bits” effort. What we need is ideas, and that’s why I turned to Compendium.

It was a bit slow to get started using Compendium–even/especially with the 42-page (!) getting-started manual (though there’s also a 2-page quick reference sheet, which was helpful once I had a basic understanding of Compendium). There are the typical “starting-with-a-blank-slate” problems where the initial actions aren’t obvious, full of “aha” moments as I played around with it. The palette of nodes on the left of the screen was helpful–but clicking on them does nothing (aha, you can click-and-drag them onto the screen). I couldn’t quickly figure out how to link elements (aha, it’s not just a click-and-drag motion, that’s for moving nodes, it’s a right-click drag motion for linking). Hitting return when I finish typing a node name pops up a node dialog instead of just naming the node (aha, I can click elsewhere to defer adding node details).

My first pass in using the tool resulted in a single question and around 8-10 ideas that address the question. For better or worse, I then felt compelled to use other nodes–namely, the “pro” and “con” nodes (that are quite similar to the upsides and downsides of claims). I considered using the other nodes, but they didn’t seem as applicable–I didn’t want to “decide” anything, and I didn’t delve deep enough to include any references or videos or web sites. Alternating between ideas and pros/cons worked out well, as thinking about pros/cons typically inspired other ideas, and thinking about new ideas led to new pros and cons. Being a good engineer, I ended up with a four-level tree: a question at the root, four idea categories, a bunch of ideas, and a bunch of pros and cons for each idea.
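For concreteness, that four-level structure is just a small typed tree. A sketch like the following, with made-up node text standing in for my actual map, captures its shape:

    # The four-level brainstorm tree, sketched as nested dicts
    # (node text is made up; my real map had about 25 nodes).
    brainstorm = {
        "question": "How can mobile apps encourage physical activity in middle schoolers?",
        "categories": [
            {
                "category": "games",
                "ideas": [
                    {
                        "idea": "GPS-based scavenger hunt",
                        "pros": ["gets students outdoors and moving"],
                        "cons": ["needs reliable GPS on every phone"],
                    },
                ],
            },
            # ...plus three more idea categories in the real map...
        ],
    }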

The layout aspects of Compendium are a major weakness of this tool. There’s an automatic layout feature, but it seems to use a poor layout algorithm…thus there’s no way the Compendium graphical representation could support a large node set (beyond 20-30 nodes) of the kind that would emerge from many brainstorming sessions. For example, my 25-node graph, created in a few hours of thinking about, interconnecting, and evaluating ideas, couldn’t fit on the screen either horizontally or vertically in a readable manner. The zoom isn’t very smart, simply shrinking the nodes and thus losing the context that goes with them (rather than, say, keeping the icons visible and/or 1-2 keywords readable), and it doesn’t seem to be possible to zoom into only a portion of the graph. There are so many great graph layout algorithms, and so many ways to lay out and zoom graphs (e.g., radial, fisheye, hyperbolic), that it feels very limiting not to have them available in this tool.
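To see what Compendium leaves on the table, it takes only a few lines to compare standard layouts on a graph of similar size. Here is a minimal sketch using networkx and matplotlib (my choices for illustration; the stand-in tree is not a Compendium export, which the tool doesn’t offer in this form):

    # Compare a few standard graph layouts on a stand-in brainstorm tree
    # (networkx and matplotlib are my choices here; Compendium offers
    # nothing like this).
    import matplotlib.pyplot as plt
    import networkx as nx

    g = nx.balanced_tree(r=2, h=4)  # 31 nodes, roughly the size of my graph

    for name, layout in [("spring", nx.spring_layout),
                         ("kamada-kawai", nx.kamada_kawai_layout),
                         ("shell (radial)", nx.shell_layout)]:
        plt.figure()
        plt.title(name)
        nx.draw(g, layout(g), node_size=80, with_labels=False)
    plt.show()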

The computer-supported features of Compendium are what I feel makes it worthwhile. When I “lost” a list that I created, I was able to search for it to locate the node where I stored it. You can limit search just to the visible nodes or extend it to embedded lists, notes, etc. The search also extends to deleted nodes, so as a project ages I could find ideas from weeks or months before. There seem to be back/forward buttons and a history bar which (I assume) allow you to revert to previous versions of the page–but I couldn’t get this to work. A well-implemented history feature seems like one of those things that would really be worthwhile; e.g., to view what transpired between the beginning and end of last week’s meeting, or to revert to the state of inquiry from the start of the meeting. But a well-implemented history also seems hard to implement!
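The deleted-node search is the behavior I found most useful, and it is easy to picture. Here is a toy sketch over a hypothetical node store (Compendium actually keeps its nodes in a database, so this illustrates the behavior, not the implementation):

    # Search that spans active and deleted nodes (hypothetical node
    # records; Compendium's real store is a database).
    def search(nodes, query, include_deleted=True):
        q = query.lower()
        return [
            n for n in nodes
            if (include_deleted or not n.get("deleted", False))
            and q in (n.get("label", "") + " " + n.get("detail", "")).lower()
        ]

    nodes = [
        {"label": "GPS scavenger hunt", "detail": "outdoor game idea"},
        {"label": "old step-count idea", "deleted": True},
    ]
    print(search(nodes, "idea"))          # finds both, including the deleted one
    print(search(nodes, "idea", False))   # finds only the active node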

After completing a Compendium graph, I sent it to my six remote colleagues, both in HTML form (which can be viewed by any web browser) and in XML form (which can be loaded if you install Compendium). Two of the colleagues responded back, both of whom seemed to look at the HTML version but not the XML version. There were positive comments about the ideas that emerged, though little that was specific. One of the two seemed interested in using it in a design session on her end, so I may update this paragraph with more details (and if there are comments from the other collaborators).

Any sort of design tool has overhead associated with it. At times I suspect a paper version of Compendium might be better, at least from a usability standpoint. When leading a design session, I want to get ideas up there quickly, I want to move them around quickly, I want to stack them and rearrange them and hand them to breakout groups to flesh out themselves. Those sorts of things seem to go faster with Post-Its than with Compendium. And I don’t think it’s the fault of Compendium–I noticed the same sort of thing with PIC-UP (from our lab), where the richer and more communicative interactions occurred with paper versions of the cards. Of course, those were short one-off sessions, and I suspect the real value of a tool like Compendium lies in its use over long periods of time.

Compendium has been widely used, and there are tons of comments on its web site. Compendium’s heyday seems to be the 2003-2007 time frame, when there was an annual meeting on it, and lots of case studies and papers were emerging about it. It’s certainly still active–the beta version that I tried was released earlier this year–but much of the web site seems dated, and there’s currently no formal support for the tool (though there’s a somewhat active online forum for reporting bugs). It’s hard to judge how large and active the user community is right now, but historically there’s been a fair amount of use.

So is this a good tool? I think it can be great for the right type of situation–when you want to save and revisit and search a collection of ideas, and when you need some encouragement to balance questions, ideas, pros and cons, and associated rationale. The tool really guides you to do these things, and if you’ve got a task that could benefit from it then it can really be worthwhile. And a final note: Compendium seems to be one of those “all-or-nothing” tools–if you buy into its philosophy, there can be great value…but you have to buy into it. The project leader must decide that Compendium will be used, and the team members must agree to use it during their design. If you’ve just got a small portion of the team using it, then the results won’t be nearly as meaningful. But if everyone is on board, it can be a great repository for design.