Archive for the ‘Professional activities’ Category

What Comes After CHI? The Systems of Truth Workshop

March 5, 2018 1 comment

The Center for Human-Computer Interaction (CHCI) at Virginia Tech just wrapped up its third workshop in the “What Comes After CHI?” series, this one focused on the theme “Socio-technical Systems of Truth”. Kurt Luther was the primary organizer, and information about the workshop is available on the workshop site. The workshop is described as follows:

This two-day workshop, held March 1-2, 2018 … will explore interdisciplinary perspectives on designing socio-technical systems of truth. We advocate for human-centered systems of truth that acknowledge the role of belief, testing, and trust in the accretion of knowledge. We focus on processes of questioning and accountability that enable a deeper understanding of the world through the careful, comprehensive gathering and analysis of evidence. We will consider the entire investigative pipeline, from ethical information gathering and archiving, to balanced and insightful analysis, to responsible content authoring and dissemination, to productive reflection and discourse about its implications.

This post lists some of my own observations about things of interest to me and is not meant to be comprehensive; look to the workshop site for fuller summaries.

The workshop kicked off with faculty lightning talks, featuring 8 faculty from 4 different departments and centers around campus.  I talked about how core HCI topics—particularly things that I care about, like claims and personas—connect with the themes of this workshop.  I included results from surveying my 105-person introductory HCI class. I used Shuo Niu’s AwareTable system to mine the student answers for term occurrence and frequency, revealing workshop-relevant terms (e.g., social (40), media (14), bias (15), ethical (20)), key course concepts (e.g., claim (4), persona (6), artifact (6), scenario (6), constraint (3)), and topics mentioned in the invited guest bios and abstracts like dead (3), nude (4), and the scary gap between academia and industry (3).  You’ll have to read up on the invited guests to learn the relevance of those last few terms!
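The real mining here was done with AwareTable; as a rough illustration of the underlying idea, here is a minimal term-frequency sketch in Python. The survey responses and the term list below are hypothetical placeholders, not the actual class data or AwareTable code.

```python
# Minimal sketch of term-occurrence counting over free-text survey answers.
# This is an illustration only, not Shuo Niu's AwareTable system; the
# responses and terms below are hypothetical placeholders.
import re
from collections import Counter

responses = [
    "Social media bias is an ethical concern for every persona we design for.",
    "A claim about an artifact should be tested against a realistic scenario.",
]

terms_of_interest = ["social", "media", "bias", "ethical",
                     "claim", "persona", "artifact", "scenario", "constraint"]

# Tokenize every response into lowercase words and count them all at once.
words = re.findall(r"[a-z]+", " ".join(responses).lower())
counts = Counter(words)

# Report how often each term of interest occurs across the responses.
for term in terms_of_interest:
    print(f"{term}: {counts[term]}")
```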

The big highlight of the workshop was having four invited fellows in attendance: Mor Naaman, Alice Marwick, Travis Kriplean, and Jay Aronson. Each gave a talk, followed by discussant comments and open discussion.  There were also several breakout groups that explored relevant topics, and a reception and dinner.  Here is a quick take on each of the talks and the other events.

Mor Naaman spun off the notion of “systems of trust”, where trust is the result of truth.  He focused on his research into Airbnb, showing (among other things) that longer profiles, and profiles that cover more topics, correlate with higher trustworthiness ratings.  So what’s the right thing to say in your Airbnb profile? Things like “We look forward to hosting you.”  And the wrong thing? Providing a life motto.

So what about fake news? Mor noted that there’s not a good reliability/credibility signal.  Possible solutions? Local news, where familiarity and relevance are high.  Proof that statements are true (but how to do that?).  Discussant Tanu Mitra pushed that notion, seeking to identify ways to encourage people to call out fake news, at the risk of damaging (or perhaps helping?) their own reputation.

Alice Marwick talked about fake news: how it is created, why it is shared, how it works its way into our consciousness, and how it is (and can be) debunked.

Are people who share fake news “dupes”?  That’s been proven false multiple times over.  They share stories that support pre-existing beliefs and signal identity to like-minded others.  Algorithmic visibility and social sharing contribute to this.  What to do? Understand where fake news resides in the media ecosystem, take polarization and partisanship into account in fact checking, and scrutinize algorithms and ad systems.

During the Q&A led by Carlos Evia (and afterward), Alice noted that it’s difficult for the average citizen to know what to do when someone they know (someone they’re related to) puts forth information that’s clearly false.  It’s hard to foster dialog when people are repeating stories that mirror a deeply-felt belief. The many fact-checking sites out there (Snopes, Politifact) do not seem to influence behavior, and corrections often lead to more repetition of the misinformation.

Travis Kriplean put forth three categories of systems of truth, with examples of systems he has crafted that fall into each category.  The categories (and example systems) include:

  • empirical,
  • intersubjective (e.g., Reflect),
  • reflective (e.g., Cheeseburger Therapy)

Andrea Kavanaugh took the lead on the discussion. One statement by Travis that resonated with me was that people have to be part of the loop—though it was unclear how that could happen with a web site.

Travis used the notion of claims a lot. But not in the Toulmin or Carroll or Sutcliffe or McCrickard sense of the word. He seemed interested in claims as hypotheses, to be debated with the help of systems.

Jay Aronson talked about methods to organize and analyze event-based video. The early part of Jay’s talk addressed how technology is a double-edged sword: it can be used for “good”, but also for harm. He emphasized the need for a trusted human in the loop, which I read as a call for an “Uncle Walt” (i.e., a Walter Cronkite) or a Billy Graham to work the controls.

The bulk of Jay’s talk featured an examination of a video he created to show a murder that took place at a Ukraine protest.  He stitched together a collection of mobile phone videos that were taken at the protest.  There are often tons of videos of disasters, so how can you sync them?  The obvious way seems to be to look for similar (visual) objects in the videos, but that’s hard. Audio proved to be easier: by identifying and matching the same loud sounds across recordings, Jay could align videos taken from different locations.
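Jay’s actual workflow involved painstaking human analysis, but the audio idea can be sketched simply: compute a coarse loudness envelope for each clip and cross-correlate the envelopes, letting a shared loud event (a gunshot, a chant) reveal the time offset between recordings. The code below is a hypothetical illustration of that idea, not Jay’s pipeline; the clip data is synthetic.

```python
# A minimal sketch of audio-based alignment: cross-correlate loudness
# envelopes of two clips so that a shared loud event reveals their offset.
# This illustrates the idea only; it is not Jay Aronson's actual pipeline.
import numpy as np

def loudness_envelope(samples: np.ndarray, rate: int, window_s: float = 0.05) -> np.ndarray:
    """Coarse per-window RMS loudness of a mono signal."""
    window = max(1, int(rate * window_s))
    trimmed = samples[: len(samples) // window * window]
    return np.sqrt((trimmed.reshape(-1, window).astype(float) ** 2).mean(axis=1))

def estimate_offset_seconds(clip_a: np.ndarray, clip_b: np.ndarray,
                            rate: int, window_s: float = 0.05) -> float:
    """Seconds by which the shared loud event occurs later in clip_b than in clip_a."""
    env_a = loudness_envelope(clip_a, rate, window_s)
    env_b = loudness_envelope(clip_b, rate, window_s)
    # The peak of the cross-correlation marks the best-matching alignment.
    corr = np.correlate(env_b - env_b.mean(), env_a - env_a.mean(), mode="full")
    lag_windows = int(corr.argmax()) - (len(env_a) - 1)
    return lag_windows * window_s

# Synthetic example: two noisy clips that share one loud event.
rate = 8000
rng = np.random.default_rng(0)
clip_a = rng.normal(scale=0.1, size=10 * rate)
clip_a[3 * rate : 3 * rate + 400] += 5.0      # loud event at t = 3 s in clip_a
clip_b = np.roll(clip_a, int(1.5 * rate))     # same scene, event appears 1.5 s later
print(estimate_offset_seconds(clip_a, clip_b, rate))  # prints roughly 1.5
```

In practice the hard parts are everything this sketch ignores: clock drift, heavy compression, crowd noise, and clips that do not overlap at all.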

Jay hired animators to connect the videos, which made him somewhat uncomfortable. The sketch-based animations make assumptions that aren’t present in the video, though they stitch together a compelling argument. Jay cautions against de-coupling the video from the people; they need to be coupled to maintain “truth”.

Deborah Tatar, in her discussion, noted that the ability to query video is very important (YES!).  But it took around a year to produce this video, so a query system that can answer anything more than a trivial question in less than six months seems far away.

Breakout groups were each centered on a series of questions. A common theme was the effort to define terms like “system” and “truth”, and to define people’s role in systems of truth. The rest of this section details my perspective on some of those discussions.

So who do we need?  Is it Walter Cronkite or Billy Graham? Mor’s work suggests that someone local may help to turn the tide, like the “News 2 at 5pm” anchor. Were/are any of these people more trustworthy than Rachel Maddow, Bill O’Reilly, and the like?  Or just less criticized?  Or is there some different sort of person we need?

How do we determine what’s true?  And can we do so while avoiding provocative phrases like “pants on fire” (and “lying”, per the Wall Street Journal controversy from 2017)?  Is there a set of words that are provocative and should be avoided?  And if a system helps with that, could it avoid such words?  From Snopes:

I realize this is quite possibly a novel idea to the Daily Caller writer, but here at Snopes we employ fact-checkers and editors who review and amend (as necessary) everything we publish to ensure its fairness and accuracy rather than just allowing our writers to pass off biased, opinionated, slanted, skewed, and unethically partisan work as genuine news reporting.

Perhaps some Daily Caller writers could give that approach a try sometime.

I realize that it may be fun for an organization like Snopes to put the smackdown on an organization like the Daily Caller that puts forth factually inaccurate articles. But is the closing snark helpful for advancing the argument, particularly to those who wish to think positively about the Daily Caller?

The systems that Travis developed prompted a lot of discussion in one breakout group about systems that support decision-making.  IBIS-based systems were a big part of that, including gIBIS, QOC, and Compendium.  Steve talked about his thesis work, which was related to IBIS.  And I interjected about claims as a hypothesis investigation technique.

The reception and dinner provided a great venue for further discussion.  Students presented posters at the reception, held in the Moss Arts Center lobby area.  Big thanks to my students, Shuo Niu and Taha Hasan, for putting together posters about their work for the event. The dinner was upstairs in the private room at 622 North.

Next steps seemed to start with a writeup that would appeal to a broad population, including a VT News posting and possibly an interactions article. Some sort of literature review might fit well into someone’s Ph.D. dissertation.  Design fictions, part of one breakout session, might help spur thoughtful discussion.  And follow-up workshops at CHI and elsewhere seem like a good next step.

I suggested putting forth a series of videos, perhaps as a class project for students in the Department of Communication at VT–they’ve put together other compelling video collections.  The videos could be made available on YouTube for use in classes and other meetings.

It was great to see the different perspectives at the workshop, and I’m particularly grateful to the invited speakers for taking the time to connect with us.  Looking forward to the next steps!


NCWIT 2016

June 22, 2016 Leave a comment

The National Center for Women & Information Technology (NCWIT) held their 2016 annual summit last month in Las Vegas. The big news is that Virginia Tech received an NCWIT NEXT Award for our work on recruiting and retaining women in computer science (CS) and related areas. I’m particularly proud of my own work in reaching out to minority-serving institutions and in helping to craft CS-related minors (hopefully to be augmented with an HCI minor soon!), but this was definitely a team effort that included work by Barbara Ryder, Libby Bradford, Greg Farris, Deborah Tatar, Margaret Ellis, Bev Watford, and many others at Virginia Tech, plus a long list of NCWIT folks highlighted by our consultant Cathy Brawner, the Extension Services team, and the Pacesetters team.

NCWIT is a collection of companies, academic institutions, government agencies, and other groups working to increase women’s participation in computing-related fields through recruitment, retention, and advancement. As usual, the summit was an impressive event, packed with notables from academia and industry, with keynotes and meet-and-greet events on exciting themes. Particularly motivating was the plenary by Melissa Harris-Perry from Wake Forest, who talked about getting more black women engaged in computing, particularly as professors. She called out Virginia Tech as a leader in this regard, particularly given the relatively large number of black women who have received Ph.D.s from here. But there’s certainly a need for more concerted efforts toward crafting welcoming environments for people in underrepresented groups.

Breakout groups helped focus on topics of interest and importance to schools and groups with needs similar to our own. I attended meetings for the Academic Alliance and Extension Services, and workshops focused on diversity with respect to makerspaces, growth, pedagogy, and evaluation. One theme repeated at multiple venues that really resonated with me was the need for peer mentorship. We do a good job with this, but other ideas worth considering involve credit-based opportunities and other rewards for participation that enable and encourage a breadth of participation. This breadth can encourage diversity in the mentorship pool, and corresponding diversity in our student population. UC Irvine and the University of Wisconsin both have credit-based programs in place that reportedly are working well for them, and others have been considering adding them.

So who should attend the NCWIT Annual Summit?  It’s great to keep a foot in the door and make sure some people from your institution attend every year. But it’s also important to invite a few different people each year—we had myself, Barbara Ryder, and Libby Bradford there as regulars, but also our Associate Dean for Academic Affairs and Director of the College Center for the Enhancement of Engineering Diversity, Bevlee Watford, for just the second time.  I’m hopeful that we’ll get some repeat attendees again next year, but it’s also good when there are new faces as well. Our departmental Diversity Committee will be under new leadership starting in the fall, so hopefully the new chairs will attend!

Watching smartwatches

April 26, 2016 13 comments

Smartwatches provide easy access to personal data in a wearable device. Modern devices sparking the latest wave of use include the Pebble, Android Wear watches, and the Apple Watch. An important aspect of the popularity of these devices is their open programming and app distribution platforms. For little or no cost, anyone with programming knowledge can develop and distribute an app. However, excitement about the hardware and the availability of a programming platform do not necessarily translate to useful and usable apps.

Two big hurdles exist that are particularly relevant for app designers: domains of use and continued use. First, it’s not yet clear what the domain for the smartwatch “killer app” will be—the apps that are so necessary and desired that people will pay for the technology needed to use them.  Candidate areas for the killer app include health and fitness, highly accessible notifications for email and messaging, and social media. Second, an unanswered question is whether people will use smartwatches long term–there’s lots of attrition for even the most popular hardware.

We set out to understand these questions in my CS 3714 mobile software design class. An assignment asked students to perform an analytic evaluation of a smartwatch over the course of at least 5 days. Pebble, Android Wear, and Apple Watch smartwatches were available for checkout. Students were asked to identify at least three smartwatch apps to use prior to the 5-day period, then use the smartwatch and apps over the course of the 5 days for several hours each day. At least one of the apps had to be health- or fitness-related, and at least one (perhaps the same one) had to have a companion app for the smartphone.

Students completed a form indicating whether they generally wore a watch (standard or smartwatch), which smartwatch they chose to wear for the assignment, how long they wore the smartwatch for the assignment, and which apps they used. The students were also asked to craft a narrative of about 800-1000 words describing their experience with the selected hardware, covering display and interaction experiences as well as experiences with each of at least three different apps.

This assignment had a higher completion rate than the other (programming) assignments for the class: 68 out of 71 students submitted it. 24 students used the Pebble, 38 used an Android Wear watch, and 6 used the Apple Watch. Most used the smartwatch for longer than the requested 5 days; the median usage time was 7 days and the average was 8.9 days. Only 40% of students reported that they regularly wear any sort of watch, and only 10% reported having worn a smartwatch regularly.

Students tended to use more than the 3 apps that the assignment asked them to use. Most students used fitness apps that came with the smartwatch (e.g., Android Fit, Apple Activity). Others used run-tracking apps, and a few tracked diet or other exercise. Map alerts and other notifications were popular, as were games. Surprisingly, only a few people reported using social media in a meaningful way (i.e., beyond receiving text messages); perhaps that is because of the short usage time.

Comments from student narratives reflected a general interest in the technology. They found the smartwatch “pleasant”, “nice and convenient”, and “very handy”.  Notifications seemed to be an advantage, with the smartwatch “a great way to read and dismiss notifications” (though others found notifications annoying or “glorified”). However, few people seemed poised to purchase or use the technology based on their experiences. The most common complaints were that the hardware was “ugly”, “awkward”, “incredibly silly”, and “not aesthetically pleasing”. Others found the technology hard to use, with comments like “my finger takes up half the screen”, “small buttons”, and “no way for users to type”. Lots of students admitted that they were “just not a watch person” or that they “disliked watches”, and there was nothing about the smartwatch that they wore to change their minds.

An important side effect of the smartwatch watching assignment is that students better understood the capabilities of smartwatches. In prior semesters when students did not have the experience of wearing a smartwatch, designs tended to be unrealistic or impossible to implement. Students in this semester seemed to have a better understanding of how a smartwatch would be used, and as such their homeworks and projects were targeted more appropriately for the smartwatch. There’s a danger that their experiences may stifle their creativity by highlighting what has been done, but that seemed outweighed by a realistic understanding of capabilities and scenarios of use.

There’s an interesting history for smartwatches, from the Dick Tracy vision to the poorly-received models from Seiko, IBM, and others through the 1980s and 1990s. The new wave of smartwatches seems to be booming, but it’s unclear whether that boom is here to stay. My research group has been exploring smartwatch use in the classroom, as reported in a SIGCSE paper, demo, and poster in 2015.  We also put together an app set to look at reactions to smartwatches in an elementary school outreach experience, and a previous in-class activity compared games across platforms (smartwatch, smartphone, and laptop/web).  It seems likely that young people will help define whether and how smartwatches will be used (or whether the movement will fizzle, or appeal only to niche groups) in upcoming years.



SIGCSE 2016

April 6, 2016 Leave a comment

Virginia Tech students and alums at SIGCSE 2016

SIGCSE 2016, the flagship conference on computer science education, took place in Memphis, TN in March, with a big collection of Virginia Tech students, faculty, and alumni taking on a variety of important roles. My grad student Mohammed Seyam and I presented a paper on teaching mobile software development with Pair Programming. Cliff Shaffer and his students and alums had multiple papers and exhibits. Greg Kulczycki served on a panel.  And, most notably, Steve Edwards was program co-chair this year!

Mohammed Seyam’s paper and talk focused on Teaching Mobile Development with Pair Programming, exploring his investigation of Pair Programming (PP) when teaching mobile software design in an upper-level CS course. PP has been shown to be useful in some teaching situations, but Mohammed is the first to look at it in teaching mobile development. He also had an entry in the graduate Student Research Competition that took a broader look at the balance between PP, hands-on activities, and traditional lectures when teaching mobile software design, for which he was named a finalist.

As always, SIGCSE featured interesting and engaging keynotes. John Sweller talked about the impacts of cognitive load theory on CS education. Barbara Boucher Owens and Jan Cuny received service awards from SIGCSE and gave keynotes that reflected their life experiences. It was particularly good to see Jan Cuny receive an award given her contributions to diversity through her leadership of broadening participation in computing programs at the NSF. Karen Lee Ashcraft talked about breaking the glass slipper, and how organizations historically (and continually) have crafted jobs and workplaces that encourage stereotypes. This was a bolder and more developed version of a talk she gave at NCWIT 2015.

One of my favorite emerging things at SIGCSE is the Common Reads initiative, which returned for its second year. It’s an effort to encourage SIGCSE attendees to read a common set of CS-related materials. There are stickers for conference badges that are handed out at registration to highlight who’s read what, thus providing another avenue to start conversations. And there’s a conference session one evening to discuss the readings, how they relate to CS, and how they can be used with students. This year’s books were all science fiction: The Diamond Age by Neal Stephenson, Ancillary Justice by Ann Leckie, A Logic Named Joe by Will F. Jenkins, and Seven Years from Home by Naomi Novik. These books and stories touch on core CS themes like AI, parallel computing, and fault tolerance. While these themes are certainly relevant to CS, it seems important to me to promote topics other than just science fiction to support a breadth of interests.  As such, for SIGCSE 2017 the most intriguing common read to me is The Thrilling Adventures of Lovelace & Babbage: The (Mostly) True Story of the First Computer by Sydney Padua. It’s a comic-style reimagining of CS heroes Ada Lovelace and Charles Babbage, exploring a world in which they collaborated closely to build and use a computer. There are a couple of other sci-fi entries included as well: Andy Weir’s The Martian (yes, the book that the movie is based on) and Isaac Asimov’s short story The Last Question.

It was fun to connect with the VT crowd on the LONG van ride across Tennessee to Memphis. The Memphis area is a little depressed, but there seem to be efforts at renovation, and the food and music were a great indulgence. It was fun to be just a few feet from the Mississippi River during the conference, and we were able to duck across the border to neighboring Arkansas and Mississippi on our drive.  We also had quick visits to Nashville and Kingston going to and from the conference. Next year’s SIGCSE will be in Seattle, so it’s unlikely we’ll drive to that venue!

Several others put together writeups about this event as well. CS@VT blogged about VT’s participation in SIGCSE (with excerpts from this post), and Georgia Tech put forth a press release about the event. Mark Guzdial from Georgia Tech has several blog posts, including one about Jan Cuny’s SIGCSE Outstanding Contribution award and a description of one of his posters replicating his earlier work. It was enlightening to read about the frustrations in publishing replicated work; there’s real value there, but so many venues put much more value on innovation than on replication. Janet Davis blogged about her experiences at SIGCSE from her perspective as a faculty member starting a new CS department. Georgia Tech and NCWIT had groups there too, and it was great to connect with them. And I’m sure there are many more writeups about SIGCSE that I missed–feel free to include other relevant links in the comments.


NSF Graduate Research Fellowships: Maximizing Chances for Success

September 10, 2015 Leave a comment

The U.S. National Science Foundation (NSF) offers Graduate Research Fellowships (GRF) to applicants who are beginning or about to begin a Ph.D. I’ve advised a student who has written a successful one, I’ve reviewed applications internally for people in my department, and I’ve become intimately familiar with the current review process for the NSF. There’s no magical formula for getting one that I’ve discovered, but there are definitely things you should and shouldn’t do to maximize your chances. This post seeks to capture my experiences and advice—of particular relevance to those in computer science and human-computer interaction but perhaps applicable in other fields as well.

My grad student Greg Wilson received an NSF GRF in his first year at Virginia Tech. His proposal discussed solid and interesting ideas related to mobile and ubiquitous computing, but what really appealed to the reviewers was his outreach efforts. He has a passion for K-12 education, and his application discussed that in detail. He described prior outreach efforts in his personal statement, thus demonstrating an interest and ability in similar efforts in his graduate work. Receiving this fellowship allowed Greg to pursue his own ideas and really make a difference with his work. He completed his MS at Virginia Tech and went on to a Ph.D. in education at the University of Georgia.

The Virginia Tech Computer Science Department hosts an internal review process for national and international graduate scholarships and fellowships like the NSF GRF. It is organized by faculty member T.M. Murali and includes work sessions, early reviews by fellow grad students, and reviews by faculty in the department (including myself some years). It’s a great way to get feedback both from peers and from potential committee members, and I feel like it really made a positive difference for my student Greg. If you don’t have this available to you, find a way to get feedback from a breadth of other people.

I am very familiar with the reviewing process for NSF GRF applications. For the last couple of years, it has taken place via teleconference, in which reviewers read and comment on applications prior to a pair of online meetings. The meetings present a listing of ratings, then ask for champions of lower-rated proposals that seem particularly worthy. The panel of 20+ people breaks into smaller 3-person groups to discuss moving proposals up (or down) the ranking if a proposal’s champion makes a compelling case for why it should be moved. If you can attract a champion, you greatly improve your chances. The final listing serves as a recommendation to NSF program officers and other personnel, who make the final determination as to who receives an award.

A few summary thoughts and recommendations that can help with a successful submission:

  • Follow the guidelines. Yes, there are lots of them, and I’m sure you have great ideas that you might feel should carry your proposal even if you don’t pull together your application just right.  But failing to follow the guidelines can obfuscate your expected contributions. You risk annoying the reviewers and the program managers by making them dig for (or guess at) certain elements of your proposal.
  • Provide a roadmap for your proposal. Keep in mind that reviewers will be looking at lots of proposals, and secondary reviewers and program managers will be looking at even more—sometimes for very short periods of time. As such, make sure the key points of your proposal can be found at a glance. Label sections and subsections, highlight key terms, craft figures and tables that are both descriptive and easy to understand. And don’t use a tiny font just to squeeze more in—find a way to say what you want to say concisely. Of course, none of this matters if the content isn’t good, but good content that can’t be understood easily can also sink a proposal.
  • Think about intellectual merit. The NSF cares a lot about this (and the next bullet, broader impacts). Read the full description on the NSF site and specifically address ways in which your work will have intellectual merit. Even if you feel your entire proposal is all about intellectual merit, make sure to explicitly highlight your expected contributions.
  • Think about broader impacts. This one is even harder, but as with my student it really matters. It’s important to show how your work will make a difference, keeping in mind that reviewers will be generally knowledgeable about your field but not necessarily deeply knowledgeable about your topic. As such, don’t just make a laundry list; e.g., stating that your work will lead to improved interfaces for scientists, bricklayers, moms, bartenders, etc. Instead really draw the path to the future utility of your work—and if you can show yourself guiding the research down the path, all the better.
  • Get good letters. This one, to some degree, is out of your hands—but that doesn’t mean you can’t make choices that maximize your chances for good letters. The best letters are from people who BOTH know you AND know how to write good letters. A letter from someone who knows you very well but doesn’t understand NSF GRFs might be a poor choice, just as a letter from a highly regarded individual who clearly knows nothing about you and has little to say about you likely will be unhelpful. Seek to approach people who’ve been part of successful NSF GRFs in the past, and people who will help you toward your proposed goals. But make sure these are people who can speak well of your prior work and/or your proposed work—people who have been a meaningful and integrative part of your research life.

Finally, keep in mind that, for better or worse (usually better), the NSF regularly changes the guidelines and procedures for fellowships, so make sure to verify that your submission matches the way things are done. There’s lots of other advice out there, so seek to find it and identify the path that is most promising to you. There’s always a bit of randomness to the procedure, but there are steps you can take that can increase your chances of receiving an award. Most of all, pursue interesting and important ideas that appeal to you and your collaborators. Good luck!


Reading a Professional Paper in Seven* Minutes

August 23, 2015 Leave a comment

Reading professional papers is an important part of a researcher’s life, and it’s an important part of every grad class that I teach. I’ve endeavored to identify an approach that works for my students, which I present at the start of each semester…someone labeled it the “7-Minute McCrickard Method” (and yes, I embraced the label). The approach seems well-suited for an introductory grad class that focuses on 3-4 papers each class session–even on a busy week you can be poised to get a whole lot more from class with 20-30 minutes of prep time. It’s often easy to distinguish “Student A”, who has spent even a little time looking through a paper, from “Student F”, who didn’t manage to do so. I recommend you endeavor to be an “A” student, and an “A” researcher!

So give each of these seven steps a minute before going into class:

  1. Read the title, author list, affiliations, and venue. The title is a half-dozen or so words that the authors selected to represent their paper–read them and think about what they mean! Consider whether you’ve encountered the authors’ work before, and think about where the authors are from (academia, industry, government labs) and what that might imply about the work. And consider the venue where the paper appears–a conference or journal or magazine article or workshop paper, a venue highly specialized or fairly broad in the work that it accepts–as these factors will help you understand the scope of the paper, the intended audience, and the degree of rigor in the review process.
  2. Read the abstract. In general, an abstract briefly captures the intended contribution of the paper, and since the authors were kind enough to supply a summary of their work…take advantage of it! You’ll usually be able to read the entire abstract in about a minute.
  3. Skip ahead to the references. Take a brief look at the papers cited by this paper. Do you recognize any names? Do the authors cite any of their own prior work? Are there familiar venues? Are there other papers from the same venue as the one you’re reading? Even a one-minute pass through this section should help situate the paper within the field.
  4. Look through the introduction. This section typically provides a framing for the issues addressed in the paper and the approach that the authors undertook in addressing the issues.
  5. Look through the sections/subsections. A quick one-minute pass through the body of the paper should give you an idea of the structure and directions of the work.
  6. Look at the pictures. By “pictures” I mean figures, tables, charts, graphs…anything visual that the author spent time on to summarize or exemplify the paper’s findings. So pause when you get to these and see what message the authors are seeking to deliver.
  7. Read the conclusions. Here’s where you can learn what the authors think that the paper contributes, and hopefully this will inspire you to think about impacts and future directions for you, your class, and your research.

Now the asterisk: what do those seven minutes NOT get you? Well, you won’t know much. You won’t be able to question deeply. You won’t be prepared to present the paper to a class or reading group. You won’t be sufficiently knowledgeable to cite the paper in your own work based on such a brief reading, as a citation is a type of endorsement that the paper might not be worthy to receive. But even after just seven minutes you should have a general idea of the paper’s intended contribution, and you should be in a position to listen to a talk about the paper, to understand how the paper connects with other contributions in the area, and to make the decision whether (and how) to read the paper in more depth.


NCA&T Mobile Computing Faculty Development Workshop 2015

July 30, 2015 1 comment

Last week I attended a faculty development workshop on mobile computing at North Carolina A&T State University (NCA&T). The workshop was funded by the NSF HBCU-UP program as part of a 3-year grant (with one year remaining).  A goal of the grant is to assemble modules and materials that could be adopted or adapted for use in undergraduate courses. The modules, which were core to the workshop, are described on the project site. Attendees came from universities, 4-year and 2-year colleges, and community colleges, and one was a K-12 specialist!

I was struck by the breadth of ways in which mobile computing is taught: freshman-level courses, multi-course tracks, upper-level courses, and topic-centered modules.  I was invited because I’ve taught a junior-level mobile design class for a number of years; I talked with one of the organizers at SIGCSE earlier this year, and he encouraged me to apply.  Some of the modules were spot-on, really hitting on topics that I should have been including in my course all along–particularly those related to security and performance. Some were topics that I already covered (maps, sensors) and others were better suited for more introductory courses. But overall it was worthwhile to hear about the modules.

Even more valuable than the modules were the discussions.  There was a great interactive session in which we brainstormed implications of the differences in mobile (sensors, multiple cameras, multiple changing networks, touchscreens, security at download) vs. desktop (virtual memory, peripherals, multi-user support, runtime security) and how they impact teaching.  The introductory session, the breaks, and the reception gave opportunities to talk with other attendees about their teaching approaches.  And the workshop wrap-up session gave the subset of us who could stick around a chance to brainstorm ideas for how to organize the modules and materials, explore ways that an EDURange-style approach could be used for dissemination, and consider possibilities for a SIGCSE paper that details successful teaching modules.  With the grant continuing, I look forward to taking part in follow-up efforts.

The NCA&T campus is lovely, tucked in near downtown Greensboro right across from (the even more beautiful?) Bennett College, a women’s college.  (Alas, as with many places, they chose the summer when students are away to do their campus improvements, so some key landmarks were being repaired.) NCA&T is a historically black university with strength in computing security and information assurance. I’d been to NCA&T before as part of another grant, and I grew up in Greensboro, so I’m certainly familiar with the school and area, but it was great to go back and visit again.