SIGCSE 2016

April 6, 2016

Virginia Tech students and alums at SIGCSE 2016

SIGCSE 2016, the flagship conference on computer science education, took place in Memphis, TN, in March, with a big collection of Virginia Tech students, faculty, and alumni taking on a variety of important roles. My grad student Mohammed Seyam and I presented a paper on teaching mobile software development with Pair Programming. Cliff Shaffer and his students and alums had multiple papers and exhibits. Greg Kulczycki served on a panel. And, most notably, Steve Edwards was program co-chair this year!

Mohammed Seyam’s paper and talk, Teaching Mobile Development with Pair Programming, described his investigation of Pair Programming (PP) for teaching mobile software design in an upper-level CS course. PP has been shown to be useful in some teaching situations, but Mohammed is the first to examine it in the context of teaching mobile development. He also had an entry in the graduate Student Research Competition, for which he was named a finalist, that took a broader look at the balance among PP, hands-on activities, and traditional lectures when teaching mobile software design.

As always, SIGCSE featured interesting and engaging keynotes. John Sweller talked about the implications of cognitive load theory for CS education. Barbara Boucher Owens and Jan Cuny received service awards from SIGCSE and gave keynotes that reflected on their life experiences. It was particularly good to see Jan Cuny recognized, given her contributions to diversity through her leadership of broadening participation in computing programs at the NSF. Karen Lee Ashcraft talked about breaking the glass slipper: how organizations have historically crafted (and continue to craft) jobs and workplaces that encourage stereotypes. This was a bolder and more developed version of a talk she gave at NCWIT 2015.

One of my favorite emerging things at SIGCSE is the Common Reads initiative, which returned for its second year. It’s an effort to encourage SIGCSE attendees to read a common set of CS-related materials. Stickers for conference badges are handed out at registration to highlight who’s read what, providing another avenue to start conversations, and a conference session one evening discusses the readings, how they relate to CS, and how they can be used with students. This year’s books were all science fiction: The Diamond Age by Neal Stephenson, Ancillary Justice by Ann Leckie, A Logic Named Joe by Will F. Jenkins, and Seven Years from Home by Naomi Novik. These books and stories touch on core CS themes like AI, parallel computing, and fault tolerance. While these themes are certainly relevant to CS, it seems important to me to promote topics beyond science fiction to support a breadth of interests. As such, for SIGCSE 2017 the most intriguing common read to me is The Thrilling Adventures of Lovelace & Babbage: The (Mostly) True Story of the First Computer by Sydney Padua. It’s a comic-style reimagining of CS heroes Ada Lovelace and Charles Babbage, exploring a world in which they collaborated closely to build and use a computer. There are a couple of other sci-fi entries included as well: Andy Weir’s The Martian (yes, the book the movie is based on) and Isaac Asimov’s short story The Last Question.

It was fun to connect with the VT crowd on the LONG van ride across Tennessee to Memphis. The Memphis area is a little depressed, but there seem to be efforts at renovation, and the food and music were a great indulgence. It was fun to be just a few feet from the Mississippi River during the conference, and we were able to duck across the border to neighboring Arkansas and Mississippi on our drive.  We also had quick visits to Nashville and Kingston going to and from the conference. Next year’s SIGCSE will be in Seattle, so it’s unlikely we’ll drive to that venue!

Several others put together writeups about this event as well. CS@VT blogged about VT’s participation in SIGCSE (excerpts from this post), and Georgia Tech issued a press release about the event. Mark Guzdial from Georgia Tech has several blog posts, including one about Jan Cuny’s SIGCSE Outstanding Contribution award and a description of one of his posters replicating his earlier work. It was enlightening to read about the frustrations in publishing replicated work: there’s real value there, but so many venues place far more value on innovation than on replication. Janet Davis blogged about her experiences at SIGCSE from her perspective as a faculty member starting a new CS department. Georgia Tech and NCWIT had groups there too, and it was great to connect with them. And I’m sure there are many more writeups about SIGCSE that I missed–feel free to include other relevant links in the comments.


Being smart about games for smartwatches: Preliminary evaluations and feedback

November 18, 2015

Smartwatches like Pebble, Android Wear, and Apple Watch pose user interface challenges given their small size and limited input means. At Virginia Tech, we’ve taken part in development and outreach efforts that have explored the use of smartwatches through app development, K-5 outreach, and, most recently, assessment. This post presents findings from an in-class investigation in an undergraduate human-computer interaction class.

The class explored performance on a game that is common across multiple platforms: Minesweeper. We configured the game to use an 8×8 grid (64 total squares) with mines hidden at 10 of the locations. The game requires players to identify the 10 mines and uncover the 54 squares with no mines. A player can select a square to reveal how many mines are in adjacent squares, or mark a square as having a mine. The game ends when all squares are revealed or marked, or when the player reveals (as opposed to marks) a square with a mine. Score reflects the number of mines left unfound when the game ends (so 10 is the worst score and 0 the best) and the time taken to complete a game (lower times are faster/better).
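For concreteness, here is a small Python sketch (illustrative only, not the code behind any of the apps we used) that generates a board with the configuration above: an 8×8 grid, 10 hidden mines, and the adjacency counts a player reveals square by square. The function name and structure are my own.

    import random

    ROWS, COLS, NUM_MINES = 8, 8, 10  # the configuration used in the class activity

    def make_board(rows=ROWS, cols=COLS, num_mines=NUM_MINES, seed=None):
        """Return a set of mine coordinates and a grid of adjacent-mine counts."""
        rng = random.Random(seed)
        cells = [(r, c) for r in range(rows) for c in range(cols)]
        mines = set(rng.sample(cells, num_mines))   # hide 10 mines among the 64 squares
        counts = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # number of mines in the (up to) eight neighboring squares
                counts[r][c] = sum((r + dr, c + dc) in mines
                                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                   if (dr, dc) != (0, 0))
        return mines, counts

    mines, counts = make_board(seed=42)
    print(len(mines), "mines hidden; count at (0,0):", counts[0][0])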

We looked at Minesweeper on three platforms: laptop (Web), smartphone (mainly Android, some iOS), and smartwatch (original Pebble with three-button input). We endeavored to find Minesweeper apps that were similar in appearance, though the limited graphics of the Pebble resulted in a less visually appealing game for that platform. We also note that the interaction styles are very different across platforms: laptop users used the touchpad and buttons to reveal and mark squares, smartphone users pressed to reveal and long-pressed to mark, and watch users scrolled with the top and bottom buttons, revealed with a middle-button press, and marked with a long press.

We explored whether platform has an effect on performance and enjoyment, with the expectation that interaction-rich platforms like the laptop and smartphone would be easier to use. Students in the class played the game for 12 minutes, with encouragement to complete as many games as possible within that time while still trying to play each game successfully. Students divided into groups of four, with one person recording data and the other three playing the game on the three platforms.

Major disclaimer: this study was conducted in a classroom setting by students with minimal training in running user studies (i.e., a single lecture). Some students didn’t understand the activity, collected the wrong data, and entered data incorrectly. In some situations, the data seemed highly questionable and was eliminated from consideration. I would expect that the results would be of interest toward crafting future studies rather than in and of themselves. This activity was primarily done as a learning experience for the students, but it was interesting to see the results that students generated as part of the activity. Certainly this doesn’t belong in a peer-reviewed venue, but I’m hopeful it will serve as a launching point for future investigations of smartwatch interfaces.

Some key results:

  • Participants attained better scores with the Web (4.9) and smartphone (4.4) than with the Pebble (7.5), p=0.000001. The difference between Web and smartphone is not significant, p=0.21.
  • Participants found the Web (4.3) and smartphone (4.2) versions of the game easier than the Pebble (0.3) based on a 0-5 point scale, p<3×10⁻³⁰. Similarly, they found the Web (4.1) and smartphone (4.4) versions more fun than the Pebble (0.5) on a similar scale, p<5×10⁻³¹. There was no difference between Web and smartphone for either measure.
  • There were no differences in time spent on each game between the Web (40.1), smartphone (45.2), and smartwatch (51.2), p=0.31.
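For readers who want to run a similar comparison, here is a rough sketch of one way p-values like those above could be computed: a one-way ANOVA across the three platforms plus a pairwise follow-up t-test. The numbers below are made up for illustration rather than our class data, and this is just one plausible analysis, not necessarily the exact tests the students ran.

    # One plausible way to compare scores across platforms (hypothetical data, not our class results)
    from scipy import stats

    web        = [5, 4, 6, 5, 4, 5, 6, 4]   # mines left unfound per game; lower is better
    smartphone = [4, 5, 4, 3, 5, 4, 5, 5]
    pebble     = [8, 7, 8, 6, 9, 7, 8, 7]

    # Omnibus test: does platform matter at all?
    f_stat, p_all = stats.f_oneway(web, smartphone, pebble)

    # Pairwise follow-up, e.g., Web vs. smartphone
    t_stat, p_pair = stats.ttest_ind(web, smartphone)

    print(f"ANOVA across platforms: F={f_stat:.2f}, p={p_all:.2g}")
    print(f"Web vs. smartphone t-test: p={p_pair:.2g}")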

Students expected that Pebble would result in the worst performance and would be most disliked, but they found it surprising that there was not a significant difference between Web and smartphone. People speculated that a more in-depth questionnaire and deeper examination of the tasks (slips, errors, strategies) would reveal more actionable differences. I was glad they saw this as the start of understanding smartwatches and other platforms.

This was the first time that I had the students in my class run a class-wide study of this type, and it resulted in mildly-controlled chaos. Usually I encourage them to take part in user studies run by grad students to get a sense of how studies work, but I feel there was a lot more participation and understanding by having them take part in all aspects of a study.

Games is one of the six categories of apps for Pebble, with 44 games that have over 100 downloads (compared to 29 for notifications, 44 for health and fitness, 66 for tools and utilities, 34 for remotes, and 56 for daily use, as of 11/18/2015). Some of the games leverage distinctive characteristics of smartwatches (e.g., Maze, Pong, and Ledge Jumper’s use of the accelerometer), but many are ports of common games (e.g., Tetris, chess, Flappy Bird, Minesweeper) that aren’t good matches for the Pebble’s display and interface capabilities.

We focused on games in this investigation because of the breadth of knowledge of our participants—everyone was familiar with Minesweeper and understood basic strategies. I would consider undertaking such a study at the start of a mobile computing class, to get across to people that apps should be targeted wisely for their platform. However, our research efforts continue to focus on health and wellness apps, leveraging lessons about appropriateness for the platform but considering how the smartwatch can provide unique value to the user.


NSF Graduate Research Fellowships: Maximizing Chances for Success

September 10, 2015

The U.S. National Science Foundation (NSF) offers Graduate Research Fellowships (GRF) to applicants who are beginning or about to begin a Ph.D. I’ve advised a student who has written a successful one, I’ve reviewed applications internally for people in my department, and I’ve become intimately familiar with the current review process for the NSF. There’s no magical formula for getting one that I’ve discovered, but there are definitely things you should and shouldn’t do to maximize your chances. This post seeks to capture my experiences and advice—of particular relevance to those in computer science and human-computer interaction but perhaps applicable in other fields as well.

My grad student Greg Wilson received an NSF GRF in his first year at Virginia Tech. His proposal discussed solid and interesting ideas related to mobile and ubiquitous computing, but what really appealed to the reviewers was his outreach efforts. He has a passion for K-12 education, and his application discussed that in detail. He described prior outreach efforts in his personal statement, thus demonstrating an interest and ability in similar efforts in his graduate work. Receiving this fellowship allowed Greg to pursue his own ideas and really make a difference with his work. He completed his MS at Virginia Tech and went on to a Ph.D. in education at the University of Georgia.

The Virginia Tech Computer Science Department hosts an internal review process for national and international graduate scholarships and fellowships like the NSF GRF. It is organized by faculty member T.M. Murali and includes work sessions, early reviews by fellow grad students, and reviews by faculty in the department (including myself some years). It’s a great way to get feedback both from peers and from potential committee members, and I feel like it really made a positive difference for my student Greg. If you don’t have this available to you, find a way to get feedback from a breadth of other people.

I am very familiar with the reviewing process for NSF GRF applications. For the last couple of years, it has taken place via teleconference, in which reviewers read and comment on applications prior to a pair of online meetings. The meetings present a listing of ratings, then ask for champions of lower-rated proposals that seem particularly worthy. The 20+ person online panel breaks into smaller three-person groups to discuss moving proposals up (or down) the ranking if a proposal’s champion makes a compelling case for why it should be moved. If you can attract a champion, you greatly improve your chances. The final listing serves as a recommendation to NSF program officers and other personnel, who make the final determination as to who receives an award.

A few summary thoughts and recommendations that can help with a successful submission:

  • Follow the guidelines. Yes, there are lots of them, and I’m sure you have great ideas that you might feel should carry your proposal even if you don’t pull together your application just right.  But failing to follow the guidelines can obfuscate your expected contributions. You risk annoying the reviewers and the program managers by making them dig for (or guess at) certain elements of your proposal.
  • Provide a roadmap for your proposal. Keep in mind that reviewers will be looking at lots of proposals, and secondary reviewers and program managers will be looking at even more—sometimes for very short periods of time. As such, make sure the key points of your proposal can be found at a glance. Label sections and subsections, highlight key terms, craft figures and tables that are both descriptive and easy to understand. And don’t use a tiny font just to squeeze more in—find a way to say what you want to say concisely. Of course, none of this matters if the content isn’t good, but good content that can’t be understood easily can also sink a proposal.
  • Think about intellectual merit. The NSF cares a lot about this (and the next bullet, broader impacts). Read the full description on the NSF site and specifically address ways in which your work will have intellectual merit. Even if you feel your entire proposal is all about intellectual merit, make sure to explicitly highlight your expected contributions.
  • Think about broader impacts. This one is even harder, but as with my student it really matters. It’s important to show how your work will make a difference, keeping in mind that reviewers will be generally knowledgeable about your field but not necessarily deeply knowledgeable about your topic. As such, don’t just make a laundry list; e.g., stating that your work will lead to improved interfaces for scientists, bricklayers, moms, bartenders, etc. Instead really draw the path to the future utility of your work—and if you can show yourself guiding the research down the path, all the better.
  • Get good letters. This one, to some degree, is out of your hands—but that doesn’t mean you can’t make choices that maximize your chances for good letters. The best letters come from people who BOTH know you AND know how to write good letters. A letter from someone who knows you very well but doesn’t understand NSF GRFs might be a poor choice, just as a letter from a highly regarded individual who clearly knows nothing about you and has little to say about you will likely be unhelpful. Seek out people who’ve been part of successful NSF GRFs in the past and who can help you toward your proposed goals. But make sure these are people who can speak well of your prior work and/or your proposed work—people who have been a meaningful and integral part of your research life.

Finally, keep in mind that, for better or worse (usually better), the NSF regularly changes the guidelines and procedures for fellowships, so make sure to verify that your submission matches the way things are done. There’s lots of other advice out there, so seek to find it and identify the path that is most promising to you. There’s always a bit of randomness to the procedure, but there are steps you can take that can increase your chances of receiving an award. Most of all, pursue interesting and important ideas that appeal to you and your collaborators. Good luck!

Reading a Professional Paper in Seven* Minutes

August 23, 2015

Reading professional papers is an important part of a researcher’s life, and it’s an important part of every grad class that I teach. I’ve endeavored to identify an approach that works for my students that I present at the start of each semester…someone labeled it the “7-Minute McCrickard Method” (and yes, I embraced the label). The approach seems well-suited for an introductory grad class that focuses on 3-4 papers each class session–even on a busy week you can be poised to get a whole lot more from class with 20-30 minutes of prep time. It’s often easy to distinguish “Student A” who has spent even a little time looking through a paper from “Student F” who didn’t manage to do so. I recommend you endeavor to be an “A” student, and an “A” researcher!

So give each of these seven steps a minute before going into class:

  1. Read the title, author list, affiliations, and venue. The title is a half-dozen or so words that the authors selected to represent their paper–read them and think about what they mean! Consider whether you’ve encountered the authors’ work before, and think about where the authors are from (academia, industry, government labs) and what that might imply about the work. And consider the venue where the paper appears–a conference or journal or magazine article or workshop paper, a venue highly specialized or fairly broad in the work that it accepts–as these factors will help you understand the scope of the paper, the intended audience, and the degree of rigor in the review process.
  2. Read the abstract. In general, an abstract briefly captures the intended contribution of the paper, and since the authors were kind enough to supply a summary of their work…take advantage of it! You’ll usually be able to read the entire abstract in about a minute.
  3. Skip ahead to the references. Take a brief look at the papers cited by this paper. Do you recognize any names? Do the authors cite any of their own prior work? Are there familiar venues? Are there other papers from the same venue as the one you’re reading? Even a one-minute pass through this section should help situate the paper within the field.
  4. Look through the introduction. This section typically provides a framing for the issues addressed in the paper and the approach that the authors undertook in addressing the issues.
  5. Look through the sections/subsections. A quick one-minute pass through the body of the paper should give you an idea of the structure and directions of the work.
  6. Look at the pictures. By “pictures” I mean figures, tables, charts, graphs…anything visual that the authors spent time on to summarize or exemplify the paper’s findings. So pause when you get to these and see what message the authors are seeking to deliver.
  7. Read the conclusions. Here’s where you can learn what the authors think that the paper contributes, and hopefully this will inspire you to think about impacts and future directions for you, your class, and your research.

Now the asterisk: what do those seven minutes NOT get you? Well, you won’t know much. You won’t be able to question deeply. You won’t be prepared to present the paper to a class or reading group. You won’t be sufficiently knowledgeable to cite the paper in your own work based on such a brief reading, as a citation is a type of endorsement that the paper might not be worthy to receive. But even after just seven minutes you should have a general idea of the paper’s intended contribution, and you should be in a position to listen to a talk about the paper, to understand how the paper connects with other contributions in the area, and to make the decision whether (and how) to read the paper in more depth.

NCA&T Mobile Computing Faculty Development Workshop 2015

July 30, 2015

Last week I attended a faculty development workshop on mobile computing at North Carolina A&T State University (NCA&T). The workshop was funded by the NSF HBCU-UP program as part of a 3-year grant (with one year remaining). A goal of the grant is to assemble modules and materials that could be adopted or adapted for use in undergraduate courses. The modules, which were core to the workshop, are described at http://williams.comp.ncat.edu/mobile/. Attendees came from universities, 4-year and 2-year colleges, and community colleges, and there was even one K-12 specialist!

I was struck by the breadth of ways in which mobile computing is taught: freshman-level courses, multi-course tracks, upper-level courses, and topic-centered modules. I was invited because I’ve taught a junior-level mobile design class for a number of years; I talked with one of the organizers at SIGCSE earlier this year, and he encouraged me to apply. Some of the modules were spot-on, really hitting on topics that I should have been including in my course all along–particularly those related to security and performance. Some were topics that I already covered (maps, sensors), and others were better suited for more introductory courses. But overall it was worthwhile to hear about the modules.

Even more valuable than the modules were the discussions. There was a great interactive session in which we brainstormed implications of the differences between mobile (sensors, multiple cameras, multiple changing networks, touchscreens, security at download) and desktop (virtual memory, peripherals, multi-user support, runtime security) and how those differences impact teaching. The introductory session, the breaks, and the reception gave opportunities to talk with other attendees about their teaching approaches. And the workshop wrap-up session gave the subset of us who could stick around a chance to brainstorm ideas for how to organize the modules and materials, explore ways that an EDURange-style approach could be used for dissemination, and discuss possibilities for a SIGCSE paper that details successful teaching modules. With the grant continuing, I look forward to taking part in follow-up efforts.

The NCA&T campus is lovely, tucked in near downtown Greensboro right across from (the even more beautiful?) Bennett College, a women’s college. (Alas, as with many places, they choose the summer when students are away to do their campus improvements, so some key landmarks were being repaired.) NCA&T is a historically black university with strength in computing security and information assurance. I’d been to NCA&T before as part of another grant, and I grew up in Greensboro, so I’m certainly familiar with the school and area, but it was great to go back and visit again.

NCWIT 2015

May 31, 2015

The National Center for Women and Information Technology (NCWIT) held its annual summit last week in Hilton Head, SC. NCWIT’s goal is to increase women’s participation in computing and technology fields through high-visibility events and activities that engage universities, colleges, companies, and government institutions. The NCWIT Summit balances headline events (keynotes, flashtalks, award announcements) with opportunities for focused discussion, posters, and workshops.

It seems like NCWIT always does a great job of lining up keynotes; my favorites were Ben Jealous and Karen Ashcraft. Former NAACP leader Ben Jealous talked about his experiences with prejudice and how important it is to work hard to recognize and overcome it. CU communications professor Karen Ashcraft highlighted how changing the work environment is key to improving diversity, focusing on a re-valuing of skills, priorities, and communication habits. Most keynotes will be available on the NCWIT site in the near future (Ashcraft’s slides are already there).

Pacesetters wrapped up its latest two-year session last November, but the group met to discuss future directions. Pacesetters is a subset of NCWIT members that seeks to define and exercise approaches to diversity. The last few cohorts focused on increasing the number of “net new women”, though that term was used in so many different ways that it lost much of its meaning. (And some companies were reluctant to supply such numbers.) It seems the next two-year focus will be on “retention of women”, which could suffer from the same definitional issues. I look forward to seeing how the definition evolves, and how it might fit CS@VT’s needs and goals. In addition, several companies demoed software to help remove bias from hiring and promotion processes; most relevant was Textio’s tool that automatically reviews job ads for biased words and phrases and suggests alternatives. Perhaps focusing on the retention and mentoring of women faculty, e.g., seeking to ensure a lack of bias in annual review letters, would be a possible Pacesetters direction.

NCWIT consists of several alliances—Virginia Tech is part of the Academic Alliance—that held various breakout activities during the summit. I took part in a panel on undergraduate mentoring of women and underrepresented minorities, providing direction and tips to about 15 attendees, and I showed a poster of our work with Extension Services (more on both below). I also attended breakout groups on pursuing funding and viewed posters on diversity activities at other institutions.

There were a few little hiccups along the way. The workshops were overfull, and I couldn’t get into either of the ones I signed up for in advance. (Unlike last year, when people lined the sides of the rooms at the most popular workshops, this year they simply closed the doors when a room was full…not sure which is better or worse, but neither worked for me!) The venue was beautiful but a bit small for our growing group—something the organizers seem to realize and plan to correct. And the Aspirations award winners, so visible at the summit two years ago, seemed to be lurking in the shadows for the second year in a row. Our students seemed to enjoy the event, and they made lots of great contacts, but they weren’t as visible around the venue as I had hoped. I realize this becomes increasingly difficult as the program grows.

Virginia Tech was well-represented since this was a “local” conference. Also in attendance from VT were fellow change leader Libby Bradford and four Virginia Tech undergrads. Libby had several presentations and meetings related to our participation in the Aspirations program. Our undergrads were invited to lots of dinners and other meetups with companies and organizations. I served on a panel titled “How to be an award-winning mentor”, focused on ways to engage with young women through research experiences. We’re putting together a collection of tips that will be posted online soon. I also presented a poster about our involvement with NCWIT’s Extension Services—big thanks to Cathy Brawner of NCWIT for helping out with that.

As always, a big plus was connecting with the great people in attendance. It’s always wonderful to talk with the positive and energetic NCWIT staff, especially Kim Kalahar and Jill Ross. Organizations from the state of Virginia had an informal meetup, corralled by JMU department head Sharon Simmons. There were countless regulars in attendance that I connected with, and I caught up with lots of VT alums, including Cheryl Seals and Felicia Doswell. And there were plenty of new faces and new ideas; e.g., Hai Hong from Google told me about CS-first.com as a possibility for expanding what our CS Squared outreach club does, and several new faculty attendees discussed the possibility of joint hackathons.

It’s always great to connect with this enthusiastic group. Pacesetters kicks off a new cohort in November, and Virginia Tech may be part of that. And next year’s summit will be in Las Vegas—no longer local, but certainly a popular destination.

SIGCSE 2015

March 26, 2015

Earlier this month a large group of Virginia Tech faculty and grad students attended the ACM SIGCSE Conference in Kansas City, MO—the flagship conference in computer science education. It’s an interesting conference, full of people at lots of levels of CS education: K-12, small colleges, big universities, and the companies and book publishers that support them. Kansas City is known for its BBQ, downtown plaza, long walks, and BBQ. VT faculty Steve Edwards, Cliff Shaffer, Manuel Perez, Dwight Barnette, and I all attended, along with a large number of grad students. Within my research group, Andrey Esakia, Shuo Niu, and Mohammed Seyam joined us on the trip to SIGCSE. I connected with lots of VT alums, along with colleagues from Georgia Tech, UNC, NCWIT, Colorado, and elsewhere.


our SIGCSE 2015 Pebble demo session with Shuo and Andrey

The highlight of the conference for us was a full paper talk on our use of Pebbles in CS 3714. As far as I can tell, we were the first to use smartwatches in the classroom, and we had a well-attended talk. We touched on the assignments and activities from the spring 2014 and summer 2014 sessions, and we covered the core lessons related to smartwatches—including multi-device connectivity issues, wrist-mounted accelerometer use, and limitations in graphical and processing resources. At the end of the talk, Andrey demoed VT undergrad Jared Deane’s music synthesizer app—a project from one of our classes—which was a big hit among those in attendance. There were lots of good and on-point questions at the end, showing that the audience was plugged in to our talk. Most of the questions focused on the smartwatch use cases we were considering, though there was one on security issues with smartwatch-phone pairings that merits future consideration.

We also had a demo session in which we showed off the many apps that VT students have developed, including Jared’s and several by VT students Sonika Singh and Shuo Niu (favorites were Pebble-Paper-Scissors and Selfie Watch). That was a fast and furious hands-on session with some good discussions that hopefully inspired interest in using smartwatches in the classroom–as well as in future outreach efforts like our Pebbles-and-kids program at a local elementary school. We had several posters as well. Seyam’s work on Pair Programming in the classroom was chosen as a finalist in the Microsoft Student Research Competition–yeah! And Andrey had a forward-looking poster on Android Wear in CS 3714. These can both serve as stepping stones to bigger and better things.


VT folks on the SIGCSE road trip

Big thanks to everyone for helping make this trip a big success, particularly those who got the apps working and available in the online store and those who gave feedback on talks and posters. And it was great to get to know the VT crowd on our massive road trip. CS Education is a big deal at VT, and it’s great that we were able to contribute to a big VT presence–over 20 faculty, students, and alums. Next year’s event is in Memphis–a bit closer–so I’m hopeful we’ll have an even bigger presence there.