Commentary Guidelines


The goal of paper commentaries is to get you to think critically about the research that a paper presents and why that research is important. You will write brief commentary reflections—around three to four paragraphs—for each reading in the course.

Writing a strong commentary means grappling with the core ideas being presented in the reading. The goal is not to summarize the paper — everyone reading your commentary will have already read the paper!

Suggested strategies for writing a commentary:

  • Don’t
    • Nitpick low-level details
    • Harp on already-acknowledged limitations or future work
    • Bring expectations from other HCI paper genres (“needs a user study!”)
    • Spend too much time summarizing
    • Levy judgment (“I like this!”) without digging into the why or the implications of that agreement or disagreement
  • Do: engage with the core contributions. To achieve this, we suggest the following three-step process:
    • Step 1: Ask yourself, what is the point that this paper is trying to make? (You don’t need to write out the answer to this question in your commentary, but you do need to know the answer in order to write a good commentary.)
    • Step 2: How effectively does it convince you of that argument? How could the argument be even more persuasive, on its own terms?
    • Step 3: What are the implications of the argument? What future frontier projects might be suggested by this, to push the idea farther?

Some appropriate topics to address in a commentary are:

  • future research directions that this paper inspires for you
  • why the paper does/doesn’t seem important
  • observations of novel methodology or methodology that seems suspect
  • why the paper is/isn’t effective at getting its message across
  • how the paper has changed your opinion or outlook on a topic

Commentaries are due at 5:00 PM the day before the lecture on Canvas. After 5:00 PM, your commentary will be viewable by students in your section so that the discussant can begin work on their metacommentary. Late submissions will not be accepted. We will drop the four lowest commentary grades at the end of class: meaning, you may drop four readings’ (not four days’) worth of commentaries.

Commentaries will be graded on a check-minus/check/check-plus scale. The rubric will be:

  • Check-minus: Surface-level engagement with the readings, or a repeat of a style of critique that the staff told the class to avoid. Examples of surface-level engagement include: comments about whether the commenter likes or would use the technology, a summary of the paper rather than a reflection on the ideas, or critiques that engage only obliquely with the paper or indicate that the commenter didn’t fully read it. Partially complete submissions may also earn a check-minus if appropriate.
  • Check: Effective engagement with the readings. Commentaries earning a check typically demonstrate that the commenter understands the main ideas of the papers, and their reflections are reasonably nontrivial observations worth discussing.
  • Check-plus: Excellent engagement with the readings. Check-plus grades are reserved for rare instances where a commentary really hits on an interesting, unique, and insightful point of view worth sharing. Generally only a few submissions in each session earn a check-plus.

Commentary Examples


The following examples demonstrate the format expected from the commentaries.

Example 1: As We May Think


This paper was fascinating because it forces us to consider technologies that nowadays we take for granted. In some ways Bush was overly optimistic; for example, walnut-sized wearable cameras are uncommon (even though they are possible), likely because optical and physical constraints favor handheld sizes. In other ways he underestimated, such as the explosion of data: some modern cameras can store ten thousand photos rather than a hundred.

Underestimating the data explosion is also apparent in the disconnect between the initial problem description (“publication has been extended far beyond our present ability to make real use of the record”) and the first two-thirds of the paper, which describe technologies that would (and did!) exacerbate the issue by further proliferating data. Yet, he recognizes this issue later in the paper, and then goes on to predict search engines!

It is remarkable how many technologies are predicted in this paper: digital photography, speech recognition, search engines, centralized record-keeping for businesses, hypertext (even Wikipedia?). At the same time, many of the predicted implementations are distorted by technologies and practices common at the time, like “dry photography” or “a roomful of girls armed with simple keyboard punches”. While these presumably served to make the hypotheses more accessible to readers of the time, is it even possible to hypothesize technology without such artifacts?

Aside from predictions, this paper is important for the way Bush frames science in the support of the human race, by augmenting the power of the human mind. It is likely that many of the scientists (and physicists in particular) that were his audience felt guilt and despair from the destruction wrought by advances in nuclear, and even conventional, weaponry in the war. In that social context, seeing science described as a powerful constructive tool for good must have been inspiring.

Example 2: Direct Manipulation Interfaces


This paper goes a long way to refining the concept of ease-of-use. While we have an intuitive understanding of the concept (as Justice Stewart said, “I know it when I see it”), having a model of why something is easy to use is critical to the design of successful interfaces. The metaphors of cognitive distance and the gulfs of execution and evaluation are now widely used.

Despite the huge value of its contribution, I rated this paper low because it is poorly written.* It is significantly longer than it needs to be, as many ideas are repeated with slight changes of phrasing but without new insight. Worse still, some terms are woefully undefined; for example, how do we know when an interface is “unobtrusive”? Saying a “feeling of direct engagement” requires “semantic and articulatory directness” is circular. Many conjectures are unqualified or uncorroborated.

The first example of visual programming is a bad use case for direct manipulation; these interfaces have been tried and are almost universal failures. The success of direct manipulation in this example is not visible until Figure 2, where the user can circle the desired data subset and perform a specific action (a linear regression) on that subset; this is something that would be very difficult without direct manipulation. In contrast, using the mouse to position and wire-up a log operator is demonstrably less efficient than typing “y = log x”. I was looking forward to a meaningful discussion of the problems of direct manipulation, but this did not come until the very end, and was insufficient.

The model of interface-as-representation, as opposed to interface-as-conversation, is valuable for designing direct interfaces. It reminded me of the inconsistency between using the trackpad to scroll documents on a MacBook and using the touchscreen on an iPhone. The representation implies a “way of thinking about the domain”: on the MacBook, brushing down scrolls down, because you are pushing the “view”; while on the iPhone, brushing down scrolls up, because you are pushing the “page”.

*Fortunately, Norman’s book is very well-written.

Example 3: User Technology: From Pointing to Pondering


This paper is an excellent example of a scientific approach to Human-Computer Interaction: hypotheses are proposed and then tested via user studies. Models of human behavior that are not directly corroborated are cited from earlier studies (e.g., working memory and unit-task behavior), adding significant credence to the arguments.

Furthermore, it demonstrates the value of using simple models of human behavior: it is feasible to test them through experiments. And yet low-level theories (such as the Keystroke-Level Model) are straightforward to apply to a wide domain of problems (e.g., predicting expert user performance when you can only observe novices). The crux of cognitive psychology is breaking down the complex process of thought into fine-grained steps that can be modeled and studied.

The idea that a user interface should simplify the necessary mental model of the system is also powerful, and practical advice for system designers. I believe the authors were correct in saying that it is “under-appreciated”; it continues to be so, as much of design in industry becomes mired in superficial aspects like the size and placement of buttons, the legibility of text, and choice of wording.

As a minor issue, some concepts could use further clarification, such as the distinction between “task space”, “methods space” and “model space”. In addition, this paper is atypical of traditional research papers in that it is written (intentionally) in a narrative style.

Example 4: As We May Think


Holy moly, this guy hit so much right on the nose. First of all, I find his thought process amazing. He starts with the premise that the technology we have today will be better in the future. From that he managed to predict the invention of numerous devices and services, all of which have come into being in some way or form. Granted, he is led down the garden path a few times due to his reliance on current technology (e.g., that every memex holds its own copy of its information; the transmission of information as we see it today with the internet did not really figure in his predictions), yet it is still amazing how much he got right. (He does note, at the end, that there is the possibility of unforeseen inventions that could accelerate achievement beyond belief.)

His central insight is that man is creating a record, i.e. producing new observations and information about the world, at such a rate that we cannot effectively store or access it with the technology of the time. And, of course, we now have Google taking the world by storm as one of the most successful companies in search. Bush takes this insight and forecasts what he believes are possible ideas for how we can cope with such a flood of information. I admire his desire to be realistic by using current tech as a basis (for he would otherwise just have been a sci-fi novelist).

Another central insight he seems to hold is that society is now able to deal with the complexity of systems. He notes that the Egyptians would not have been able to construct an automobile, even with all of the plans and instructions. I think that his faith in humanity’s ability to conquer complexity opens his thoughts up to the wild possibilities and combinations of ideas that he presents. The achievement of reliability, according to Bush, has opened the door to new heights and achievements.

One problem I have with Bush’s writing is his failure to connect the dots between his predictions of logical analyzers and the creation of associative indexes of information. His idea of repetitive thought is a good one. Anything repetitive should be automatable. It is funny, though, to see where he still thinks humans are necessary. All associative trails, for example, will be man-made. Some human must code in the trails that connect works together. His original thoughts about the rate of production of information would imply that using humans to do such a repetitive task would be impractical and impossible.

Example 5: Direct Manipulation Interfaces


I think this paper does a great job distinguishing its terms and definitions from other concepts that would be easily confused. For example, while a direct-manipulation interface feels more, well, direct, that does not mean the interface is easy to use. Creating and modifying 1000 images would be tedious and time consuming in a direct manipulation environment, but easy with some script or programming language. The same goes for their distinction between the effects of automation and those of smaller gulfs of execution.

This paper is clearly very influential. It seems like the modern successors to direct-manipulation interfaces are tangible interfaces. Their work is primarily screen/keyboard-based, as those were the I/O devices they were limited to at the time. Today, with the increasing accessibility of electronics, physical interfaces are becoming easier to build.

I agree with their claim that it is hard to pin down what makes a direct-manipulation interface. That being said, I think Laurel’s requirements for such an interface are a good guideline. I agree with them that it ultimately depends on your problem domain. For some issues, a direct-manipulation interface will feel more natural and “just work,” but for other tasks, a DM interface might feel forced and tedious.

I wish they had broken out of the box of current technology and made an effort to envision future interfaces (even fantastical ones) that could be considered direct-manipulation interfaces, and explained why. Similar to Bush above, their observations are colored by the technology that existed at the time. I recognize that it is hard to just make guesses about what will exist in the future, but doing so could have helped clarify the point in this case.