Wednesday, April 29, 2009

Paper from CHI 2009


Focus on Driving: How Cognitive Constraints Shape

the Adaptation of Strategy when Dialing while Driving
Duncan P. Brumby, Dario D. Salvucci, Andrew Howes

Link to CHI 2009 paper.

Overview:
As mobile devices become more common, more distracted drivers take to the roads. Over the years, studies have concluded that interacting with other people or devices while driving increases the risk of being involved in an accident. The risk stems from the driver having to perform multiple tasks at once, diverting attention away from the road; to get both tasks done, they must be interleaved. This paper focuses on identifying the different strategies people use to interleave concurrent tasks.

Experiment:
The experiment was conducted using a driving simulator. Subjects drove a vehicle down the center lane of a highway lined with construction cones on either side. The vehicle moved at a constant speed, so subjects only had to focus on keeping the vehicle in the center lane and dialing numbers on a cell phone. Key presses on the phone were recorded and timestamped.

In the first part of the experiment, participants were timed on how quickly and accurately they could dial the number. Then they interacted with the simulator, trying to keep the vehicle centered in the lane while traveling at a constant speed. In the second part, subjects interleaved the two tasks under different priority instructions: a dialing focus or a steering focus.

Results:
The data show that when subjects used their phones, driving performance suffered because their attention was focused on something other than the road. Lane keeping improved only during long gaps between key presses, and the vehicle drifted further from the lane center when key presses came in quick succession. The time between key presses can therefore be used to infer when drivers were returning their attention to the driving task.

They also saw that participants used the hierarchical chunks of the phone number (xxx-xxx-xxxx) to decide when to switch from one task to the other. The experiment shows that they would break the dialing up into these chunks, and lane keeping would improve in the intervals between chunks.
From here, the authors tried to determine which interleaving strategy would be most efficient in terms of driving performance. They used a Cognitive Constraint Modeling (CCM) framework to explore alternative interleaving strategies and derive a performance prediction for each strategy considered; a rough sketch of this kind of strategy-space exploration is shown below.
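The paper's CCM analysis is far more detailed than anything shown here, but the core idea of enumerating interleaving strategies and scoring each one can be sketched. All of the numbers below (the per-digit dialing time, the drift rate, and the size of a steering correction) are invented placeholders, not values from the paper:

```python
from itertools import combinations

# Hypothetical constants (not from the paper): time per key press, lateral drift
# while attention is off the road, and drift removed by one steering correction.
KEYPRESS_TIME = 0.4   # seconds per digit
DRIFT_RATE = 0.15     # meters of lateral drift per second while dialing
CORRECTION = 0.35     # meters of drift removed by one glance back at the road

DIGITS = 11           # e.g. a "1-xxx-xxx-xxxx" number

def predicted_max_deviation(breakpoints):
    """Predict peak lane deviation for a strategy that returns attention
    to steering after the digit positions listed in `breakpoints`."""
    deviation, peak = 0.0, 0.0
    for digit in range(1, DIGITS + 1):
        deviation += KEYPRESS_TIME * DRIFT_RATE   # drift while dialing this digit
        peak = max(peak, deviation)
        if digit in breakpoints:                  # glance back and correct steering
            deviation = max(0.0, deviation - CORRECTION)
    return peak

# Enumerate strategies with zero, one, or two mid-dial steering corrections.
strategies = [()]
strategies += [(i,) for i in range(1, DIGITS)]
strategies += list(combinations(range(1, DIGITS), 2))

best = min(strategies, key=predicted_max_deviation)
print("best breakpoints:", best,
      "predicted peak deviation:", round(predicted_max_deviation(best), 3), "m")
```

Under these toy assumptions, strategies that pause to correct steering between chunks come out well ahead of dialing the whole number in one go, which mirrors the qualitative result reported in the paper.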

They found that drivers who didn't return their attention to the driving task at least twice experienced large lane deviations. Those who made a steering correction after dialing the first chunk drifted less while dialing the second chunk: even if the correction wasn't strictly needed at that moment, making it reduced the lane deviation that built up later.

Conclusion:
Their findings support the idea that the total time a driver is distracted from their primary task is less important than the extent to which they glance back at the road while trying to complete the secondary task. Designing mobile devices that don't require long periods of interaction could be helpful in reducing the "deleterious effects of distraction."

Tuesday, March 24, 2009

Paper from UIST 2007


LucidTouch: A See-Through Mobile Device
Daniel Wigdor, Clifton Forlines, Patrick Baudisch, John Barnwell, Chia Shen

Link to UIST 2007 paper and video.

Overview:
A potential drawback of touchscreen devices (especially mobile phones) is that the user's fingers may obstruct their view of the screen or of the item they are selecting. This paper presents LucidTouch, a device that lets the user control an application by touching the back of the device. An image of the user's hands is overlaid onto the screen, making the device appear semi-transparent, so users can easily select on-screen items that would otherwise be obscured by their fingers or hands.

Device:
The device combines a multi-touch input surface on the back with a semi-transparent display that overlays an image of the user's hands onto the screen (an illuminated, semi-transparent, bi-directional diffuser layer). A camera mounted on the back of the device captures video of the user's fingers and hands. Since there are eight possible contact points, users need help telling their fingers apart on the display: LucidTouch draws red dots for hovering fingers and blue dots for fingers in contact with the device. A minimal sketch of this cursor logic follows.
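The paper doesn't spell out how the touch cursors are implemented, so the following is only a minimal sketch of the behavior described above, assuming a made-up pressure threshold and normalized rear-surface coordinates (none of these names or values come from the paper):

```python
from dataclasses import dataclass

CONTACT_PRESSURE = 0.1   # assumed threshold separating hovering from touching

@dataclass
class TouchPoint:
    x: float          # position on the rear surface, normalized to 0..1
    y: float
    pressure: float   # 0 when only seen by the rear camera, > 0 when touching

def cursor_color(point: TouchPoint) -> str:
    """Red dot for a finger hovering behind the device, blue once it touches."""
    return "blue" if point.pressure >= CONTACT_PRESSURE else "red"

def render_cursors(points: list[TouchPoint]) -> list[tuple[float, float, str]]:
    """Mirror rear-surface coordinates onto the front display and pick colors."""
    # The rear surface is viewed from behind, so the x axis is flipped
    # before the cursors are drawn over the overlaid hand image.
    return [(1.0 - p.x, p.y, cursor_color(p)) for p in points]

print(render_cursors([TouchPoint(0.2, 0.5, 0.0), TouchPoint(0.8, 0.4, 0.6)]))
```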

Experiment Features:
The traditional QWERTY layout was used, along with a modified version that split the keyboard in half and reoriented it so that the user could keep the usual "home row" while holding the device. While this might benefit touch typists, it could be confusing for users who normally look at the keys when typing on a regular computer keyboard.

Since one finger can't span the entire screen, coordinated actions between hands may be required (handing off items from one hand to the other). LucidTouch expands small items as they get closer to the center of the screen to make these hand-offs between fingers easier.

The multi-touch interface also made traversing a map easier: the map-browsing application supported rotation, translation, and scaling while keeping the map under the user's fingers. A rough sketch of deriving such a transform from two touch points follows.
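The paper doesn't give the math behind the map manipulation, but a standard way to recover rotation, translation, and uniform scale from two tracked touch points is sketched below; this is a generic two-finger manipulation recipe, not LucidTouch's actual code:

```python
import math

def two_finger_transform(p0, p1, q0, q1):
    """Compute (scale, rotation, translation) mapping the old touch pair
    (p0, p1) onto the new pair (q0, q1), keeping the map under the fingers."""
    # Vectors between the two fingers before and after the move.
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]
    wx, wy = q1[0] - q0[0], q1[1] - q0[1]

    scale = math.hypot(wx, wy) / math.hypot(vx, vy)      # pinch to zoom
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)   # twist to rotate

    # Translate so the midpoint between the fingers stays under the hand.
    old_mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    new_mid = ((q0[0] + q1[0]) / 2, (q0[1] + q1[1]) / 2)
    translation = (new_mid[0] - old_mid[0], new_mid[1] - old_mid[1])

    return scale, rotation, translation

# Example: the fingers spread apart and rotate slightly.
print(two_finger_transform((0, 0), (1, 0), (0, 0), (1.2, 0.3)))
```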

Experiment:
Map Browsing- users were presented a map of Cambridge, Massachusetts and asked to find the location of the lab they were in. The task was presented four times: 1) using only one thumb on the front of the device; 2) using a thumb on the front and a finger on the back; 3) using all fingers on the back of the device; 4) with the overlaid image of the hands removed, so that only the touch cursors could be seen.

Text Entry- users were asked to type their name using both keyboard layouts. Each layout was used twice: first with only the thumbs on the front of the device, then with fingers on the back. When entering text on the back of the device, the semi-transparent image of the hands was shown first and then removed.

Drag & Dock- users were asked to select an item and drag it to a specified location on the screen. They followed the same conditions used during the text entry part of the experiment.

Results:
For most tasks (all except the non-inverted QWERTY keyboard), participants found the multi-touch interface on the back useful for accomplishing their tasks. Most users found the semi-transparent overlay helpful; without it, they had trouble telling which touch cursor corresponded to which finger. Participants also suggested that it would be useful to vary the pseudo-transparency in order to minimize its intrusiveness.

Paper from CHI 2008


Electronic Voting Machines versus Traditional Methods:

Improved Preference, Similar Performance
Sarah Everett, Kristen Greene, Michael Byrne, Dan Wallach,
Kyle Derr, Daniel Sandler, Ted Torous


Link to CHI 2008 paper.

Overview:
An estimated 66 million U.S. citizens cast their votes in the 2006 general election using direct recording electronic (DRE) systems. Though they are widely used, little research has been done to show that they are more usable than traditional methods of voting. This paper compares usability data from a DRE with that of other methods (lever machines, punch cards, paper ballots).

Experiment 1:
The first experiment used a typical DRE voting system. Subjects were first presented with an instruction screen and then voted on each item/proposition in the ballot, navigating between pages with "Next Page" and "Previous Page" buttons. After all items had been voted on, their selections were summarized on a review screen so they could double-check their choices before submitting the ballot. After submitting their votes, participants completed a survey (the System Usability Scale, which measures satisfaction). The first experiment consisted of two separate parts.

Experiment 1A:
In this experiment, the subjects were divided into two groups. The first group was instructed to vote whichever way that they wanted. The second group was given a piece of paper that instructed them how to vote on each issue. Each participant voted using the DRE and an additional method (lever machine, paper ballot, punch card).
Conditions: undirected, and directed with no roll-off.

Experiment 1B:
This experiment was similar to the previous one, except that the subjects were divided into three groups instead of two. The third group was also given a piece of paper instructing them how to vote on each issue, but for some issues they were told not to vote.
Conditions: undirected, directed with no roll-off, and directed with moderate roll-off.

Results:
There wasn't a large difference in average voting times between the DRE and the other methods; the only clear difference involved the lever machine. Also, people with more computer experience took less time to vote on the DRE than participants with less experience. Based on the SUS results, participants preferred the DRE to all other voting methods, regardless of individual characteristics (age, computer experience, etc.). They also found that the DRE didn't reduce the number of voting errors; in other words, changing the voting technology did not result in fewer voting errors.

Experiment 2:
Voters rarely choose to vote on every issue presented on a ballot, and the further down the ballot a race appears, the less likely a voter is to vote in it. Experiment 2 compared the traditional "sequential" design with a "direct access" system in which users start on an overview page and jump to the individual races they want to vote on. Though this would obviously reduce the time spent voting, would it also affect usability or increase voting errors?

Again, participants were divided into two groups. The first group voted using the sequential system; the second used the new webpage-like direct system. The same information conditions were used (undirected, directed with no roll-off, directed with moderate roll-off), plus a fourth condition: directed with additional roll-off. In this experiment, all participants voted three times: first on a DRE, then with one of the three traditional methods, then again on the same type of DRE. This time, no surveys were given after voting.

Results:
As expected, the average completion time for the direct DRE (269.9 s) was lower than that of the sequential DRE (442.3 s); however, no reliable differences were found between the direct DRE and the other traditional voting methods, and no differences were found in any of the three directed information conditions. In the undirected condition, the sequential voting method (910 s) was much slower than the direct method (205 s), because participants who used the direct DRE voted in far fewer races than those using the sequential DRE.

They also found that voting errors occurred more often on the direct DRE than on the sequential DRE, largely because almost 25% of participants using the direct DRE submitted their ballots before they intended to. Regardless of the voting method used, over 34% of all ballots cast contained at least one error.

Though the direct DRE was much faster than the sequential DRE, the subjective usability ratings for the sequential system were higher. This was attributed to the direct DRE being more error-prone and producing more voting errors.

Sunday, March 22, 2009

Sixth Reading Assignment


Emotional Design
Donald A. Norman

Summary:
In this book, Norman continues his discussion of "proper design." He states in his previous book that products shouldn't be frustrating, confusing, or irritating. Here, he explains that good designs must also be fun, attractive, and pleasurable. Why do we continue to use something that may not necessarily be the easiest to use?

Norman notes that designs appeal to us on three different levels:
  • Visceral- the initial impact of a product, about its appearance, touch, and feel
  • Behavioral- function (what it can do, what it is meant to do), performance (how well it does those desired functions), and usability (how easy it is to understand how it works and how to get it to perform)
  • Reflective- self-image, personal satisfaction, and memories

The author uses three teapots to illustrate the idea of the different levels of emotional design. The middle teapot is enjoyable to look at (visceral). The teapots in the middle and on the right are easy and fun to use (behavioral). The teapot on the left, referred to as the "teapot for masochists," tells a story (reflective). A product might not appeal to its user on one level but still appeal on another, which is why we keep around and continue to use things that aren't necessarily the easiest to use.

Discussion:
In my opinion, this book was pretty boring. It may appeal to some, but I found it difficult to continue reading. He definitely seems to contradict some of the points that he made in his previous book (The Design of Everyday Things). I guess it's alright to make things that don't necessarily work, as long as they look "pretty?" Honestly, I think most of this was common sense. Of course you want your design to be functional, but you're also going to make it look as appealing as possible.

The section on robots was pretty interesting, though. The concept of emotions in robot design was something I had never given much thought. Norman does a good job of explaining how essential emotions are for robots to be useful and to function effectively. Unfortunately, I think we are still a long way from being able to program real emotions into a machine.



Ranking:
#1- The Design of Future Things
#2- The Mole People
#3- The Man Who Shocked the World
#4- The Design of Everyday Things
#5- Emotional Design
#6- The Media Equation
#7- Doing Ethnographies

Fifth Reading Assignment


The Man Who Shocked the World
Thomas Blass

Summary:
This book discusses the life of Stanley Milgram and his studies in the field of social psychology.
  • Obedience to authority: In this study, subjects were instructed by someone of scientific authority to deliver different levels of electrical shocks to other protesting "participants." He found that 65% of these subjects were willing to deliver the highest voltage shock (450V).
  • Small-world method: This experiment attempted to show that any two random individuals could be linked through a short chain of acquaintances. He found that, on average, about six people were required to connect two individuals, a phenomenon sometimes referred to as the "six degrees of separation."
  • Lost-letter technique: Milgram created an experiment to investigate how helpful people would be in a given situation. He did this by writing letters and addressing them to both favorable and unfavorable organizations. Then, the letters were distributed ("lost") around the city. He found that people were more likely to mail letters that were addressed to favorable organizations, rather than mailing those addressed to unfavorable organizations.
  • Familiar stranger: This study investigated the social interactions between people that saw each other on a regular basis. They found that though people might share daily experiences, they are unlikely to interact or speak with others. They attributed this to the stimulus overload that occurs from living in an urban environment.
Discussion:
Overall, I thought this book was interesting. In particular, I thought his "familiar stranger" study was very applicable. In my case, I see most of the same people every day at the rec center. Have I ever spoken with any of them? No. The same could be said for any large class on campus. My management class has over 300 people in it, but hardly anyone interacts with anyone else.

Some people think his obedience experiment was borderline unethical. Certainly, discovering that you are capable of doing such things could be traumatizing. As with anything, there are ethical lines that shouldn't be crossed, but I'm unsure how the experiment could be changed.

The "six degrees of separation" idea can also be applied to Wikipedia. In particular, I remember a thread about this on texags.com. The idea was that any Wikipedia page/topic could be accessed from the front page in under six links. The forum thread was several pages long, and links were found for every case presented... a lot of them were pretty random, too.

In my opinion, this was a decent book. I've never read anything about social psychology before, so I thought the experiments were pretty interesting. It seemed to ramble along in some places, but I suppose that is to be expected.



Ranking:
#1- The Design of Future Things
#2- The Mole People
#3- The Man Who Shocked the World
#4- The Design of Everyday Things
#5- The Media Equation
#6- Doing Ethnographies

Monday, February 23, 2009

Paper from UIST 2008


An Application-Independent System for
Visualizing User Operation History

Toshio Nakamura & Takeo Igarashi

Link to UIST paper.

Overview
This paper describes an application created to record and annotate visual representations of operation history. Its purpose is to allow users to find application states easily and quickly.

    Related research:
  • Chimera system- sequence of small snapshots
  • Su's system- history as an annotated diagram
  • Mondrian- screen snapshots before and after operations are executed
The downside to these systems is that the history-recording mechanism is deeply integrated into the application, so they can't easily be applied to other existing software. In this paper, the researchers developed an application-independent system that can be used without modifying the existing application.

Visual Design
  • Operations with motion are denoted using an arrow
  • Operations without motion are denoted using an icon (left-, right-, other-clicks)
  • Keyboard operations are identified using word balloons
  • Snapshots: strobe style (shows movement), inset style (bounding boxes show active regions)
Implementation
GUI events are normally placed on an event queue and later dispatched to the specific application to be handled. Instead of modifying the existing application to monitor actions, this system acts as a "middle man" (proxy) between the event queue and the application: events are generated and sent to the event queue, and before the queue can dispatch them to the application, they are intercepted by the recording system, which tracks all events in an event log. A rough sketch of this proxy idea is shown below.
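The actual system hooks into an existing GUI toolkit's event queue, so the details differ, but the proxy idea can be sketched generically. Everything below (the queue class, the handler callback, the log format) is a simplified stand-in rather than the authors' implementation:

```python
import time
from queue import Queue

class RecordingEventQueue:
    """Sits between the event source and the application: every event is
    logged before being forwarded, so the application itself never has
    to be modified."""

    def __init__(self, app_handler):
        self._queue = Queue()
        self._app_handler = app_handler   # the unmodified application's callback
        self.event_log = []               # timestamped record of everything seen

    def post(self, event):
        """Called by the event source (mouse, keyboard, repaint requests)."""
        self._queue.put(event)

    def dispatch_pending(self):
        """Drain the queue: log each event, then hand it to the application."""
        while not self._queue.empty():
            event = self._queue.get()
            self.event_log.append((time.time(), event))
            self._app_handler(event)

# Example: the "application" is just a print handler.
q = RecordingEventQueue(lambda e: print("app handled:", e))
q.post({"type": "press", "x": 10, "y": 20})
q.post({"type": "release", "x": 10, "y": 20})
q.dispatch_pending()
print(len(q.event_log), "events logged")
```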

The system first groups the recorded events into basic operations such as "press," "release," and "click." Each of these may consist of several lower-level recorded events (e.g., a click is assembled from the raw click event, the resulting action, and a repaint). Next, runs of the same event are collapsed into "general events": for example, using the mouse wheel to scroll down a page generates many "wheel" events, which are consolidated into a single "iterative operation" event. Finally, "semantic operations" are built from these operations; dragging an object across the screen, for instance, consists of pressing the mouse button, dragging, and then releasing. A rough sketch of this consolidation step follows.
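As a minimal sketch of the consolidation idea, assuming made-up event names and a much simpler grouping rule than the paper's configurable ones:

```python
def consolidate(events):
    """Collapse runs of identical low-level events into 'general' events and
    group press/drag/release runs into a single semantic 'drag' operation."""
    ops, i = [], 0
    while i < len(events):
        kind = events[i]["type"]

        # Semantic operation: press, one or more drags, then release -> one drag op.
        if kind == "press" and i + 1 < len(events) and events[i + 1]["type"] == "drag":
            j = i + 1
            while j < len(events) and events[j]["type"] == "drag":
                j += 1
            if j < len(events) and events[j]["type"] == "release":
                ops.append({"op": "drag", "steps": j - i - 1})
                i = j + 1
                continue

        # General event: a run of identical events (e.g. wheel scrolling) -> one op.
        j = i
        while j < len(events) and events[j]["type"] == kind:
            j += 1
        ops.append({"op": kind, "count": j - i} if j - i > 1 else {"op": kind})
        i = j
    return ops

raw = [{"type": "wheel"}] * 5 + [{"type": "press"}, {"type": "drag"},
                                 {"type": "drag"}, {"type": "release"}]
print(consolidate(raw))   # [{'op': 'wheel', 'count': 5}, {'op': 'drag', 'steps': 2}]
```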

Filtering rules can be customized to limit what types of actions the system records. Some user actions, such as erratically moving the mouse around the screen, are not worth recording, while other interactions can be clustered together and treated as a single action.

Finally, the snapshots are annotated using the collected event information using one of the styles (normal, strobe, inset).

Prototype System
  • Two areas: upper shows details of particular scene and lower shows the storyboard
  • Right-click on scene to toggle focus on entire window or only region of interaction
  • Specify an area to search for all related user interactions in that area
  • Execution states can be restored from each scene in the storyboard
User Study
A study was performed to determine if users could understand the annotated history generated by the system, and whether viewing the history would improve the performance of the users.

The users were first allowed to watch a 150-second video (as many times as they wanted) about how to use the system. Then the users viewed an operation history (a sequence of annotated images). Once they started, a dialog popped up containing an image that represented an operation (drawn manually, not system-generated), and they were tasked with finding that operation in the history. Users did this for ten different images. Two groups of six people were formed, each consisting of three expert users and three inexperienced users. The experiment was performed using three different types of applications: a sketch-based modeling system, icon manipulation, and miscellaneous GUI widgets.

Results
  • No significant differences between completion times among different applications
  • Some interfaces were much easier to use with the annotated history
  • Annotated history generally outperformed the snapshot history
  • Annotated history could be useful for applications that have sketches or gestures
  • Using the annotated history resulted in lower error rates for the modeling and GUI systems
  • Though the number of errors associated with using the annotated history was higher for the icon manipulation application, the users still finished faster overall
  • A survey revealed that all users preferred the annotated history and thought it was easy to understand

Fourth Reading Assignment


The Design of Future Things
Donald A. Norman

Summary:
This book discusses some important issues that designers will soon face when developing the technology of the future. As technology becomes increasingly sophisticated, more of the everyday tasks that we performed in the past are now being handled by machines.
From making smart homes to creating safer vehicles, the author discusses real problems that designers today are trying to solve.

In the end, the author provides design rules for human designers of "smart" machines:
1.) Provide rich, complex, and natural signals
2.) Be predictable
3.) Provide good conceptual models
4.) Make the output understandable
5.) Provide continual awareness without annoyance
6.) Exploit natural mappings

These are similar to the rules that he discussed in his previous book: The Design of Everyday Things.

Discussion:
An interesting analogy discussed in the book deals with horses and the degree of control allowed to their riders. He suggests that the concept of varying the degree or mode of control ("tight-rein" and "loose-rein") could be applied to the development of human+machine interfaces. Among other strategies, this could be used to communicate a person's intentions to the system, and vice versa... drive or be driven.

Another point he made that I found interesting was about reducing accident rates by changing people's perception of safety. He notes that as more safety features and services are added, people's behavior becomes increasingly reckless; these features can give us a false sense of security, and when we assume they are working, we tend to "push the limits." He proposes that by removing some of these safety features (traffic lights, stop signs, wider streets, etc.), people will become more cautious about their driving habits.

Though it may not be appropriate for all places, I think it can certainly be applied in some instances. For example, driving back home I don't think twice about driving 75mph down I-35. When it's really pouring down rain and my wipers are barely keeping up, I notice I slow down considerably, try to maintain a LOT of distance between myself and the car in front of me, and rarely change lanes to pass someone. Under these conditions, I'm noticeably more cautious and alert. Good weather definitely gives me a false sense of security when I'm driving. This concept of reverse risk compensation could be applied to automobiles to make drivers more alert and cautious, potentially reducing their risk of accidents.

I found the thought of automated "smart homes" alarming in a way. I'm not sure I would want the house giving me advice about how to live my life. I work out every day and eat fairly healthily during the week, but there's some weekends that I really don't care. I just want to eat the entire 1/2 gallon of ice cream, and I do. I don't want some house telling me that I shouldn't be doing that. Who cares? I'm sure we'd all love to get the call from our insurance providers, "Hey, your house just informed us that you ate two pizzas and a 1/2 gallon of ice cream last Friday night. Expect a rate increase in next month's bill. Have a nice day!" Eww.

Overall, I really enjoyed this book. It provided some insight into some very interesting issues that we will face in designing the technology of the future.

Ranking:
#1- The Design of Future Things
#2- The Mole People
#3- The Design of Everyday Things
#4- The Media Equation
#5- Doing Ethnographies