Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving
Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko
Robert Zeleznik is the Director of Research for the Computer Graphics Group at Brown University.
Andrew Bragdon is a second-year PhD student at Brown University focusing on human-computer interaction.
Ferdi Adeputra is currently attending Brown University.
Hsu-Sheng Ko is currently attending Brown University.
This paper was presented at UIST '10, the 23rd Annual ACM Symposium on User Interface Software and Technology, and appears in its proceedings.
Summary
This paper documents how researchers combined the free-form qualities of physical paper with the problem-solving power of computers to form a system that lets people solve problems more efficiently. The hypothesis of the research is that if Computer Algebra Systems (CAS) were combined with a more familiar note-taking environment, people would work more efficiently. For the purposes of this research, no more than high-school-level math was implemented.
The first element of the project discussed is the set of page-management features, including bezel gestures, a panning bar, and page folding. The bezel gestures are used to pan across the desktop as well as to create and delete pages. These actions impose little cost on users and are easy to perform because only the number of fingers used truly matters. Although users can pan with one-finger motions across the bezel, the researchers also added a panning bar that is displayed after a two-finger upward swipe from the bottom bezel. This bar displays all open documents in a top-down manner, allowing broad movements at once instead of constant swiping from side to side. Page folding was added as a way to hide parts of a solution that no longer need to be visible, keeping the screen uncluttered. To fold a page, a user simply pinches the portion of the paper to hide, and it is replaced with a small shadow indicating a fold. To unfold the page, the user taps the shadow previously created.
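To picture how finger count could drive these bezel gestures, here is a minimal TypeScript sketch of dispatching a bezel swipe; every name and mapping in it is an invented illustration, not the paper's actual implementation.

// Hypothetical sketch: classify a bezel swipe by finger count and edge,
// assuming a touch framework that reports contacts starting on the bezel.
type Edge = "left" | "right" | "top" | "bottom";

interface BezelSwipe {
  edge: Edge;          // which bezel the swipe started from
  fingerCount: number; // only the number of fingers matters, not the path
  dx: number;          // horizontal travel in pixels
  dy: number;          // vertical travel in pixels
}

function dispatchBezelGesture(swipe: BezelSwipe): string {
  // One-finger swipes from a side bezel pan the desktop.
  if (swipe.fingerCount === 1 && (swipe.edge === "left" || swipe.edge === "right")) {
    return "pan-desktop";
  }
  // A two-finger upward swipe from the bottom bezel opens the panning bar.
  if (swipe.fingerCount === 2 && swipe.edge === "bottom" && swipe.dy < 0) {
    return "show-panning-bar";
  }
  // Other finger counts map to page creation/deletion (details omitted).
  return "page-management";
}

// Example: a two-finger upward swipe from the bottom bezel.
console.log(dispatchBezelGesture({ edge: "bottom", fingerCount: 2, dx: 0, dy: -120 }));
// -> "show-panning-bar"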
The next element explained by the researchers is the set of gestures the system recognizes for interacting with pages and the content on them. Under-the-rock menus reveal hidden operations in a semi-opaque radial menu once an object is being moved with one finger. If a second contact is detected in the middle of the menu, it becomes active and can then be used to select an action to perform on the selected object. Touch-Activated Pen (TAP) Gestures allow easy switching between ink mode and anything else a user may want to do. Supported gestures include making 2D selections, inserting space, and pasting from the clipboard. TAP Gestures are activated by drawing a specific symbol or shape with the pen and then, before removing the pen from the surface, selecting an option from a menu that appears simultaneously. Removing the pen from the surface or continuing to draw when these TAP Gesture menus appear continues ink mode and draws like normal. The last two gestures provide faster selection of common commands that do not necessarily involve objects on the surface. PalmPrints lets a user access common commands, like changing the font, by placing a palm on the surface and then selecting an action from a menu that appears beneath the fingers. FingerPose allows multiple actions to be mapped to similar input, because a single-finger gesture with a more vertical pose can be accurately differentiated from a horizontal pose of the same finger. This allows a user to either pan the desktop or move an object with the same finger, just in different poses.
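To make the FingerPose idea concrete, the following TypeScript sketch separates a vertical fingertip contact from a flatter, more horizontal one using the size and elongation of the contact ellipse; the thresholds, field names, and the pan-versus-move mapping are assumptions for illustration, not details taken from the paper.

// Hypothetical sketch: a vertical fingertip leaves a small, round contact,
// while a flatter (more horizontal) finger leaves a larger, elongated one.
interface Contact {
  width: number;   // contact ellipse width in mm
  height: number;  // contact ellipse height in mm
}

type Pose = "vertical" | "horizontal";

function classifyPose(c: Contact): Pose {
  const area = c.width * c.height;
  const elongation = Math.max(c.width, c.height) / Math.min(c.width, c.height);
  // Small, nearly circular contact -> fingertip held vertically.
  // Thresholds are made up for the example.
  return area < 80 && elongation < 1.4 ? "vertical" : "horizontal";
}

// The same one-finger drag can then be routed to two different actions.
// (Which pose maps to which action is an arbitrary choice here.)
function routeDrag(c: Contact): string {
  return classifyPose(c) === "vertical" ? "move-object" : "pan-desktop";
}

console.log(routeDrag({ width: 8, height: 9 }));   // small round contact -> "move-object"
console.log(routeDrag({ width: 10, height: 18 })); // large oblong contact -> "pan-desktop"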
The last element of the system is the math functionality available to users once pages and entries have been made. Using a series of pinch, drag, and stretch gestures on specific parts of an equation, users trigger operations whose results are displayed directly below the affected equation, actively showing the significant changes made by the action.
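As an illustration of gestures driving CAS operations, the TypeScript sketch below maps a hypothetical "drag a term across the equals sign" gesture to the corresponding algebraic rewrite, with the result meant to appear beneath the original equation; the gesture names and the term-moving rule are invented examples, not the paper's actual gesture set.

// Hypothetical sketch: map a gesture on part of an equation to a CAS-style
// rewrite whose result is shown beneath the original line.
type MathGesture = "pinch" | "drag-across-equals" | "stretch";

interface Equation { lhs: string[]; rhs: string[]; } // terms as strings, e.g. ["2x", "+3"]

function applyGesture(eq: Equation, g: MathGesture, termIndex: number): Equation {
  if (g === "drag-across-equals") {
    // Dragging a left-hand term across the equals sign moves it to the right
    // with its sign flipped, e.g. 2x + 3 = 7  ->  2x = 7 - 3.
    const term = eq.lhs[termIndex];
    const flipped = term.startsWith("-") ? "+" + term.slice(1)
                  : term.startsWith("+") ? "-" + term.slice(1)
                  : "-" + term;
    return {
      lhs: eq.lhs.filter((_, i) => i !== termIndex),
      rhs: [...eq.rhs, flipped],
    };
  }
  // Pinch and stretch would map to other operations (e.g. factoring,
  // expanding); omitted here.
  return eq;
}

// Example: drag the "+3" term of "2x + 3 = 7" across the equals sign.
const result = applyGesture({ lhs: ["2x", "+3"], rhs: ["7"] }, "drag-across-equals", 1);
console.log(result.lhs.join(" "), "=", result.rhs.join(" ")); // "2x = 7 -3"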
The system was tested with a group of nine participants. Each participant was asked to perform a series of tasks and was given help when needed, although verbal help proved rather ineffective compared to hands-on demonstrations. The participants' reaction was overwhelmingly positive regarding the system's potential, but the need to simplify gestures and shrink the form factor became apparent. Overall the system was regarded as a success, as the tests did not contradict the hypothesis, and the researchers discussed the feasibility and usefulness of integrating more complex math as the system grows.
Discussion
I think this paper is interesting as it discusses the merging of "old ways" with "new ways" and presents a very plausible way of doing it. I do not think the researchers proved their hypothesis; they just did not disprove it, which seems like a shortcoming. The implementation seemed very creative and intelligently designed, but the testing lacked tutorial material for testers, resulting in confusion. The commands also seemed memory-driven in some ways, which decreases the system's immediate accessibility, although in the long run this would become irrelevant because people would learn the commands if they wanted to use the system. I think ideas like this will lead to more advancements in the big picture than altogether new products, because this research combines what people already do with what computers already do.
