Monday, October 31, 2011

Paper Reading #25: Twitinfo: aggregating and visualizing microblogs for event exploration






TwitInfo: Aggregating and Visualizing Microblogs for Event Exploration


Authors - Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, and Robert C. Miller


Authors Bios - Adam Marcus is a graduate student at MIT and does research with the Artificial Intelligence Lab.
Michael S. Bernstein is a graduate student at MIT with an emphasis in human-computer interaction.
Osama Badar is a student at MIT and also does research with the Artificial Intelligence Lab.
David R. Karger is a professor at MIT and has a PhD from Stanford University.
Samuel Madden is an Associate Professor in the EECS department at MIT.
Robert C. Miller is an associate professor at MIT and leads the User Interface Design Group there.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers discuss gathering information about events on Twitter and why its current interface can lead to an unbalanced and confusing set of tweets being displayed. The hypothesis is that the researchers can build a system that organizes and displays information from Twitter in a far more coherent and easily understood manner, letting people make sense of events more quickly than before.


Content - The TwitInfo system consists of:

  • Creating an Event - Users enter keywords (like soccer and team names like Manchester) and label the event in human-readable form (Soccer: Manchester)
  • Timeline and Tweets - The timeline element displays a graph of the volume of the entered keywords and detects peaks, which are marked as events; clicking on a peak singles out relevant tweets made during that time
  • Metadata - Displays overall sentiment (positive or negative) toward the subject matter, shows a map of where tweets are coming from, and lists popular links cited in the tweets
  • Creating Subevents and Realtime Updating - Users can create subevents by selecting a peak on the timeline and labeling it separately, and the system updates as often as Twitter does, so users always have up-to-date information.
Some technical contributions made by TwitInfo are:

  • Event Detection - Keeps a running mean of the tweet rate and, when the rate rises significantly above that mean, marks a peak as an event (see the sketch after this list)
  • Dealing with Noise - Keywords that have higher volume than other keywords in the same event are dampened by comparing each keyword against its global popularity and weighting accordingly
  • Relevancy Identification - Relevant tweets are determined by matching keywords and by looking at how many times a tweet has been retweeted
  • Determining Sentiment - The probabilities that a tweet is positive and negative are calculated, and if the difference is significant the tweet is counted toward the stronger sentiment
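To make the event detection idea concrete, here is a minimal Python sketch of that kind of streaming peak detector. The smoothing weight and threshold are illustrative values of my own choosing, not TwitInfo's exact parameters.

```python
def detect_peaks(counts, alpha=0.125, threshold=2.0):
    """Flag time bins whose tweet volume jumps well above the running mean.

    counts    -- tweet counts per time bin (e.g., per minute)
    alpha     -- smoothing weight for the running estimates (illustrative)
    threshold -- how many mean deviations above the mean counts as a peak
    """
    peaks = []
    mean = float(counts[0])
    meandev = 0.0
    for i in range(1, len(counts)):
        c = counts[i]
        if meandev > 0 and (c - mean) / meandev > threshold:
            peaks.append(i)                      # bin i is part of a peak
        # update the exponentially weighted mean and mean deviation
        meandev = (1 - alpha) * meandev + alpha * abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
    return peaks
```

Calling detect_peaks on a list of per-minute tweet counts returns the indexes of the bins that a TwitInfo-style detector would mark as peaks on the timeline.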



Methods - 1) The researchers first evaluated the validity of their event detection by compiling, without looking at Twitter, their own lists of major soccer and earthquake events of interest. They then compared these lists with what TwitInfo returned for the two subjects.
2) 12 people were selected for a user study and evaluation of TwitInfo's interface. The participants were first asked to look up specific data, such as single events, and to compare events. They were then asked to gather information about a topic for 5 minutes and present their findings. Finally, the participants were interviewed by the researchers to discuss the system as a whole.


Results - 1) The researchers found that the system was quite accurate but reported many false positives. They also found that interest is a significant factor in finding information, meaning an event that happens in a minor or less-developed area is more likely to go unnoticed in TwitInfo.
2) The researchers found that most participants were able to give insightful reports on current events even with little to no prior knowledge of the events being studied. Many people skimmed the event summaries to get an overview of what happened when, and the timeline was found to be the most useful part of the interface for most participants. Users also reported that the map would have been more interesting if tweet volume were reflected in the display, so as to show where events had the greatest impact.


Conclusion - The researchers conclude by stating that Twitter is quickly becoming a news source for many people, so a tool like TwitInfo could be very useful for getting an overview of an event quickly.


Discussion


I think the researchers achieved their goal of developing a system that helps people understand events quickly through aggregated tweet analysis. I found the interface particularly interesting and responsive, allowing for a quick overview or more in-depth exploration if desired. I think this system will inspire sites in the future to develop into dedicated news sources themselves, much like Wikipedia has become a leading source for encyclopedic knowledge even though it is entirely user generated.

Thursday, October 27, 2011

Paper Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures




Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures


Authors - Hao Lü and Yang Li 


Authors Bios - Hao Lü is a graduate student at the University of Washington and focuses on topics in human-computer interaction.
Yang Li is a researcher at Google and earned his PhD from the Chinese Academy of Sciences.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers state that mobile devices must compensate for the inaccuracies of touch-based interaction by wasting space on larger-than-necessary buttons and widgets. The researchers propose a system, Gesture Avatar, that recognizes gestures associated with specified GUI widgets and performs the desired function. The hypothesis is that this kind of system will reduce errors and be preferred by GUI designers, who will no longer have to waste screen real estate on oversized buttons.


Content - Gesture Avatar operates by cycling through 4 states:

  • Initial State - Begins when the user presses a finger to the surface.
  • Gesturing State - The touch trace is drawn over the underlying interface.
  • Avatar State - If the bounding box of the trace indicates that it is a gesture rather than a tap, it is recognized and becomes a translucent avatar that can be manipulated.
  • Avatar Adjusting State - Allows the user to change the association of the avatar, move it, or delete it.
A gesture can be either a character or a shape. If it is a character, Gesture Avatar recognizes it and uses it to search the content on the page. If it is a shape, a shape recognizer attempts to match the gesture to an interface widget as closely as possible. Distance from the gesture is also a factor when deciding which object to associate with it. To correct mismatched objects, users can draw a "next" gesture that finds the next closest similar object and associates the gesture with that instead.
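As a rough illustration of how a gesture might get associated with a nearby widget, here is a small Python sketch. The scoring formula and the similarity callback are assumptions for illustration, not Gesture Avatar's actual implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Widget:
    label: str
    x: float          # on-screen position of the widget's center
    y: float

def pick_target(stroke_center, widgets, similarity, distance_weight=0.01):
    """Choose the widget a gesture most likely refers to.

    stroke_center   -- (x, y) centroid of the drawn gesture
    similarity      -- hypothetical recognizer callback returning 0..1 for how
                       well a widget matches the drawn character or shape
    distance_weight -- illustrative penalty for widgets far from the stroke
    """
    gx, gy = stroke_center
    best, best_score = None, -1.0
    for w in widgets:
        dist = math.hypot(w.x - gx, w.y - gy)
        score = similarity(w) / (1.0 + distance_weight * dist)
        if score > best_score:
            best, best_score = w, score
    return best
```

In this sketch a character recognizer would score widgets by how well their text labels match the drawn letter, while a shape recognizer would score them by outline similarity, and nearby widgets win ties.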



Methods - Some predictions as to how Gesture Avatar will help users are:

  • Gesture Avatar will be slower than Shift, a touch accuracy enhancer, for bigger targets and faster for smaller ones.
  • The error rates for Gesture Avatar will always be lower than Shift's, especially when a user is walking or moving.

The researchers implemented Gesture Avatar as a Java-based Android 2.2 application. 12 smartphone users were selected to participate in an evaluation of Gesture Avatar. Half of the participants evaluated Gesture Avatar first and half evaluated Shift first, but both groups used both. The test consisted of 24 small objects displayed in the top half of the screen with a target object shown in red so it could be easily identified. The target object was also blown up to 300 pixels below the objects so the user could easily see its contents. The test began when the user tapped the large target. The objects included both ambiguous and non-ambiguous cases and were distributed evenly on the screen. Participants practiced with the app they were evaluating, tested it for the first 12 sessions while sitting on a stool, and tested it for the last 12 sessions while walking on a treadmill. Each session consisted of 10 individual tasks.


Results - The time results showed what was predicted: Gesture Avatar was slower than Shift when the targets were large but significantly faster as target size decreased. Also as expected, Gesture Avatar showed no change in error rates across all of the test cases, whereas Shift was affected by every change and its error rates were higher than Gesture Avatar's in every test. All of the researchers' hypotheses were supported by the study.


Conclusion - The researchers conclude by stating that Gesture Avatar has proven itself better than many similar products on the market and just needs to begin integration with existing software in order to emerge as a preferred method of input for users.


Discussion


I think the researchers proved their hypothesis by creating a system that lowers error rates when selecting small objects on a device. I think this application is particularly valuable because, if wide support emerged for this product and a standard were created, UI designers could stop wasting space on overly large interactive objects.

Monday, October 24, 2011

Paper Reading #23: User-defined motion gestures for mobile interaction

User-Defined Motion Gestures for Mobile Interaction



Authors - Jaime Ruiz, Yang Li, and Edward Lank


Authors Bios - Jaime Ruiz is a PhD student at the University of Waterloo and has been a visiting researcher at the Palo Alto Research Center.
Yang Li is a researcher at Google and earned his PhD from the Chinese Academy of Sciences.
Edward Lank is an Assistant Professor at the University of Waterloo and has a PhD from Queen's University.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers state that not enough is known about motion gestures for designers to create intuitive input methods, so they conduct a study to see what gestures people view as natural for certain tasks. The hypothesis is that the researchers will develop a standard for motion gestures that is useful to mobile developers and hardware makers, as well as to users, who will finally be able to use motion gestures in an intuitive way.


Methods - To develop a set of motion gestures that are intuitive for certain tasks, the researchers conducted a study of 20 people. The participants were asked to propose a reasonable gesture for performing each of a set of specific tasks. Tasks were divided into action and navigation categories and then further divided into system and application subcategories within each of the two broader categories. Participants were selected from people who listed a smartphone as their primary mobile device. The phone's screen was locked, and special software recorded its movements without giving any feedback that could sway user perceptions. The participants were told how to perform the study, then given a short survey about their choices and an interview.


Results - 4 common themes were found in the study:

  • Mimic Normal Use - A majority of participants preferred natural gestures for common tasks that already involve that motion (putting the phone to the ear)
  • Real-World Metaphors - Treating the phone as a non-phone object and using it appropriately (hanging up by turning the phone face down)
  • Natural and Consistent Mappings - Doing what users expect (right for one thing and left for the opposite)
  • Providing Feedback - Confirmation that actions are happening, both during and after the action.
The researchers characterized the 380 collected gestures by their gesture mappings and physical characteristics. Finally, the researchers combined all of the results into a representative mapping of gestures to actions and functions.
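As a purely illustrative way to picture such a mapping, a developer might encode a few gestures like this. The entries below are drawn from the themes above, not from the paper's full consensus table.

```python
# Illustrative task-to-gesture mapping following the themes above
# (not the paper's exact consensus set).
USER_DEFINED_GESTURES = {
    "answer_call":   "raise phone to ear",         # mimics normal use
    "hang_up":       "turn phone face down",       # real-world metaphor
    "next_item":     "flick phone to the right",   # consistent mapping...
    "previous_item": "flick phone to the left",    # ...left is the opposite
}

print(USER_DEFINED_GESTURES["hang_up"])
```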





Content - This user-defined set of gestures has several implications that the researchers discuss. The first is supporting a standard set of gestures, like the ones described in this study, across all platforms so as to establish consistency. Adjusting how mobile operating systems recognize gestures would also help accomplish this, because the gestures should always be recognized without fail.


Conclusion - The researchers conclude by stating that they still need to do follow-up research to establish that these gestures are easily understood across cultural groups and age ranges. This research shows that there exists broad agreement on some of these gestures performing the actions proposed here, meaning they would be adopted quickly and used for greater efficiency.


Discussion


I think the researchers achieved their goal of developing gestures for common tasks that can be widely accepted by many people, and they showed how system and software design could be modified to accept these gestures. I think this study was interesting because it points out a gaping hole in current mobile software, which should have such a standard by now.

Paper Reading #22: Mid-air pan-and-zoom on wall-sized displays





Mid-air Pan-and-Zoom on Wall-sized Displays


Authors - Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay


Authors Bios - Mathieu Nancel is a PhD student in Human-Computer Interaction on the InSitu team at the University of Paris-Sud.
Julie Wagner is a Postgraduate Research Assistant in the InSitu lab and has a Master's from RWTH Aachen University.
Emmanuel Pietriga is the interim leader of the InSitu lab and has a PhD from Institut National Polytechnique de Grenoble.
Olivier Chapuis is co-head (by interim) of the InSitu research team and has a PhD from the University of Paris VII Diderot.
Wendy Mackay is a Research Director with INRIA Saclay in France and has a PhD from MIT.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers state that wall-sized displays are growing in popularity and yet have no standard for interaction. The researchers propose a series of possible interaction techniques for these displays. The hypothesis is that these mid-air interactions will be more effective than traditional hardware peripherals and will be preferred by users.


Content - The researchers propose several possible interactions and then test them in user studies to see which are most effective. They rule out anything requiring 6 degrees of freedom and only consider techniques using 3 degrees of freedom or fewer. They also exclude the predefined gestures used in touch interfaces, as these do not transfer well to mid-air input. They narrow the design space down to 3 key dimensions: hands, gestures, and degree of guidance. In comparing unimanual and bimanual input, the researchers expect bimanual to do better. Linear and circular gestures are both under consideration, and the researchers propose using circular gestures for zooming and linear gestures for scrolling. The degree-of-guidance dimension has the researchers questioning what feedback to give users as they interact in primarily free space. The design choices were decided as follows:

  • Panning - ray-casting using dominant hand
  • Zooming - both linear and circular gestures accepted but most important part is pointing where the center should be while zooming in and out
  • Bimanual interaction - dominant hand used for panning and pointing while non-dominant hand is used for zooming.
These interactions will be tested in the 1D path, 2D surface, and 3D free space allowing for input via device, touch-screen, and free hand respectively.
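Here is a minimal Python sketch of how the bimanual design choice could translate into code: the dominant hand's ray-cast pointer supplies the zoom center while the other hand's gesture supplies the zoom amount. The viewport model and gain value are my own assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float           # world coordinate shown at the center of the wall
    cy: float
    scale: float = 1.0  # current zoom factor

def pan_zoom(vp, pointer_xy, zoom_delta, gain=0.05):
    """Dominant hand: ray-cast pointer picks the zoom center (and pans).
    Non-dominant hand: a linear or circular gesture supplies zoom_delta.
    The gain is an illustrative value, not taken from the paper."""
    px, py = pointer_xy
    factor = 1.0 + gain * zoom_delta
    # zoom about the pointed-at world location so it stays put on screen
    vp.cx = px + (vp.cx - px) / factor
    vp.cy = py + (vp.cy - py) / factor
    vp.scale *= factor
    return vp

print(pan_zoom(Viewport(0.0, 0.0), pointer_xy=(10.0, 5.0), zoom_delta=2.0))
```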



Methods - The researchers will study 12 unique interaction techniques in a user study. The 12 interactions come from using linear and circular gestures in both unimanual and bimanual modes in the 3 spaces mentioned above (1D, 2D, and 3D). The researchers expect to find that:

  • Two handed gestures will be preferred to one handed gestures and be more accurate and faster
  • Linear gestures will be preferred for zooming actions while circular gestures will be preferred for everything else
  • 1D and 2D gestures will be faster than 3D gestures
  • 1D gestures will be the fastest and 3D gestures will be the most tiring
There will be 12 participants testing these gestures on a wall-sized display consisting of 32 screens powered by 16 dual-core computers, capable of displaying 20,480 x 6,400 pixels. They will perform a pan-zoom task that requires navigating and zooming appropriately among a series of circles, some of which are designated as targets to be focused on. The participants will rank the gestures during and after the study.
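The 12 techniques are simply the full crossing of the three factors described above, which a short Python snippet makes explicit:

```python
from itertools import product

HANDS    = ["unimanual", "bimanual"]
GESTURES = ["linear", "circular"]
SPACES   = ["1D path", "2D surface", "3D free space"]

# The 2 x 2 x 3 crossing that yields the 12 techniques under study.
techniques = list(product(HANDS, GESTURES, SPACES))
assert len(techniques) == 12
print(techniques[0])   # ('unimanual', 'linear', '1D path')
```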



Results - The researchers found that, as predicted, two-handed gestures were faster than one-handed ones, and the 1D path gestures were the fastest of the 3 spaces. To the researchers' surprise, linear gestures were faster than circular ones; overshooting with circular gestures helps explain why linear gestures performed better. Participant feedback supported the claims that two-handed gestures would be preferred to one-handed ones and that linear gestures were preferred over circular ones. The fastest overall gestures were two-handed linear ones in both the 1D and 2D spaces.


Conclusion - The researchers conclude by stating that they have provided more data to help with developing gestures for the mid-air space. They also state that gestures in the 1D and 2D space should not be forgotten as wall-sized displays become more common because they are less error prone and less tiring than gestures in the 3D space.


Discussion


I think the researchers achieve their goal of developing effective mid-air interaction techniques for wall-sized displays. It is very interesting that the hardware-based input methods were found to be preferable in many instances. We get caught up in movies today and think that free-hand interaction is the ultimate goal for all input, but studies like this show that those gestures actually have some hefty side effects that cannot be ignored when compared side by side with device-based input.

Thursday, October 20, 2011

Paper Reading #21: Human model evaluation in interactive supervised learning



Human Model Evaluation in Interactive Supervised Learning


Authors - Rebecca Fiebrink, Perry R. Cook, and Daniel Trueman


Authors Bios - Rebecca Fiebrink is an assistant professor at Princeton University in the departments of computer science and music and has a PhD from Princeton.
Perry R. Cook is a professor emeritus at Princeton University and has a PhD from Stanford.
Daniel Trueman teaches various music courses at Princeton University and is an accomplished composer and performer on the fiddle and laptop.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers explain that machine learning can be a powerful tool for processing large amounts of data and generating actionable output, but the manner in which these systems learn is often static, providing little to no feedback to the user performing the training. The researchers propose a system that allows trainers to supervise machine learning, see why certain output is generated, and get suggestions for fixing problems. The hypothesis is that such a system can give users more of the information they want and help them build better machine learning applications.


Content - The researchers developed a generic tool, called the Wekinator, that implements the basic elements of supervised learning in a machine learning environment to recognize physical gestures and map them to a chosen output. They chose this application because gesture modeling is one of the most common uses of machine learning, and music is naturally gesture driven, for example mapping a certain gesture to a certain pitch.
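As a rough sketch of the kind of interactive workflow the Wekinator supports (collect examples, train, run, edit, retrain), here is a minimal Python version built on a nearest-neighbor classifier. This is an illustration of the idea, not the Wekinator's actual code.

```python
from sklearn.neighbors import KNeighborsClassifier

class GestureMapper:
    """Minimal sketch of a Wekinator-style loop: collect labelled gesture
    examples, train a model, run it live, then edit the examples and retrain."""

    def __init__(self):
        self.examples, self.labels = [], []
        self.model = None

    def add_example(self, features, label):
        self.examples.append(features)
        self.labels.append(label)

    def train(self):
        self.model = KNeighborsClassifier(n_neighbors=1)
        self.model.fit(self.examples, self.labels)

    def run(self, features):
        return self.model.predict([features])[0]   # e.g. a pitch or preset id

mapper = GestureMapper()
mapper.add_example([0.1, 0.9], "high_pitch")
mapper.add_example([0.9, 0.1], "low_pitch")
mapper.train()
print(mapper.run([0.2, 0.8]))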


Methods - 3 studies were conducted in evaluating the Wekinator:
A) The first study consisted of 7 composers working to refine the Wekinator to control new instruments that existed on the computer only and responded to gesture input. The participants trained the system once a week and made suggestions that were acted upon in between sessions.
B) The second study focused on the supervised learning aspect of the system and observed 21 students using the Wekinator to produce new instruments controlled by certain gestures (one of which was continuously controlled, adjusting to changes in gesture in real time). The students' actions were recorded by the software and they filled out questionnaires.
C) The final study consisted of a professional cellist working with the researchers to classify several gestures that capture many properties of a bow as it is used to play the cello. After the system captures this data, it should be able to capture the notes being played and add them to a composition on a computer.


Results - The studies found that most of the participants chose to train the system by editing the training data. In studies B and C, the researchers found that the participants occasionally used cross-validation to observe how accurately their systems were performing while evaluating new ones at the same time. In contrast, direct evaluation was used more frequently in all 3 studies and allowed quick confirmation that a system was performing as expected. Subjective evaluation allowed the cellist in study C to fix mistakes made in training that were not caught by cross-validation. Many participants noted that they got better at providing training data as the study went on, indicating success in a primary goal of the system. At the conclusion of all the studies, most participants praised the Wekinator as working extremely well for the tasks they had wanted to accomplish.
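To make the two evaluation styles concrete, here is a small example contrasting cross-validation with simply running the trained model on a new input. The data is toy data of my own, not from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# X holds one row of gesture features per training example, y the labels the
# user assigned to them (toy values here).
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array(["high", "high", "low", "low"])

# Cross-validation: hold out part of the training data to estimate accuracy.
print(cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=2).mean())

# Direct evaluation: train on everything and simply try a fresh gesture live.
model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[0.15, 0.85]]))
```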


Conclusion - The researchers conclude by saying that supervised learning has a place in machine learning, but users must decide beforehand what qualities they are looking for, since some measures, such as raw accuracy, are not yet handled in a particularly successful fashion and should not be relied on alone.


Discussion


I think the researchers achieve their goal of proving that supervised learning helps users build better models. This is interesting because machine learning is still young, and studies like this make adoption easier and more practical than ever before.

Tuesday, October 18, 2011

Paper Reading #20: The aligned rank transform for nonparametric factorial analyses using only anova procedures






The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures


Authors - Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins


Authors Bios - Jacob O. Wobbrock is an Associate Professor at the University of Washington and has a PhD from Carnegie Mellon University.
Leah Findlater will be an Assistant Professor at the University of Maryland next year, has taught at the University of Washington, and has a PhD from the University of British Columbia.
Darren Gergle is an Associate Professor at Northwestern University and has a PhD from Carnegie Mellon University.
James J. Higgins is a Professor of Statistics at Kansas State University and has a PhD from the University of Missouri-Columbia.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers explain that the procedures commonly used to analyze nonparametric data tend to be error prone, and they propose a new procedure for analyzing such data. This procedure, called the Aligned Rank Transform (ART), allows researchers in the field of HCI to analyze nonparametric data accurately, without fear of error, and comes with easy-to-use tools. The hypothesis is that ART is a better way to analyze this kind of data than the methods currently in use in HCI and can be very useful in real-world applications.


Content - ART corrects for assumptions of ANOVA statistics, such as normality, that may not hold in HCI research. This is done by computing residuals, computing estimated effects for main effects and interactions, computing aligned responses, assigning average ranks, performing a standard ANOVA on this new data, and checking correctness by checking sums.
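Here is a minimal Python/pandas sketch of the align-and-rank step for a two-factor interaction; a full ART analysis repeats this for each main effect and interaction and then runs an ordinary ANOVA on the resulting ranks. This follows the published procedure as I understand it and is not the ARTool code itself.

```python
import pandas as pd

def aligned_ranks_for_interaction(df, a="A", b="B", y="Y"):
    """Align-and-rank the response for the A x B interaction only."""
    grand  = df[y].mean()
    cell   = df.groupby([a, b])[y].transform("mean")   # cell means
    mean_a = df.groupby(a)[y].transform("mean")        # marginal means
    mean_b = df.groupby(b)[y].transform("mean")

    residual = df[y] - cell                        # strip every effect
    effect   = cell - mean_a - mean_b + grand      # estimated A x B effect
    aligned  = residual + effect                   # add back only that effect
    return aligned.rank(method="average")          # average ranks, as in ART
```

The "checking sums" step mentioned above corresponds, as I understand it, to verifying that the aligned responses for each stripped effect sum to approximately zero before trusting the ranks.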


Methods - The ARTool and ARWeb products make using ART easy and accessible. ARTool parses long-format data and produces rankings based on it. ARTool makes only 2 assumptions about the data: the first column is an identifier and the last column is a numeric response; everything in between is assumed to be a factor for the response. ARWeb works exactly like ARTool, just in a web setting that is accessible to anyone with an internet connection.


Results - The ARTool was used in a real-world case to evaluate the results of a satisfaction survey given to users of a test interface. The results were analyzed using a standard ANOVA first and then an ART to see if any observable interaction occurred in the study. The ANOVA did not detect anything and indicated there was no significant interaction, but the ART found a significant interaction, calling the earlier finding into question and agreeing with the researchers' initial perception that an interaction was obviously present. Other examples were also tested with the ART and were well received, since assumptions that traditional ANOVAs require could be relaxed.


Conclusion - The researchers conclude by stating that ART is useful when analyzing non-parametric data and has been useful in several cases that the researchers had worked on.


Discussion


I think the researchers achieve their goal of providing a better way to analyze data for HCI research. This paper is interesting because it adapts something that is already well established to work with HCI.

Paper Reading #19: Reflexivity in digital anthropology





Reflexivity in Digital Anthropology 


Authors - Jennifer A. Rode


Authors Bios - Jennifer A. Rode is an Assistant Professor at Drexel's School of Information and has a PhD in Computer Science from UC Irvine.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researcher compares different styles of anthropology with the goal of forming a style, to be coined digital anthropology, that people can use reliably in future studies of this kind. The hypothesis of this paper is that through examination and study digital anthropology can be defined and will benefit CHI in future research.


Content - There are 3 approaches to ethnography and anthropology that can apply to HCI although only one is currently used in digital anthropology. The realist approach focuses on the process of collecting information over a long period of time and determining causes for what was observed through careful observation and experimentation without much connection to the author performing the study. The confessional approach relies on the ethnographer explaining their own involvement with the subject being studied and allows for a greater understanding of the perspective being observed. The impressionistic approach focuses on telling a story and letting detailed observation speak for itself in setting the stage for the ethnography and allowing for future analysis. 


Methods - The 3 approaches to ethnography mentioned above each bring some unique new aspects to the field of digital anthropology. Discussing rapport, the relationship between the observer and subjects being studied, allows for readers to have a better understanding of how data was collected which is crucial when acting on ethnographic results. Participant observation is important in ethnography because it gives us an experiential account of the subject being studied and gives more voice to the author. The use of theory in ethnography is very fundamental considering that it is usually on this level that any advancements are made from ethnography and theory allows for quicker immersion of the author into the setting they are studying. 


Results - There are 3 specific forms of ethnography, developed from the information above, that can be applied to HCI. Formative ethnographies study the current use of a product or tool and make observations about how it can be improved. Summative ethnographies focus purely on how a digital tool is used by a specific group of people and explain this relationship in depth with no suggestions for improvement, which leaves these studies untainted by goals but limits their usefulness to the field of HCI. Iteratively evaluative ethnographies put a prototype product or tool into use and study how it comes to be used, with the researcher participating in its use and observing others, allowing more iterations to be created and evaluated until a compelling and complete product has been created.


Conclusion - The researcher concludes the paper by stating that standards need to be adopted for ethnography in HCI in order to keep improving technology and its relationship with users. The results of this paper explore some specific areas where HCI can draw these standards from and allow for a more reflexive approach to digital anthropology than what is currently used, preserving the author's voice in these studies, which is crucial when moving forward with technological advancements.


Discussion


I think the researcher only partially succeeds in developing digital anthropology practices that can result in better HCI research in the future, because although many suggestions are made in the paper, most of them are restatements of ideas from other sources, applied only slightly more toward HCI than they have been in the past. I did find the paper mildly interesting, as I do think HCI needs a better way to evaluate interaction with technology instead of producing offerings that are only "better" in theory and end up frustrating and failing.

Thursday, October 13, 2011

Paper Reading #18: Biofeedback game design: using direct and indirect physiological control to enhance game interaction




Biofeedback Game Design: Using Direct and Indirect Physiological Control to Enhance Game Interaction

Authors - Lennart E. Nacke, Michael Kalyn, Calvin Lough, and Regan L. Mandryk

Authors Bios - Lennart E. Nacke was a postdoctoral associate researcher at the University of Saskatchewan at the time of this paper's publication.
Michael Kalyn is a student researcher at the University of Saskatchewan and holds a degree in Computer Engineering.
Calvin Lough is also a researcher working under Dr. Mandryk in the Interaction Lab at the University of Saskatchewan.
Regan L. Mandryk is an Assistant Professor at the University of Saskatchewan and has a PhD from Simon Fraser University.

Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.

Summary

Hypothesis - In this paper, the researchers explore the use of physiological input in video games and attempt to find a place for it in the modern industry. The hypothesis is that physiological input has a place in modern gaming, and this paper sets out to find where that place is.

Content - The researchers built a 2D side-scrolling game using the Microsoft XNA framework and Xbox 360 control mappings to test their hypothesis. The game consisted of many features that were controlled by physiological input in addition to the standard Xbox controller. They also built a library that received and processed sensor data before sending it to the game (a rough sketch of this kind of sensor-to-game mapping follows the list below).
The physiological controls are:
  • GAZE - Eye tracking
  • EMG - Muscle sensor
  • GSR - Sweat detection
  • EKG - Heart rate sensor
  • RESP - Breathing rate sensor
  • TEMP - Temperature sensor (requires blowing on it)
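To picture how such a library might feed the game, here is a small Python sketch that maps normalized sensor readings onto game parameters each frame. The specific mappings are invented for illustration and are not the ones used in the study's game.

```python
def apply_physiological_input(state, sensors):
    """Map normalized (0..1) sensor readings onto game parameters each frame.

    Direct channels the player can consciously steer (RESP, EMG) drive
    mechanics; indirect ones (GSR, EKG) only nudge the environment.
    All mappings below are invented for illustration.
    """
    state["jump_boost"]  = 1.0 + sensors["RESP"]             # deep breath, higher jump
    state["shield_on"]   = sensors["EMG"] > 0.6              # flex to raise a shield
    state["weather"]     = "storm" if sensors["GSR"] > 0.7 else "clear"
    state["music_tempo"] = 60 + 80 * sensors["EKG"]          # calmer heart, calmer music
    return state

# One frame's update with fake readings:
print(apply_physiological_input({}, {"RESP": 0.4, "EMG": 0.8, "GSR": 0.2, "EKG": 0.5}))
```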

Methods - To evaluate the plausibility of direct and indirect physiological control, 10 participants were asked to play through 3 test conditions. 1 of the conditions was a control and therefore had no physiological control. The other 2 conditions added 2 indirect and 2 direct methods of physiological control to the game in addition to the GAZE system and standard Xbox controller that were present in both. The participants played through the 3 conditions in random order, for about 10 minutes on each condition, and then filled out a survey concerning their experience with the physiological control.

Results - The first variable analyzed was how fun users found the conditions to be. The participants found the conditions with physiological control to be significantly more fun than the control condition, but there was no difference between the 2 physiological control conditions. Users responded that the physiological controls helped immerse them in the game and were intuitive. Participants also found the physiological control to be very novel and noted that there was a learning curve, but one that rewards those who learn. The participants greatly preferred the GAZE and RESP sensors to all of the others. Users easily chose direct controls over indirect controls and responded accordingly when asked for opinions on the GSR and EKG sensors, noting that neither of them felt controllable.

Conclusion - The researchers conclude by saying that physiological input is feasible to implement in games and, when used as direct control, can be very popular and immersive; indirect control, however, does not have a foreseeable future as a game mechanic but could perhaps be used to control environmental variables like background color and activity, much like a mood ring.

Discussion



I think the researchers achieved their goal of establishing physiological input as a viable means of control in video games. I think research has a long way to go in establishing this as a real product, though. The controllers we currently have exist because they can be mass produced and used for many different tasks. Physiological input sensors are not intuitive to set up and have no precedent in the market. The only way sensors like this stand a chance is if they are shipped with consoles directly, as opposed to add-ons that can be bought later.

Wednesday, October 5, 2011

Paper Reading #17: Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment


Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment


Authors - Andrew Raij, Animikh Ghosh, Santosh Kumar, and Mani Srivastava


Authors Bios - Andrew Raij is a Postdoctoral Fellow in the Wireless Sensors and Mobile Ad Hoc Networks Lab (WiSe MANet) at the University of Memphis and has a PhD from the University of Florida.
Animikh Ghosh is a Junior Research Associate at Infosys Technologies Ltd. and has a Masters of Computer Science from the University of Memphis.
Santosh Kumar is an Associate Professor at the University of Memphis and advises the WiSe MANet Lab.
Mani Srivastava is a Professor of Computer Science at UCLA and is also highly involved as an electrical engineer.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers point out that mobile sensors used to study one's health may also be used, in conjunction with machine learning algorithms or just basic reasoning, to reveal private information that one does not wish to be known, such as smoking or drinking habits. The study proposed by the researchers evaluates how much individuals care about their private habits being made public. The hypothesis is that people do not want certain personal attributes, such as habits or health conditions, exposed in ways that others can decode from seemingly innocuous physiological data and potentially use against them.


Content - The researchers developed a framework to better explain the scenarios discussed in the paper, consisting of measurements, behaviors, contexts, restrictions, abstractions, and privacy threats. Measurements are raw data that come from sensors, such as accelerometer readings and heart rates. Behaviors are actions the user performed that can be inferred from the measurements, such as seizures. Contexts explain behaviors by describing the environment in which the behavior occurred, such as the time, place, and people nearby. Restrictions can be applied to all 3 of the previous elements and produce a limiting effect, such as limiting access to accelerometer data or keeping the time private. Abstractions offer a way to reduce data to a desirable amount, keeping enough to monitor whatever is being studied while leaving out other details. Privacy threats are harms that result from matching data to an identity.
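A small Python sketch helps picture how restrictions and abstractions operate on raw measurements. The field names and the daily-average abstraction are illustrative assumptions, not the authors' framework code.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class Measurement:                 # raw sensor data
    kind: str                      # e.g. "accelerometer", "heart_rate"
    value: float
    timestamp: Optional[float] = None
    location: Optional[str] = None

def restrict(m, keep_time=False, keep_location=False):
    """Restriction: strip the fields the wearer does not want released."""
    return replace(m,
                   timestamp=m.timestamp if keep_time else None,
                   location=m.location if keep_location else None)

def abstract_daily_average(measurements):
    """Abstraction: release only an aggregate (here, a daily average) instead
    of the raw stream, keeping just enough detail for the health study."""
    return sum(m.value for m in measurements) / len(measurements)

raw = [Measurement("heart_rate", 72, 1.0, "home"),
       Measurement("heart_rate", 95, 2.0, "work")]
print([restrict(m) for m in raw])          # no times or places released
print(abstract_daily_average(raw))         # 83.5
```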




Methods - The study had 3 goals: assess the privacy concerns of individuals before and after the study (when their own data was at stake), use the framework to examine restrictions and abstractions, and assess how identification of the data affects participants' concern levels. 66 participants were recruited for the study and divided into 2 groups. One group, Group NS, only filled out a privacy survey; the other, Group S, provided physiological, behavioral, and psychological data to a sensing device, the AutoSense, for 3 days, with questionnaires throughout the study. Upon completion of the data collection, participants viewed the data collected in a system called Aha visualization, developed for this study, which showed the data at different abstraction levels, and then responded to a privacy questionnaire knowing what had been collected.


Results - Differences in privacy concern between Groups NS and S before the study were minimal, but after the study Group S had significantly higher concerns and did not like the idea of someone having access to that data. The impact restrictions and abstractions had on the concerns showed what data is most sensitive when grouped with other data. For example, the highest concern level was shown when physical and temporal data were available together, making the user's times and locations visible. Adding timestamps in general always increased concern. Participants also showed higher levels of concern when asked how they felt about releasing the data linked to their identity to the general public compared to releasing the data anonymously.


Conclusion - The researchers conclude that 3 things follow from this study: people don't properly analyze the risk of releasing data unless they have a stake in it; concern rises when physical and temporal data are connected together but can be reduced through restriction and abstraction; and people are willing to share their data (identifying or not) with researchers but not with the general public.


Discussion


I think the researchers proved their hypothesis that people do not want certain tracking information exposed to the public, but interestingly only when they have a stake in the data being released. This study can be used by developers in the future to determine what data to keep sensitive and, more importantly, what to tell users so that they know exactly what is being recorded before consenting to anything.

Paper Reading #16: Classroom-based assistive technology: collective use of interactive visual schedules by students with autism



Classroom-Based Assistive Technology: Collective Use of Interactive Visual Schedules by Students with Autism


Authors - Meg Cramer, Sen H. Hirano, Monica Tentori, Michael T. Yeganyan & Gillian R. Hayes


Authors Bios - Meg Cramer, Sen H. Hirano, and Michael T. Yeganyan are students in the Department of Informatics at the University of California, Irvine.
Monica Tentori is a student in the School of Computer Science at the Universidad Autónoma de Baja California.
Gillian R. Hayes is an assistant professor at UC Irvine and has a PhD from the Georgia Institute of Technology.


Venue - This paper was presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Summary


Hypothesis - In this paper, the researchers make reference to vSked, an interactive and collaborative assistive technology for visual schedules created primarily as a way to assist teachers of autistic children. The hypothesis being tested in this paper is that vSked, when implemented in a classroom setting, can help students with autism be more independent and respond positively to community-building activities.


Content - vSked is an interactive and collaborative assistive technology that replicates the functionality of visual schedules, choice boards, and token-based rewards in an integrated system. The system consists of a central touch-screen display and an ultra-mobile PC (UMPC), a touch-screen handheld device, for each child, connected wirelessly to the central display. The teacher controls students' activities and schedules with the large central touch screen. Each student's device allows for three kinds of interaction: multiple choice questions, voting, and alerts. Appropriate feedback is given based on the nature of the question and the student's answer.
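To make the three interaction types concrete, here is a small, purely illustrative Python sketch of the messages the central display might push to each student's UMPC. The field names are guesses for illustration, not vSked's actual protocol.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MultipleChoice:
    prompt: str
    options: List[str]
    correct: int            # index of the right answer, used for feedback

@dataclass
class Vote:
    question: str
    options: List[str]      # every student's pick is tallied on the big screen

@dataclass
class Alert:
    text: str               # e.g. an upcoming schedule transition

next_message = MultipleChoice("What comes after circle time?", ["Art", "Lunch"], 1)
print(next_message)
```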


Methods - The study conducted in this paper consisted of 3 deployments of vSked in classroom environments. During each deployment, teachers and aides were interviewed in both group and individual settings on a regular basis. Special video interviews were also conducted less frequently and were scored against a set of criteria the researchers determined to be important. Participants in this study included 2 teachers, 8 teacher assistants, and 14 students divided between 2 classrooms. To analyze the effectiveness of vSked, the researchers performed a quantitative analysis of certain variables (amount of work to transition, to prompt students, etc.) before and after the introduction of vSked. An ANOVA was used to compare the before and after results and decide on the effectiveness of the system. All notes and interviews were also taken into consideration when making the final analysis.


Results - The first variable analyzed by the researchers was independence: the need for teachers to prompt students for an answer and to prompt further for a correct answer if an incorrect one was chosen at first. vSked resulted in a 54% decrease in prompting from teachers and a large drop in prompting from aides, although not enough to be statistically significant. The feedback from the device proved very valuable, as many students responded positively when the fireworks display appeared after answering correctly. Also, the shaking of the correct answer after a student selected incorrectly was effective in getting students to select the right answer. Consistency and predictability were the next variables analyzed; they matter because scheduling allows a student to see what is happening today and prepare for it, reducing surprises or anything that may upset a student. Electronic scheduling also makes transitioning faster, since teachers can move everybody through the schedule with the touch screen. Transition time was lowered 61% using vSked, which helps prevent down time between activities that can trigger an undesirable reaction. When students were to be rewarded tokens for their achievements during the day, the teachers could use vSked to provide instant gratification to both parties through a fireworks display that stimulated many students at a time. Interacting with the community is very important when instructing students with autism, and vSked helped in this area as well, introducing new behaviors for the teachers to practice. First, students seemed to take an active interest in what their friends were doing, and second, teachers began presenting the final token before rewards to students in front of the whole class, better involving all students in the rewarding process and encouraging others to do the same.


Conclusion - The researchers conclude by stating that although similar assistive technologies exist for special education classes, very few attempt to unite individual and group accomplishments like vSked. They also suggest that this study has shown that technologies like vSked can help alleviate high student-to-teacher ratios by prompting and rewarding students on the device itself.


Discussion


I think the researchers demonstrated the plausibility that their system helps with key areas in special education, but they failed to provide much hard data on the topic, and further studies will be needed to develop it. This study will hopefully make such future studies possible. I think unifying individual and group activities through technology is a powerful tool that teachers will be able to use to a much greater degree in the future.