Friday, September 30, 2011
Book Reading: Gang Leader for a Day
Gang Leader for a Day answered some questions I had about its subject and raised many more. Among those answered were how people get trapped in the projects of major cities and, even more important, how they manage to survive there. Some questions this book has me asking now are: how objective can an observer remain when participation becomes necessary to continue observing, and what is the state of poor neighborhoods in 21st-century America?
My first thoughts on this book concern the subject being studied, low-income housing. Sudhir explores this world by diving into it head first when he meets JT in one of the buildings within his territory. Throughout the book I gradually became more aware of the environment Sudhir was studying, mainly because Sudhir decided to go further than JT in his research and branched off into the community on his own. In a way, I was learning about the projects at the same time Sudhir was, which is an interesting way to present information; most instructional books are told by someone who clearly already understands everything being said, making it harder to pick up from a beginner's perspective. Mrs. Bailey helped the most in learning about this setting, as she was the director of the community. What I quickly picked up on was that the whole system in the projects is unregulated, and selfishness is allowed to run freely, leading to some of the most corrupt individuals being put in power. The most interesting things I learned about Mrs. Bailey actually came from everyone else's view of her. Any time Sudhir talked to anybody he wasn't studying, he learned something new about the people he was studying, like Mrs. Bailey's "death grip" on the residents of Robert Taylor. If someone made any money, they surely owed some of it to Mrs. Bailey, and JT was more than happy to loan foot soldiers to her cause at any time because she helped him in return, and so the vicious cycle continued. JT and Mrs. Bailey loved to talk about how the police and government were ruining people's lives, but they were doing just as much damage themselves. I would have liked to see Sudhir study some of the residents in more detail than he does, but I have a feeling the reason this didn't happen is that JT and Mrs. Bailey didn't want it at all.
The horror stories presented in this book regarding some people's lives were truly terrifying, but the hopeful stories were just as potent in the opposite direction. For all of the rapes and shootings mentioned in this book, there was an equal number of success stories, such as Autry running the Boys and Girls Club at one of the other Robert Taylor homes. It was especially interesting to see how the women worked together to get things done, such as pitching in to keep a certain number of working showers and kitchens available between many families.
Some of the book's most interesting moments were the ones in which Sudhir took an active role in the gang or community, such as being gang leader for a day. The physically involved actions, like the stairwell incident with Bee-Bee or helping Price when he got shot, were the ones that I think confused everybody the most, because afterward people treated him differently. The more social actions, like teaching the women how to write or Sudhir's "school"-turned-babysitting job, offered a diverse look at the residents of the communities affected by JT and Mrs. Bailey. As an ethnographer, Sudhir may have been overstepping his bounds here, but each action got him deeper into the community and gang, so I'm not sure this question can ever really be answered. When it works out, like this, and an ethnographer accomplishes his goals through participation, people will think positively of it; but when an ethnographer gets hurt or worse from involvement, people will turn against it. In either case the outcome can never be predicted, and the risk will always be high.
The last question I have is one I do not have an answer to yet, but I am more motivated than ever to really look for some answers. The state of the poor in America in this century is not really known by me or most of the people I affiliate with, which tells me the social divide in this country is not in a good place. I'm not sure if we're any better off than we were at the time of this book, but I am interested in looking nonetheless.
Overall, I think this book provided me with a great look into a part of this country I have no experience with, and it also showed how well ethnography can expose information. While its methods can be controversial, much can be learned through risky behavior, and we can all learn something from each other.
Wednesday, September 28, 2011
Paper Reading #13: Combining multiple depth cameras and projectors
Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
Authors - Andrew D. Wilson and Hrvoje Benko
Authors Bios - Andrew D. Wilson is a researcher at Microsoft Research and has a PhD from MIT Media Laboratory.
Hrvoje Benko is a researcher at Microsoft Research and has a PhD from Columbia University.
Venue - This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
Hypothesis - In this paper, the researchers develop a system capable of making non-instrumented surfaces interactive using a series of projectors and depth cameras. The hypothesis is that this system, called LightSpace, will explore more of the capabilities of depth cameras and help lead us to a future in which even the smallest corner of a room is interactive.
Content - Four interactions implemented in this system are simulated interactive surfaces (supporting hand gestures and touch input), through-body transitions between surfaces (touching 2 surfaces to transfer objects between them), picking up objects (carrying an object in a user's hand), and spatial menus (using the human body as an interactive surface).
Methods - The researchers built their system using 3 InFocus projectors and 3 PrimeSense depth cameras mounted on a single aluminum truss and aimed so as to maximize effectiveness. The depth cameras use 2 types of cameras and a light source to accurately form a 3D model of what is being viewed. All cameras contribute to a single system-stored 3D representation of the test area, eliminating the need to know which camera is located where. The cameras are calibrated individually using 3 points of reference, and then the projectors are inserted into the 3D grid created by the camera calibration. The interactive surfaces must be designated by identifying 3 corners of the surface in question, and the surface must be rectangular, flat, and immobile. A virtual camera's view is created using the data collected by all 3 cameras in order to form an overall view of the interactive area and better analyze what is being performed. Connectivity of two objects by a person is detected by checking for intersections with the interactive surfaces. Picking up and dropping objects is handled by simply checking what a user is touching. The spatial menu implementation can easily be applied to other spatial objects in the future. The system was demoed to an audience of 800 at a special event.
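To make the connectivity test concrete, here is a rough sketch of how one might check whether a single blob of merged depth points touches two calibrated surfaces at once (e.g., a person with one hand on each). This is my own reconstruction, not the authors' code; the surface representation (a corner plus two edge vectors) and the distance threshold are assumptions, and all names here are mine.

    import numpy as np

    def points_near_surface(points, corner, edge_u, edge_v, max_dist=0.05):
        """Boolean mask of 3D points lying close to a rectangular surface.
        The surface is one corner plus two edge vectors, loosely mirroring
        the paper's three-reference-point designation (my simplification)."""
        normal = np.cross(edge_u, edge_v)
        normal /= np.linalg.norm(normal)
        rel = points - corner
        dist = np.abs(rel @ normal)                      # distance from the plane
        u = (rel @ edge_u) / np.dot(edge_u, edge_u)      # normalized surface coords
        v = (rel @ edge_v) / np.dot(edge_v, edge_v)
        return (dist < max_dist) & (0 <= u) & (u <= 1) & (0 <= v) & (v <= 1)

    def connects_surfaces(blob_points, surface_a, surface_b):
        """A blob (one connected component of the merged depth data) links
        two surfaces if it has points near both of them at the same time."""
        return points_near_surface(blob_points, *surface_a).any() and \
               points_near_surface(blob_points, *surface_b).any()

The nice property of doing this in the merged 3D representation is exactly what the paper describes: it doesn't matter which physical camera saw which part of the blob.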
Results - The demo showed that a maximum of 6 users could use the system at any one time to any real effect, although any more than 3 users slowed the system down considerably. Whenever actions were not detected by the cameras, it was usually because some object or person was blocking the view, which brought the question of camera placement back to the creators' minds. Some unique interactions were also discovered during the demo, such as objects transferring between surfaces when users holding those objects shook hands. Future plans for the system mostly center on making it more robust and removing limiting requirements.
Conclusion - The researchers conclude by saying they have presented a system that is capable of transitioning the user space from a single computer to the entire room around them.
Discussion
I think the researchers prove their hypothesis that depth cameras are a viable option for adding interactivity to currently non-interactive environments, simply by bringing in a somewhat portable system that can then calibrate to that space. I think future work in this area could produce great results that lead some day to an interactive classroom, followed by the elusive "smart" house. The only fault I can find with this work is that an external system, as opposed to an internal one, can be very limited by the location of objects and users if a standard for depth camera positioning is not established by an extensive study on the topic.
Monday, September 26, 2011
Paper Reading #12: Enabling beyond-surface interactions for interactive surface with an invisible projection
Enabling Beyond-Surface Interactions for Interactive Surface with An Invisible Projection
Authors - Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Ju-Chun Ko, Home-Ru Lin, Mike Y. Chen, Jane Hsu, Yi-Ping Hung
Authors Bios - Li-Wei Chan is a PhD student at the National Taiwan University and researches with the Image and Vision Lab and iAgent.
Hsiang-Tao Wu is now a researcher at Microsoft Research Asia but previously attended the National Taiwan University.
Hui-Shan Kao, Ju-Chun Ko, and Home-Ru Lin were students at the National Taiwan University at the time of this paper.
Mike Y. Chen is an assistant professor at the National Taiwan University specializing in human-computer interaction and mobile computing.
Jane Hsu is a professor at the National Taiwan University and focuses on multi-agent systems and web technologies.
Yi-Ping Hung is a professor at the National Taiwan University and researches human-computer interaction among other things.
Venue - This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
Hypothesis - In this paper, researchers propose using infrared (IR) projectors to allow 3D interactions with traditional 2D tabletop surfaces. The hypothesis is that, through the use of IR projectors, invisible markers can be created that allow for an entirely new set of interactions with touch surfaces extending into the 3rd dimension.
Content - The researchers first had to build the devices to be used, seeing as most of what is needed is not currently available on the market. The IR projector was built by upgrading a DLP projector, and the table had to be made with the diffuser layer above the touch-glass because the touch-glass would otherwise reflect toward observers and degrade the image. Multi-touch detection was implemented using a combination of touch recognition and the Diffused Illumination (DI) method in a clever way: simulating backgrounds, inspecting suspected objects in a region of interest (ROI) by projecting white regions onto it, and smoothing out feedback with a Kalman filter. Because the cameras and projectors are independent systems, it was necessary to use software synchronization to keep the cameras from feeding old data to the projectors.
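The Kalman smoothing step is the easiest part to illustrate. Below is a minimal sketch of a constant-velocity Kalman filter applied to 2D touch points; this is a generic textbook version, not the authors' code, and the noise parameters are placeholders.

    import numpy as np

    class TouchKalman:
        """Constant-velocity Kalman filter for smoothing jittery 2D touch
        points (my sketch of the idea, not the paper's implementation)."""

        def __init__(self, q=1e-3, r=1e-2):
            self.x = np.zeros(4)            # state: [px, py, vx, vy]
            self.P = np.eye(4)
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
            self.H = np.eye(2, 4)           # we observe position only
            self.Q = q * np.eye(4)          # process noise (assumed value)
            self.R = r * np.eye(2)          # measurement noise (assumed value)

        def update(self, measured_xy):
            # Predict forward one frame, then correct with the camera reading.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
            self.x += K @ (np.asarray(measured_xy) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]               # smoothed touch position

    kf = TouchKalman()
    for raw in [(10.2, 5.1), (10.8, 5.3), (11.5, 5.2)]:
        print(kf.update(raw))               # jitter-reduced positions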
Methods - 3 applications were developed including:
1) i-m-Lamp - An ordinary lamp structure with an IR camera and pico-projector instead of a light bulb. Users move the lamp to point at an area of interest, like a particular part of a map. The tabletop surface masks its projection at that location and the pico-projector takes over, projecting the same data previously shown on the tabletop, only in more detail.
2) i-m-Flashlight - Mobile version of the i-m-Lamp allowing for quicker examination of certain areas of interest like those found on paintings.
3) i-m-View - A 3D viewer that lets a tablet, or tablets, view a 3D model of whatever 2D image is being shown on the tabletop surface. The boundaries of the tabletop surface are clearly denoted on these tablet devices to keep users aware of the system they are using.
Results - 5 users were asked to use the 3 systems to perform certain actions and provide feedback:
1) The i-m-Lamp was found to be the most stable of the devices, and proper use was described as moving the lamp to a desired location and then searching nearby for information.
2) The i-m-Flashlight was used more quickly and was found to be effective in finding information from a variety of locations in quick succession, but a problem was encountered if users moved the device too close to or too far from the table.
3) The i-m-View became lost when users tried to look up at tall buildings, focusing the device away from the tabletop. Users also wanted touch actions enabled on the tablet for easy manipulation, as well as a portrait mode. The i-m-View also immersed users too much in the tablet, as expected, making them unaware of the real scene on the table.
Conclusion - The researchers conclude by noting that they had accomplished the goal of creating a 3D interactive environment through the use of IR cameras and projectors. They also discussed future work in fixing problems found during testing and adding more features requested by users.
Discussion
I think the researchers accomplished their goal of creating a 3D interactive surface using invisible markers, and the applications helped prove that, but where this would fit in with the general public remains a mystery. This system is too bulky by itself and would need a good reason to exist. I can see it applied to air traffic control systems: the table could display general airplane symbols and show direction, and to get more data, the controller could highlight a plane with a device similar to the i-m-Flashlight. Because of applications like this, I can see this technology developing and maybe even finding a niche somewhere.
Paper Reading #11: Multitoe
Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
Authors - Thomas Augsten, Konstantin Kaefer, René Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, and Patrick Baudisch
Authors Bios - Thomas Augsten is a master's student in IT systems engineering at Hasso Plattner Institute.
Konstantin Kaefer is a master's student at Hasso Plattner Institute with an interest in human-computer interaction.
René Meusel, Caroline Fetzer, Dorian Kanitz, and Thomas Stoff were all undergraduate researchers at Hasso Plattner Institute at the time of this project.
Torsten Becker is a graduate student at Hasso Plattner Institute specializing in human-computer interaction.
Christian Holz is a PhD student at Hasso Plattner Institute working with Patrick Baudisch in human-computer interaction.
Patrick Baudisch is a professor at Hasso Plattner Institute and chair of the Human-Computer Interaction Lab.
Venue - This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
Hypothesis - In this paper, researchers argue that current tabletop computers are limited in size and could be made better if they were bigger. To accommodate bigger displays, the researchers propose using the floor as the surface to interact with, greatly increasing the size limit. The hypothesis put forth by the researchers is that the floor can be a sizable, productive interactive surface if combined with the right technology and designed thoughtfully.
Contents - In order to propose an effective design for an interactive floor surface, the researchers built a prototype. They decided to use a combination of Front Diffuse Illumination (Front DI), which gives the position of feet by analyzing shadows, and Frustrated Total Internal Reflection (FTIR), which makes pressure visible, to interpret user input. The floor was made of several tiles consisting of many different materials and Rosco projection screens to display images. To keep costs down, the researchers decided to build only one tile that could sense user input.
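As a rough illustration of how the two sensing channels could be fused, here is a sketch that pairs FTIR pressure blobs with the DI foot shadow each one falls inside. The thresholds and the scipy-based blob labeling are my own assumptions, not the authors' pipeline.

    import numpy as np
    from scipy import ndimage

    def associate_contacts(ftir_img, di_img, p_thresh=30, s_thresh=20):
        """Pair FTIR pressure blobs with the DI foot shadow they fall inside.
        ftir_img: brightness ~ contact pressure; di_img: brightness ~ shadow.
        Thresholds are placeholders; the real pipeline is more involved."""
        feet, n_feet = ndimage.label(di_img > s_thresh)   # one label per foot outline
        blobs, n_blobs = ndimage.label(ftir_img > p_thresh)  # where the sole presses
        pairs = []
        for b in range(1, n_blobs + 1):
            ys, xs = np.nonzero(blobs == b)
            cy, cx = int(ys.mean()), int(xs.mean())       # pressure blob centroid
            pairs.append((b, feet[cy, cx]))               # foot label 0 = unmatched
        return pairs

Knowing which foot a pressure point belongs to is what lets a floor distinguish, say, a deliberate toe tap from the rest of the sole resting on the surface.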
Several tests were done in determining what features to support. Those tests and results are shown below with numbers like "3)" denoting corresponding trials and results.
Methods - 1) The researchers asked 30 participants to activate 2 paper buttons taped to the floor without activating the others. The manner in which they activated the buttons was observed, as well as what they did to avoid the others.
2) Invoking menus needed to be location independent due to the inherent size of floors, and observations from the previous study were used in determining results.
3) A method for selecting and dragging objects was needed so 20 participants were asked to stand on a grid of "buttons" and tell an experimenter which ones were selected based on how they were standing.
4) Selecting a button needs to be precise, so the researchers conducted a study in which 24 participants selected, or did what they perceived as selecting, a button with a crosshair in the center using 4 different methods (free-form, big toe, tip of shoe, and ball of the foot). Studying this data, the researchers found what specific pressures were present when users made "selections," allowing for the development of a natural selection process.
5) Choosing the correct size for the smallest elements was also important to the researchers so they had 26 participants type a specified sentence 2 times on 3 differently sized keyboards and recorded the accuracy.
Results - 1) The researchers found tapping to be the most intuitive activation input, and simply walking over buttons was the most common way of ignoring them; therefore tapping was used in this prototype.
2) Jumping seemed to be the most appealing input to invoke menus as users can jump anywhere when a menu is needed.
3) Most participants described the entire area under their shoe as being pressed; therefore the Front DI component (described earlier) was used for determining object selection.
4) Results varied in the free-form method, and accuracy greatly improved when users were told which method to use, but the researchers did not want to alienate users, so the end decision was to support all types and have users set their method by performing an easy machine-training step.
5) As expected, error rates increased as button size decreased, and the majority of users liked the largest keyboard the best.
Conclusion - The researchers conclude by saying they demonstrated why floor surfaces are viable touch interfaces and even introduced the concept of identity recognition based on sole patterns. Research will continue in this field, and they will continue to build on the prototype shown in this paper by looking into building a smart room capable of monitoring the well-being of the people inside.
Discussion
I think the researchers support their hypothesis that floor interfaces are plausible replacements for similar tabletop devices and provide much larger spaces for interaction. I think this paper is particularly interesting because tabletop surfaces have yet to really prove themselves in the real world, and presented here is an alternative choice for interaction. I also think it was cool that they used foot gestures to play a game, showing real-world potential right from the start.
Wednesday, September 21, 2011
Paper Reading #10: Sensing foot gestures from the pocket
Sensing foot gestures from the pocket
Authors - Jeremy Scott, David Dearman, Koji Yatani, Khai N. Truong
Authors Bios - Jeremy Scott is a graduate student at MIT and received his Bachelor's from the University of Toronto.
David Dearman is a PhD student at the University of Toronto.
Koji Yatani is a PhD student at the University of Toronto interested in interactive systems.
Khai Truong is an associate professor at the University of Toronto specifically interested in human-computer interaction.
Venue - This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
Hypothesis - In this paper, researchers discuss a way to use foot gestures to perform tasks in a mobile environment and develop a system that supports this. In doing so, they hope to prove their hypothesis that such a system can learn from users over time to recognize gestures more accurately, in addition to the primary goal of allowing users to perform tasks without having to focus on visual input and feedback.
Methods - The researchers decided the first thing to do was study possible foot gestures that could be used in the product. The four gestures explored were dorsiflexion, plantar flexion, heel rotation, and toe rotation. Participants were asked to hold down a button on a mouse and perform one of these gestures, rotating to a specified angle from the start position and releasing to indicate completion. The setup consisted of 6 motion-capture cameras and a laptop informing the participant of what task was to be performed and recording information received from the cameras. Participants began the study with a training phase that consisted of 156 gestures with visual feedback informing them of their progress. Next, the testing phase began, with no visual feedback and 468 gestures. Lastly, participants were interviewed and asked to rank the gestures in order of preference. A second study was conducted later using the same procedure and equipment to test the researchers' machine learning algorithms, using different numbers of users' data as training data and the rest as test data. Heel rotation and plantar flexion were the only 2 gestures tested here due to the results of the first study (see below). 2 different classification procedures were used for the machine learning portion of this study. Leave-one-participant-out (LOPO) consisted of using all but one of the participants' data as training data and then testing on the remaining participant. Within-participant (WP) consisted of a single user performing a gesture many times, with all but one of those trials used as training data and the remaining trial used as test data.
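For anyone unfamiliar with LOPO, here is a minimal sketch of leave-one-participant-out evaluation using scikit-learn. The classifier is a placeholder; the paper's actual features and learner differ.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut

    def lopo_accuracy(X, y, participant_ids):
        """Train on everyone but one participant, test on that participant,
        and average over participants. X, y, participant_ids are numpy
        arrays; the classifier here is my stand-in, not the paper's."""
        scores = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, participant_ids):
            clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
            scores.append(clf.score(X[test_idx], y[test_idx]))
        return np.mean(scores)

WP evaluation is the same loop with the "groups" being repeated trials of a single user rather than different people, which is why it measures how well the system adapts to one individual.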
Results - The initial study found raising the heel, or plantar flexion, to be the most accurate and preferred gesture for vertical angles. Plantar flexion also showed a consistent error rate across all angles, whereas the other gestures increased in error as the angle increased. Among the rotation gestures, heel and toe rotation were comparable in error and range, but heel rotation was greatly preferred by the participants. The second study tested gesture recognition using a phone located in a front pocket, back pocket, and hip mount, which resulted in successful recognition 50.8%, 34.9%, and 86.4% of the time respectively. Higher percentages resulted when the algorithm only had to determine which gesture type was being performed (heel rotation or plantar flexion).
Contents - The researchers developed a program to recognize foot gestures using data collected with a phone's accelerometer. The workflow consists of a user wearing a phone in a pocket, initiating the system by placing a foot at the origin and performing a double tap, and then performing the desired gesture, which is recognized by the system and executes the desired command. To make this method work well, the recognition algorithm has to be very robust and able to adapt to an individual, so the researchers integrated machine learning into the workflow and ran several quick tests to help develop the machine learning algorithm. They found 34 features that could be used in gesture recognition and implemented them in an initial application that was used in the second study.
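To give an idea of what accelerometer-derived gesture features might look like, here is a toy extraction function. These statistics are illustrative only; the paper's 34 features are not enumerated in my summary, and these are not necessarily among them.

    import numpy as np

    def accel_features(window):
        """Simple statistical features from an accelerometer window of
        shape (n_samples, 3). Illustrative placeholders, not the paper's
        actual feature set."""
        feats = []
        for axis in range(3):
            a = window[:, axis]
            feats += [a.mean(), a.std(), np.square(a).mean()]  # mean, spread, energy
        feats.append(np.linalg.norm(window, axis=1).max())     # peak magnitude
        return np.array(feats)

    # window = np.random.randn(128, 3)   # e.g. ~1 s of samples at 128 Hz
    # print(accel_features(window).shape)  # (10,)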
Conclusion - The researchers conclude that foot gestures are a viable option for user input. Because the WP procedure worked significantly better than LOPO, the researchers also conclude that this type of program would be best implemented by learning from an individual user before use by that person.
Discussion
I think the researchers achieved their goal of proving the viability of foot gestures as input, and their testing mostly convinced me of this. I would also have liked to see a framework built that other developers could use to begin applying this technology in real-world applications, because I think the usefulness of this ability is still questionable to most people. Nevertheless, this technology could prove crucial in the future of device interfacing and help make it a very transparent task that does not depend on visual feedback.
Tuesday, September 20, 2011
Deliverable #2: Proposal and Week 1 Results
Project Proposal
Andy Hampton, Neal Roessler, Zach Casull, Chandler Cell
We chose professional gamers as the target group for our ethnography project. We chose this group because of all the stereotypes and assumptions that are usually applied to it without a second thought. Although some of us have experience with gaming as a pastime, we decided to proceed because none of us can even imagine the work we know is necessary to compete on a serious level.
The expectation for this group is that it is male-dominated, with personalities varying from quiet loners to obnoxious attention cravers. Tendencies toward violence and verbal abuse will be interesting subjects to observe while studying this group. The scope of this study will range from individuals to local group competitions and whatever else we can find that can provide us some insight into the minds of professional gamers. From this study we hope to learn enough about how professional gamers tick to be able to suggest a product that could be useful to them.
This study is appropriate because of the many personalities and behaviors we anticipate observing while conducting it. Demographic observations will show trends in the gender and age of participants, which can further be broken down and compared to our own experiences with what would be considered “normal” people in these ranges. The violence and verbal observations mentioned above will also be valuable given the recent Supreme Court case regarding violence in video games and its effect on people who play them, people who fit squarely in our target group. This study is not inappropriate, as we have no reason to gather personal information beyond age, gender, and possibly appearance. Conducting interviews and attending public gaming events are also appropriate and do not in any way break laws. This study will be exciting because of the breadth of people we anticipate studying in their very animated, high-powered environment, which keeps them interested in this pastime and motivated to practice in the massive volumes that they do.
This study is feasible because we will cover many different aspects of professional gaming, conducting several interviews with acquaintances who play games professionally as well as observing competitions and tournaments. We also hope to eventually participate in the activity if the opportunity presents itself to one of us or to an acquaintance whom we can follow.
Week 1 Results
Andy Hampton
We attended a Gears of War 2 tournament at Clockwork Games and Events in College Station, Texas, which we found through an internet search. This experience supported some of my hypotheses and disproved others.
Upon arrival at the outlet area where Clockwork was located, I found it on the side of one of the buildings with poor signage out front. This led me to believe that the people who go here are a fairly tight-knit bunch, because advertising was nowhere to be found and the hours of the venue were odd, seemingly catering to an after-school crowd. Once inside, I noticed a pretty sophisticated setup consisting of a row of 8 XBOX 360 consoles mounted about a foot off the ground on the wall, with 8 screens of approximately 20 inches each and 2 larger screens above them. The lighting was poor, as is to be expected, and the interior was actually quite nice, with about 6 tables (for card games, I assume) and 2 couches in front of the console-laden wall, along with various other chairs. There was also a counter, as I assume this place also sold games, cards, and snacks.
There were about 15 people there, representing ages 9 to 40. There was talk that one of the participants was a former Major League Gaming competitor. We learned that the people who usually come to Clockwork play card games and video games. Everyone there was relaxed and enjoyed the company of fellow gamers, and it became apparent that many of them were familiar with each other, suggesting a regular crowd usually played at Clockwork. They were happy to converse with us and answered a lot of our questions. The crowd was heavily dominated by guys, but two girls attended without playing.
It was explained to us that the Gears of War 2 tournament would consist of 10 participants in a 1v1 double-elimination bracket following Major League Gaming rules. Julio, the tournament manager, assigns the rules of the game and acts as referee, preventing cheating such as screen peeking, the act of looking at the opponent's screen to gauge where their character is located on the game map. The winner received a new copy of Gears of War 3, sponsored by FX Games and represented by Chris, who has known Julio for many years, along with some prize money. We entered 2 of our own members, Neal and Zach, to support the event and immerse ourselves a little.
In the first game, a 9-year-old kid beat an 18-year-old and called him out by saying “Now Suck This!” Everyone was animated and very into the tournament, yelling at the TVs and having a good time. There was a lot of smack talk, sarcastic hostility, and joking among everyone there, but somewhat surprisingly everyone seemed pretty outgoing, and swearing was kept under control, if it occurred at all.
We met Jacob, a high schooler who aspires to be a professional. He plays at least 3 hours a day and recently played in a $1000 tournament but got beat by some “hardcore kids.” He said a lot of it is about getting in people's heads, especially if they are younger than you; you can get in their heads with some smack talk, he says, and “put the kids in their place.”
Overall, participants in the competition exhibited unforeseen outgoingness, which may have been a result of being in their comfort zone, as was apparent from the pre-existing relationships between the gamers. As predicted, most of the gamers had spent large amounts of time playing the contest game and were therefore very good at it relative to us. Some interesting observations were the wildly varying age range and the highly competitive environment that developed after the tournament started, which contrasted with the very laid-back atmosphere that existed beforehand.
Sunday, September 18, 2011
Paper Reading #9: Jogging over a distance between Europe and Australia
Jogging over a distance between Europe and Australia
Authors - Florian Mueller, Frank Vetere, Martin R. Gibbs, Darren Edge, Stefan Agamanolis, Jennifer G. Sheridan
Authors Bio - Florian Mueller was a PhD student at the University of Melbourne at the time of this paper's publication and he is now working as a Fulbright Visiting Scholar at Stanford University.
Frank Vetere is a senior lecturer at the University of Melbourne researching Human-Computer Interaction and Interaction Design.
Martin Gibbs is a lecturer at the University of Melbourne and is investigating interactive technology as it relates to society.
Darren Edge is a researcher at Microsoft Research and has a PhD from the University of Cambridge.
Stefan Agamanolis was a researcher at the Distance Lab in Scotland at the time of publication and has a PhD from MIT.
Jennifer Sheridan was a researcher at the London Knowledge Lab at the time of publication of this paper and has a PhD from Lancaster University in Lancaster, UK.
Venue - This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
Hypothesis - In this paper, researchers explain a framework they have built called Jogging over a Distance that allows jogging to be more social. In doing so, the researchers hope to support their hypothesis that this framework can be used to collect data on how people perform social exertion activities and to use that data to build more socially conscious exercising software that promotes long-term user participation.
Contents - Jogging over a Distance requires 2 users to agree on a time to begin jogging and input a target heart rate into the system. The users then put on a headset, heart rate monitor, and computer/phone. As the users jog, their heart rate data is sent to a server where the information is compared to the users' target rates. If one user's heart rate is above their goal and the other user's is below, the faster user's audio will seem to come from in front of the slower user, increasing the desire to perform better.
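A toy version of that mapping might look like the sketch below. The paper's actual function isn't spelled out in my summary, so the linear scaling and the scale factor here are invented; the only property I'm trying to capture is that relative effort, not absolute pace, decides who sounds ahead.

    def audio_offset(my_hr, my_target, partner_hr, partner_target, scale=10.0):
        """Map relative effort to a virtual position for the partner's audio.
        Positive = partner sounds ahead of me, negative = behind.
        A toy reconstruction; scale and linearity are my assumptions."""
        my_effort = my_hr / my_target                 # 1.0 means exactly on target
        partner_effort = partner_hr / partner_target
        return scale * (partner_effort - my_effort)   # ahead if they work harder

    # Partner above target while I'm below mine: positive, they sound ahead.
    print(audio_offset(my_hr=150, my_target=160, partner_hr=170, partner_target=160))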
Methods - To test the effectiveness of the system, 17 participants went on 14 runs using Jogging over a Distance. To evaluate the system's performance, extensive interviews were conducted with all participants after use and then analyzed. This involved note-taking during and after the sessions, discussing heart rate data to better unravel the events that took place, and using this data to form affinity diagrams. With this style of data collection and analysis, the researchers deliberately developed only qualitative results, to better discuss what the system did right and wrong rather than crunch numbers that might be misleading given the social nature of this project.
Results - Communication integration proved to be extremely valuable in getting users involved during the activity by creating a goal of side-by-side conversation that was only possible if target heart rates were being met. This also helped users feel safer and led to many users jogging farther than they would have alone. Effort comprehension also proved valuable to participants as they learned how their bodies actually exercise in a relative sense rather than a competitive sense; users were at different levels of fitness, meaning that just because one person is slower than another doesn't mean the slower person is putting in any less effort. With this in mind, participants said that working hard and staying right beside their partner led to greater satisfaction, because both parties knew that equal effort was being exerted. The virtual mapping of the participants based on heart rates was viewed positively overall, because some of the participants couldn't run with their partners before, being too fast or too slow, but Jogging over a Distance fixed that. There were still negatives when a user entered a bad baseline value or when external factors got in the way.
Conclusion - The researchers conclude by saying they hope this study paves the way for more research and application regarding social integration with physical exertion.
Discussion
I think the researchers achieved their goal of showing how integrating a social element into physical exertion can encourage people to exercise more. Putting myself in this situation has me convinced that hearing someone sound as if they were ahead of me would motivate me to exert more. I found this study especially interesting as a first step in what could be a great field in the future as group exercise is brought into this space. The only fault I could find with this work was the lack of concrete data, but it makes sense to take a qualitative approach, because numbers are not as important in something that is attempting to increase participation in and encouragement of physical activity. However, this approach could have been improved if the participants had been studied for a longer period of time, to see whether or not habits developed based on the system.
Wednesday, September 14, 2011
Deliverable #1: Prior Assessments and First Results
Pro-Gamers group
Neal Roessler
Zach Casull
Chandler Cell
Prior Perception
My group decided to study the world of professional gaming instead of a single group of people. Expanding to this larger scale allows for a more comprehensive glimpse into a lifestyle and does not limit us to a specific set of people. In this paper I will describe my expectations, perceptions, and future plans regarding this field.
My initial expectation is that I will see a stereotypically geeky culture that is predominantly male and focused on what they do. These people will probably tend to be fairly private individuals who have been doing this kind of thing for years, probably on the same game. I expect my interactions with them to be minimal, as I do not foresee long, drawn-out responses to questions or prolonged conversations occurring.
Integration into this group may prove to be fairly difficult, as neither I nor anyone else in the group is anywhere near as skilled at, or even familiar with, most of the games we will see. We will have to take the position of the audience for the majority of the study and will probably have only 1 or 2 people able to participate to any competitive degree in any of the events we plan to attend. We plan on attending gaming competitions and following some people we have connections to during the study.
I hope that I, and the group, will gain some meaningful insight into this broad area and be able to contribute during the project.
Initial Results
The first interaction we had with professional gaming consisted of an informal meeting with one of our connections, a semi-professional. We met with just this one person, Tyler, to give ourselves an entry point into the field.
Our first meeting took place at Tyler's residence on Tuesday and consisted of introducing ourselves and what we would like to do, followed by a small interview with Tyler covering some broad questions we had about the area as well as his personal experience. On first arrival, his apartment appeared how I expected it to be: minimal, kind of messy, and what little furniture he did have related to his technology in some way (TV stand, consoles, etc.). We proceeded to introduce ourselves and describe the project in a general sense so as not to influence anything Tyler would do or say later. Through our questions we learned that roughly the top 1% of gamers in a particular game can be considered professional, which usually equates to around 100+ players. An interesting point made by Tyler was that at a certain point a single gamer cannot get any better and must begin working with a team to become truly professional, which is why competitions usually consist of only team contests. In addition to being on teams, professional gamers must have a coach to act as an anchor, keeping the team on task and off each other's backs.
Overall, we gained a lot of insight from Tyler and hope to continue our study with him and others in the future. We are currently working on plans to attend as many professional gaming events as we can find and maybe even find one that Tyler can enter.
Paper Reading #8: Gesture search
Gesture search: a tool for fast mobile data access
Yang Li
Yang Li is a researcher at Google and earned his PhD from the Chinese Academy of Sciences.
Summary
In this paper, the researcher creates a program that recognizes gestures and searches data on mobile devices based on those gestures. The hypothesis is that such a system can improve access time and make navigating increasingly complex mobile interfaces quicker and easier than ever.
Next, the researcher discusses the basics of how Gesture Search works. A user gestures a letter and search results begin appearing ranked in order of most commonly accessed. The user can then continue entering more gestures to refine the search or select the desired data and perform a task on it such as calling a contact or opening an application.
The researcher then describes many of the interactions with the device, such as discerning GUI input (scrolling, tap selection) from gesture input, searching from a gesture query, and using history to display better results. Telling GUI input apart from gesture input was difficult at first but was resolved after a user study helped develop a model for quickly recognizing scrolling versus gesture input. Searching using a gesture query is extremely similar to any other searching done on mobile devices, as search results change based on new input entered in real time. Using access frequency consists of storing the frequency in a local container and then applying a ranking algorithm that takes this information into account when displaying results.
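Here is a minimal sketch of the general idea of frequency-based ranking; Gesture Search's real ranking algorithm is surely more sophisticated than a raw access count, so treat this as an illustration only.

    from collections import defaultdict

    class GestureRanker:
        """Rank search hits by how often the user opened them before.
        A sketch of the general idea, not Gesture Search's algorithm."""

        def __init__(self):
            self.freq = defaultdict(int)   # local store of access counts

        def record_access(self, item):
            self.freq[item] += 1

        def rank(self, matches):
            # Most frequently accessed matches first; ties keep input order.
            return sorted(matches, key=lambda m: -self.freq[m])

    ranker = GestureRanker()
    ranker.record_access("Anne")           # user opened contact "Anne" twice
    ranker.record_access("Anne")
    ranker.record_access("Angry Birds")
    print(ranker.rank(["Angry Birds", "Anne"]))   # ['Anne', 'Angry Birds']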
Lastly, the researcher performed a study with employees at a specific company who used the app in their daily lives. The study found that the majority of data accesses were for contacts and applications and that most queries consisted of between 1 and 3 characters to get to the desired data. The survey conducted at the end of the study showed that participants enjoyed the product and found it useful.
Discussion
I think the researcher achieved his goal of providing a quicker way to access data using gestures. I think letter recognition may not be the best way to do this, though, because typing the first 3 letters of an application's name can be just as fast, if not faster, than any kind of gesture recognition that, in my experience, has a lot of wait time between letters. The aspect of this product that interested me the most was user-defined gestures, which were barely mentioned, because those could lead to consistent one-gesture use cases.
Paper Reading #7: Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop
Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop
Jochen Rick
Jochen Rick is an associate professor at Saarland University and a former research fellow at The Open University, having earned his PhD from the Georgia Institute of Technology.
This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
In this paper, the researcher explores the validity of the QWERTY keyboard as a way to handle input in a touch-based world and develops a new model supporting stroke-based input that increases efficiency.
The researcher explains the history of keyboard layouts, beginning in 1874 with the QWERTY keyboard, which altered an alphabetic design in which nearby keys pressed in quick succession would jam the typewriter. Since this keyboard became dominant, many people and companies have tried to improve upon it, in both physical and digital environments, with no real success.
Next, the researcher builds a model to be used in later steps by performing a study in which 8 participants perform many tasks consisting of connecting 4 randomly placed points in a specific order, recording the time it took to travel certain distances in certain directions. Each stroke was separated into three phases (beginning, middle, end) for further study. The results of the study led the researcher to create a model based on the Shannon formulation of Fitts's law.
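For reference, the Shannon formulation of Fitts's law predicts movement time as T = a + b * log2(D/W + 1), where D is the distance to the target and W is its width. A tiny sketch, with placeholder constants rather than the values fitted in the paper:

    import math

    def stroke_time(distance, width, a=0.0, b=0.1):
        """Shannon formulation of Fitts's law: T = a + b * log2(D/W + 1).
        a and b are fitted empirically; these defaults are placeholders,
        not the paper's fitted values."""
        return a + b * math.log2(distance / width + 1)

    # Doubling the distance raises the index of difficulty, so predicted
    # movement time grows, but only logarithmically.
    print(stroke_time(4.0, 1.0))   # ~0.232
    print(stroke_time(8.0, 1.0))   # ~0.317

Summing this prediction over every key-to-key segment of a word's stroke path, across a representative corpus, is what turns the model into a layout-level words-per-minute estimate.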
The researcher then applied this model to the many keyboard designs introduced earlier in the paper and calculated the possible words-per-minute rate, for both stroke-based and tapping input, for each layout. The words used for testing came from the Project Gutenberg library and have been shown to be very accurate for these kinds of tests. The results show the OPTI II and GAG II designs working the fastest, while QWERTY and other more popular layouts performed poorly.
The researcher then applies a simulated annealing process to produce a square-key and a hexagon-key layout, which result in a faster calculated WPM than the best layouts in the previous analysis.
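Simulated annealing itself is simple enough to sketch: propose a random key swap, always keep improvements, and occasionally keep regressions while the "temperature" is high so the search can escape local optima. The cost callable below is a stand-in for the Fitts's-law WPM model above, and all the parameters are placeholders, not the paper's settings.

    import math, random

    def anneal_layout(keys, cost, steps=20000, t0=1.0, cooling=0.9995):
        """Anneal over key arrangements; `cost` maps a layout (list of
        keys) to predicted typing cost, e.g. corpus-averaged stroke time."""
        layout = keys[:]
        best, best_cost = layout[:], cost(layout)
        t, cur_cost = t0, best_cost
        for _ in range(steps):
            i, j = random.sample(range(len(layout)), 2)
            layout[i], layout[j] = layout[j], layout[i]       # propose a swap
            new_cost = cost(layout)
            if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
                cur_cost = new_cost                           # accept the move
                if cur_cost < best_cost:
                    best, best_cost = layout[:], cur_cost
            else:
                layout[i], layout[j] = layout[j], layout[i]   # undo the swap
            t *= cooling                                      # cool down
        return best

    # Toy usage with a dummy cost (a real cost = predicted stroke time over a corpus):
    # best = anneal_layout(list("abcdefgh"), cost=lambda l: sum(l.index(c) for c in "cab"))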
The researcher concludes by discussing why the QWERTY keyboard has remained a standard for so long despite being shown to be slower than some other models and explaining that adoption of a new typing layout could prove to be a difficult endeavor, but a worthwhile one as we explore touch.
Discussion
The researcher did achieve his goal of developing an optimized stroke-based keyboard layout for touch input, but the real question is: does it matter? On this account, I think not, because this entire study revolves around tabletop surfaces, which I don't believe will be in common use for the next 5-10 years, if even then. It does, however, point out that keyboard input does need to be fixed for touch-based devices.
There is a future in this field for research at the mobile level and in other areas as well. One big segment for which this kind of technology would be useful is disabled people who have to use stroke-driven keyboards, like eye-tracking systems in the spirit of Swype, that respond in a manner similar to those discussed in this paper. Using an optimized layout, this task could be simplified and made quicker.
Sunday, September 11, 2011
Paper Reading #6: TurKit
TurKit: Human Computation Algorithms on Mechanical Turk
Greg Little, Lydia B. Chilton, Max Goldman, Robert C. Miller
Greg Little is a PhD student at MIT working in the User Interface Design group.
Lydia Chilton is a graduate student at the University of Washington and attended MIT prior to that.
Max Goldman is a graduate student at MIT focusing on user interfaces and software development.
Rob Miller is an associate professor at MIT and leads the User Interface Design Group there.
This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary
In this paper, researchers propose a way to improve upon the Mechanical Turk platform used for human computation by providing a toolkit with a framework upon which to build more complex algorithms. The hypothesis is that TurKit will make human computation more effective than it currently is by enabling complex algorithms.
The first item presented by the researchers is TurKit Script, which extends JavaScript's ability to communicate with MTurk. TurKit Script adds support for the functions waitForHIT, prompt, vote, and sort, as well as fork and join to support concurrency.
The next thing discussed is crash-and-rerun programming, on which TurKit Script is based. Crash-and-rerun programming involves recording all successfully executed lines of code, crashing, and rerunning all recorded lines. TurKit Script had to adjust this to fit its model, because some of the functions being called cost actual money, so the researchers developed a once function that tells the script to look for a stored output for a line before running it, to see if it has already been run.
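A Python illustration of the once idea follows; TurKit itself is JavaScript, so this shows only the concept, not its API, and all the names here are mine.

    import json, os

    class CrashAndRerun:
        """Sketch of crash-and-rerun memoization: results of costly,
        side-effecting calls are persisted so a rerun replays stored
        results instead of paying for the calls again."""

        def __init__(self, path="trace.json"):
            self.path = path
            self.step = 0
            if os.path.exists(path):
                with open(path) as f:
                    self.trace = json.load(f)   # results from earlier runs
            else:
                self.trace = []

        def once(self, fn, *args):
            if self.step < len(self.trace):     # already ran in a prior pass
                result = self.trace[self.step]
            else:
                result = fn(*args)              # the costly call, e.g. posting a HIT
                self.trace.append(result)
                with open(self.path, "w") as f:
                    json.dump(self.trace, f)    # persist before moving on
            self.step += 1
            return result

    # ctx = CrashAndRerun()
    # answer = ctx.once(post_hit, "Describe this image")  # post_hit is hypothetical;
    #                                                     # it is paid only on the first run

On a rerun after a crash, every once call before the crash point returns its recorded result instantly and for free, which is what makes the crash-and-rerun style affordable on MTurk.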
TurKit was written in Java and uses Rhino and E4X for additional support. Implementation of the once function took the most work, because a persistent data store had to be created to hold values after a line executes, so that on the next run the value can be retrieved without making unnecessary calls.
The researchers provide examples of where TurKit has been used to great effect and show what cannot currently be done in standard MTurk. These include iterative writing, text recognition, and decision theory experiments. All of these examples take advantage of the crash-and-rerun nature of TurKit and allow users to build off of other users' answers.
The paper concludes by noting the many uses of TurKit that have already been put into practice, as well as new ways it could be used, such as for experimental replication or for exploring new algorithms using the crash-and-rerun framework.
Discussion
I think the researchers did not necessarily prove their hypothesis with this paper alone; however, I feel they still convinced me of the project's success through the examples of people already using TurKit. They explored new territory in human computation by supporting iterative improvement, allowing users to build off of each other to get maximum feedback.
I think this paper was interesting because of the crash-and-rerun programming they decided to use in TurKit. This approach supports the claim that the simplest solution is usually good enough, although not optimal. Recording successful line executions is very smart but also very costly, especially when trying to scale, so I think this is aimed at a very niche audience, and they had better be willing to make some sacrifices. But as mentioned above, people are willing to make those sacrifices and are already using it, so it sounds like a success.