Final Project: Eye Tracking For User Interfaces - Updated

Updated: May 16, 2019

Final Essay:

Introduction

After my group’s presentation of our midterm project, the Posture Pack, I considered moving forward with the project and working toward a new iteration of our prototype. However, I had difficulty finding proper resources for interviews. I continued to focus on physical ability and thus decided to pursue a new idea: a hands-free user interface for mobile devices.

Research Problem

Much of today’s technology, specifically user interfaces for mobile devices, televisions, and computers, relies on the hands for navigation. This excludes a number of salient identities, those with conditions that inherently restrict the use of the hands or leave the hands with no functionality at all. For my project, I explored the following questions: how can technology be used to make user interfaces more inclusive for people with hand disabilities? Also, how have these disabilities been portrayed in society in general, and how could that change if this technology became more prevalent?

Salient Identities and Significance

I am focusing on a combination of the following salient social identities: physical ability, age, and language. Physical ability is the identity most directly addressed by the technology I’m exploring, and the other two fall under its branch; I will later discuss in more detail how the latter two apply to my project. When choosing a specific group to explore in order to give my research more direction, I chose Parkinson’s disease. Parkinson’s results from degeneration of special nerve cells in the brain that enable movement, both voluntary and involuntary. A lack of these nerve cells causes multiple symptoms, which may appear to different extents in different patients. Some of these symptoms include tremor, stiffness of the limbs, gradual loss of spontaneous movement, voice changes, and decreased facial expression (AANS). Age plays a role as well, considering that a majority of those with Parkinson’s are over fifty years old, though symptoms can appear as early as the twenties.

Technology

When deciding what technology to explore and pursue for my final project, I considered the current shortfalls of assistive technology in providing a better experience for patients with Parkinson’s disease or other movement disorders, as well as areas of potential growth and development. The problems these patients experience become worse as the different salient social identities intertwine and compound one another; I discuss this in more detail in the Persona Design Exercise and its connection to intersectionality. Essentially, technology that addresses only part of the symptoms and challenges faced by those who are disabled, without taking other aspects into consideration, has been unsuccessful.

The first assistive technology I looked at was voice-assistive technology. I initially considered it a practical option, since I discovered in my research that although slurred speech is a symptom of Parkinson’s, it is generally understandable by people. However, this is not the case with the popular voice-recognition software of today; Siri, Amazon Echo, and Alexa are just a few of these interfaces. In a study that appeared in Biomedical Engineering Online, researchers used a conventional speech-recognition system to compare its accuracy on normal voices and on voices with six different vocal disorders. The technology was 100 percent correct at recognizing the speech of normal subjects, but accuracy varied between 56 and 82 percent for patients with different types of voice ailments. Furthermore, those who are not native English speakers may not find voice-recognition software that accommodates their language.

Another thing I took a look at was the accessibility features on Apple’s iPhones, checking which of those features can be used by Parkinson’s patients. These are from AbilityNet’s website; the company is made up of a team of accessibility consultants and testers who work with digital teams. The post was made toward the beginning of 2017, when iOS 10 was released. It is important to keep in mind that the latest version is iOS 12, so I tried some of these features myself to see whether any changes had been made since the post. Below are a few screenshots of the settings.

Features that may be useful for those with hand tremors and limited motor skills are found under the Interaction section of the Accessibility menu, where the touch settings can be modified. One in particular is Hold Duration, the “duration you must touch the screen before a touch is recognized” (Apple). This can be handy for Parkinson’s patients, because it can be incredibly difficult to make a single concise tap that is swift, on-target, and not interpreted as a series of taps or swiping gestures. According to Christopher, “this will allow you to fine-tune the phone’s response so that tremulous butterfly-light taps aren’t constantly activating items or sending keystrokes from the on-screen keyboard. Only more definite and intentional touches are processed.” Below the Hold Duration option is Ignore Repeat, which treats multiple touches as a single tap. This is essential for hand tremors, since the oscillation of the fingers can register extra taps, activating applications or features the user never intended.
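The two settings described above can be thought of as simple filters over a stream of touch events. The sketch below is my own illustration of the idea, not Apple’s implementation; the event model (timestamp and press length per touch) is a hypothetical simplification.

```python
def filter_touches(touches, hold_duration=0.3, ignore_repeat=0.5):
    """Sketch of Hold Duration + Ignore Repeat filtering.

    touches: list of (timestamp, press_length) pairs in seconds.
    Returns the timestamps of touches that would be accepted.
    """
    accepted = []
    last_accepted = None
    for timestamp, press_length in touches:
        # Hold Duration: drop taps shorter than the threshold, so
        # tremulous butterfly-light taps are never registered.
        if press_length < hold_duration:
            continue
        # Ignore Repeat: collapse rapid repeats into a single tap,
        # so finger oscillation doesn't trigger extra activations.
        if last_accepted is not None and timestamp - last_accepted < ignore_repeat:
            continue
        accepted.append(timestamp)
        last_accepted = timestamp
    return accepted
```

For example, a tremor that produces a 0.05-second flutter tap followed by two deliberate presses 0.1 seconds apart would register as a single accepted touch under these rules.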

These accessibility features, available on the iPhone and a number of Apple’s other mobile devices, are definitely a step in the right direction toward making their products more inclusive. However, this doesn’t solve all the problems just yet. Certain movement disorders leave the hands with no functionality at all, so users have to resort to an alternative.

This is where I began considering eye tracking technology. Essentially, the technology entails a “hands-free” user interface mode that would be applied on an iPhone or Android device. Hands-free is in quotes because the actual use of the hands would depend on whether the user can face the device by holding the phone or by using some extension, like an arm mount or a stand. This would be the case for someone with injured hands or, in more extreme cases, permanent disabilities, as found in patients with ALS for example. Communicating using an eye tracker is not new; the technology has been around for many years, but it remains in its early stages and isn’t mobile. Combining eye tracking technology with our mobile devices would allow for smoother navigation across different platforms: users wouldn’t have to stay in one place and would have the freedom to go wherever it’s possible for them. This, in combination with friendlier applications, especially for those who are older, can make for an easier experience. These are all essential elements of user-centered design, which has “the potential to acknowledge groups that are often labeled as ‘special needs,’ such as the elderly and individuals with disabilities, by including them as a part of the mainstream” (Tauke). I wanted to make sure these elements were present when choosing my technology, and they were things I considered when finalizing my options.
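A common interaction pattern in eye tracking interfaces is dwell-time selection: the gaze resting on an on-screen target for a set duration counts as a “tap.” The sketch below shows this idea in principle; the sample format (timestamp plus the target currently under the gaze) and the one-second dwell threshold are illustrative assumptions, not a description of any particular product.

```python
def dwell_select(gaze_samples, dwell_time=1.0):
    """Sketch of dwell-time selection for a gaze-driven interface.

    gaze_samples: list of (timestamp, target) pairs, where target is the
    UI element under the gaze (or None if the gaze is off-screen).
    A target is selected once the gaze rests on it for dwell_time seconds.
    """
    selections = []
    current_target = None
    dwell_start = None
    already_selected = False
    for timestamp, target in gaze_samples:
        if target != current_target:
            # Gaze moved to a new target (or off-screen): restart the timer.
            current_target = target
            dwell_start = timestamp
            already_selected = False
        elif (target is not None and not already_selected
              and timestamp - dwell_start >= dwell_time):
            # Gaze has rested long enough: register a single selection.
            selections.append(target)
            already_selected = True
    return selections
```

A design note: the dwell threshold plays the same role for the eyes that Hold Duration plays for the fingers, separating intentional fixations from the quick, involuntary glances that occur constantly during normal viewing.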

Persona Design Exercise

During the Persona Design Exercise, I was still working on the Posture Pack idea; however, I did gain many insights on collecting information that would be necessary later on in the project. Personifying the salient identity we were exploring made it easier to break down the real-life challenges and needs our technology would have to address. A few of the aspects we had to consider were daily routine, pain points, bias, and oppression. Routine tied directly to one of the interview questions I had prepared for my target salient identity: because I would be speaking to someone whose physical health is far different from my own, learning about this individual’s day-to-day life would give more insight into how a technology could fit into the picture. Pain points were easier to identify, since the physical challenges are well documented in medical websites and journals. The interesting components were bias and oppression; these two required more insight into the role society plays in the lives of the target salient identity.

Interviews + Feedback

To prepare for the interviews, we wrote down some questions as a class, along with conversation starters and transition questions, to get the most out of our interviews. Because I was going to be interviewing a patient with health-related problems, I wanted to make sure I was prepared and asked the right questions. I first reviewed The Field Guide to Human-Centered Design, which breaks design down into three main phases: Inspiration, Ideation, and Implementation. The process of conducting interviews falls under the Inspiration phase, and the authors stress the importance of “getting to the people you’re designing for and hearing from them in their own words” (IDEO). Furthermore, I reviewed my vocabulary and word choice so that my questions wouldn’t offend any of the patients. To get a better understanding of how to communicate well with disabled patients, I read through the National League for Nursing’s documentation on “Communicating with People with Disabilities.” One example of the tips they included was to “talk to persons with disabilities in the same way and with a normal tone of voice (not shouting) as you would talk to anyone else.” Although this may seem trivial, I know many people, myself included, forget this simple rule. The documentation also has specific guidelines for those with movement disorders: “When giving directions to people with mobility limitations, consider distance, weather conditions, and physical obstacles such as stairs, curbs, and steep hills” (NLN).

Unfortunately, I was unable to conduct an interview with the target salient social identity of my project. I reached out to NYU Langone to set up an interview with one of their Parkinson’s patients, or with patients with other conditions. The office requested that I complete an IRB form; further details of this conversation can be found in a separate blog post.

The first expert I reached out to was Sharon, one of the nurses from the Parkinson’s Foundation who speaks with Parkinson’s patients, answers any questions they may have, and provides them with resources for treatment. One thing that is important to note before looking at the interview details is that Sharon is not a practitioner; this consideration influenced me to conduct a second interview later on with someone who works directly with patients.


Documentation: (Please refer to my blog post for a word-for-word transcript)


The questions I asked Sharon included: what are some of the common symptoms of Parkinson’s patients, what challenges do they face when dealing with technology, and how could eye tracking technology alleviate any of those problems? Her responses to the first two questions reflected what I had come across in my initial background research. One important point Sharon made, which I had not come across in my research, was that “Parkinson’s also have trouble with eye movement. Sometimes the Parkinson’s can affect the muscles of their eyes.... It is not as common and it’s under diagnosed and underappreciated.” I found this really interesting, and after further research I found that less than 10% of patients diagnosed with Parkinson’s have eye muscle movement problems. With this in mind, I wanted the technology I was exploring to accommodate these patients by offering alternative input options to substitute for eye tracking.

For my second expert interview, I wanted to speak with someone with a stronger reputation in the field, and was aiming for a doctor. I was able to reach Dr. Michael H. Pourfar, a neurosurgeon at NYU Langone. He specializes in movement disorders and is one of the doctors at the Fresco Institute for Parkinson’s and Movement Disorders. I asked him the same questions as in my first interview with Sharon. Dr. Pourfar reiterated the point made by Sharon about eye conditions; however, I still wanted to include those patients as well. He also gave some feedback and advised me to keep the interface simple, considering that most of the patients are 50+ and generally don’t have much familiarity with many applications. Also, most of this age group tends to use a small number of applications; having these readily available, without much navigation, will make life a bit easier.

Class Materials

When considering which salient identities my project would be geared toward, I found our class work on intersectionality to be helpful. Before we learned about the various implications of intersectionality, I considered each of the salient identities we looked at in class to be independent of the others, for instance, race and gender. Even though I knew of the challenges each one faced individually, the new challenges introduced by the combination of the two were unknown to me. Relating this to my project, I noticed that certain people’s pain points significantly increase when a combination of disabilities exists. For example, for someone with a hand disability, voice recognition software would not be helpful if they also have a speech problem. Another aspect of intersectionality is how age comes into play. Those in their later years already have trouble using technology that involves multitasking, and the learning process can be very confusing; this isn’t much of an issue for the younger generation, who tend to pick up the latest technological features quickly and can perform more complicated tasks. Adding a movement disorder to the equation changes the entire picture and places those who have the intellectual ability at a disadvantage, and this is even worse for those who are older and already have difficulty. For this reason, it is necessary not to sacrifice usability for high-tech capability, and to still offer these salient identities the best that can be offered.

Next Steps

I did not create a working prototype for my project; however, because the technology for eye tracking and retina scanning does exist, moving forward I would look to implement it in a mobile application. Furthermore, I’ve only been able to understand the challenges of a small percentage of the population with physical-ability disorders, and I was not able to speak directly with someone of my target salient social identity. Reaching out to more salient identities, including those who are hearing impaired or permanently injured, would allow the technology to have a larger footprint. I have already prepared questions using The Field Guide to Human-Centered Design, and these can be used to get further input on the technology I am proposing.

I would also like to see where other technologies are heading, specifically smarter speech recognition software. As I mentioned before, results have been unsatisfactory when it comes to recognizing the speech of people with disabilities and disorders. However, in my research I also came across efforts to improve those accuracy figures. AI, for example, in combination with machine learning algorithms, can rely on more than just sound waves from the vocal cords: as good lip readers do, it can use the muscles and shapes produced by the mouth, in conjunction with other facial structures, to translate into speech. This is a relatively new field of research and has a lot of potential to become a standard tool for those with disabilities.

To conclude, physical conditions should not become a barrier when someone wants to express their thoughts and feelings. As engineers, creators, and makers, we should develop technology that breaks that barrier and allows for more inclusion. I hope to carry this on beyond college and into my career.

 

Bibliography

(Class resources)

IDEO. The Field Guide to Human-Centered Design. 1st ed., Design Kit, 2015.


Tauke, Beth, et al. Diversity and Design: Understanding Hidden Consequences. Routledge, 2015. ProQuest Ebook Central, https://ebookcentral.proquest.com/lib/nyulibrary-ebooks/detail.action?docID=2195002.


(Other resources)

“IIF News Releases.” U.S. Bureau of Labor Statistics, www.bls.gov/iif/.


“Movement Disorders.” AANS, www.aans.org/Patients/Neurosurgical-Conditions-and-Treatments/Movement-Disorders.


Muhammad, Ghulam, et al. “Formant Analysis in Dysphonic Patients and Automatic Arabic Digit Speech Recognition.” Biomedical Engineering Online, BioMed Central, 30 May 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC3120728/.


Mullin, Emily. “Why Siri Won’t Listen to Millions of People with Disabilities.” Scientific American, 27 May 2016, www.scientificamerican.com/article/why-siri-won-t-listen-to-millions-of-people-with-disabilities/. Accessed 1 May 2019.

Smeltzer, Suzanne C. “Communicating with People with Disabilities.” National League for Nursing, 28 Jan. 2017, www.nln.org/professional-development-programs/teaching-resources/ace-d/additional-resources/communicating-with-people-with-disabilities. Accessed 5 May 2019.

