
Image Enhancer


Ongoing Project (2019)

 

Overview

Over 90% of Canadians aged 15 to 44 connect to the digital space every day. In that space, images are one of the core media that mediate communication among users. However, perceiving this image-based content is often challenging for people with visual impairments. Although the accessibility features provided by social media platforms are helpful, they do not fulfill the varied needs of people with visual impairments, whose symptoms differ by type and severity. In this project, I explore image enhancement technology as a means for people with mild to moderate vision loss to perceive image-based content.

 
 

Research Question

“How can image enhancement technology, as an accessibility tool, help people with low vision gain better access to image-based content when it is integrated with the current web browsing system?”

When describing visual disability, the word “blindness” is the one most commonly used. A common misconception is that visual impairment is a black-and-white disability. In reality, a large grey area called “low vision” exists between the two extremes of being blind and being fully sighted. According to the WHO (World Health Organization), approximately 1.3 billion people live with some form of vision impairment; among them, about 188.5 million have mild to moderate visual impairment. Vision loss is a complex disability: depending on its type and severity, each individual’s visual condition varies significantly, so many users do not get the full benefit of existing low vision aid tools. To provide an effective solution, diversification of these tools is necessary. Accessibility tools are available for users who prefer perceiving images with their residual vision: a color inverter, a color filter (specific to color blindness), a magnifier, and image-to-text. Except for image-to-text, these tools directly modify the attributes of the image, but they offer only simple modifications that are effective for certain visual impairments. Image enhancement technology, on the other hand, allows image manipulation at a deeper level. One recent study evaluating the usability of an image enhancement system on a mobile device was conducted by Fahao Qiao, a graduate student at Michigan Technological University. In his dissertation published in 2015, he noted the scarcity of research on image enhancement for mobile devices compared to other accessibility tools. Using Qiao’s work as a basis for prototype development, I designed an image enhancement tool for color images in the form of a digital application.
This led me to the question: “How can image enhancement technology, as an accessibility tool, help people with low vision gain better access to image-based content when it is integrated with the current web browsing system?”

 

Solution

I created an add-on app that users can easily install in their web browser. It enables users to build their own photo filters and apply them to images shown in the browser.


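The core mechanism can be sketched as a content script that translates the user’s filter settings into a CSS `filter` string and applies it to every image on the page. This is a minimal illustration, not the shipped code: the function names, the 0–200 slider range (100 = unchanged), and the specific attributes are assumptions.

```javascript
// Hypothetical sketch of the extension's filter logic. Slider values are
// assumed to range 0–200, with 100 meaning "leave the image unchanged".

// Translate slider values into a CSS filter string the browser can apply.
function buildFilterCss({ contrast = 100, brightness = 100, saturation = 100 } = {}) {
  return [
    `contrast(${contrast}%)`,
    `brightness(${brightness}%)`,
    `saturate(${saturation}%)`,
  ].join(" ");
}

// Apply the user's custom filter to every image currently on the page.
function applyFilterToImages(settings, doc = document) {
  const css = buildFilterCss(settings);
  for (const img of doc.querySelectorAll("img")) {
    img.style.filter = css;
  }
}
```

Because the filter is plain CSS, the underlying image data is untouched; the user can tune or disable the filter at any time without reloading the page.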

 

Research

I began looking at this problem by interviewing people with visual impairments, as well as an accessibility technology expert, Shane Laurnitus, who works as a lead developer at the CNIB (Canadian National Institute for the Blind). Various accessibility tools exist to help people with visual impairments perceive digital information. For users who prefer perceiving images with their residual vision, there are several options: color invert, a color filter (specific to color blindness), a magnifier, and image-to-text (Fig 1). Except for image-to-text, these options directly modify the attributes of the image: the color invert and color blind tools change the colors of images shown on the screen, while magnifiers scale the image up. But are these tools enough for people with low vision to perceive images as precisely as fully sighted people do? During both my textual and face-to-face research, I found strong evidence, in Dr. Eli Peli’s scholarly work, that the sharpness, contrast, and resolution of an image play a critical role in increasing visibility. Unfortunately, none of these accessibility tools modifies those attributes.

 
 
Fig 1. Use of the categorized accessibility software by severity of visual impairment.

During the user interviews, I was able to identify the problems people with low vision face when they attempt to perceive images on a device screen, along with the unique strategies each person uses to work around them. Here is some of the information I gathered from actual users (user names have been modified for privacy).

 
 
 

One insight I gained was that they rely on additional software and hardware, and sometimes even on peers and friends, to cover the gaps that accessibility tools cannot help with. When using software, they often encounter difficulties because it is not designed to support people with visual impairments. Modifying the hardware setup also takes too much time (e.g., adjusting monitor settings every time they load an image). And when they ask family, friends, or peers for help, those people are sometimes unavailable. Their core need therefore seems clear: they want more direct control over the photo itself when viewing images online.

 
 

Design Methodology

In this project, my aim is to deliver an actual product that benefits the target user group, which requires a series of continuous iterations. I follow the principles of lean methodology for quick modification of the project scope and redirection, and agile methodology for prototype delivery (see Fig 2). Each iteration cycle is approximately three weeks long, with one week assigned to each stage of “build, measure, and learn.” In the measuring phase, I collected feedback from user testing and interviews with the CNIB community and noted the parts of the prototype to revise in the next iteration. In the learning phase, I planned how to apply the collected feedback technically, often working through materials and tutorials that explain how to make these ideas tangible. In the building phase, I built the prototype incrementally, tracking development progress each day with a burn-down chart.

 
 
Fig 2. Combined methodologies of Agile and Lean

 


Initial sketch

During the initial iteration cycle, I decided to build an add-on plugin for the web browser (Google Chrome). The popup window of a Chrome extension can have various dimensions, so I started with a series of rough sketches to find the best form for the essential features of the app.

One of the essential features of the app is a set of sliders that control attributes of images. According to Choudhury and Medioni’s research paper, contrast and sharpness increase the perceived visibility of an image. Peli’s scholarly article comparing image processing techniques draws a conclusion that supports this: moderately sharpened images with enhanced contrast have the highest perceived visibility among people with low vision. I wanted to check whether these two variables make a difference in the perceived visibility of images in real life, so I decided to have two sliders, adjusting contrast and sharpness, in a narrow popup window.
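Contrast maps directly onto the browser’s built-in `contrast()` CSS filter, but there is no native `sharpen()` filter, so a sharpness slider has to be backed by something like an SVG convolution filter. The sketch below is one assumed way to map a slider value onto a 3×3 unsharp-mask kernel; it is an illustration, not the project’s actual implementation.

```javascript
// Build a 3x3 sharpening kernel from a slider value.
// amount = 0 leaves the image unchanged; larger values sharpen more.
function sharpenKernel(amount) {
  const a = Math.max(0, amount);
  // 4-neighbour Laplacian-based sharpen. The weights sum to 1, so the
  // overall brightness of the image is preserved.
  return [
     0, -a,         0,
    -a,  1 + 4 * a, -a,
     0, -a,         0,
  ];
}

// Wrap the kernel in an inline SVG filter definition, which CSS can then
// reference on an image via: filter: url(#sharpen);
function sharpenFilterSvg(amount) {
  const kernel = sharpenKernel(amount).join(" ");
  return `<svg width="0" height="0"><filter id="sharpen">` +
         `<feConvolveMatrix order="3" kernelMatrix="${kernel}" />` +
         `</filter></svg>`;
}
```

Since the kernel weights always sum to one, moving the sharpness slider changes edge emphasis without shifting the image’s overall brightness, which keeps it independent of the contrast slider.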

 

Prototype V1

Feedback

Once I finished the first prototype, I brought it to the CNIB community and ran four usability tests with people with low vision. Each session was about an hour: 40 minutes of observation and 20 minutes of user interview. One common piece of feedback was that the app needs more sliders so users can create detailed, customized filters. Test participants also mentioned that it is a bit of work to open the popup window and apply the image filter every time they open a new page or refresh an existing one. When I asked them to rate the effectiveness of the image filter on a 1–10 scale, the ratings varied over a wide range, mainly due to the complexity of visual impairment. As mentioned above, visual impairment is not binary but a spectrum, so the ideal solution for each individual varies with the type and severity of their condition.

 
 
Fig 3. The complexity of solutions versus the severity of vision condition.

 

Prototype V2

Process

During the second iteration cycle, I mainly focused on reflecting user feedback. I also tried to refine the visual design, but UI refinement was not the primary focus at this stage; collecting user feedback and establishing the core functionality of the app by “failing fast” was. Due to time restrictions, I was only able to make the changes that were crucial for the app. Here are some of the design choices I made in this iteration.

 
 

1. More customization options.

As mentioned above, providing an ideal solution for each individual with a visual impairment is a very challenging goal. I concluded that giving users a variety of tools to build their own filter would be a better approach to fulfilling the targeted users’ needs.


2. Automatic filter application.

Prototype V2 applies the filter automatically whenever a page is refreshed or newly loaded. Instead of requiring the popup each time, V2 provides a separate settings window for turning the entire app on and off.

3. Increased readability of UI.

V2 loads the settings menu in a separate tab. I used a saturated primary color to make the buttons more visible to people with low vision, and the default size of the UI is larger than usual.
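The automatic application in V2 can be sketched as a content script that reads the saved settings on every page load and re-applies the filter whenever the user changes them. This assumes the Chrome extension `chrome.storage` API; the setting names, defaults, and function names are illustrative, not the actual code.

```javascript
// Hypothetical V2 content script. Setting keys and defaults are assumed.
const DEFAULTS = { enabled: true, contrast: 100, brightness: 100, saturation: 100 };

// Fill in neutral values for any setting the user has not saved yet.
function resolveSettings(stored) {
  return { ...DEFAULTS, ...stored };
}

// Apply (or clear) the stored filter on every image on the page.
function applyStoredFilter(stored, doc) {
  const s = resolveSettings(stored);
  const css = s.enabled
    ? `contrast(${s.contrast}%) brightness(${s.brightness}%) saturate(${s.saturation}%)`
    : "none";
  for (const img of doc.querySelectorAll("img")) {
    img.style.filter = css;
  }
}

// In the browser, this file runs on every page load because it is
// registered under "content_scripts" in the extension manifest. The guard
// lets the same file be unit-tested outside Chrome.
if (typeof chrome !== "undefined" && chrome.storage) {
  chrome.storage.sync.get(null, (stored) => applyStoredFilter(stored, document));
  chrome.storage.onChanged.addListener(() =>
    chrome.storage.sync.get(null, (stored) => applyStoredFilter(stored, document))
  );
}
```

Because the filter is driven by stored settings rather than the popup, the user builds a filter once and it follows them across pages; the on/off toggle simply flips `enabled`.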

 


Feedback

I conducted an hour-long survey with different community members who had not participated in the interview sessions for prototype V1. This time, the usability testing was done anonymously, and I used a paper-based survey instead of in-person interviews in order to yield less biased data. The results of the usability testing are summarized as follows:

  1. The automatic filter application is nice, but it does not really help when viewing thumbnails from a search engine.

  2. Most people were positive about the increased number of filter options. However, some were frustrated by the number of choices given to them, and the sliders are also quite sensitive.

  3. The image enhancement filter often makes text in images less readable.

People also suggested features that could help them see images better: a separate tab, auto-adjusting the background color, and

Prototype V3

Prototype V3 is currently in development. I am implementing new features based on the user feedback above.


 

Bibliography

Asakawa, Chieko, et al. “Aibrowser for Multimedia.” Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility – ASSETS ’07, 2007, doi:10.1145/1296843.1296860.

Beck, Kent, et al. “Manifesto for Agile Software Development.” History: The Agile Manifesto, agilemanifesto.org/.

Bennett, Cynthia L., et al. “How Teens with Visual Impairments Take, Edit, and Share Photos on Social Media.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems – CHI ’18, 2018, doi:10.1145/3173574.3173650.

Choudhury, Anustup, and Gerard Medioni. “Color Contrast Enhancement for Visually Impaired People.” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition – Workshops, 2010, doi:10.1109/cvprw.2010.5543571.

D’Aubin, April. “Working for Barrier Removal in the ICT Area: Creating a More Accessible and Inclusive Canada.” The Information Society, vol. 23, no. 3, 2007, pp. 193–201., doi:10.1080/01972240701323622.

Hersh, Marion, and Michael A. Johnson, eds. Assistive technology for visually impaired and blind people. Springer Science & Business Media, 2010.

Larnitus, Shane. Interview. By Do Eui Park. 22 Oct. 2018.

Leat, Susan J., et al. “Generic and Customised Digital Image Enhancement Filters for the Visually Impaired.” Vision Research, vol. 45, no. 15, 2005, pp. 1991–2007., doi:10.1016/j.visres.2005.01.028.

Luo, Gang, and E. Peli. “Development and Evaluation of Vision Rehabilitation Devices.” 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2011, doi:10.1109/iembs.2011.6091293.

Manduchi, Roberto, and Sri Kurniawan, editors. Assistive Technology for Blindness and Low Vision. CRC Press, Taylor & Francis Group, 2017.

Persson, Hans, et al. “Universal Design, Inclusive Design, Accessible Design, Design for All: Different Concepts—One Goal? On the Concept of Accessibility—Historical, Methodological and Philosophical Aspects.” Universal Access in the Information Society, vol. 14, no. 4, July 2014, pp. 505–526., doi:10.1007/s10209-014-0358-z.

Petrie, Helen, and Nigel Bevan. “The Evaluation of Accessibility, Usability, and User Experience.” Human Factors and Ergonomics The Universal Access Handbook, Nov. 2009, pp. 1–16., doi:10.1201/9781420064995-c20.

Poppendieck, Mary. “Lean software development.” Companion to the proceedings of the 29th International Conference on Software Engineering. IEEE Computer Society, 2007.

Qiao, Fahao, and Jinshan Tang. “A Mobile Image Enhancement Technology for Low-Vision Patients.” Electronic Imaging Applications in Mobile Healthcare, doi:10.1117/3.2204748.ch2.

Voykinska, Violeta, et al. “How Blind People Interact with Visual Content on Social Networking Services.” Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing – CSCW ’16, 2016, doi:10.1145/2818048.2820013.

Whiting, Anita, and David Williams. “Why People Use Social Media: a Uses and Gratifications Approach.” Qualitative Market Research: An International Journal, vol. 16, no. 4, 2013, pp. 362– 369., doi:10.1108/qmr-06-2013-0041.