Chris Ridpath, Jutta Treviranus, Patrice L. (Tamar) Weiss
Unlike the typical printed page, documents presented on the World Wide Web are generally designed to include color. The careful use of color can make a document easier to read and navigate, more appealing to the reader, and compliant with Web accessibility guidelines. In contrast, a poor choice of text-background color combination can significantly detract from a document's readability. But what colors should be used for the text and background on web pages?
This poster describes an algorithm that can be used to machine-test the readability of colors used for web pages, and a study undertaken over the Internet to test the algorithm's effectiveness. Our data suggest that as the overall brightness of a color combination increased, user ratings also increased. Users rated light-on-dark color combinations the same as dark-on-light combinations. We also found that our algorithm provided a good indicator of how users would rate various color combinations for readability.
To create an algorithm that can distinguish the readability of web page colors, we chose to test colors based on brightness difference and hue. Web page colors are described in an HTML document by their Red, Green and Blue (RGB) components and we used these values for our algorithm.
Difference In Brightness - To measure the perceived brightness of a color we used an algorithm that performs a linear transformation from RGB values to Luminance, Intensity and Chrominance (YIQ) values. YIQ is the color system used by National Television System Committee (NTSC) broadcasters to optimize the transmission of color pictures for television and to provide downward compatibility with black-and-white television.
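The luminance component of the YIQ transform can be sketched as below. The poster does not list the transform coefficients; the weights used here (299, 587 and 114 per thousand) are the standard NTSC luma coefficients for 0-255 RGB components.

```python
def brightness(rgb):
    """Perceived brightness of a color: the Y (luminance) component
    of the NTSC YIQ transform, for RGB components in 0..255."""
    r, g, b = rgb
    return (299 * r + 587 * g + 114 * b) / 1000

def brightness_difference(text_rgb, background_rgb):
    """Difference in perceived brightness between text and background."""
    return abs(brightness(text_rgb) - brightness(background_rgb))
```

For example, black text on a white background gives the maximum possible brightness difference of 255.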
Difference In Hue - To measure the difference in hue we used the following algorithm:
Maximum((TextR - BackgroundR), (BackgroundR - TextR)) + Maximum((TextG - BackgroundG), (BackgroundG - TextG)) + Maximum((TextB - BackgroundB), (BackgroundB - TextB))
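Since the maximum of a difference and its negation is the absolute value, this formula is the sum of the absolute per-component differences, which a direct sketch makes plain:

```python
def hue_difference(text_rgb, background_rgb):
    """Difference in hue: for each of the R, G and B components, take
    max(text - background, background - text), i.e. the absolute
    component difference, and sum the three results."""
    return sum(max(t - b, b - t) for t, b in zip(text_rgb, background_rgb))
```

The value ranges from 0 (identical colors) to 765 (e.g., black text on a white background).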
For our study we recruited a convenience sample of 149 volunteer subjects (65 males and 84 females) from various Internet mailing lists and listservs over a period of four weeks to take the test. The test used in the study can be viewed at http://snow.utoronto.ca/readtest/.
The study presented a series of sample text blocks that used different background / text color combinations (see Figure 1 below). Users were asked to rate the text images for readability using a sliding scale (visual analog scale). At the end of the test, users were asked several questions about their visual ability (e.g., acuity, field, color vision), personal characteristics (e.g., age, sex), and their computer system (e.g., monitor quality).
The sample images were rated by our algorithm and placed into 7 categories ranging from 'impossible to read' to 'effortless to read'. Then the subject ratings were compared to the algorithm ratings. Figure 2 (below) shows a plot of the median subject ratings versus category. It can be seen that as the image category increases, so does the user rating.
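Combining the two measures into a pass/fail judgement can be sketched as follows. The poster does not give the boundaries of the seven categories, so the thresholds below (125 for brightness difference, 500 for hue difference) are illustrative values drawn from related W3C accessibility-evaluation guidance, not the study's exact cut-offs.

```python
# Illustrative thresholds; the exact category boundaries used in the
# study are not given here.
BRIGHTNESS_THRESHOLD = 125
HUE_THRESHOLD = 500

def brightness(rgb):
    """Y (luminance) component of the NTSC YIQ transform, RGB in 0..255."""
    r, g, b = rgb
    return (299 * r + 587 * g + 114 * b) / 1000

def is_readable(text_rgb, background_rgb):
    """Pass/fail readability check: the combination must exceed both
    the brightness-difference and hue-difference thresholds."""
    db = abs(brightness(text_rgb) - brightness(background_rgb))
    dh = sum(max(t - b, b - t) for t, b in zip(text_rgb, background_rgb))
    return db >= BRIGHTNESS_THRESHOLD and dh >= HUE_THRESHOLD
```

Under these assumed thresholds, black on white passes, while mid-gray on white fails on both measures' margins.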
These results demonstrate that it is possible to judge the readability of web pages based on their color specifications. However, a judgement based only on brightness difference and color difference is not entirely accurate; other factors that influence readability call for further investigation.