Monday, September 3, 2012

Reading on paper and reading on the screen

Although nowadays we no longer wonder so much about the differences between reading on paper and on screen, it is still interesting to look back at some studies to get a global view of visual user behavior.

The paper "An Eye Tracking Study on Text Customization for User Performance and Preference" by Luz Rello and myself (Mari-Carmen Marcos), which will be presented at the LA-Web 2012 conference, describes a user study comparing reading performance with user preference in text customization. We study the following parameters: grey scales for the font and the background, color combinations, font size, column width, and spacing of characters, lines, and paragraphs. We used eye tracking to measure the reading performance of 92 participants, and questionnaires to collect their preferences. The study shows correlations for larger contrasts and sizes, but no conclusive evidence for the other parameters. Based on our results, we propose a set of text customization guidelines for reading text on screen, combining both kinds of data.

In the paper we review some interesting previous empirical studies on readability on screen and in print, mainly focused on layout and typography. The earliest studies (from 1929 to 1955), on printed text, considered the following variables: font size, column width, font color, space between lines, and font style. According to these studies, font type does not affect readability. These results were later confirmed using eye tracking.

We found that font size, font type, and paragraph length were the most frequently studied variables concerning readability, but the findings do not fully agree. Larger font sizes (12 or 14 points, depending on the experiment) performed better than smaller ones (8 and 10 points). Moreover, the larger sizes were also preferred in the surveys. Serif typefaces performed better than sans serif ones; however, users reported preferring sans serif.
Reading performance seems to be better for short lines (around 55 characters), but it depends on the user's goal: if they only need to scan a document, long lines are more efficient. There is less related work specifically considering font and background colors and space between lines. Users prefer strong contrasts, as well as moderate italics, regular fonts, and just one color instead of four or six on a website.
For more information about eye tracking studies on reading for UX practitioners, I recommend Jakob Nielsen's blog and his book with Kara Pernice, Eyetracking Web Usability.

You might have heard about the F-shaped pattern (J. Nielsen 2006) and the golden triangle (G. Hotchkiss 2005). Both apply mostly to search engine results pages, but not so much to other websites, because of their display and layout.

Some interesting studies have also been done in journalism, such as those of the Poynter Institute for websites and tablets (2011).

(This post will be updated with more papers in the future)


Monday, June 25, 2012

Social annotation in web search: inattentional blindness

---------------------------------------------------
Aditi Muralidharan; Zoltan Gyongyi; Ed Chi. Social Annotation in Web Search. CHI '12 Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, pages 1085-1094
Full paper   Extended abstract 
---------------------------------------------------

Knowing that other researchers are looking for answers to questions similar to ours is a good signal of the interest of the topic. In this case, Aditi Muralidharan, Zoltan Gyongyi, and Ed Chi have carried out two studies on social annotation in web search using eye tracking technology. Here in Barcelona we have been running related experiments in recent months and have arrived at similar conclusions, not yet published.

Their study (CHI 2012)


1) 11 users perform several personalized tasks. Half of the results have snippets with social annotations; the other half do not. The test is recorded with an eye tracker, and the users watch the recording in a retrospective think-aloud (RTA). The conclusion is that most people did not notice the social annotations, and those who saw them did not pay much attention to them. Why don't people see them? (See the second test.)

2) 12 users (all of whom know each other) perform several tasks on search engine mock-ups. Again, half of the results have snippets with social annotations; the other half do not. The pages are mock-ups because they have been modified to present the social annotations with variations: big/small profile picture, above/below the snippet, first/second position, long/short snippets. The tests were recorded with an eye tracker. The conclusion is that when the picture is big and when the social annotation is above the snippet, people notice it. The reason is that users have a reading pattern on results pages and do not look beyond the elements they recognize as useful in a first scan: title and URL. So new elements like social annotations prompt "inattentional blindness" (as Mack and Rock named it). This phenomenon is widely known; you might have seen the video with the gorilla dancing while other people play basketball... and nobody notices it.
In a future paper, the authors could run more experiments to find out whether changing the style of these snippets would make them more visible.

Our study (We wish CHI 2013  :-)  )


Our study has not been published yet, so I will not report many results here for now. We studied 5 kinds of snippets: Google Places, Google+, Google Author, multimedia, and reviews. We duplicated each results page, removing these rich snippets in one version. We prepared 10 SERPs (along with their 10 duplicated plain-snippet pages). 60 users performed the 10 searches, each seeing 5 pages with rich snippets and 5 pages without the rich version of the snippet. The sessions were recorded with an eye tracker.
The preliminary results show that, in general, there is no significant difference (t-test) between users' visual behavior on pages with rich snippets and on the equivalent pages without them.
The metrics we considered were mainly the fixation duration on the rich snippet (and on its equivalent as a plain snippet) and the time to first fixation on the rich snippet (and on its equivalent as a plain snippet), as well as the click count.
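To give an idea of how such a comparison works, here is a minimal sketch of a two-sample (Welch's) t-test on fixation durations. The numbers are made up for illustration; they are not our actual data, and in the real study we used standard statistical software rather than hand-rolled code:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)

# Hypothetical fixation durations (ms) on the snippet area of interest
rich  = [310, 295, 330, 280, 305, 320, 290, 315]   # SERP with rich snippet
plain = [300, 310, 325, 285, 295, 330, 305, 300]   # same SERP, plain snippet

t = welch_t(rich, plain)
print(round(t, 3))
```

With real data, the t statistic would then be compared against the critical value for the corresponding degrees of freedom (or a statistics package would report the p-value directly); a small |t| like the one here is consistent with "no significant difference".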
We hope to share soon the study with the scientific community and with SEO practitioners.
These heat maps show the fixation duration on the SERP with plain snippets and with "Google+" snippets in 6th and 2nd position. No differences:



And here are some statistical results for the average fixation duration on the studied snippets in a top position (position 2) and in a bottom position (position 6). No significant differences:


Thursday, June 21, 2012

Thinking of buying an eye tracker?

Sometimes colleagues around the world ask me which eye tracker they could buy or rent.
My answer is: "it depends on the kind of study you plan to run".

There are several firms in the market selling eye tracking devices based on infrared (for serious research I do not yet trust other technologies based on recording users' eyes with a webcam). As an infrared-based technology, I know Tobii. I have a Tobii T1750, in fact a second-hand one from 2007 that is no longer on the market, so I cannot be sure which newer models to recommend, but here are some models and the uses that I (and Tobii) recommend for each one:

  • Tobii T60 and T120. Useful for research that aims to study users' behavior on websites, images, and anything that can be shown on a screen. This device embeds the infrared sensors in the screen. Useful for usability studies.
  • Tobii X60 and X120. Useful for research on a screen (external monitor) or on other devices such as mobile phones or tablets if you add a special stand for them. It is a great option because it is lighter and more flexible. If I could buy another one now, this would be my first choice.
  • Tobii T60 XL. Similar to the T60 but wider. Useful for studies that need a big screen. I don't see the point for usability studies.
  • Tobii Glasses. Necessary if the research is about physical objects (supermarket products, museums, etc.). I used them in a study on connected TV and video games. They work properly, but the resolution is lower than in screen-based eye trackers, and the analysis is more complex because of the kind of data they produce: video. They are not convenient for web studies.
About to make your decision? Don't be in a hurry; it is an expensive device with expensive software. Take it easy, check the Tobii website, compare, and let me know if we can plan a joint research project!

Wednesday, January 4, 2012

"Must" readings on eye tracking applied to usability studies

The more theoretical ones (books, articles, and doctoral theses):
  • Jacob, R.; Karn, K. “Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises (Section Commentary)”. In: Hyona, J.; Radach, R.; Deubel, H. (eds.) The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, pp. 573-605, Elsevier Science, http://www.cs.tufts.edu/~jacob/papers/ecem.pdf (2003)
  • Goldberg, J.H.; Wichansky, A.M. “Eye Tracking in Usability Evaluation: A Practitioner's Guide”. In: Hyona, J.; Radach, R.; Deubel, H. (eds.) The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research. Boston, North-Holland / Elsevier (2003).
  • Hassan, Y.; Herrero, V. “Eye Tracking en Interacción Persona-Ordenador”. No solo usabilidad http://www.nosolousabilidad.com/articulos/eye-tracking.htm (2007).
  • Nielsen, J.; Pernice, K.  Eyetracking Web Usability. New Riders Press (2009).
  • Duchowski, A. Eye Tracking Methodology. Springer (2009).
  • Nielsen, J.; Pernice, K. Técnicas de Eyetracking para Usabilidad Web. Anaya Multimedia (2010).
  • Drewes, H.  (2010). Eye Gaze Tracking for Human Computer Interaction, http://edoc.ub.uni-muenchen.de/11591/1/Drewes_Heiko.pdf
Case studies:
Reflections:
Guidelines:
Metrics: