Use Patterns of Interactive Graphics: A Case Study of a New York Times’ College Debt Graphic

Data visualizations, or “computer-based, interactive, visual representations of data,”1 have become a mainstay in journalistic storytelling. Using data visualization, journalists can provide the user with an almost endless amount of background information for a given story. The user can then choose which information to explore at their leisure. This novel form of storytelling has been used to cover stories in all genres of news, including science, economics, international affairs, and sports.

Although the use of data visualization in journalism is on the rise, academic research has not kept pace. Specifically, we have little collective understanding of the ways in which individuals navigate and use data visualizations. As Jennifer George-Palilonis, one of the leading journalism educators in the field of interactive design, explains, “most of the research surrounding interactive graphics on news sites comes in the form of content analyses that address how, when, and how often they are used.”2 The current study aims to add depth to this nascent research by examining how users browse within a professionally designed news data visualization.

Beyond being popular in journalism circles, data visualizations often go viral on social media platforms, such as Twitter and Facebook. For example, “The Budget Puzzle,” a popular interactive graphic by The New York Times that allowed the user to try to fix the federal budget deficit (fig. 1), was shared or discussed more than eleven thousand times on Twitter in its first week online. It also received over one million page views during that week. While the wild popularity of visualizations can be seen as a positive, we wonder whether the way in which a user enters a visualization could affect the way in which he or she uses it. For example, a user could have come to The New York Times’ website, found the story about the federal budget, read it, and then used “The Budget Puzzle” interactive. Conversely, an individual could have followed a direct link to “The Budget Puzzle” visualization from one of the eleven thousand mentions on Twitter. Although both cases result in the individual using “The Budget Puzzle,” reading the article could dramatically change the way in which the user searches for information within the visualization. In the current study, we examine whether reading an article before interacting with a data visualization primes the user and alters the way he or she uses the visualization.

Figure 1 Thumbnail of “The Budget Puzzle” from The New York Times


In order to understand how individuals use data visualizations, we used screen-capturing software to record the ways a convenience sample of students (n = 22) at a large southern university interacted with a data visualization. Specifically, students were asked to examine a data visualization by The New York Times about student debt, entitled “Student Debt at Colleges and Universities across the Nation” (fig. 2). The next section of this paper will discuss the previous research on data visualizations and priming theory. The results of the study will be discussed after the methods are explained. Finally, we will discuss the implications of our research on journalists’ use of data visualization.

Figure 2 Thumbnail of “Student Debt at Colleges and Universities Across the Nation” from The New York Times


Data Visualization

Information graphics (infographics) have long been used for conveying journalistic information.3 The creation of USA Today in the 1980s normalized the use of infographics in journalism. By 2000, the majority of daily newspapers were using at least three infographics every day, and infographics were regularly used on the front page.4 Since then, the fields of journalism design and data visualization have continued to grow and change.5 As Washington Post information designer Wilson Andrews explains, “The kinds of graphics that are now being done, especially online, are on another level than what was being produced several years ago.”6

Specifically, Andrews is referring to the rise of online data visualizations in journalism. Unlike static graphics, data visualizations can be altered based on the choices of the user. For example, the user might be able to zoom in on the data or filter the data that is displayed (e.g., display data only from a certain year). This property of online data visualizations results in the journalist being able to store a much greater amount of information in a single graphic than when working with a static graphic. Users might not be able to see everything upon first glance, but they have the ability to drill down into the data. In most data visualizations, the user can quickly toggle between a broad, abstract view of the data—good for recognizing relationships—and information related to specific cases—good for understanding nuance.
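To make this filter-and-drill-down interaction concrete, the sketch below shows, in schematic form, how a visualization might narrow a dataset of institutions based on user selections and then expose the full record for a single data point. It is our illustration only, not the code behind any published graphic, and the field names (e.g., "control," "avg_debt") are hypothetical.

```python
# A minimal sketch of interactive filtering and drill-down.
# Illustration only; field names and values are hypothetical.

colleges = [
    {"name": "College A", "control": "public",  "enrollment": 30000, "avg_debt": 21000},
    {"name": "College B", "control": "private", "enrollment": 4000,  "avg_debt": 34000},
    {"name": "College C", "control": "public",  "enrollment": 12000, "avg_debt": 18000},
]

def filter_colleges(data, control=None, min_enrollment=None):
    """Return only the records that match the user's current filter choices."""
    result = data
    if control is not None:
        result = [c for c in result if c["control"] == control]
    if min_enrollment is not None:
        result = [c for c in result if c["enrollment"] >= min_enrollment]
    return result

# Broad, abstract view: a summary across all currently filtered records.
publics = filter_colleges(colleges, control="public")
print(sum(c["avg_debt"] for c in publics) / len(publics))  # mean debt, public schools

# Drill-down view: the full detail for one selected data point.
print(next(c for c in publics if c["name"] == "College A"))
```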

Unfortunately, we have very little understanding of how individuals use journalism-based data visualizations. For example, we don’t know if the average user can effectively understand and utilize the design conventions that have evolved in data visualization design (e.g., using multiple filters simultaneously to control the display of data). The few studies that examined news data visualizations have focused on either describing what journalists are doing with data visualization7 or examining the effects of data visualization on the user.8 While these are both important areas of research, it is also necessary to understand the nuanced ways in which individuals move through the graphic.

The findings from previous eye tracking studies provide valuable insights for the current research. For example, researchers using eye tracking observed differences in the amount of scanning and reading between old (newspapers) and new (online news) media.9 Continued research looked at where readers started and how they moved through a newspaper layout and distinguished three distinct types of news reading patterns.10 Another study found evidence that design elements such as pictures and color were the first to attract the eye.11 More recent research has looked at infographics specifically and investigated their relation to the accompanying text. Results revealed that infographics placed spatially close to the text and organized in a serial order, such that one element follows the next, enhanced reading and integration of the two sources. The authors concluded that readers want to be guided through information and react positively to an ordered flow, something interactive data visualizations generally do not provide.12

The addition of interactivity presents a challenge to news consumers and designers. Interactivity refers to the user’s ability to control, configure, and actively engage in information consumption.13 As news organizations moved into new media formats, the role of the interactive features of online news sites (e.g., search functions, online discussion boards) in news use became of interest.14 On the one hand, researchers are interested in users’ motivations for online news consumption. For example, one study found weak support for information-seeking as a motivator for the use of interactive functions on online news sites.15 Other research has focused on the effects of interactivity. For instance, researchers have found that comprehension increases with site interactivity even after controlling for the effects of motivation.16 However, too much interactivity, especially for inexperienced users, can lead to increased cognitive processing and confusion, resulting in lower recall and negative arousal.17 As such, interactivity is considered on a continuum, with higher levels receiving less attention from viewers.18

Data visualizations, by their nature, turn over some editorial control to the user. Users act as additional gatekeepers for the information and are charged with the task of exploring the data and finding the information they need to understand the story. Therefore, the journalist must design the visualization in a way that reduces complexity and guides the user to the necessary information. In the current study, we seek to understand how individuals use a professionally designed data visualization. Specifically, we are interested in understanding what features of the data visualization individuals use, how much they use those features, and how deep they drill down into the visualization.

Priming and Selective Scanning

A main concern of mass media is informing the public about social issues that individuals cannot personally experience.19 Individuals make sense of news content through three related processes: agenda-setting, priming, and framing. Agenda-setting refers to the news media’s emphasis on issues through distribution practices, which creates perceptions of importance in the viewer. Priming, considered an extension of agenda-setting, occurs when viewers base subsequent judgments on media messages that exposure has made temporally and chronically accessible because of their perceived importance.20 The media do not only signal which issues are important; they also frame information to favor a particular viewpoint. Framing a story involves presenting information in a way that influences how an audience interprets it.21 This study considers the practical implication of priming the use of a graphic with a framed news story, while also addressing the visual design as a framing device. Little research exists on the relationship between priming with media information and subsequent information-seeking within data visualizations.

The study of priming effects, or “the process by which activated mental constructs can influence how individuals evaluate other concepts and ideas,”22 has a long history in the fields of journalism, communication, and cognitive psychology. Priming occurs when a stimulus or event is presented prior to exposure to a second stimulus or event. Media priming, specifically, refers to the short-term influence of media exposure, in which the content affects subsequent behaviors or judgments.23 Beginning with violent media content, media priming research has linked violent content with aggressive cognitions, gender and race portrayals with stereotyping, and news coverage with political opinions.24 As Scheufele and Tewksbury explain in relation to journalistic media, “priming occurs when news content suggests to news audiences that they ought to use specific issues as benchmarks for evaluating [an issue].”25 In this regard, priming could affect the way in which individuals search for information within a data visualization.

In prior research looking at online news selection, researchers compared the effects of three theories that drive selectivity on the web: accessibility from priming, instrumental news utility, and personal importance. Although priming was not directly supported, the research found individuals searched for more news if no prior information was received and the issue was personally important.26 The results suggest selective scanning practices may have utility as instruments for gaining new information, maintaining vigilance of the media, or simply supporting an issue with high personal importance.

The current study used a graphic thought to be personally important to the study sample (i.e., student debt) in an attempt to maximize observable differences in activity. Interactive graphics promise new information; thus, priming with a news story should not necessarily affect the amount of information-seeking but, as this study explores, may change which information is sought.

We propose the news story is a guide, providing conceptualizations of the variables included in the graphic and examples of how manipulations produce different results. The variables highlighted (or primed) in the article could affect how the user utilizes the graphic. For example, an individual primed to think about foreign policy might use “The Budget Puzzle” graphic differently than an individual primed to think about domestic policy. In either case, the individual will engage in selective scanning processes, in which he or she picks and chooses information based on the primed issue (i.e., foreign policy or domestic policy).27

We argue that the way in which many data visualizations are presented on the Internet might result in the type of priming effects discussed above. News organizations typically create data visualizations in tandem with a print story, yet the user can usually access the data visualization in one of two ways: going to the story and clicking a link to the data visualization or going straight to the webpage hosting the data visualization. This creates two unique use cases (i.e., the individuals seeing the story and the individuals not seeing the story). The individuals in the former case could be primed to evaluate the information in the data visualization using issues and ideas raised in the article, while the individuals in the latter case will examine the visualization using other internal cues (i.e., personal criteria for relevance, importance, and interest).

Due to the lack of previous research linking priming to the use of interactive data visualizations, we pose the following research questions:

RQ1: How do individuals use the various features included in a data visualization?

RQ2: Do priming effects—via the reading of an article—alter the way in which participants use the data visualization?

Methods

Sample

Students enrolled in undergraduate journalism courses at a large southeastern university were invited to participate in the study in return for extra credit. A total of twenty-two students (54.5% male) participated. The mean age of the sample was 21.5 years old, with a range of nineteen to forty years old. Senior-level students represented a majority of the sample (64% seniors; 18% juniors; 18% other). Sixty-eight percent of the sample identified as white (18% African-American; 14% other). All participants acknowledged their informed consent before beginning the experiment.

Procedures

A trained graduate student (lab assistant) met participants when they arrived at the lab. The lab assistant provided the informed consent form, which the participants read and signed. The lab assistant then showed each participant to a computer and asked him or her to fill out the pre-stimulus questionnaire.28 After the participant completed the pre-stimulus questionnaire, the lab assistant started the screen-capturing software, which recorded all activity on the screen as the individual used the data visualization.

The participants were then randomly assigned to one of two conditions—primed or not primed. In both conditions, the participant was asked to view a data visualization about student loan debt. The data visualization, entitled “Student Debt at Colleges and Universities across the Nation,” was created and published by The New York Times. The graphic and associated article (first in a series titled “Degrees of Debt”) were originally published on May 12, 2012.29 In the primed condition, the participant was first shown an edited version of the print news story associated with the “Student Debt” data visualization. The article specifically focused on the disparity in student debt between public and private universities. When originally published on The New York Times’ website, the article contained a link to the data visualization. The visualization was housed on a unique webpage without any other news content. Individuals could get to the data visualization either by directly linking to its unique page or by going to the article and then linking to the visualization. Our design emulated these two possibilities.

Participants were encouraged to take as much time with the materials as needed. When they finished viewing the visualization, the lab assistant returned and turned off the screen-capturing software. The participant was then asked to complete a post-stimulus questionnaire. Once they had finished the post-stimulus questionnaire, their participation in the study was complete; they were thanked for their time and left the lab.

Measures

The screen-capturing software recorded the screen as the individual used the graphic. In order to sort and examine individuals’ use behaviors, we identified and coded for thirty-two unique tasks that the user could perform within the graphic. The list of tasks included items such as hitting the play button on the chart, typing in and searching for a particular institution in the data, switching between the chart and map views, and using the text boxes to enter debt amount and graduation year. We also included each of the various filtering options as unique tasks in the coding. The participant could filter the data based on institutional characteristics including public or private control, enrollment size, graduation rate, share of graduates with debt, and athletic conference. Finally, we considered each way in which the individual could interact with the main chart as a unique task (i.e., hovering over a data point, clicking on a data point to reveal a pop-up window with additional information, and zooming in and out on the chart).

Two researchers analyzed the screen capture videos. They identified and coded the pattern of mouse-overs and mouse clicks for each task. Mouse-overs were distinguished by an underline on text or an outline around the object the cursor hovered over. Clicks created a change in the graphic, such as opening menu options or revealing data points within the chart. Tasks were coded chronologically with start times to determine time spent on and between tasks. Sub-filters for the institutional characteristics were aggregated under the main characteristic to simplify the analyses. Mouse-overs and clicks were counted together as indicators of engagement.30
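As an illustration of how such coded observations might be represented, the sketch below stores each mouse-over or click as a task event with a start time and counts them together as a single engagement indicator per participant. The event structure and task labels are hypothetical; this is not the coding instrument used in the study.

```python
# A minimal sketch of one way to represent and aggregate coded use behaviors.
# Hypothetical structure and task labels; not the study's actual coding sheet.

from dataclasses import dataclass

@dataclass
class TaskEvent:
    participant_id: int
    task: str          # one of the coded tasks, e.g., "play_button"
    action: str        # "mouseover" or "click"
    start_time: float  # seconds from the start of the session

events = [
    TaskEvent(1, "play_button", "click", 4.2),
    TaskEvent(1, "data_point", "mouseover", 11.7),
    TaskEvent(1, "filter_public_private", "click", 25.0),
    TaskEvent(2, "play_button", "click", 3.1),
]

def engagement_counts(events):
    """Mouse-overs and clicks counted together, per participant."""
    counts = {}
    for e in events:
        counts[e.participant_id] = counts.get(e.participant_id, 0) + 1
    return counts

print(engagement_counts(events))  # {1: 3, 2: 1}
```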

Results

To answer RQ1, we looked at the way in which the full sample used the data visualization. We present a descriptive look at their use patterns. The sample spent an average of 2:52 minutes using the visualization (SD = 1:40) with a range of 31 seconds to 6:02. During that time, the average participant completed 11 tasks (SD = 7.69).

As can be seen in Figure 3, the tasks most users completed included pushing the play button on the chart (95%), examining individual data points on the chart (84%), exploring the category filters on the right side of the chart (any filter = 78%; public/private = 68%; enrollment size = 63%; athletic conference = 58%; graduation rate = 53%),31 typing a school name into the text box above the chart (64%), and filtering the data by graduation rate (50%).32 The most commonly completed tasks match the most common use pattern, which included hitting the play button, then interacting with data points on the graph, then searching for an institution using the text entry box, and then clicking the public versus private filter.

Figure 3 Tasks completed


In contrast to the commonly completed tasks, a number of parts of the data visualization received little use. For example, the zoom button on the chart was rarely used; only three people in our sample used it at any point. The slider, which controlled the year of data being shown, was used by only 27% of the sample. The athletic conference filter was also an underutilized feature of the visualization. While more than half the users clicked on the athletic conference filter, only two users actually filtered the data by using one of the selections (e.g., SEC or Big Ten). Based on these findings, we can be sure that most users saw the filter; they just chose not to use it. The other filters showed similar patterns. For example, only 27% of the sample used the “share of graduates with debt” filter, even though 58% clicked to reveal the different categories (e.g., “0-25% of graduates”). The enrollment size filter was likewise clicked by 63% of the sample, but only 32% actually filtered the data based on enrollment size. This pattern speaks to the possibility that individuals were just exploring the graphic and not actually drilling down into the data in order to gain understanding.
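The task-completion shares reported above and in Figure 3 can be computed directly from the coded events. The sketch below, using made-up events and hypothetical task names, shows one way to calculate the percentage of the sample that completed each task at least once.

```python
# A minimal sketch of computing task-completion percentages.
# The coded events here are invented for illustration.

from collections import defaultdict

coded = [  # (participant_id, task, start_time_in_seconds)
    (1, "play_button", 4.2), (1, "data_point", 11.7), (1, "school_search", 40.3),
    (2, "play_button", 3.1), (2, "data_point", 9.8),
    (3, "play_button", 5.0),
]

participants = {pid for pid, _, _ in coded}

completed_by = defaultdict(set)  # task -> set of participants who completed it
for pid, task, _ in coded:
    completed_by[task].add(pid)

for task, pids in sorted(completed_by.items()):
    print(f"{task}: {100 * len(pids) / len(participants):.0f}% of sample")
```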

In looking at how individuals used the main chart, we can again see that individuals generally did not drill too deep into the data. The average user browsed through the chart, occasionally revealing additional information by scrolling over a data point. The average user scrolled over 6.21 data points. Yet, he or she only clicked to reveal the full information for a college or university 2.95 times on average. The user had access to additional information on all the colleges and universities but again was not motivated to dig into the data.

While the descriptive statistics for the full sample are telling, the variation between conditions also informs us about the way in which individuals use data visualizations. Comparing the descriptive statistics for the two conditions reveals two primary findings related to RQ2. First, individuals in the primed condition (i.e., those who were shown the article before interacting with the data visualization) showed more engagement with the data visualization. For example, individuals in the primed condition spent slightly more time using the data visualization (primed M = 3:01; non-primed M = 2:49). They also completed four more tasks on average than individuals in the non-primed condition (primed M = 13.4; non-primed M = 9.4).

Second, individuals in the primed condition engaged in more in-depth use of the visualization. In both conditions (i.e., primed and non-primed), the participants performed basic tasks within the visualization (e.g., hitting the play button), but the individuals in the primed condition showed a greater willingness to explore the visualization and drill down into the data presented. The dichotomy between the two conditions is most apparent in the use of filters. Individuals in the primed condition used the filters to organize the data in much greater numbers than individuals in the non-primed condition. For example, 67% of users in the primed condition filtered the data based on graduation rates, while only 38% of individuals did so in the non-primed condition. The individuals in the primed condition also scrolled over (M = 6.75) and clicked on (M = 3.0) more data points in the chart than individuals in the non-primed condition (scrolled M = 5.44; clicked M = 2.88). This finding again demonstrates that the primed users were more willing to drill down into the information presented in the visualization.
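For readers who want to see the shape of this comparison, the sketch below computes group means and percentages for hypothetical participant records. The numbers are invented for illustration, and, like the study itself, the sketch stops at description rather than hypothesis testing.

```python
# A minimal sketch of the descriptive primed vs. non-primed comparison.
# Participant records are invented for illustration only.

participants = [
    {"condition": "primed",     "tasks": 15, "seconds": 190, "used_grad_rate_filter": True},
    {"condition": "primed",     "tasks": 12, "seconds": 172, "used_grad_rate_filter": True},
    {"condition": "non-primed", "tasks": 9,  "seconds": 160, "used_grad_rate_filter": False},
    {"condition": "non-primed", "tasks": 10, "seconds": 178, "used_grad_rate_filter": True},
]

def condition_summary(rows, condition):
    """Group means and filter-use percentage for one condition."""
    group = [r for r in rows if r["condition"] == condition]
    n = len(group)
    return {
        "mean_tasks": sum(r["tasks"] for r in group) / n,
        "mean_seconds": sum(r["seconds"] for r in group) / n,
        "pct_grad_rate_filter": 100 * sum(r["used_grad_rate_filter"] for r in group) / n,
    }

for cond in ("primed", "non-primed"):
    print(cond, condition_summary(participants, cond))
```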

Discussion

This study, like any single study, is limited in numerous ways. The sample is relatively small and is a convenience sample of college students. Although college students are one of the groups most likely to get news online, we would suggest that future researchers use a more diverse sample. The sample size also limited our ability to expand beyond descriptive comparison into the area of hypothesis testing. We would suggest that future research use our study as a pilot study from which to derive testable hypotheses. This study was also limited by using only one idiosyncratic data visualization. The visualization used, “Student Debt at Colleges and Universities Across the Nation,” was a professionally designed visualization published by The New York Times, but it is not representative of all data visualizations. Future research must expand on the current study by looking at how individuals use other types of data visualizations.

This study provides a first look at how individuals use journalism-based data visualizations. The results demonstrate that individuals are interested in this novel form of journalistic storytelling; most participants spent about three minutes using the data visualization. The results also showed, however, that individuals were not motivated to drill down in order to seek a greater understanding of the data. The participants in our study played with the obvious parts of the graphic (e.g., hitting the play button and entering a school name in the textbox) and did not use the available filters, which could have given them a greater understanding of the types of colleges that result in larger student loan debt (i.e., private schools with lower graduation rates). Journalists should be cognizant of this finding when considering using data visualization. Specifically, they must remember that the user must be highly motivated in order to get the most out of a data visualization.

We also found that the way in which journalists present a data visualization can alter the way in which users navigate it. Specifically, we asked whether individuals’ search behaviors could be primed by reading an article associated with the graphic before using the graphic. Our results indicate that individuals engaged in different search patterns depending on whether or not they saw the article. Individuals in the primed condition (i.e., with the article) showed greater engagement with the data visualization and were more willing to drill down into the data. This is an important finding considering that most journalism organizations, including the organizations best known for data visualization (i.e., The New York Times and The Guardian), regularly post their data visualizations on a separate webpage as opposed to embedding them in an article. Either approach (i.e., embedded or stand-alone) could be effective depending on the story being told. If the journalist wants the user to see specific trends or relationships in the data, it might be better to embed the visualization; whereas, if the journalist is including the visualization only to provide background information and there is no specific information he or she wants to convey with it, it might be better to keep it as a stand-alone product. Regardless, the journalist has to remember that small design decisions could have large effects on the ways in which the user engages with a data visualization.

Future research on data visualization can take many avenues from this study. User motivations for attending to or engaging with a graphic should be explored. Beyond reader interest in the content or topic, which our study attempted to heighten by using a story centered on college debt, this case study suggests that complicated designs or unclear directions deter exploration. The potential of design features such as size, placement, or color to direct and attract a reader’s attention should be examined. Individual differences could also be considered, as those with different information-gathering, processing, and seeking skills and habits may engage with graphics differently. Future studies can provide better information on who is more likely to access, engage with, and be affected by a data visualization.

The current research should also be expanded by moving into the area of effects research focused on recall, comprehension, persuasion, or credibility. Examining such criterion variables would expand our understanding of what individuals learn from data visualizations. Furthermore, combined with user characteristics and motivations, effects research could expose design considerations and audience profiles, which could inform academic research and journalism practice.

Although the findings of this study add to our understanding of the use of data visualizations, eye-tracking could provide rich detail about information that was viewed but dismissed. The current research is unable to differentiate unviewed from dismissed parts of the graphic. While several studies have successfully used eye-tracking devices to measure the use of design features33 or design effects34 and the cognitive abilities associated with using a graphic,35 only static graphics have been used. Future research should replicate our design using eye-tracking to understand individuals’ use of data visualizations.

Technological advances in online communication create new techniques for telling journalistic stories every day. While journalists have been busy creating new tools, the academy has not kept pace in understanding whether the population is literate in them. More specifically, in the realm of data visualization, we have little knowledge about individuals’ ability to effectively use and learn from data visualizations. Therefore, the academy must work to provide journalists with best practices for using this new storytelling technique. In this study, we looked at one aspect of how data visualizations are presented. Future research must continue down this road, examining other attributes of data visualizations’ design and presentation.

Bibliography

Allen, David G., Jonathan E. Biggane, Mitzi Pitts, Robert Otondo, and James Scotter. “Reactions to Recruitment Web Sites: Visual and Verbal Attention, Attraction, and Intentions to Pursue Employment.” Journal of Business and Psychology 28, no. 3 (2012): 263-85.

Bao, Beibei, and Jefferson Mok. “Where the Jobs Are.” Columbia Journalism Review, February 18, 2013. http://www.cjr.org/between_the_spreadsheets/between_the_spreadsheets_wnyc_jobs.php

Bucy, Erik P. “The Interactivity Paradox: Closer to the News but Confused.” In Media Access: Social and Psychological Dimensions of New Media Use, edited by Erik P. Bucy and John E. Newhagen, 47-72. Mahwah, New Jersey: Lawrence Erlbaum and Associates, 2004.

Bucy, Erik P., and Chen-Chao Tao. “The Mediated Moderation Model of Interactivity.” Media Psychology 9, no. 3 (2007): 647-72.

Burmester, Michael, Marcus Mast, Ralph Tille, and Wibke Weber. “How Users Perceive and Use Interactive Information Graphics: An Exploratory Study.” Paper presented at the Information Visualisation (IV) 2010 14th International Conference, London, July 26-29, 2010.

Card, Stuart K., Jock Mackinlay, and Ben Shneiderman. Readings in Information Visualization: Using Vision to Think. San Francisco: Morgan Kaufmann, 1999.

Chung, Deborah S., and Chan Yun Yoo. “Audience Motivations for Using Interactive Features: Distinguishing Use of Different Types of Interactivity on an Online Newspaper.” Mass Communication and Society 11, no. 4 (2008): 375-97.

Domke, David, Dhavan V. Shah, and Daniel B. Wackman. “Media Priming Effects: Accessibility, Association, and Activation.” International Journal of Public Opinion Research 10, no. 1 (1998): 51-74.

Eitel, Alexander, Katharina Scheiter, Anne Schüler, Marcus Nyström, and Kenneth Holmqvist. “How a Picture Facilitates the Process of Learning from Text: Evidence for Scaffolding.” Learning and Instruction 28 (2013): 48-63.

George-Palilonis, Jennifer. The Multimedia Journalist: Storytelling for Today’s Media Landscape. New York: Oxford University Press, 2012.

George-Palilonis, Jennifer, and Mary Spillman. “Interactive Graphics Development: A Framework for Studying Innovative Visual Story Forms.” Visual Communication Quarterly 18, no. 3 (2011): 167-77.

Holmqvist, Kenneth, Jana Holsanova, Maria Barthelson, and Daniel Lundqvist. “Reading or Scanning? A Study of Newspaper and Net Paper Reading.” In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, edited by Jukka Hyona, Ralph Radach and Heiner Deubel, 657-70. Amsterdam: Elsevier Science Ltd., 2003.

Holmqvist, Kenneth, Constanze Wartenberg, and Lundagård Kunsghuset. “The Role of Local Design Factors for Newspaper Reading Behaviour—An Eye-Tracking Perspective.” Lund University Cognitive Studies 127 (2005): 1-21.

Holsanova, Jana. “Cognition, Multimodal Interaction and New Media.” In Hommage à Wlodek: Philosophical Papers Dedicated to Wlodek Rabinowicz, edited by T. Rønnow-Rasmussen, D. Egonsson, J. Josefsson, and B. Petersson, 1-14. Lund, Sweden: Lund University, 2007.

———. “Entry Points and Reading Paths on Newspaper Spreads: Comparing a Semiotic Analysis with Eye-Tracking Measurements.” Visual Communication 5, no. 1 (2006): 65-93.

Holsanova, Jana, Nils Holmberg, and Kenneth Holmqvist. “Reading Information Graphics: The Role of Spatial Contiguity and Dual Attentional Guidance.” Applied Cognitive Psychology 23, no. 9 (2009): 1215-26.

———. “Tracing Integration of Text and Pictures in Newspaper Reading.” Lund University Cognitive Studies 125 (2005).

Iyengar, Shanto, Mark D. Peters, and Donald R. Kinder. “Experimental Demonstrations of the ‘Not-So-Minimal’ Consequences of Television News Programs.” The American Political Science Review 76, no. 4 (1982): 848-58.

Johansson, Roger, Jana Holsanova, and Kenneth Holmqvist. “The Dispersion of Eye Movements During Visual Imagery Is Related to Individual Differences in Spatial Imagery Ability.” In Expanding the Space of Cognitive Science: Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, edited by L. Carlson, C. Hölscher, and T. Shipley, 1200-05. Austin, TX: Cognitive Science Society, 2011.

Kim, Young Mie. “Where Is My Issue? The Influence of News Coverage and Personal Issue Importance on Subsequent Information Selection on the Web.” Journal of Broadcasting & Electronic Media 52, no. 4 (2008): 600-21.

Kosicki, Gerald M., and Jack M. McLeod. “Learning from Political News: Effects of Media Images and Information-Processing Strategies.” In Mass Communication and Political Information Processing, edited by S. Kraus, 69-83. Hillsdale, NJ: Lawrence Erlbaum Associates, 1990.

Larsson, Anders Olof. “Interactive to Me—Interactive to You? A Study of Use and Appreciation of Interactivity on Swedish Newspaper Websites.” New Media & Society 13, no. 7 (2011): 1180-97.

Martin, Andrew, and Andrew W. Lehren, “A Generation Hobbled by the Soaring Cost of College.” The New York Times, May 12, 2012. http://www.nytimes.com/2012/05/13/business/student-loans-weighing-down-a-generation-with-heavy-debt.html.

Reavy, Matthew M. “Rules and the Real World Examination of Information Graphics in Times and Newsweek.” Visual Communication Quarterly 10, no. 4 (2003): 4-10.

Roskos-Ewoldsen, David R., Mark R. Klinger, and Beverly Roskos-Ewoldsen. “Media Priming: A Meta-Analysis.” In Mass Media Effects Research: Advances through Meta-Analysis, edited by Raymond W. Preiss, Barbara M. Gayle, Nancy Burrell, Mike Allen, and Jennings Bryant, 53-80. New York: Psychology Press, 2007.

Roskos-Ewoldsen, David R., Beverly Roskos-Ewoldsen, and Francesca D. Carpentier. “Media Priming: An Updated Synthesis.” In Media Effects: Advances in Theory and Research, edited by J. Bryant and M. B. Oliver, 74-93. New York: Routledge, 2009.

Sanders-Jackson, Ashley N., Joseph Cappella, Deborah Linebarger, Jessica Taylor-Piotrowski, Moira O’Keeffe, Andrew Strasser, and Caryn Lerman. “Visual Attention to Antismoking PSAs: Smoking Cues Versus Other Attention-Grabbing Features.” Human Communication Research 37, no. 2 (2011): 275-92.

Scheufele, Dietram A., and David Tewksbury. “Framing, Agenda Setting, and Priming: The Evolution of Three Media Effects Models.” Journal of Communication 57, no. 1 (2007): 9-20.

Thomas, Jesse. “Meet the Young Designer Behind the Washington Post’s Infographics.” Forbes, August 29, 2011. http://www.forbes.com/sites/jessethomas/2011/08/29/meet-the-young-designer-behind-the-washington-posts-infographics/.

Tremayne, Mark. “Lessons Learned from Experiments with Interactivity on the Web.” Journal of Interactive Advertising 5, no. 2 (2005): 40-46.

———. “Manipulating Interactivity with Thematically Hyperlinked News Texts: A Media Learning Experiment.” New Media & Society 10, no. 5 (2008): 703-27.

Tremayne, Mark, and Sharon Dunwoody. “Interactivity, Information Processing, and Learning on the World Wide Web.” Science Communication 23, no. 2 (2001): 111-34.

Usher, Nikki. “Interactive Visual Argument: Online News Graphics and the Iraq War.” Journal of Visual Literacy 28, no. 2 (2009): 116-26.

Utt, Sandra H., and Steve Pasternak. “Update on Infographics in American Newspapers.” Newspaper Research Journal 21, no. 2 (2000): 55-66.

 

Notes
  1. Stuart K. Card, Jock Mackinlay, and Ben Shneiderman, Readings in Information Visualization: Using Vision to Think (San Francisco: Morgan Kaufmann, 1999), 7.
  2. Jennifer George-Palilonis and Mary Spillman, “Interactive Graphics Development: A Framework for Studying Innovative Visual Story Forms,” Visual Communication Quarterly 18, no. 3 (2011): 168.
  3. Matthew M. Reavy, “Rules and the Real World Examination of Information Graphics in Times and Newsweek,” Visual Communication Quarterly 10, no. 4 (2003): 4.
  4. Sandra H. Utt and Steve Pasternak, “Update on Infographics in American Newspapers,” Newspaper Research Journal 21, no. 2 (2000): 58-59.
  5. Beibei Bao and Jefferson Mok, “Where the Jobs Are,” Columbia Journalism Review, February 18, 2013, http://www.cjr.org/between_the_spreadsheets/between_the_spreadsheets_wnyc_jobs.php.
  6. Jesse Thomas, “Meet the Young Designer Behind the Washington Post‘s Infographics,” Forbes, August 29, 2011, http://www.forbes.com/sites/jessethomas/2011/08/29/meet-the-young-designer-behind-the-washington-posts-infographics/.
  7. Nikki Usher, “Interactive Visual Argument: Online News Graphics and the Iraq War,” Journal of Visual Literacy 28, no. 2 (2009).
  8. Michael Burmester et al., “How Users Perceive and Use Interactive Information Graphics: An Exploratory Study” (paper presented at the Information Visualisation (IV) 2010 14th International Conference, London, July 26-29, 2010).
  9. Jana Holsanova, Maria Barthelson, and Daniel Lundqvist, “Reading or Scanning? A Study of Newspaper and Net Paper Reading,” in The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, ed. Jukka Hyona, Ralph Radach, and Heiner Deubel (Amsterdam: Elsevier Science Ltd., 2003).
  10. Jana Holsanova, “Entry Points and Reading Paths on Newspaper Spreads: Comparing a Semiotic Analysis with Eye-Tracking Measurements,” Visual Communication 5, no. 1 (2006).
  11. Kenneth Holmqvist, Constanze Wartenberg, and Lundagård Kunsghuset, “The Role of Local Design Factors for Newspaper Reading Behaviour—An Eye-Tracking Perspective,” Lund University Cognitive Studies 127 (2005), http://www.lucs.lu.se/LUCS/127/LUCS.127.pdf.
  12. Jana Holsanova, Nils Holmberg, and Kenneth Holmqvist, “Tracing Integration of Text and Pictures in Newspaper Reading,” Lund University Cognitive Studies 125 (2005), http://www.lucs.lu.se/LUCS/125/LUCS125.pdf.
  13. Mark Tremayne, “Lessons Learned from Experiments with Interactivity on the Web,” Journal of Interactive Advertising 5, no. 2 (2005): 42.
  14. Mark Tremayne and Sharon Dunwoody, “Interactivity, Information Processing, and Learning on the World Wide Web,” Science Communication 23, no. 2 (2001).
  15. Deborah S. Chung and Chan Yun Yoo, “Audience Motivations for Using Interactive Features: Distinguishing Use of Different Types of Interactivity on an Online Newspaper,” Mass Communication and Society 11, no. 4 (2008).
  16. Mark Tremayne, “Manipulating Interactivity with Thematically Hyperlinked News Texts: A Media Learning Experiment,” New Media & Society 10, no. 5 (2008).
  17. Erik P. Bucy and Chen-Chao Tao, “The Mediated Moderation Model of Interactivity,” Media Psychology 9, no. 3 (2007).
  18. Anders Olof Larsson, “Interactive to Me—Interactive to You? A Study of Use and Appreciation of Interactivity on Swedish Newspaper Websites,” New Media & Society 13, no. 7 (2011).
  19. Shanto Iyengar, Mark D. Peters, and Donald R. Kinder, “Experimental Demonstrations of the ‘Not-So-Minimal’ Consequences of Television News Programs,” The American Political Science Review 76, no. 4 (1982): 848.
  20. Dietram A. Scheufele and David Tewksbury, “Framing, Agenda Setting, and Priming: The Evolution of Three Media Effects Models,” Journal of Communication 57, no. 1 (2007): 11-12.
  21. Ibid.
  22. David Domke, Dhavan V. Shah, and Daniel B. Wackman, “Media Priming Effects: Accessibility, Association, and Activation,” International Journal of Public Opinion Research 10, no. 1 (1998): 51.
  23. Ibid.
  24. David R. Roskos-Ewoldsen, Mark R. Klinger, and Beverly Roskos-Ewoldsen, “Media Priming: A Meta-Analysis,” in Mass Media Effects Research: Advances through Meta-Analysis, ed. Raymond W. Preiss, Barbara M. Gayle, Nancy Burrell, Mike Allen, and Jennings Bryant (New York: Psychology Press, 2007).
  25. Dietram A. Scheufele and David Tewksbury, “Framing, Agenda Setting, and Priming: The Evolution of Three Media Effects Models,” Journal of Communication 57, no. 1 (2007): 9.
  26. Young Mie Kim, “Where Is My Issue? The Influence of News Coverage and Personal Issue Importance on Subsequent Information Selection on the Web,” Journal of Broadcasting & Electronic Media 52, no. 4 (2008): 600-21.
  27. Gerald M. Kosicki and Jack M. McLeod, “Learning from Political News: Effects of Media Images and Information-Processing Strategies,” in Mass Communication and Political Information Processing, ed. S. Kraus (Hillsdale, NJ: Lawrence Erlbaum Associates,1990).
  28. The current study is part of a larger research project. The data from the pre-stimulus and post- stimulus questionnaires are not utilized in the current study.
  29. The first article in the series and thumbnail of data visualization with link to its home page can be found on the New York Times’ website: Andrew Martin and Andrew W. Lehren, “A Generation Hobbled by the Soaring Cost of College,” The New York Times, May 12, 2012, http://www.nytimes.com/2012/05/13/business/student-loans-weighing-down-a-generation-with-heavy-debt.html.
  30. Kim, “Where Is My Issue?”
  31. This refers to clicking on one of the main filter categories, which would make the chart change to the default filter setting for that category. It does not include clicking different filter settings within the category.
  32. This refers to clicking a filter setting (e.g., “0-25% graduation rate”) within the graduation rate filter category.
  33. See Holmqvist, Wartenberg, and Kunsghuset, “Role of Local Design Factors”; Holsanova, “Entry Points and Reading Paths.”
  34. See David G. Allen et al., “Reactions to Recruitment Web Sites: Visual and Verbal Attention, Attraction, and Intentions to Pursue Employment,” Journal of Business and Psychology 28, no. 3 (2012); Alexander Eitel et al., “How a Picture Facilitates the Process of Learning from Text: Evidence for Scaffolding,” Learning and Instruction 28 (2013); Ashley N. Sanders-Jackson et al., “Visual Attention to Antismoking PSAs: Smoking Cues Versus Other Attention-Grabbing Features,” Human Communication Research 37, no. 2 (2011).
  35. See Holmqvist, Wartenberg, and Kunsghuset, “Role of Local Design Factors”; Jana Holsanova, “Cognition, Multimodal Interaction and New Media,” in Hommage à Wlodek: Philosophical Papers Dedicated to Wlodek Rabinowicz, ed. T. Rønnow-Rasmussen, D. Egonsson, J. Josefsson, and B. Petersson (Lund, Sweden: Lund University, 2007); Jana Holsanova, Nils Holmberg, and Kenneth Holmqvist, “Reading Information Graphics: The Role of Spatial Contiguity and Dual Attentional Guidance,” Applied Cognitive Psychology 23, no. 9 (2009); Roger Johansson, Jana Holsanova, and Kenneth Holmqvist, “The Dispersion of Eye Movements During Visual Imagery Is Related to Individual Differences in Spatial Imagery Ability,” in Expanding the Space of Cognitive Science: Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, ed. L. Carlson, C. Hölscher, and T. Shipley (Austin, TX: Cognitive Science Society, 2011).

About Nick Geidner & Jackie Cameron

Nick Geidner is an assistant professor of journalism in the School of Journalism and Electronic Media at the University of Tennessee. His research focuses on how changes in journalism affect the ways in which individuals select, use and make sense of news. Geidner is interested in all aspects of the changing media landscape, but has focused on social media, interactive graphics/data visualization and news monetization. Geidner is also director of the Medal of Honor Project, an undergraduate service-learning project at the University of Tennessee. Before joining the faculty at UT, Geidner earned a Ph.D. in mass communication from the Ohio State University (2011), a M.A. in telecommunications from Ball State University (2007), and a B.A. from Youngstown State University (2005).

Jackie Cameron is a doctoral student of journalism and electronic media in the College of Communication and Information at the University of Tennessee. Her research investigates the use and understanding of data visualizations in mass media, the effects of dual consumption of television and social media on attitudes and behavior, and the study of reality television in relation to social values. She has a Master’s degree in psychology and professional research experience in higher education outcomes.