Late in the last century, then-WSU President Sam Smith introduced me to the Internet. I was the Spokesman-Review’s Pullman reporter, and after a press conference in his office, Sam excitedly ushered me over to his computer to show how he could share electronic texts through a system called “Gopher.” I had to get on this thing, he said, and he cleared the way for me to get access through the Information Technology department. It was a first step, followed by a leap and then a plunge, into what is now the End of the World as I Knew It.
You know what happened next: web browsers, email, e-commerce, Craigslist, friending and unfriending, cat videos, 140-character messages, fan pages for bacon, dying newspapers, fading bookstores, furtive searches for high-school sweethearts, business offers from Nigerian princes, more than a billion people looking at more than a trillion web pages.
So while I was thrilled in 1989 to be a newspaper reporter with access to articles on microfiche in the Holland Library, I now find no subject is too obscure for Google, Wikipedia, or an online chat with a faceless reference librarian. I once counted on The New York Times arriving sometime around 10 in the morning. Now I read it in bed on a cell phone with more gadgets than one of James Bond’s Aston Martins. I watch movies and TV on a laptop.
If you are reading this, you are witnessing yet another wrinkle in this revolution: an online edition of Washington State Magazine. We’re publishing this issue exclusively on the web for a one-time savings of printing costs, resuming the paper version with the fall issue.
To be sure, it’s not the biggest change in the world, as magazines are old hands at doing much of what the web does best: writing on a variety of topics, marrying the text to images, and delivering the final product to a community of readers. Now we do the same thing electronically, and add a lot of cool bells and whistles.
But the universe has changed around us, with ramifications of concern to not only writers and editors, but people throughout the university community. A printed magazine story sits alone on a page, with relatively little competition for the reader’s attention. An online story sits only a few keystrokes away from a torrent of other stories, tweets, videos, free classifieds, and emails. The reader arrives in a different frame of mind, and this affects what we talk about, how we talk about it, what we understand of each other.
It has Paul Whitney, cognitive psychologist and senior associate dean in the College of Liberal Arts, wondering if we’re reading in a shallower way that will hinder our ability to make informed opinions on big issues. Brett Atwood, who teaches online journalism in the Edward R. Murrow College of Communication, worries about web sites pandering for clicks. And Patty Ericsson, director of WSU-Pullman’s Digital Technology and Culture degree program, sees all these changes as part of a continuum going back to the ancient Greeks. In effect, she says, Yeah, things are changing, like always. Welcome to the brave new world.
Not your father’s Esquire
It’s tempting to say the magazine is one of the most enduring forms of communication, informing and delighting readers for just over three centuries. That’s true, but it’s also possible to say this digital iteration is just another riff on an ever-changing form.
The earliest magazines were more like books—mostly text, few illustrations—with personal opinion and satire on a regular basis. They were read by an elite few until the late 1800s, when less expensive, mass-market magazines arrived. Even those were still mostly text, assuming a habit of concentrated, deep reading.
“In the 1800s reading habits were different,” writes Art Kleiner in “The History of Magazines on a Timeline,” a 1981 article he wrote for CoEvolution Quarterly. “You started at the very first page and read straight through, column by column, until the end. People didn’t flick through or skim, and magazine layouts didn’t encourage them to.”
Improved presses and processes led to what is now the modern magazine of sophisticated photos, graphics, and layouts supporting and enhancing long narratives. Much of the work is the stuff of journalistic legend: Gay Talese’s “Frank Sinatra Has a Cold” in Esquire, Roger Rosenblatt’s “Children of War” in Time, Norman Mailer’s “The Armies of the Night” and Sallie Tisdale’s “We Do Abortions Here,” both in Harper’s, and a long line of New Yorker stories: John Hersey’s “Hiroshima,” Joseph Mitchell’s Joe Gould accounts, Elizabeth Kolbert’s “Climate of Man” series, Malcolm Gladwell’s profile of Ron Popeil.
These were stories—modern versions of a cultural artifact as old as telling stories around a fire, and often with the flourishes of New Journalism. Now half a century old, the style brought fiction’s scene-setting, dialog, and character development to non-fiction. Things happen in these stories. Difficult questions get broached. Illuminating angles, insights, and human nature are revealed.
And they were a bit long—3,000 words at a minimum, and often book length. Reading them was an act of concerted, linear concentration.
In 1979, three years before the computer was Time’s “Machine of the Year,” Art Kleiner was predicting a revolution in magazine production and how we might approach reading:
“Once magazines, newspapers and books start to come in over the home terminal, or the terminal down at the corner computer center, then the boundaries between them won’t be necessary; they’ll merge into a steady flow of information, stories, opinions, pictures, design, photographs. You might never have to stop reading: like Homer Price’s donut machine, the terminal will type out the stories, and the photo-typesetter will click, buzz and release the photocopied pages, and printouts will pile high in the recycling centers. If input channels are kept open, advertisers may lose their hold on the magazines, and designers will have to develop another new language to give visual personality to a flowing, undivided stream.”
The “flowing, undivided stream” of information—once mostly text, images, TV and radio—is now a tsunami of text, images, sound, video, databases, video games and applications. We walk through it, we drive through it, it washes over us at work and at home. A recent analysis by the Global Information Industry Center at UC-San Diego estimates Americans consume information almost 12 hours a day, an increase of more than four hours over the last three decades. And thanks to the computer, reading is holding its own against its biggest competition, the television.
In fact, we take in more printed words now than we did 50 years ago. It’s what we’re reading and how we’re reading it that has changed.
Even if you haven’t looked at a newspaper in a few years, and many people haven’t, you would know that we’re reading fewer books, newspapers, and magazines. Newspaper readership has been in decline for half a century; magazine audiences have been shrinking since the mid-1990s. Books are the revered but increasingly ignored elder statesmen of our culture. Literary reading declined over the past quarter century to the point where, according to the National Endowment for the Arts, “The U.S. population now breaks into two almost equally sized groups—readers and non-readers.”
“I have stacks and stacks of Business Weeks,” says John Wells, a WSU associate professor of Management Information Systems who specializes in business-to-consumer web design. “It’s a tribute to my inability to digest them. But when I get tested on current events, I do pretty well.”
He reads a lot on the web, as do most of us. The San Diego study estimates that one in every four words we consume is now coming through the computer.
The printed word is a powerful thing.
We’re not talking here about something like Thomas Paine’s pamphlet, “The American Crisis,” whose lead to end all leads, “These are the times that try men’s souls,” rallied support for the Revolutionary War. No, we’re talking about the power of the printed word to get a grip on the inner workings of the brain.
The seemingly simple act of reading, says Paul Whitney, cognitive psychologist and senior associate dean in the College of Liberal Arts, is one of the singular feats of the human mind.
“If we understand all of reading,” he says, “we understand all cognition.”
In reading, we recognize and convert a symbol into a sound. From that, we create a word and meaning. Then we synthesize them in a sentence. If the engine is firing on all cylinders, the sentences contribute to a larger analysis.
This process is so automated that if you see the word “blue” written out in red letters and someone asks what color you’re seeing, you will feel compelled to say “blue.”
“You can’t turn it off, even if you want to,” says Whitney. “That’s how powerful the process is.”
Our mind’s ability to process the written word is unlikely to change as we go from print to the screen. But how we approach those words when they’re on the screen does change, to the point where many fall by the wayside.
“There’s much more information filtering about what I am going to look at and how long I’m going to read it than with traditional deep reading,” says Whitney.
So where the writing in a magazine is linear, the web is a study of options, multiple roads diverging in a digital wood. Eye-tracking studies bear this out, with readers first looking at the upper left-hand corner of a page, just as if it were a book. But then the eyes move in an F-shaped pattern, or in a set of loops, skimming, scanning, looking for key words.
Nearly three-fourths of web readers come to a page through Google—surfing or searching, finding what they need, then leaving. On our own site, about one-third of the visitors come from Google, look at a page or two, and leave in less than a minute. About as many visit from a bookmark or by typing the web address. They’ll look at half a dozen pages and stay for nearly seven minutes—long enough to read several short articles but only about half of a long feature.
So if you’ve read this far, you’re ahead of the pack, above average, and thank you. But please keep reading.
The mouse roars
The web does offer something print media will never have: a direct two-way connection with the reader. Every click is counted, so an outlet can instantly analyze what is resonating with its readership.
That in itself troubles Brett Atwood, a clinical assistant professor who teaches online journalism in the Edward R. Murrow College of Communication. Atwood has worked as both a print and online journalist, writing for newspapers, Billboard, and Rolling Stone, and editing for Amazon.com and RealNetworks. He’s seen the power of the click, and recalls editors and reporters in one newsroom debating how much they should let click-through rates affect what they covered.
“We would see crime stories, things that might frighten people, would always get the most clicks,” he says. “What do you do with that information? Do you just say, ‘That’s nice,’ or do you change your formula? Maybe we’ll assign more reporters to those kinds of stories because we know that more people click on them and it generates ad revenue and it’s what people want.”
The downsides are obvious. Care, quality, and crafting give way to clicks. It’s all carnival barker, with “a collective dumbing-down” once people come inside the tent.
“So where’s the information that challenges your world view,” he says, “that introduces competing information to make you think differently, that expands your knowledge base?”
Sometimes we really need to think deeply, notes Paul Whitney. A quick look at the national health care debate, he says, yields a few buzzwords and claims. At this level, a reader will activate what experts in choice behavior call a “confirmation bias,” an innate tendency to seek out information that confirms our preconceptions.
With such a shallow, one-sided approach, says Whitney, “You just end up with a few slogans that buttress what your belief is. Does that make you an intelligent, informed reader? I don’t think so.”
The horse-to-water analogy
In a way, the web shows you can lead people to information but you can’t make them read. It’s even harder to make them think.
It has always been that way. That’s why writers are trained to use snappy leads. It’s why stories have cliffhangers, compelling 19th-century readers to buy the next chapter of serialized Charles Dickens novels. Except now a piece of text on the web competes with a host of bells and whistles on the page, as well as incoming email, Facebook updates, perhaps an MP3 in the background, hyperlinks, and the risk that at any moment the reader will decide to check out Jimmy Fallon’s interpretation of “Pants on the Ground.”
It’s why much of what you find on the web is broken into smaller, digestible chunks. To some extent, this very story has been “chunkified.” It’s an acknowledgement that the terms of engagement have changed, says Atwood.
“In chunking you’re basically saying, ‘You know, we recognize that readers in many cases, one, might have a shorter attention span,” he says. “And two, that they might want to have control over the order in which they receive the facts and information in a story. So let’s literally chunk up the story into its own sub-narratives and present it in such a way that’s non-linear, so then readers can make decisions about which they want to click on and in which order and even omitting part of the story.’”
Readers can often do the digital equivalent of finding a quiet, undistracted place. For many of us, that’s 35,000 feet up.
“I find often that reading on an airplane, I can focus more because I don’t have any web access,” says Chris Hundhausen, an associate professor who specializes in how people use computers. “I need to turn off email and other stuff when I’m trying to focus more on reading a document.”
The Heraclitean view
In the stream of information, the only constant is change. Just ask Patty Ericsson.
Since 2003, Ericsson has directed WSU-Pullman’s Digital Technology and Culture degree program.
“I’m looking at how technology changes our lives, how it changes the culture in which we live, how it changes the way we do things and the way we think about things,” says Ericsson. “My whole focus is we’ve got to think about this stuff. We can’t accept it mindlessly.”
She has four computers. She largely ignores her office phone and has a voice-mail message saying write an email instead. She wears a machine-woven Norwegian sweater and reads a first-generation Kindle, the electronic book reader that last Christmas helped Amazon sell more electronic books than paper books, or what Ericsson calls “ink stains on tree carcasses.”
To give her current students some sense of perspective, Ericsson will have them looking at old maps one day and old money the next, all with an eye towards seeing how technology changes over time. And change it does, generally in concert with its culture. In some ways, the only thing that stays the same is the fairly overblown shock and outrage as technologies emerge.
I mention to Ericsson that I had on my desk Nicholas Carr’s recent Atlantic article, “Is Google Making Us Stupid?” The headline writer could have omitted the question mark, as the article, adapted from Carr’s book, The Shallows: What the Internet Is Doing to Our Brains, contends that power-browsing and skimming at the expense of deep reading erode our ability to think deeply.
Ericsson is not so alarmed.
“Go back to the Platonic Dialogues,” she says. “It’s absolutely the same thing—the same thing but different…. People didn’t like the pencil when it first came along.”
Sure enough, the Phaedrus dialogue, unearthed with Google’s help, says writing “will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own.”
The printing press, Ericsson adds, was also going to ruin us.
“If we let the masses read and teach them how to read, it’s the end of civilization as we know it,” she says. “The fact that people got Bibles and were able to read them on their own, rather than having an anointed person giving them the word of God, was thought to be something really dangerous. I grew up in the Catholic tradition where we still didn’t read the Bible.”
Now, she says, we’re at another pivot point in how we communicate, with devices converging, new ways of communicating, a shift in who has control over who says what. Print, she says, “is headed to a device no smaller than an iPod or a Droid. We’re going to get almost everything there. However, books won’t go away. Maps, paper maps, are more likely to go away. Money is totally becoming an abstraction,” a magnetic strip on the back of a Cougar Card.
As for the written word, she says, “words are only words and words alone aren’t good enough.” A writer now needs to be “a multi-modal composer,” combining words and pictures—the magazine’s old forte—plus music and video. Down the technological road, a writer could be using touch and smell.
She has her own feelings about this. But at her age, 59, “my literacy is not that important. We’ve got people coming up who have totally different literacies. They’re different people.”
The future is different from you and me
Justin Hartley, a WSU sophomore, regularly reads the paper versions of USA Today and the Daily Evergreen, plus a video magazine. He also reads about three books a month.
“I would much rather have a book,” he says. “You can’t get the same feeling of turning a page, of not knowing what’s going to come next, with something electronic.”
Among his peers, Hartley is a statistical outlier.
“That opinion is not very universal at all,” he says. “They really like life on the screen. The digital era has definitely taken over as a huge part of this generation. It’s what defines it.”
John Wells, the Management Information Systems professor, notices that his students expect reading assignments in “digestible chunks”—that word again—and shrink from a long piece.
“It’s much more daunting,” he says, “because they’re online creatures now.”
In the process, he says, it is unclear if they learn more or do a better job of synthesizing what they read than when he was in college. Then again, in a world of abundant but diffused information, the savvier people will be those who learn how to digest and discard. Among the members of this generation, filtering is an emerging skill.
And they’re far more likely than older generations to be comfortable with a Kindle, or an iPhone, or an iPad.
This January, the day before the iPad was announced, I broached this possibility with WSU Marketing professors David Sprott and Darrel Muehling. They study nostalgia, the rose-colored glasses with which people see a better past, even if it wasn’t as great as they imagine. Marketers and psychologists have given this a lot of thought, and some particularly influential work by researchers at Rutgers and Columbia suggests that our preferences for certain films, music, and other consumer choices tend to be set around the age of 23 or 24.
So scoff as some of us might at the thought of reading a magazine online, or curling up with a Kindle, the laptop- and iPhone-toting young people outside Sprott’s office may well be heading squarely into a lifelong love affair with surfing on a portable screen.
“The things that they’re doing now, their kids are going to be saying, ‘Oh my gosh, Dad, I can’t believe you’re still on Facebook and using an iPhone—that’s so old-school,’” says Sprott.
“But most of the readers of your magazine are going to be older,” adds Muehling, “and they do count. And they’re still wanting to curl up and read a magazine that’s not in electronic form.”
Indeed. We’ll see you again in August, online, and in print.
On the web
View some of the earliest printed magazines at the Bodleian Library, University of Oxford
Video: Jaak Panksepp on the brain and searching the web
Video: Patty Ericsson on digital communication