This Presidential election season has had a lot of strife, angst, anger and hate. So I want to enter the fray–but with some frivolous fun: I am going to sort all U.S. Presidents into the Houses of Hogwarts from the Harry Potter series.
I absolutely love the Harry Potter series. In fact, I recently went to Harry Potter World in Universal Studios (Florida) for vacation. While there, I couldn’t help but think of the sorting system at Hogwarts. For those who aren’t familiar with the series, the titular hero attends the Hogwarts School of Witchcraft and Wizardry. At the start of the school year, each new student gets “sorted” into one of four Houses: Gryffindor, Hufflepuff, Ravenclaw or Slytherin, which are basically known as the brave, loyal, wise and ambitious houses, respectively. Here’s how the four houses are described in the first book:
You might belong in Gryffindor,
Where dwell the brave at heart,
Their daring, nerve, and chivalry
Set Gryffindors apart;
You might belong in Hufflepuff,
Where they are just and loyal,
Those patient Hufflepuffs are true
And unafraid of toil;
Or yet in wise old Ravenclaw,
if you’ve a ready mind,
Where those of wit and learning,
Will always find their kind;
Or perhaps in Slytherin
You’ll make your real friends,
Those cunning folks use any means
To achieve their ends.
Muggles such as myself can sort themselves into one of these four houses via a variety of websites. J.K. Rowling’s official website Pottermore lets you do it, though you need a free account for that. If you’d rather do it quickly and easily (if unofficially), many websites offer sorting quizzes. (Full disclosure: I’m a Ravenclaw.)
So I thought it would be fun to sort all Presidents (plus this year’s candidates) into these houses. One important point before I get started. Because we see these houses in the books through Harry’s point of view, the perceptions of the houses often get skewed. Gryffindor can appear the “good guys,” versus evil Slytherin, while the other two houses might be considered nerdy (Ravenclaw) or losers/leftovers (Hufflepuff). For example, in explaining the Houses to Harry, Hagrid tells him “There’s not a witch or wizard who went bad who wasn’t in Slytherin.” However, we learn over the course of the series that there are “evil” wizards who didn’t go to Slytherin (such as Prof. Quirrell or Peter Pettigrew) as well as Slytherins who stood up to Voldemort (Snape and Regulus Black come to mind). So in doing this exercise, I’m thinking of these four houses more neutrally than perhaps they’re often considered. Similarly, I don’t want to appear partisan in assigning houses to the presidents. Just because I disagree with a President doesn’t mean I’m automatically going to place him in Slytherin. And some of the more, shall we say, obscure Presidents aren’t automatically Hufflepuffs.
Join me as I sort the Presidents!
Way back in February, I wrote about the Soldiers’ Home National Cemetery and how Evan Phifer and I sought to create a viable digital resource on the cemetery, the first national cemetery in America. Today I’m excited to announce the launch of that website: How Sleep the Brave: the Soldiers’ Home National Cemetery, Abraham Lincoln and the Civil War.
As with many digital projects, our final deliverable evolved from the proposal stage. Originally we saw this just as a digital resource that would document the burials in the cemetery. While we certainly do incorporate that database into our site, the database itself is the foundation of the project, rather than the end-all, be-all. Instead, the website robustly interprets and contextualizes the cemetery. Specifically, the website demonstrates that the cemetery profoundly affected Abraham Lincoln while he lived at the Soldiers’ Home, especially his evolving views on the bloodshed of the Civil War and the question of emancipation.
One element from the proposal that we did implement is a series of charts analyzing the burials. After assembling these graphs, we found that the peak burial season occurred in September 1862, right when Lincoln issued the Preliminary Emancipation Proclamation. In addition to the focus on the timeline of the cemetery’s burials, we wrote about important soldiers buried in the cemetery, including General John Logan, and Medal of Honor recipient and Buffalo Soldier John Denny. We also assess how wartime death and suffering affected both Abraham Lincoln specifically, and Washington D.C. generally. We also wanted to use some of the tools we’ve learned this spring, including mapping techniques and culturomics. Thus we wrote about how people have visualized the cemetery over time, as well as the evolution of the term “National Cemetery” in both American history and American literature.
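For the curious, the kind of monthly aggregation behind those burial charts takes only a few lines. The sketch below is purely illustrative: the sample dates are invented, not drawn from the project’s actual database.

```python
from collections import Counter
from datetime import date

# Invented sample burial dates, standing in for the project's real database.
burials = [
    date(1862, 9, 3), date(1862, 9, 14), date(1862, 9, 21),
    date(1861, 8, 19), date(1862, 10, 2), date(1863, 7, 6),
]

def burials_per_month(dates):
    """Tally burials by (year, month), the series each chart plots."""
    return Counter((d.year, d.month) for d in dates)

counts = burials_per_month(burials)
peak_month, peak_count = counts.most_common(1)[0]
# With this invented sample, the peak is September 1862, with 3 burials
```

From a tally like this, charting the peak season is just a matter of plotting the counts in chronological order.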
The authors of Digital_Humanities argue the key to the field of Digital Humanities is collaboration. What makes the field so exciting is the tantalizing new ways that scholars can work together to produce groundbreaking interpretation. Fortunately, I can say that my project with Evan was a perfect encapsulation of this. He came up with some great ideas for the project, and I’d like to think I did the same. But regardless of who thought up an idea, we both worked on all aspects of the project. On every single page we both wrote, edited and contributed to the content and layout.
In addition to Evan, I’d like to thank several people. First, our colleagues at President Lincoln’s Cottage, especially Executive Director Erin Mast, provided support and allowed us to use several images and resources. Second, this project would not have happened without Paul LaRue, an Ohio history teacher, whose high school class created the database we used in this project. (You can read our interview with Mr. LaRue on the site as well.) Lastly, Dr. Dan Kerr at American University provided guidance throughout the whole process.
Most importantly, the main purpose of this project is to provide an interpretative lens of the cemetery for the public. So we hope that you help us achieve this goal by providing feedback. Please take a look and let us know what you think.
Since I started this blog, I’ve focused on specific things like preservation, visualization, mapping and networking. Now that I’ve gotten a good knowledge base, I thought I’d take a step back and look at the entirety of the Digital Humanities field. Fortunately, there is a lot of good scholarship already on the topic. With such descriptions as “landmark publication” and “perfect summation” of the digital revolution in the humanities, the 2012 book Digital_Humanities immediately caught my eye. Turns out, such plaudits are not hyperbole.
The work provides a thorough account of the field with diverse perspectives, as five people from a variety of backgrounds co-wrote, -edited and -published the book. Of these five, two teach media design, one is an Information Studies professor, one is a Germanic Languages and Comparative Literature professor, and the fifth contributor teaches Romance Languages. Despite teaching at vastly different schools (UCLA, Art Center College of Design and Harvard) and in different fields, all five are heavily active in digital projects. The Authors chose to write from one voice, instead of each having a separate essay or chapter, since “the Digital Humanities remains at its core a profoundly collaborative enterprise” (ix). This collaboration, as we’ll see, is the key to Digital_Humanities.
Not surprisingly, the Authors are very much believers in the power and potential of the field of Digital Humanities (and yes they believe it’s a field and not just a method). According to the Authors, we’re currently at an exciting time in history, where the rise of the internet has allowed the humanities to take on a “vastly expanded creative role in public life” since Digital Humanities “is a global, trans-historical, and transmedia approach to knowledge and meaning-making.” Specifically, the model of digital humanities outlined in the book “moves design—information design, graphics, typography, formal and rhetorical patterning—to the center of the research questions that it poses” (8). Thus, they conclude that “Digital Humanities has the potential to make a genuine difference” (131).
The concise book has four main chapters. The first chapter, “Humanities to Digital Humanities,” charts the recent history of the humanities, both immediately prior to, and now during, the digital revolution. The Authors argue that while the humanities have been thought to lack modern relevance, digital technologies have the ability to give new meaning. Specifically “with the migration of cultural materials into networked environments,” digital humanistic scholarship is “conspicuously collaborative.” This focus on cooperation “changes the culture of humanities work” by allowing different approaches to old topics, revitalizing the discipline (3). In fact, the Authors’ central thesis to Digital_Humanities is explicitly in the title: it’s the underscore connecting “digital” and “humanities.” For the Authors, the underscore serves as a “vital yoke and shifting signifier, one that presents the two concepts in a productive tension, without either becoming absorbed into the other” (ix).
The second chapter, “Emerging Methods and Genres,” serves as a field map to the 15 different practices and methods within the field of the Digital Humanities. These range from Visualizations, to Cultural Analytics, to Large Scale projects, to Mapping, to Code Studies and everything in between. Each is described quickly and efficiently, yet completely.
The third chapter, “The Social Life of the Digital Humanities,” discusses how these practices are affecting society. Though the other chapters, especially chapter 2, are more helpful in illustrating what the Digital Humanities is, chapter 3 discusses why and how the Digital Humanities is significant. Yet again, this ties back to the collaboration of digital scholarship. Since networks are inherently “social technologies,” Digital Humanities plays much different social roles than traditional scholarship. “New modes of knowledge formation in the digital humanities are dynamically linked to communities vastly larger and more diverse than those” of traditional approaches (75), the Authors argue. This ability to completely integrate scholarship with the wider world is what makes the “Digital_Humanities” (yoked together) so powerful.
Lastly, the fourth chapter, “Provocations” speculates on the future of the digital humanities. Specifically, it seeks to answer the basic question: If digital approaches are quickly becoming normal scholarly practices, will there be a point where “Digital Humanities” as a field no longer exists? In other words, will the underscore no longer matter in Digital_Humanities? To this, the Authors respond that the Digital Humanities must critically experiment going forward, and not remain in stasis. “Understood as a critical experimental practice, carried out in the public laboratory of a cultural commons, Digital Humanities is itself a work-in-progress as much as a future promise.” Furthermore, “the future course of the humanities will hinge upon informed and imaginative engagement with the historical forces that are shaping our times, our communities, and ourselves” (120). It goes without saying that digital methods will be the best chance to pursue that engagement.
Two other sections make the book more than just an abstract look at the field. The Authors discuss several fictional case studies of the various practices of Digital Humanities as described in Chapter 2. These case studies–made up so as not to pick “favorites” among current digital scholarship–are written like grant applications. There’s a short summary of the goals, more background on the project, a list of work plan areas, and finally dissemination and assessment of materials produced. These projects range from using mapping and text analysis to locate specific interactions between Europeans and Natives in the New World, to creating a virtual Afghan refugee camp. In addition, in perhaps the most useful section for students and professors alike, the Authors sum up the other chapters with a “Short Guide to the Digital Humanities” that has a FAQ and a list of specific areas where Digital Humanities is making an impact.
Overall, the book is instrumental in describing the burgeoning field of Digital Humanities. Though perhaps not quite geared to the general public, it does provide an excellent, in-depth analysis for those particularly interested in learning the ins and outs of Digital Humanities. In fact, I wish I had read it earlier since it is a great complement to my newly acquired knowledge.
When we hear the phrase “New media” we usually think of recent technology like smartphones, or Twitter, or even just the internet at large. However, as the essay collection New Media, 1740-1915 argues, there has always been “new media” that changed how humans interacted with the world. The original “dumb” telephone is an obvious example, but there are also now obsolete ones like the physiognotrace and the zograscope that were considered transformative in their time. So while the modern new media revolution is more likely to have a lasting impact than those old-timey devices, it’s important to keep historical perspective in mind when discussing current digital media. (For more on the essay collection, see my classmate Joanna Capps’ review.)
Take for example mobile applications in museums. Though thoroughly robust mobile applications were uncommon prior to the rise of smartphones a decade ago, museums have long used hand-held technology to support their exhibits. Some examples are extremely low-tech, such as maps. But audio tour devices date back over 60 years. For example, check out this video clip of Dutch museum patrons going on an audio tour in 1952 (for a translation of the narration, see the first comment here).
The woman introducing that video is Nancy Proctor, former head of the Smithsonian Institution’s mobile division, who gave this talk at the 2009 MCN conference. Proctor used that clip to illustrate her main argument about mobile applications in museums: “It’s NOT about the technology, it’s about the content.” For example, the Dutch audio tour was obviously technologically limited. Yet, everyone is doing the same thing at the same time only because the content is instructing them to do it that way, not because the audio must be played that way. If instead, as Proctor suggests, the audio tour had asked the patrons to choose a favorite painting then discuss its traits with the group at large, that might provide a more meaningful experience than passively listening to a description of the painting. Thus, Proctor believes museums should “think about content and experience design” first and foremost. Specifically, the content should be tied to the site’s mission, as well as the intended audience.
Of course, it’s still important to consider what the appropriate technology is in order to deliver the content. Designers need to understand what platforms visitors are comfortable using, especially in a museum setting. In fact, citing an earlier study, Proctor noted that museum patrons have become “trained” not to use modern technology while visiting exhibits, even if they use that technology all the time outside of museums.
To be fair, since this 2008 study, those bar graphs have probably evened out considerably. Smartphone applications truly have penetrated everyday life, becoming ubiquitous. This has spilled over to museums as well. In fact, the Smithsonian alone has 30 mobile applications related to its various museums. Yet these apps all have content that serves a specific purpose, and they certainly attract diverse audiences and reactions. So Proctor’s vision that content should dictate mobile design is now more important than ever, as potential technological barriers have been diminished by the proliferation of technology.
Put another way, think of your favorite application (museum or otherwise). Is the technological or design aspects what makes you appreciate and use it? Or is the content more important?
Last week I received a group email from a friend. The subject line was simply “fb” and the original message was quite clear: “What the hell. Stop changing.” That was followed up by a dozen responses, some in agreement (“Argh”), some pragmatic (“What the hell else are their engineers going to do but change things?”), and some positive (“change is good”). To the shock of no one, a Facebook design change received strong reactions. As probably the best comment in the thread put it: “Yeah – put it back to the way we complained about when it changed last time! We LOVE that way!”
Why do we always complain when a site like Facebook, or Google or AV Club makes these changes? Websites rarely seek input from the users, and often they don’t explain why they’re making the changes until after the switch. In addition, new sites might be shiny and fancy looking, but they lose some key functionality. This lack of two-way communication and lack of understanding what previously worked leads to disgruntled users. As Dan M. Brown — not that Dan Brown — writes in his book Communicating Design, “if we can’t communicate an idea effectively, how can we hope to create a website around it?”
From personal experience, I couldn’t agree more. A few years ago my old organization redesigned its website.* Though we hired a website developer to do the actual coding, one of my colleagues was in charge of developing the vision for the new site. She created the wireframes, content inventories and other “deliverables” that Brown discusses in detail. But unfortunately the project took longer than initially anticipated, and she left in the middle of it for maternity leave. That left me to take over some of the responsibilities until we hired a replacement. Unfortunately, there was not a good trail of communication between the developer and my colleague. Items that were in one draft of the wireframe weren’t in the next, and vice versa. The developer had also promised several features to my colleague orally — and yes, they were worth the paper they were written on. That led to games of he-said-she-said. Eventually, it got to the point where I was constantly singing this song in my head.
Awesome Led Zeppelin riffs aside, it was not the most pleasant project. Eventually we figured it out and the website worked, albeit without some of the initial functionality we had envisioned. No one person was to blame, but it still was frustrating. As Brown argues, “Communicating design is about combining words and pictures into a story that elaborates on a vision.” Sadly, our vision was not achieved thanks to a communication breakdown.
What about you? Do you hate it when sites like Facebook redesign features you’ve grown accustomed to? Do you wish websites better communicated their website redesigns? On a scale of 1-10 how angry would you be if I completely changed my WordPress theme tomorrow?
*I’m not going to link to the website, partially so that I don’t draw attention to my old company, but also because it’s already been updated.
Just like Brick Tamland loves lamp, I love map.
This came in part from my grandmother. Whenever she’d visit, we’d usually go to a museum or historic site. Upon arrival, she always said “you got to have a map, Zach, so we know where to go.” Even at places we’d been to before, I would immediately grab a map and navigate around. Still to this day I usually consult a map when entering a museum instead of wandering aimlessly.
Even outside of public history sites, I value maps. On road trips I always get navigating duties, and I never ask for directions–not because I’m embarrassed to do so, but because I always want to figure them out on my own via maps. In addition, I pride myself on my ability to decipher the local transit map in foreign cities that I visit. And that’s no laughing matter, as the transit maps of cities like London, Madrid and Paris are much more complex than D.C.’s Metro map.
As a Maphead, I believe spatial history is absolutely vital in Digital History. Besides the very practical uses of maps I outlined above, maps can be great tools of learning. The image at the top of this post comes from a 2012 Guardian piece on strange, yet illuminating, maps. More recently, back in December one website published “40 Maps That Will Help You Make Sense of the World.” Though obviously all these maps are rooted in geography, in addition they teach us about astronomy, politics, geology, demography, ecology, economics, language, sociology, meteorology, transportation and of course history. (Check out a gallery of my favorite maps from those two links below.)
History really can benefit from this approach. As Richard White, the head of Stanford’s Spatial History Project, says, “Space is itself historical.” Quoting French philosopher Henri Lefebvre, White argues that spatial relations are an integral part of human history, as our world constantly affects human interaction. Therefore, I think there is great benefit in using these visual tools as historical research, whether we’re detailing incidents of crime in Harlem, or the rise and fall of Napoleon’s Army in Russia. Historians should not view maps just as representations of their “traditional” research, but instead should use them as research tools in the first place. In the words of White, spatial history “generates questions that might otherwise go unasked, it reveals historical relations that might otherwise go unnoticed, and it undermines, or substantiates, stories upon which we build our own versions of the past.”
Daniel Snyder, the owner of Washington D.C.’s NFL franchise, is in the news again. On Monday he released a letter announcing the creation of the Original Americans Foundation, which will “provide meaningful and measurable resources that provide genuine opportunities for Tribal communities.”
Why is a White owner of a Mid-Atlantic sports franchise creating a foundation dedicated to supporting Native American tribes? The answer lies in the team’s name: the Redskins. Over the last year, the Redskins, and by extension their owner, have come under scrutiny for running a team whose nickname is a derogatory term in other contexts. Though this controversy is not new, the team’s success and return to national prominence during the 2012 season have led to a huge increase in opposition to the name, going as high as President Obama. In response, Snyder has said multiple times that the name is not changing anytime soon. Predictably, this has not exactly engendered much sympathy. So this latest announcement is part of his strategy to deflect some of the criticism.
I’m not going to get into the entire controversy here. But what I’m interested in is the use of the actual word “Redskins” over time. Words evolve, and perhaps it means something different today. Luckily Google Ngram Viewer can help. Google Ngram analyzes how often a word or phrase is used in the corpus of about 5 billion books that have been digitized via Google Books. It then graphs the word’s use as a percentage of all words for each year in the given range. Here is the “Redskins” Ngram:
First, a caveat. Again, the chart only shows the use of Redskins in printed books; so we can say it did not appear often in print prior to the Civil War, but we can’t assess whether it was still a common part of the oral vernacular then (and in subsequent years when it was more commonly published). After 1860, it peaked significantly in the mid-1870s and around 1890. Custer’s Last Stand was in 1876 and the Wounded Knee Massacre occurred in 1890. So it makes sense that a significant amount of print then would focus on Natives, calling them terms like Redskins.
And what about the football team? The team first used the name “Redskins” in 1933, so that partly explains the increase in the word around then. But most importantly, since 1960 there has been a roughly constant increase in the use of the word “Redskins” (outside of a blip in the late 70s) culminating in its highest levels… in 2000! So just looking at this chart, we might conclude that calling Native Americans Redskins is more common now than at the height of the Indian Wars.
In reality, the term used in 1890 was different from the term being used increasingly from the mid-1970s to today. Ngram also allows you to see which books use the given word. Prior to the football team, Redskins appears in such books as “The Redskins, Or, Indian and Injin” or “Redskin and Cowboy–A Tale of the Western Plains.” However, since 1970, the books are predominantly about the football team. In fact, the rise and fall of the word in the last 40 years correlates to the team’s success. The first major peak in 1974 occurred in the midst of a five-year playoff run including the team’s first Super Bowl appearance. The relative valley in the following years corresponds to a playoff absence from 1977-1981. Then from 1980 until 1993 there is a rapid increase–during this time the Redskins won three Super Bowls while appearing in another and became one of the most popular and dominant teams in the NFL.
Of course, we shouldn’t overstate Ngram’s findings. These rises and falls show only correlation with military and sports events, not causation. And again, the term’s frequency only relates to publications. The NFL is the most covered and followed American sport; it makes sense that one of its high-profile teams would receive significant amounts of coverage in today’s 24-7 sports world. So we shouldn’t overlook the term’s non-literary uses. Still, Dan Snyder would be happy to know that the term today is clearly associated with the team, and not just Native Americans.
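As a side note, the measure an Ngram chart plots (the share of all words in a given year’s texts that match the search term) is simple to compute. Here’s a minimal Python sketch over a toy corpus; the texts below are invented for illustration, while the real Ngram corpus of course spans billions of pages.

```python
def yearly_frequency(corpus_by_year, term):
    """For each year, the percentage of all words matching `term`,
    which is essentially what an Ngram chart plots."""
    term = term.lower()
    freqs = {}
    for year, texts in corpus_by_year.items():
        words = [w.lower() for text in texts for w in text.split()]
        matches = sum(1 for w in words if w == term)
        freqs[year] = 100.0 * matches / len(words) if words else 0.0
    return freqs

# Invented mini-corpus: two tiny "texts" per year.
corpus = {
    1890: ["the redskins of the plains", "a tale of the western plains"],
    2000: ["the redskins won the game", "redskins season preview"],
}

freqs = yearly_frequency(corpus, "Redskins")
# e.g. freqs[2000] is 25.0: "redskins" is 2 of the 8 words in that year's texts
```

The actual Ngram Viewer adds refinements like smoothing over adjacent years, but the underlying statistic is just this per-year ratio.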
(This is my second look at Truth in Numbers. Click here for the first one).
Apple had Steve Jobs. Facebook has Mark Zuckerberg. And Wikipedia has Jimmy Wales.
And just like The Social Network and Jobs, Wales has a film about him.
However, unlike Jobs and Zuckerberg, the film about the Wikipedia co-founder is not a major motion picture starring Hollywood actors. (Though I think Paul Giamatti would be perfect.) Instead, Wales appears as the protagonist of the 2010 documentary Truth in Numbers? Everything According to Wikipedia. The documentary, co-directed by Scott Glosserman and Nic Hill, interviews dozens of Wikipedians as well as authors, journalists and critics who have mixed views on the site, such as the late historian Howard Zinn, Bob Schieffer of CBS News, and former Central Intelligence Agency Director James Woolsey. But Wales is without question the central character.
Not surprisingly, almost everything Wales himself does and says in the film casts Wikipedia in a good light. Indeed, the film starts with him visiting Varanasi, India, where he teaches a local how to edit Wikipedia. That transitions into him speaking before an American crowd saying “Imagine a world where everyone on the planet is given free access to the sum of human knowledge. That’s what we’re doing.” From this start you get a sense that the documentary will focus on how Wales and Wikipedia are changing the world for the better. And there certainly is some of that. Glosserman and Hill show the international bonding of the Wikipedia community, as Korean, Indonesian, Chinese, Indian, Dutch, Taiwanese, Arabic, South African, English and American Wikipedians of both genders are interviewed. (Compare this to the “critics,” all but one of whom are men, and most of whom are old and white.) In one of the best lines of the film, a Taiwanese Wikipedian, who earlier is depicted meeting a Chinese Wikipedian with whom he had collaborated on an article, says “Through Wikipedia it’s unbelievable I could make friends with a communist.” The other interviews with members of the Wikipedia community aren’t as touching, but they all do share a sincere sense of optimism about the site.
Yet, there is balance. As I mentioned in my first post, the documentary mirrors the Wikipedia site’s “Npov” mission to balance all articles. Glosserman and Hill don’t really build any specific narrative, except that “Wikipedia is good but it also is bad.” The target of a lot of this criticism is Wales himself. About halfway through the movie, internet critic Andrew Keen says that Wales possesses the Libertarian suspicion of external authority first espoused by Ayn Rand. These comments frame clips of Wales talking about how Rand’s novel The Fountainhead influenced him (“My motivation is not altruism”), clips of Rand expounding her own philosophy (“I say man is entitled to his own happiness, and he must achieve it himself”) and clips of the 1949 movie version of The Fountainhead (“He served no one and served nothing. He lived by himself.”) Taken together, these clips depict Wales not as a man providing access to human knowledge for the benefit of humanity, but as one serving his own ego.
Some of the other criticisms of Wales are less blatant, or even underdeveloped. Just like with Facebook, there is controversy over who “founded” Wikipedia. The film describes how Larry Sanger and Wales came up with the idea of an open-source encyclopedia, eventually using Ward Cunningham’s “wiki” technology as a platform, and that the two had a falling out. Unfortunately, this story about the site’s early origins is brushed over. Sanger gets his moment to criticize Wales, stating that “When [Wales] first started leaving me out of the story, it was extremely disappointing to me. It wasn’t something I would have thought Jimmy capable of.” However, instead of actually having to defend his actions, Wales gets let off the hook. Right after Sanger’s jab, a nameless journalist interviews Wales, mentions that Sanger “co-founded” the site, and Wales interrupts saying “he says” with a smile. And that’s it! It’s possible this story isn’t as juicy as The Social Network’s. But by not letting Wales discuss his vision of the founding of the site, Glosserman and Hill lose a great opportunity to delve into their central character’s mind and better illuminate the foundation of the site they document.
In the end, I came away from the film feeling sorry for Wales. Keen points out that Wales — who doesn’t make any money directly off of the main Wikipedia site — is like someone who found a winning lottery ticket, but instead of cashing it in, donated it to the world. And clearly his personal life suffered from all this too. About a quarter into the movie Wales is interviewed in his Florida home, with his wife and daughter surrounding him. It’s a very awkward scene in which he explains how busy he’s been, and how he’s barely home anymore because he travels so much. Meanwhile, his wife nervously laughs about this, and about how great it was to share an apartment in Japan for 30 days recently when Wales was working there (their last “vacation”). It’s clear the Wikipedia efforts are taking a toll on the marriage. Lo and behold, in a postscript at the end of the film, Wales admits that since that previous interview he separated from his wife. (He’s since divorced and remarried.)
So while Truth in Numbers ends with all the interviewees reading their own Wikipedia page (Howard Zinn is quite impressed with the accuracy of his), I think a more fitting ending would be Wales staying up late, editing his own Wikipedia page, just like the last scene of The Social Network.
About half way through the 2010 documentary Truth in Numbers? Everything According to Wikipedia, historian Howard Zinn says “All history is a matter of selecting out of an infinite number of facts, and the selection itself is inevitably biased.” For example, in the traditional accounts of Christopher Columbus, the Progressive Era and the Civil War, historians routinely fail to mention, respectively, Columbus’ slaughter of native populations; the widespread lynching throughout America in the early 20th Century; and the massive amount of Indian land grabbing during the 1860s. All these examples, Zinn argues, show how in historical writing omission can be just as subversive as factual inaccuracies. Zinn believes this inherent bias can lead to a distortion of “the truth,” especially when amateur historians write on Wikipedia. As CBS News anchor Bob Schieffer adds “What’s worse, telling a bold-face lie, or just part of the truth?”
As its title indicates, Truth in Numbers? is concerned with how Wikipedia explores the “truth.” Interviewing dozens of Wikipedians as well as authors, journalists and critics such as Zinn, Schieffer, former CIA Director James Woolsey, Lawrence Lessig (who sadly only appears briefly at the beginning), and Wikipedia co-founder Jimmy Wales, the film focuses on the rise of Wikipedia in modern internet culture, and how it is shaping human knowledge.
Unfortunately, directors Scott Glosserman and Nic Hill mirror Wikipedia’s “NPOV” (neutral point of view) policy, as the film itself doesn’t take sides. (To get even more meta, the “reception” section of the film’s Wikipedia page shows that almost every side has a different interpretation of the film’s message.) Furthermore, there is very little narration, and in fact not much of an overarching narrative to the film. Instead, Glosserman and Hill jump from one interview to another, almost like the “Wikipedia wormhole” I mentioned two weeks ago.
The majority of the interviews revolve around how Wikipedia’s anonymity affects its accuracy and credibility. The first criticism is that it’s too easy to create inaccurate information. (As Stephen Colbert jokes in an interview with Wales, Wikipedia is the “First place I go when I want some knowledge, or want to create some.”) Wales brushes these concerns aside, since unlike traditional encyclopedias, Wikipedia can easily correct any error.
A lot of the critics also complain that the site lacks credibility since the editors are anonymous; one points out that traditionally people who wrote anonymously wrote things like ransom letters, poison pen letters and graffiti. This line of thought is pretty ridiculous, and in fact the next interviewee immediately points out that much of the early political writing in this country was done anonymously (such as the Federalist Papers). Regardless, Wales also brushes this aside. He thinks that by editing under established pseudonyms, Wikipedians are practicing “pseudonymity” not anonymity, and take pride in their pseudonym just like they would their real name.
The last major attack on Wikipedia’s credibility is, I think, the most valid: that its current system scorns “experts.” Wales states that his goal is for Wikipedia to be a meritocracy. But for it to be a true meritocracy, wouldn’t professors and experts rise to the top? Instead, as interviewee after interviewee laments, Wikipedians often look down on elitism, and thus there are few so-called experts on the site. Writer Simon Winchester concedes that having only experts is not ideal, since “experts” are disproportionately old white men, and that is a problem. (Of note, the actual Wikipedians interviewed are extremely diverse, whereas all but one of the “critics” are men, most of them old and white.) Still, as Zinn notes, having some experts would be helpful: since everyone carries biases, experts are needed to provide the right context.
Overall, I do think some of the interviewees’ complaints are valid. To avoid the issue of anonymous editing without repercussions, requiring everyone to register makes sense. And it might be helpful to have experts at the top of the food chain to review articles, as Winchester suggests. Perhaps the best analogy for Wikipedia comes in one of the last comments. The head of the Bibliotheca Alexandrina in Egypt says that prior to the emergence of Wikipedia and the internet as a whole, the global fountain of knowledge was a very slow drip, and most people in the world couldn’t “drink” from it. Now, with Wikipedia providing free and open access to everyone in the world, it’s like opening a fire hose. But you can’t drink from that either. So the goal should be finding the perfect middle.
(I have a lot more to say about this documentary, especially about Wales. But it’s not as germane to digital scholarship, so I’ll write a separate post.)
(3/25/14 UPDATE: here’s a second blog post on Truth in Numbers, focusing on Wales.)