Visualization of speakers by gender in Aphra Behn's The Amorous Prince, Women Writers Online Labs

Week 5: Code, Evidence, Analysis

Explore the visualization here; check out other WWO Labs visualizations.

Read:

Argamon, Shlomo, et al. “Gender, Race, and Nationality in Black Drama, 1850-2000: Mining Differences in Language Use in Authors and their Characters.” Digital Humanities Quarterly 3.2 (2009). http://www.digitalhumanities.org/dhq/vol/3/2/000043/000043.html

Bode, Katherine. “Graphically Gendered: A Quantitative Study of the Relationships between Australian Novels and Gender from the 1830s to the 1930s.” Australian Feminist Studies 23.58 (2008): 435-450. Through library.

Jockers, Matt and Gabi Kirilloff. “Understanding Gender and Character Agency in the 19th-Century Novel.” Journal of Cultural Analytics 2016. http://culturalanalytics.org/2016/12/understanding-gender-and-character-agency-in-the-19th-century-novel/

McPherson, Tara. “Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation.” Debates in the Digital Humanities (2012): 139-160. Online.

Underwood, Ted, and David Bamman. “The Instability of Gender.” The Stone and the Shell. https://tedunderwood.com/2016/01/09/the-instability-of-gender/

Explore:

Schmidt, Ben. “Gendered Language in Teacher Reviews.” http://benschmidt.org/profGender/#

WWO Lab. Women Writers Online. https://www.wwp.northeastern.edu/wwo/lab/

17 thoughts on “Week 5: Code, Evidence, Analysis”

  1. I find that the two readings I want to focus on for this week’s post cannot be divided into sections as I have done in previous posts, because they intersect a lot with one another and both relate to my experience with Voyant Tools, a tool that I will be co-presenting with Cherise this week. The main similarity I discovered while reading Argamon’s “Gender, Race, and Nationality in Black Drama 1850-2000” and Underwood and Bamman’s “The Instability of Gender” was that both BookNLP and the Black Drama Collection identified that, from the 1980s onward, the linguistic differences between male and female characters and between American and non-American plays became less distinct than in previous decades; we can see a sense of both a ‘merging’ and a blurriness between the two groups. I think the conclusion that can be drawn from this observation is that gender and nationality are fluid terms. Historical and cultural development allows these concepts to be ever-changing, and their specific associations transform over time. An example of common terminology derived from texts pertaining to the female experience was “her chamber” and “her room”, as identified by BookNLP, and terms associated with subjects revolving around the family/home, emotive states, or women (e.g. “female”, “girl”, “her”), as identified in the Black Drama Collection. We can see a common theme surrounding the domestic space and other stereotypical traits that are usually tacked onto women, such as our emotional states or responses. However, there’s a possibility that these associations can change over time. Do you think we would observe the same trends if we were to perform some data mining on texts ranging from 2000 to the present? I would hope that the answer would be no.

    These two articles have also shed some light on the process involved in data mining, and on traits that I found applicable to Voyant Tools, a program that I have played around with in preparation for this week’s presentation. For instance, both of these articles illustrate different types of graphs, which are exactly what Voyant Tools includes in its own program. Voyant Tools also comes with a range of ways to graph trends, whether that be the textual arc (where a narrative is illustrated as a circle and terms pop up in the middle to show their frequency and where they most appear in the narrative) or a simple line graph that has a similar function to the textual arc in that it displays how often a word appears in the text. However, we know that some texts can be extremely long, and Argamon addresses this issue. A program such as Voyant Tools can allow large amounts of data to be input at one time; however, Argamon mentions how this can limit the amount of research and analysis that can be accomplished, due to the processing time involved when waiting for an output. Like the Black Drama Collection, Voyant Tools is better suited to experimentation with text mining, since it also involves a delay when loading an extensive amount of data. Lastly, in my opinion, the most important characteristic of data mining, as identified by Argamon, is the importance of human involvement in the process. A text data mining program can perform the algorithms for us and illustrate trends that would otherwise take too long to find manually, but it is not as perfect as we like to think computers are. Mistakes can still be made by a program, and that’s where we come in: to fix these errors, determine the significance behind these trends, and decide how we can use these findings to contribute to the humanities or to our own research.
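
    Since Voyant comes up so often here, a minimal sketch of what its trends graph computes under the hood might be useful: split a text into equal segments and count a term in each. This is my own toy Python illustration; the file name and the tracked term are invented assumptions, not anything from the readings:

        from collections import Counter
        import re

        def term_trend(text, term, segments=5):
            """Split a text into equal segments and count one term in each."""
            words = re.findall(r"[a-z']+", text.lower())
            size = max(1, len(words) // segments)
            return [Counter(words[i * size:(i + 1) * size])[term]
                    for i in range(segments)]

        # Hypothetical input file; any plain-text play or novel would do.
        sample = open("play.txt", encoding="utf-8").read()
        print(term_trend(sample, "her"))  # e.g. [3, 7, 2, 5, 4]

    Plotted, those counts are essentially one line of Voyant’s trends view.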

    1. Totally agree, Jacqueline, that human involvement is really important, both because, as you say, it can help to identify problems with algorithmic outputs (and it’s interesting to see how our articles for discussion deal with the question of truth or verifiability) and because human decisions have such an impact on how those algorithms work.

  2. Discussion of Gender, Race and Nationality in Black Drama… and Graphically Gendered:

    Both the Argamon and Bode articles stood out to me for the ways they approached the use of data mining resources. I found it particularly interesting to see how engaging with programs that allow analysis of a particular corpus lets greater trends emerge much faster than if that same work were done manually. Like Jacqueline, I could not help but see the similarities between programs like the one discussed in Argamon’s article and Voyant, especially in regard to the lists of frequent terms used across certain groups of playwrights. While I have found it interesting that data like that can be pulled out of texts so easily with such programs, I have admittedly found it hard to imagine many uses for information of that sort, apart from noticing trends in how often a term was used across a particular text. However, I found Argamon’s analysis useful in demonstrating how these lists of terms can point to social and cultural features that go beyond a literal textual representation.

    This also made me think about the process of close reading and how we similarly engage with meaning beyond the literal definition of a word when we close read a text. By using data mining tools we are able to do the opposite: look at large swathes of data and still come to the same levels of analysis, as is demonstrated in Bode’s article. I was especially intrigued by her mention of the three gender epochs of the Victorian novel identified by Tuchman, particularly because it was again a process of categorization made much smoother with the aid of tools for analyzing trends across a large number of corpora. By using the Macmillan archives, Tuchman was able to devote more time to interpreting the data that was found, and was able to engage with this data more quickly by using the resources provided by the archive to do some of the information gathering on the texts. I find this to be a huge benefit of such programs, again like Voyant, because of the ease with which they allow you to interact with text.

    Of course it is not impossible to do data mining manually, but it seems unnecessary to spend inordinate amounts of time manually listing the frequencies of words or marking out contextual phrases when, in the end, it is the data you are left with after doing all that work that is relevant. To go back to my earlier analogy: when doing close reading we rely on our analytical skills, and because it is often a small chunk of text we are working with, we are able to see the contexts in which terms appear on our own before analyzing them. However, when working with larger amounts of text, or even multiple texts, there seems to be good reason to use programs like Voyant to quickly and comprehensively find those contexts and trends without having to first manually read through every bit of text. That is not to say that I think data mining can replace closer analytical reading, but these two articles have shown me that data mining can enable analytical reading by eliminating a lot of the unseen work of manually mining through a text, while still allowing you to engage with the text on a closer level later if need be.

    1. I would also like to add that, as the Jockers and Kirilloff article points out, a lot of these programs do still require a second look from a human eye to catch potential mistakes or issues in the programming. However, I still see it as useful to engage with such programs, as continued interaction with such resources not only helps develop them to work better for us, but also allows the sharing of such archival and data mining work amongst those who seek to use it.

    2. That dialogue between close and distant (or algorithmic) reading is, as you say, very important to consider. It’s a controversy that has been very prominent within and beyond digital humanities; more beyond than within, I think, perhaps because, as you can see, all of these articles are keenly aware of the limitations of their methods.

  3. I found learning about the applications of distant reading in “Gender and Character Agency in the 19th-Century Novel” and “Gender, Race, and Nationality in Black Drama” to be very interesting, even though I didn’t always understand some of the details. I think both essays show some of the positive and negative sides of using computer programs in literary analysis; they open the door for scholars to provide some level of comparative analysis of far more texts than a person could read on their own, but they still can’t really understand literature the way humans can, and the limitations of the technology can lead to mistakes and inaccuracies if scholars don’t comb through the data very carefully.

    For instance, while the findings from the 19th-century novel article are certainly intriguing and convincing, I think their arguments could be a lot stronger if they also included first person pronouns and the names of characters, as they acknowledge in their conclusion. It would be difficult, of course, to create a program that is capable of understanding whether a first person narrator is male or female—I’m not sure if it would even be possible with current technology. And while it might not be as difficult to simply tell a program which names are masculine or feminine, there would probably be a lot of gender-neutral names that could just lead to more inaccuracies and more work for scholars. A more detailed version of this study, including information like “whether all of the male characters are behaving in an unconventional way, or whether it is simply the protagonist, or antagonist,” would probably have to either narrow its focus to fewer works or develop much more powerful text-mining software.
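
    To make the name-tagging idea concrete, here is a toy dictionary lookup, entirely my own invented example rather than anything from the article, which already shows where gender-neutral names would fall through:

        # Toy name lists, invented for illustration only.
        MASCULINE = {"john", "george", "edward"}
        FEMININE = {"elizabeth", "jane", "emma"}

        def tag_name(name):
            """Label a character name m/f by lookup; flag everything else."""
            n = name.lower()
            if n in MASCULINE:
                return "m"
            if n in FEMININE:
                return "f"
            return "unknown"  # gender-neutral or unlisted: needs a human reader

        for name in ["Jane", "Edward", "Ashley"]:
            print(name, tag_name(name))
        # Jane f / Edward m / Ashley unknown: exactly the ambiguity noted above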

    Another problem, which the Black Drama article brings up, is how to access a big enough collection of works to do analysis like this. The inclusion or exclusion of various plays in the database they used “reflects editorial decisions,” and I noticed that they said only 18% of the authors whose works were represented were women. I’m not sure if that disparity is the result of the interests of the collection’s curators or if there are simply many more male Black playwrights than female ones, but I think it calls into question how universal their findings about the differences between plays by male and female writers are; maybe analyzing a database including the works of more women writers would lead to somewhat different results. But if such a database doesn’t exist, then there’s not much the scholars can do, unless they have a lot more time on their hands to put a new database together. But maybe the interesting results of studies like these will inspire more people working in other fields to program more software, or organize more databases, that could be useful for distant reading scholars in the future!

    1. Hi Stephanie!

      You write, “whether all of the male characters are behaving in an unconventional way…” Do you have any suggestion for how text-mining software might categorize “unconventional”? Or is this simply an instance where you think ‘the human’ excels where ‘the cyborg’ cannot (yet…)?

      I definitely agree that tech can lead to mistakes and inaccuracies, but, at least for the way my brain works, I appreciate text-mining software for instances where I’m looking for narratological or statistical elements (rather than strictly “metaphorical” ones, like we might be taught to find in a close reading, for example). Do we think a computer could generate an effective close reading of a Black drama?

      I’m reminded of an instance where the simplest form of text mining (searching a PDF) was really useful to me. Do you remember in our Women’s Writing class last term, when we discovered that, for all the classification and hand-wringing about The Golden Notebook as a feminist text, the word “feminist” only appears once in its 600+ pages? I wouldn’t have trusted mere page-flipping to get that frequency for certain… but once we knew the word itself was only there once, it made me reconsider some of the critical reception 🙂

      1. I meant that in reference to how they classified actions in the article in relation to conventional gender roles – I think they had actions that sounded passive and/or emotional, such as “watched” and “wept” categorized as feminine, for instance. So that’s something the computer can do in a study like that, although it probably wouldn’t work for a more complex analysis.

        I think computers are definitely useful for things like that, but that these articles also show that human scholars are still definitely necessary for analyzing the data! Like we could probably identify feminist ideas in The Golden Notebook even when the word itself doesn’t really come up, while it might be hard to make software that can do that – simply going by what the computer says about the words used might make it seem like the text is barely feminist at all.

    2. That’s a keen observation about corpus; the question of how one should normalize or account for gender difference when using large corpora has been increasingly prominent lately, with some people arguing that the kind of results provided here can be at best provisional until we have more representative corpora. As Amy Earhart, Katherine Harris and others have argued, there’s an unfortunate replication of old canons happening in digitized collections.

  4. I don’t want to give too much away in advance of my discussion launch on “Why Are The Digital Humanities So White?” tomorrow (I’ll be glossing some keywords and focusing on secondary contexts that helped me make better sense of the technical language), but I thought for our blog context, I could focus on a couple moments where I felt particularly interpellated by McPherson.

    Near the end of her essay, she writes,

    “The emergence of covert racism and its rhetoric of color blindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems, and it seems at best naive to imagine that cultural and computational operating systems don’t mutually infect one another.”

    I was reminded here of our discussion from last week on GamerGate, and how the cultural ideologies that arose from this movement were founded within computational operating systems. Without access to that conference Google Doc, members of 4chan message boards could not have disparaged female academics to the extent that they did. I am bearing in mind McPherson’s assertion that our tech tools are *not* neutral, and that it may be violent to assume otherwise.

    A half-formed question that I welcome anyone’s perspective on: McPherson suggests we move beyond representations and visuality (visuality is a distraction) to questions of culture and meaning. But surely you can’t have one without the other?

    Secondly, I’m stuck on Gerald Graff’s idea of “patterned isolation.” McPherson writes, “university departments (his example is English) begin to operate by a field-based and modular strategy of ‘coverage,’ in which subfields proliferate and exist in their own separate chunks of knowledge, rarely contaminated by one another’s ‘internals.’” As an English student I have to agree that, even interdepartmentally, many academics’ work never even gets close to intersecting. Is this making our scholarship less rigorous? (I recognize that is not a new question to be asking.) Can we have a “common knowledge” of literature without our studies becoming watered down, or succumbing to standardization? I admit I felt the most called out when McPherson asserts a need for “hybrid practitioners”: while many of us identify as interdisciplinary, very few of us extend that border crossing very far. What, then, does interdisciplinarity look like? McPherson offers a hopeful possibility in linking code and culture: perhaps beginning to create if you’ve been doing more analysis, and to analyze if you’ve been doing more creating. I’m not done considering the possibilities of her term “intellectual generosity”… I think this is an admirable goal for all of us moving forward as scholars 🙂

    **forced optimism tonight brought to you as an act of resistance against this horrid weather…international students, I’m sorry this is the Canada you first got to experience!

  5. My comments on Jockers and Kirilloff’s “Understanding Gender and Character Agency in the 19th-Century Novel,” Bode’s “Graphically Gendered: A Quantitative Study of the Relationships between Australian Novels and Gender from the 1830s to the 1930s” and Underwood and Bamman’s “The Instability of Gender”:

    I find Underwood and Bamman’s examination of the instability of the clues associated with gender in fiction interesting. The way in which they investigate how gender influences fictional characterization reveals that works of fiction cannot but be historically and culturally situated, and cannot be considered in isolation. However, I think that limiting the research just to the usage of certain words by supposedly gendered characters, or to words generally associated with the concepts of masculinity/femininity, denotes a certain naive binarism. What is the use of those outcomes? What can we infer from the fact that linguistic differences between male and female characters get blurrier in the period from the 1840s to the 1980s? Yes, gender is unstable, but there are many factors underlying such instability, and I think they need consideration, too.

    Similarly, I think that Jockers and Kirilloff’s article, by presenting us with a quantitative analysis of the dovetailing of gender and character agency in the Victorian novel, focuses on overarching categorizations rather than investigating the forces at play behind those categorizations. Also, I find their understanding of characters’ agency exclusively in terms of verbs of action pretty limiting. Indeed, even if it is true that many 19th-century works of fiction reflect Victorian notions of propriety with regard to the portrayal of women as passive angels in the house, Jockers and Kirilloff’s pronoun-based gender normalization taps into the mere surface of what really goes on in a novel. What is the meaning of the characters’ actions if taken out of context? Why are notions of motion and action considered synonymous with masculine activity, while those of emotion and sentiment are just seen as an expression of feminine passivity? Aren’t most diegeses set in motion by the characters’ very feelings, after all? And, if so, why do we still look at them as a symbol of passivity?
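
    For what it’s worth, the verb-counting itself is easy to sketch, and sketching it shows how much context the method throws away. What follows is my own rough reconstruction using spaCy, not Jockers and Kirilloff’s actual pipeline:

        from collections import Counter
        import spacy

        nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

        def gendered_verbs(text):
            """Count verbs whose grammatical subject is 'he' or 'she'."""
            counts = {"he": Counter(), "she": Counter()}
            for token in nlp(text):
                pron = token.text.lower()
                if (token.dep_ == "nsubj" and pron in counts
                        and token.head.pos_ == "VERB"):
                    counts[pron][token.head.lemma_] += 1
            return counts

        sample = "She wept by the window while he rode toward town."
        print(gendered_verbs(sample))
        # expected: {'he': Counter({'ride': 1}), 'she': Counter({'weep': 1})}

    The verb ‘weep’ is duly recorded, but everything that might make the weeping an act of agency, or set the plot in motion, is gone.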

    Yet what I have found thought-provoking about this article is the fact that Jockers and Kirilloff discovered that the male and female novelists considered seem to be rather conventional when designing characters of their own gender, while they get to be more inventive when creating characters of the opposite sex, showing a tendency to make them act in non-stereotypical ways. I thus think that this aspect might need further investigation.

    Moving on to Bode’s article, I like how she historically contextualizes the relationship between novelistic authorship and gender in Australian 18th/19th-century fiction. She provides us with a thorough investigation of the factors influencing Australian women’s writing during that period, touching on questions such as the influence of British constructions of authorship and the novel, demographic aspects, book market innovations, and cultural as well as political forces. Nevertheless, having expressed my doubts about the quantitative approaches presented in the articles mentioned above, here again I am not sure what to make of Bode’s research. Yes, it is certainly important to remark on the fact that Australian women did write novels in the 19th century, but why limit ourselves to that? Why not also investigate what they wrote about, why they chose to write, and what the reasons were behind their decision to stop writing at some point? If a higher number of women write works of fiction, it does not necessarily mean that they are more emancipated or that they write about feminist concerns. In fact, Bode stresses that the decrease in novel titles by Australian women during the 1890s was due to the fact that women were becoming more emancipated, hence looking for paid work outside the house and giving up novel writing.

    Overall, I have enjoyed reading these articles, as I think they are a good starting point for future analyses that might focus more on qualitative rather than quantitative aspects. Besides, they helped me finally understand that I am more of a why-and-how person than a what/where/when/who one 😉

    1. Interesting analysis! I’m surprised you didn’t like Bode better, because she really does spend most of her article unpacking some pretty straightforward statistics; but, as you say, you seem to be caught up in a healthy suspicion of quantitative work. I sympathize with your response here in terms of wanting more, more, more of the context, analysis of the factors, etc. that explain the computational results.

  6. [Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation
    TARA MCPHERSON]

    At the beginning, I was quite destabilized by the variety of topics that the text discusses. To sum it up, Tara McPherson includes three layers of comparison.
    First she talks about post-Second World War computational and encoding systems, taking the UNIX operating system, developed some 50 years ago, as her main element of comparison. She then moves from technological history to political history, building a bridge between UNIX’s coding strategies for managing data and Western governments’ policies for dealing with racial and social issues. She makes a quite unexpected comparison with lenticular logic in order to understand the two parallel structures; both politicians and computer engineers use it because it is ‘a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context’, and because ‘(it) also manages and controls complexity’. Such a lenticular system therefore hides and filters information, making data much clearer for the average user, but then again makes him blind to the complexity of the system in which he operates.
    The third and last ‘comparative layer’ is the structuralist and formalist structure of universities during the same period (from the 1970s to the beginning of the 21st century, if I am not mistaken), which favored hyper-specialization, modularity and ‘chunks of knowledge’. This part resonated strongly with me, since I have a really negative feeling about my experience of French university, especially because there we were encouraged to become hyper-specialized in the fields we chose when we were 18. Because of that I also had the constant feeling of being ten years behind with respect to what was happening around me outside of the university. Just to give a precise example, my timetable allowed me to take only one class a week outside of the Comparative Literature Department. After that year at Paris-Sorbonne I developed what I label ‘academic box claustrophobia’.

    The reference to Antonio Gramsci’s conception of ‘common sense’ was also really illuminating. The shaping of what counts as clear is always to be understood in context, and I also believe that clarity is actually an ideal inherited from the ‘(phallo)logocentric’ legacy and the Cartesian paradigm. Giving credence to a supposedly crystal-clear system actually often leads to uniformization, generalization and information retention. In my head I compare it to the act of covering enormous and elaborate architectural structures with sand. Sand makes everything look flat for a while, but as soon as you start to dig, or as the wind blows over it, you can see it was much more complex than you thought.

    If I simplify it with a chart, I understood the structure of the article in the following way:

    elaboration of coding systems (UNIX) ↔ American/Western Cold War governments (neo-liberal logics) ↔ university systems and bureaucracies (structuralism, formalism, hyper-specialization)
    (I use the sign ‘↔’ as a link that includes both cause and effect)

    As I went through the text, I was also wondering when McPherson would explicitly answer her initial question; but as I read on I found one passage where she is quite self-critical about her article, admitting that the beginning of the text deals with very specific forms of computing knowledge, apparently far from racial issues. As a whole, I think that her point is to be self-critical about digital humanities in general, arguing that DH and other emerging fields have not reached satisfactory levels of interdisciplinarity and interconnection. Her critique is very visible here: ‘While many of us identify as interdisciplinary, very few of us extend that border crossing very far’. Yet I really liked the moment when she says that ‘We need new hybrid practitioners: artist-theorists, programming humanists, activist-scholars; theoretical archivists, critical race coders’, because it made me think of Haraway’s desire for monsters; and I find the idea of ‘monster scholars’ very attractive. But as McPherson says very clearly, this hybridity requires a constant intellectual effort and is very hard to achieve. I am convinced that such monsters exist all over academia, though, but the numerous obstacles on the way to interdisciplinary approaches make them less visible than highly specialized scholars. And I still think that interdisciplinarity is somehow comparable to brain plasticity: if you are not aware it exists, it is hard to practice it.
    Yet I do not know to what extent, and how, a ‘common language’ between the humanities and computer science is achievable. I guess that, according to the author, all the students in our own class, for example, should get their hands dirty and start to understand the mechanisms of coding languages?
    The article was helpful, though I was expecting something less theoretical. I guess that DH are very ‘white’ also because of class, geographical and larger historical issues.

    The other articles, about text mining and metadata analysis, made me think about our dependence on categories and classification strategies in order to thread our way through massive data. Even if the DH articles we previously read all point in the same direction, toward more and more blurring of the borders between categories, separation and categorization are actually necessary to comprehend large amounts of text.
    Could those analytic strategies be ways of understanding the long construction of categories, in order to debunk them with new arguments afterwards?
    I guess that, as Argamon’s article reminds us, we must always keep in mind that the raw data offered by text mining is only a starting point for further interpretation; otherwise literature would almost merge with sociology.

    Speaking of Argamon’s team’s work on Black Drama, I was struck to see how accurate their results were with respect to popular culture: when I was reading the distinctive words employed by female and male characters, I had the impression of reading, on one side, the skeleton of a rap song’s lyrics, and on the other, that of a commercial song written for a (female) R&B diva.

    1. Jade, lots of great comments here. Just as an aside, there is a heavily sociological school of literary studies; in Canada it seems to be much more prominent in the francophone community, who use databases with considerable regularity in their approaches to literary history and literary institutions.

  7. 1. Potential and limitations of corpus analysis tools
    My reading of the articles for this week was inevitably shaped by some first considerations on their style: reading articles about literature and literary history that are written in a precise, ‘scientific’ way, structured as proper experiments and supported by clear visualizations and data (rather than closely analyzed textual passages), is at once very appealing and disorientating to me. It is disorientating because some rather technical machine learning methods are used, and I cannot help but feel that I am missing an important part of how the final results are obtained, and that I am only understanding the surface of it. At the same time, I am also very fascinated by what these tools can do, and so I wonder: how important is it, for a digital humanist or literary scholar, to actually have a grasp of the computational tools that their work is based on? Tara McPherson’s article “Why Are the Digital Humanities So White?” could be seen as a possible answer. In fact, she suggests the necessity of bridging the gap between the digital and the humanities, which was voiced by Alan Liu as well, even though she argues for it on different grounds, moved by different concerns that have to do more specifically with discussions of race in the digital field. Anyway, her claim that the very design of our technological systems shapes what we can do with them could be seen as an answer to my question. Still, I found it interesting to notice that I could appreciate the findings of these computational studies without having any technical background in computer science.

    2. What is the potential of distant reading?
    One thing that I found really interesting, and that is tackled by Katherine Bode’s “Graphically Gendered” in particular, is how distant reading might question the notion of the canon. As Bode observes, the idea of the canon was reinforced, to some extent, by feminist scholarship on Australian women writers. Bode’s study considers ‘the entire spectrum of texts and authors’ in the history of the Australian novel from the 1830s to the 1930s, and sets aside traditional paradigms of literary quality, which is extremely valuable for filling in those gaps in literary history that traditional scholarship has inevitably created. Another interesting possibility offered by quantitative studies is the literally distant, ‘zoomed out’ perspective they can offer on large bodies of texts: for instance, Bode’s study was able to compare Australian novel writing with the British development of the genre, while the study on “Gender, Race and Nationality in Black Drama” allowed for contrastive findings about American vs. non-American works and authors.
    I really liked the comparison between quantitative methods and maps, quoted at the end of Bode’s article: much like maps, quantitative studies allow for good visualizations that can lead to new insights.

    3. What might get lost in distant reading?
    At the same time, just like maps, quantitative methods imply an abstraction, and distance inevitably brings limitations. For instance, in the study on the Black Drama Database, it is highlighted that the patterns that emerge from comparative studies based on binary oppositions can offer simplistic or stereotypical results. The ‘zooming out’ allowed by distant reading also means that particular instances and exceptions are overlooked in favour of a bigger picture. Similarly, texts have to be broken down and modified in order to maximize the performance of classification tests. For instance, in the analysis of the Black Drama Database, it was necessary to erase skewing factors such as the stage directions, thus ‘creating composite documents which do not reflect the structure and organization of the plays as individual works’.
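
    As an aside, that preprocessing step is easy to picture. A hypothetical sketch, assuming purely for illustration that stage directions sit in square brackets or parentheses (the database’s actual markup surely differs):

        import re

        def strip_stage_directions(play_text):
            """Drop bracketed/parenthesized stage directions, keep dialogue."""
            no_brackets = re.sub(r"\[[^\]]*\]", "", play_text)
            return re.sub(r"\([^)]*\)", "", no_brackets)

        sample = "HELENA. [Aside] He little knows. (She turns away.) I must go."
        print(strip_stage_directions(sample))
        # -> "HELENA.  He little knows.  I must go."

    The dialogue survives, but the resulting ‘composite document’ no longer looks like a play, which is exactly the study’s caveat.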

    To conclude, I think there is a lot to think about in terms of what quantitative studies can offer and what, on the other hand, remains for ‘close readers’ to be looked at (and many of your comments have pointed this out in different ways!)

    1. Your comments about comparison are pretty perceptive, Maria. Franco Moretti, who coined the term “distant reading”, is a comparative literature scholar, and the term is precisely part of an attempt to get beyond narrow nationalist perspectives on literature.

  8. What struck me most this week was how the micro (close reading, qualitative analysis) and the macro (distant reading, quantitative analysis) are shown to bear on one another, particularly in Underwood and Bamman’s post, “The Instability of Gender,” and Jockers and Kirilloff’s article, “Understanding Gender and Character Agency in the 19th Century Novel.”

    In the first, Underwood and Bamman ask to what degree characterization is shaped by gender in the novel from the nineteenth to the early twentieth century; in other words, can grammatical (he/she) gender be inferred from the verbs, nouns, and adjectives associated with a character? Their findings reflect much of what we already know from literary analysis and cultural theory: that gender is fluid and that the ways in which gender is represented change over time. Similarly, Jockers and Kirilloff’s analysis seems to establish a strong correlation between the verbs in 19th-century novels and character gender, as well as to complicate prior understandings of the relationship between women and agency.

    What adds richness and depth to these analyses, however, is a methodology that applies both quantitative and qualitative analysis, that is, close and distant reading — where each approach bears on the other, pushing the analysis that much further. Or, put simply, it makes for a much more interesting read and compelling argument when both practices come together in a DH project (as opposed to pitting one against the other, or including both without placing them in conversation); where methodological boundaries are stretched, tested, and pushed further by way of looking at the tools of literary analysis themselves.

    Lastly, “the point” of this kind of research as outlined by Underwood and Bamman reminds me of the importance of experimentation, of play, where “the point of research is also to discover things that weren’t hard-coded in the original plan” and to then follow those divergences to new places.
