#hist5702

This page is dedicated to blog posts related to Carleton University’s Digital History course. They’ll deal with some of the tutorials we’ve been instructed to try out, and readings/topics related to DH. Enjoy!

Why am I taking this class?

by emilykkeyes (01-12-2016)

I’m interested in taking this class because I’m hoping to learn more about Digital History. While this seems like a generic answer, I’m interested in the issues that are being discussed. On our first day Dr. Graham raised a good point, which was that historians often get so excited about the digital that they forget to look at it like any other source.

Not only am I guilty of this, I can see parallels in my own research on performance history. Often an audience forgets that costumes, lighting, casting, etc. are done deliberately, designed to present the events in a particular way, whether to make them more dramatic, relatable, and so on. A helpful example is the most recent adaptation of Macbeth. The three witches (classic and memorable characters in the story), rather than looking like this:


Macbeth meets the three witches; scene from Shakespeare’s ‘Macbeth’. Wood engraving, 19th century, after William Shakespeare. Credit: Wellcome Library, London (Wellcome Images, images@wellcome.ac.uk, http://wellcomeimages.org). Available under Creative Commons Attribution licence CC BY 4.0, http://creativecommons.org/licenses/by/4.0/

look more like this:

 

This isn’t because the director thought himself better than Shakespeare, but because creepy children provide the contemporary audience with the same cringe factor that witches provided Shakespeare’s audience.

I’m also interested in learning more about Digital History for future career opportunities. Just from going over the syllabus, I can see many ways in which the skills I’ll be learning could help my work as a historical researcher. The most obvious is a better understanding of SNA (social network analysis), something we use a lot in our genealogy work.

Sitting down to think about my experience with Digital History, I actually have a bit more experience than I gave myself credit for (not that this is a lot). My lovely and forward-thinking parents enrolled me in Virtual Ventures (a summer camp run by the Faculty of Engineering and Design at Carleton University), and I remember learning how to create my own website using HTML (complete with garish colours and images). It seems I couldn’t escape HTML, as I later had to deal with it in high school when I was enrolled in a special course where we created things like photographic essays using PowerPoint, built websites using HTML, etc.

Other than those two examples, my experience is relatively small. I am the go-to “tech” person at my office, which generally means I am in charge of fixing the printer and setting up laptops. I also manage the company’s online presence, including running the Twitter, Facebook, and Instagram accounts.

I’d like to come away from the course having a better understanding of the nitty-gritty behind Digital History, and how these tools and programs can help my research. I can tell already that this can be done, and that the more I put in the more I will get out.

Automated Downloading with Wget

Programming Historian Tutorial, by Ian Milligan
by emilykkeyes (01-27-2016)

Each week in #hist5702w, we’re assigned tutorials and readings. The tutorials usually come from The Programming Historian, a website that offers tutorials on digital tools and techniques for historians and other interested readers.

This week’s tutorial focused on a tool called Wget, which can help you download online material. I was very excited about how I might be able to use this tool, since in my part-time work with Know History we often need to download a large number of historical documents (in most cases census records, birth records, etc.). Usually the task of copying and saving these files falls to me. Not only is this tedious, but there is a huge margin for error, not to mention the decisions that need to be made about how these documents should be filed and stored once they are downloaded.

Just looking at the tutorial, I already appreciated how it was broken down for Mac and Windows users. In this course I’ve been cautioned that Windows is very different from (i.e. more problematic than) Mac.

The first issue I ran into was the downloading instructions. Admittedly, this was more of a reader/user error. Eagerly following the link to download Wget, I was immediately confused by all the download options:


So many options, what version do I need?!

Luckily, per the #hist5702w mantra, I turned to my classmates (shout out to Laurel), and the problem was easily solved. What I needed was just wget.exe.

Second problem, again a great reader/user problem! When downloading wget.exe, I did not put it in the right directory. Instead of putting it in C:\Windows, I left it in C:\. This would cause problems for me later, forcing me to start at the top of the tutorial and read carefully again (something I’m growing increasingly familiar with doing… there’s a lesson to be learned there, but maybe I just need to be hit with it a few more times). Again, shout out to Laurel for bringing this error to my attention.
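(An aside that isn’t from the tutorial itself, just something that would have saved me some back-and-forth: once wget.exe is sitting in C:\Windows, you can check that the command line actually sees it by opening a fresh prompt and typing:

wget --version

If it prints version information rather than complaining about an unrecognized command, it’s in the right place.)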

SO, finally armed and ready with wget.exe, I proceeded to input the commands using PowerShell. PowerShell was used in the Command Line Bootcamp tutorial, which was recommended at the beginning of the Wget tutorial, so I went with that. (Side bar: that was a great tutorial, really clear and easy to follow.)

Moving along, I was fine, right up until this appeared on my screen:


…well this doesn’t look like what the tutorial said it would.

Again, turning to my trusty DH guide, Laurel stepped in. Deciding to give PowerShell the proverbial finger, Laurel switched me to Command Prompt (as I understand it, PowerShell has its own built-in wget alias that can shadow the real wget.exe, which may explain the strange output). Repeating the steps, things worked fine, and I finally completed the tutorial. Yahoo!

Thoughts on Wget: it seems like it could be an immensely useful program for downloading journal articles, historical documents, etc. However, I’m curious to see what kind of problems you’d encounter trying to download from an archival repository such as Library and Archives Canada. Do they have safeguards in place? Also, what are the implications of ripping documents from their archival context?
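For anyone curious what the end product looks like, the command the tutorial works up to has roughly this shape (the URL below is just a placeholder, and the exact flags in the lesson may differ, so treat this as a sketch rather than a recipe):

wget -r --no-parent -w 2 --limit-rate=20k http://example.com/digitized-records/

Here -r tells wget to download recursively, --no-parent stops it from wandering above the folder you point it at, and -w and --limit-rate slow it down so you aren’t hammering the server, which seems like basic politeness if the repository in question is something like Library and Archives Canada.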

Complete the Wget tutorial here.

 

A note on pre-requisite tutorials

by emilykkeyes (01-29-2016)

Having now managed to get a few Programming Historian tutorials under my belt (with great difficulty), I have a suggestion to make.

In almost all of the Programming Historian tutorials I’ve completed, they refer back to previous Programming Historian tutorials, suggesting that the reader/user complete those first. This in itself is fine. Obviously, to move forward, the idea is to build on previous knowledge.

The problem arises when you get to the pre-requisite tutorial, and it recommends you complete yet another pre-requisite tutorial. For example, in Data Mining the Internet Archive Collection, they explain that you will need something called pip. They recommend that you download this using the instructions in the Installing Python Modules with pip tutorial. Okay, fine. But when you get to Installing Python Modules with pip, they explain that the easiest way to install pip is by using a Python program… What if I don’t have Python installed? Or don’t even know what Python is? Well, there’s a Programming Historian tutorial for that, which is great. But what ends up happening is that I spend the time I should have spent on my first tutorial completing these other tutorials, and then have to work my way back.
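For what it’s worth, once you untangle the whole chain it boils down to a handful of commands, something like this (I’m using the Internet Archive module as the example since that’s what the first tutorial needed; this is my rough summary under those assumptions, not the official instructions): with Python installed, download the get-pip.py script from https://bootstrap.pypa.io/get-pip.py and then run:

python get-pip.py
pip install internetarchive

But none of that is obvious the first time through, which is exactly the problem.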

My advice would be that each tutorial be as self-contained as possible. Obviously, this can’t be the case for every one, but by continually sending users to other tutorials you risk losing them. As one of the targeted users of the Programming Historian tutorials, I likely would have given up if I hadn’t been required to complete them for class. Harsh, but true.

Otherwise, keep up the good work Programming Historians!

Using historical documents for mapping

by emilykkeyes (02-17-2016)

In preparation for my final project (which I discussed a bit in my last post), I decided to check out my competition and see what other kinds of maps are being generated based on census records and other historical documents. There are many out there, so in this post I’ll highlight two, and talk about what I found interesting, problematic etc.

The first one I looked at was from Smithsonian magazine, by Lincoln Mullen, on slavery in the United States from 1790 to 1860 (which you can access here). Mullen created two interactive maps, using data taken from census records, that have a time-lapse feature.

One of the first things that caught my attention was that the accompanying article was centered around the maps Mullen generated, rather than the other way around. Reflecting on this, I thought it was really important, and showed that Mullen wasn’t just using the maps as filler, or to make the article look cool. He had created the maps and then was reflecting on them, using them to build an argument.

The trends seen in the maps are very clear. Moving forward through time, the western United States becomes increasingly populated with slaves. Mullen uses this to argue that, rather than being confined to the Southern United States, slavery was widespread across the country. Additionally, he mentions the difficulties of using data from sources such as censuses, citing the example of how no slaves were enumerated in the State of Vermont in 1860, even though historical research has shown that African-Americans were held in bondage in Vermont during this period.

The second map I looked at was made using a program called CartoDB (a program I think I’ll be using for my own project, and something I’ll discuss in another post). It shows the movements (reportedly over 1 million locations) of the United Kingdom’s Royal Navy during WWI. Its data set was created using captains’ logs. (You can see it here.)

The map is animated using a function called “Torque”, which allows you to create animations based on the locations in your data set. The result is the ability to watch your data zip around your screen as the time-lapse moves forward. The clock on the bottom counts us forward through time, and this map even features short summaries of activities (“Trade routes resume”, “War begins with Germany”).

One of the first things I noticed was the inability to control the time. In Mullen’s maps you can move forwards and backwards, but with The Guardian’s we can only advance forward (and start/stop). While aesthetically much nicer than Mullen’s, I found this less user-friendly. The Guardian’s map was also completely standalone and didn’t feature any accompanying text. While initially I liked this, after giving it some thought I found it didn’t really prompt me to do any reflection. Instead, I watched the map a few times, mostly noting that it looked cool. I had to actually stop and prompt myself to think about what all the lights meant, and what kinds of trends were visible.

Something I noticed about both maps was that they were working with large data sets. They used the data from thousands of documents, spanning decades. For my final project I want to keep my data set small, specifically focusing on one particular family. I’m curious, then, how features like Torque in CartoDB will work with my data set.

Deceptive data visualisations

by emilykkeyes (02-25-2016)

As part of my final project I’ll be using CartoDB to make a map of the movements of a family through time and space. So, doing some due diligence, I thought I would read a bit about data visualizations. I came across a paper, “How Deceptive are Deceptive Visualizations?”, and thought I would take a look to see what the authors found.

They start off the article by explaining how useful visualizations can be, but also how, with the wrong selection of colours, scaling, etc., the data can be misinterpreted.

In a way this question reminds me of the pictures you can find that show two images at once, such as the one below (which I talked about in an older blog post about an art exhibit I saw).

C. A. Gilbert’s “All Is Vanity” drawing

To see just how deceptive bad visualizations can be, the authors (in connection with an NYU lab class) tested a series of well-known graphical distortions on participants. Below are some examples:

Examples of distorted charts: an inverted axis and a truncated axis.

For the study, half of the participants received a deceptive chart, and the other half a control one. Both groups were asked the same questions, which were essentially designed to measure the difference between the two (e.g. How much better are the drinking water conditions in Willowtown as compared to Silvatown?). What the tests showed was that the deceptive chart led participants to answer the questions with larger estimates.

So, how can I transfer some of these ideas into my own final project? It made me think about the options that will be available to me when I create my map. I know that CartoDB features different visualization options, including changing the shapes of markers, animating the data, and changing the basemap. Below are some screenshots of the map options.

While absolutely none of the data has changed in any of these maps, at first glance they do appear to be very different.

Variety can be a great thing, but evidently if we don’t think about how the data will be used and who will be using it, we can run into some problems.

Seeing your words: Using Voyant on my MRE script

by emilykkeyes (03-08-2016)

In this post, I’ll be talking a bit about my graduate research, and about the tool Voyant.

As part of my graduate research  at Carleton University I wrote a script based on a shooting outside of Ottawa, in the township of Goulbourn. One evening in August 1882, Robert McCaffrey was confronted by his lover, Maria Spearman, and her brother, on the side of the road. Maria was reportedly in the “family way”, and the two had sought out McCaffrey in order to arrange a marriage. When McCaffrey refused, a struggle occurred, which resulted in a gun being fired, and McCaffrey’s death. The headline “Murder! Shot Through the Heart” was splashed in newspapers as far away as Washington. Maria and her brother were arrested for murder, and taken to the Carleton County Gaol (now the HI-Ottawa Jail Hostel) to await the upcoming fall assizes. The story was taken up by anonymous writers to discuss the current issues of the day, including women’s rights, and the inequality of the justice system. Public outrage over the death of Robert McCaffrey soon turned to sympathy, and Maria quickly became characterized as a helpless victim, who had no other course but to take matters into her own hands. In the end, although Maria admitted she had accidentally fired the gun, the jury found her not guilty, and she returned to Goulbourn following her release.

The script I’ve written tells this story, using historical records as the skeleton of the piece. I created dialogue by combining verbatim excerpts from primary sources and then using my imagination to fill in the remaining gaps. The script also features characters based on individuals who were involved in the creation of the script, including myself, who work to highlight the complexity of creating and performing the past. They are also a reflection on the evolution of the script, and on my journey through my research.

This script was performed by The Cellar Door Project at the end of February 2016. Now, I’m in the phase of my research that involves writing a reflection on this project.

A few weeks ago I was introduced to a tool in my Digital History course called Voyant. Voyant is a web-based tool that searches through a text you’ve uploaded and provides information on the words that frequently appear in it. For example, it can provide you with a graph that shows the trend of a particular word across multiple texts.

Someone in my class had mentioned they had tried Voyant on their thesis/MRE paper, to see what kinds of trends they could spot in their word use. I decided to give this a try on my own work, specifically with the script.


So, there are a total of 5,315 words. The most frequently used words are “the” (182), “you” (162), “I” (148), etc.

If you click on the small cog wheel, you have the ability to edit out these kinds of words. Select English (Taporware) from the dropdown list, and then click “OK.”


Pretty neat!


Looking at the new cirrus (the word bubble), there are a couple of words that I expected to see: “Emily”, “Maria”, “Chester”, “Robert”. These are all major characters throughout the script. I can even generate graphs that show trends in specific words. (Sorry, I went a little bananas!)

By clicking on a specific word in the cirrus, I can see how many times it appears. “Maria” appears just 5 more times than “Emily.”

Seeing “Maria” just barely scrape ahead of “Emily” prompts a wave of guilt. Seeing them side by side hits on the issue of authorial presence I’ve been struggling with in my research.

“Emily” (surprise surprise) is based on me. The idea to insert myself into the script came in a roundabout way; I had been speaking with a friend of mine, and while discussing my research I remembered that he had been the one who had taken me out to the site of McCaffrey’s death when I was doing my early research. I reminded him of the trip, and to my surprise he remembered it right away, even referencing the music we had been listening to on our drive out. I was struck by this. While I have shared my research (and the story of the shooting) with many, I hadn’t realized until that moment that my audience was listening, or that they might take part of the story away with them.

I decided to write a scene for the script based on this conversation, which eventually became the last scene in the script. I felt that this would help me navigate some of my feelings on the subject, and be cathartic.  When I discussed the conversation I had had with members of The Cellar Door Project production team I was encouraged to cultivate this more. They urged me to consider adding myself as a primary character to the script. Immediately my guard was up.  I didn’t want to include myself in the script, after all, this was supposed to be a play about Maria Spearman.

Greg Dening, in his book Performances, articulates my reluctance. He explains that most historians find authorial presence disturbing. Furthermore, he explains that the use of the subjective “I” is seen as “complicated and untrustworthy.”[1] Historians find authorial presence disturbing because we have been instructed that our writing should be objective. Bruno Ramirez, in his work, explains that the application of a structured rationality is inherent to the discipline of history, and is perceived as necessary for the attainment of historical truth.[2] Logically, historians know objectivity is impractical, as well as unobtainable. However, a small part of us still clings to the illusion.

Seeing “Maria” and “Emily” side by side like this is a tangible manifestation of all of this. Whether I like it or not, the script I’ve written is just as much about me as it is about Maria. And, like everything I create, it comes from me.

It’s also pretty moving to see all my research filtered down into a colourful blob of words. After all, the script is just a stringing together of words. It makes me think more about the words I’ve chosen to use, and what they reflect about me.

Here are some of the word trends.

[1] Greg Dening, Performances (Chicago: The University of Chicago Press, 1996), 111.

[2] Bruno Ramirez, “Clio in Words and in Motion: Practices of Narrating the Past,” The Journal of American History 86, no. 3 (1999): 998.


 

Flipping the Stage: show me more ugly

by emilykkeyes (12-02-2016)

Earlier this week, Dr. Shawn Graham forwarded me this website and project, knowing about my interest in theatre. This post is going to talk about that project, and I’ll be wearing both my history/theatre and digital history hats!

The St. Lawrence Performing Arts Program is undertaking something that, to my knowledge, is pretty unique in the performing arts world. They’re using social media platforms and tools, like Instagram, Tumblr, and Snapchat, to give the public and audience members a behind-the-scenes look at the making of “The Ash Girl”, a play by Timberlake Wertenbaker. They’re calling it “Flipping the Stage.” Here’s an explanation of what they’re doing:

Typically a theatre production is experienced for a narrowly prescribed moment—the 2-2.5 hours of performance. “Flipping the Stage” looks to lengthen and broaden the theatrical experience for the students involved in the production as well as the broader PCA department and SLU campus population by offering in-depth exposure to the production via the cast, crew, and production team.

The program was developed through the Digital Initiatives Faculty Fellowship Program. Through Flipping the Stage, they showcase the production work that goes into a show. They’ve broken down their posts into different themes/categories, including “Mystery Monday”, “Technical Tuesday”, “Wisdom Wednesday”, “Funday Friday”, and “Selfie Sunday.”

The photos and snapshots show pictures of the cast picking apples together, costume mock-ups, pre-show rehearsals, script readings, inspiration, game nights, etc. Here’s a snapshot of some of their posts (you can see much more through their website).

This is a pretty brave step. Showcasing what goes on from script to stage is never easy. This kind of transparency excites me, as it’s something that I think a lot of historians are calling for in the field (and are reluctant to do). Talking about these decisions and negotiations is something I tried to do with my MRE script, and in the reflection I’ll be writing.

What the St. Lawrence Performing Arts Program is doing deserves applause, but there are some issues with it as well. I’m confident that someone involved in the Flipping the Stage program has A) already thought of these issues, and B) will likely write or present about them at some point. But since I haven’t seen anything as of yet, I’m going to throw in my two cents, coming from a historical/theatre perspective and drawing on some of the things I’ve learned so far in my Digital History class.

The biggest issue is that they’re buying into the idea that by using things like Twitter, Instagram, and Snapchat, the public is getting a “real look” at what is going on in production. Whether we like to admit it or not, whenever we post something on the Web, we’re performing.  Think about how many selfies you’ve taken when you have a huge pimple on your chin…None? Yeah, not surprising. Knowing that the images we’re taking are going to be seen by others causes us to automatically filter ourselves. We all want to look like we’re living happy, fabulous, and fulfilling lives.

Not surprisingly then, the majority of the Flipping the Stage content shows a happy, fun cast. While I am in no way saying that these people weren’t happy when these photos were taken, I can’t ignore the fact that there aren’t too many photos/videos that show a stressed-out director, dealings with a difficult cast member, budget issues, etc. This is because we don’t often have the knee-jerk response to whip out a camera when people are fighting and things are going wrong. However, those are the realities of theatrical production. Nothing ever goes 100% according to plan; there are always hiccups.

I’d argue then that in carrying Flipping the Stage forward (which the St. Lawrence Performing Arts Program should 100% do, because again it is an amazing initiative), they should strive to catch more of these kinds of interactions. Leave the camera running during rehearsal; capture those creative differences. Because it’s those things that will be interesting for the public: the conversations that happen around blocking, readings, and character development, which show all the minute decisions that go into a production. All the little negotiations that shape how an audience will learn a story.

Congratulations to the cast/crew of The Ash Girl, and to St. Lawrence Performing Arts Program for being bold and brave! Keep pushing forward, and keep up the good work!

 

I can do that: Women and computer science

by emilykkeyes (03-23-2016)


This past weekend, I caught part of the documentary “CODE: Debugging the Gender Gap”, which was being aired on CBC. CODE explores why women are a minority in software engineering and computer programming. The documentary features women employed at some of the top tech companies, including Pinterest, Twitter, Apple, Facebook, Reddit, etc. It puts a harsh light on the blatant discrimination that goes on in the industry, and on the internalized idea that women don’t do science. By interviewing women in top positions in the field, it shows that this isn’t true: that women are just as good at programming and engineering.

They also discuss how, in the history of programming and software engineering, women held key roles in its advancement. For instance, Grace Hopper was a computer scientist and rear admiral in the US Navy. Not only did she work as a programmer on the Harvard Mark I (an early “proto computer” built by IBM in 1944), she was the only woman on the team. She’s credited with inventing the first compiler for a computer programming language. The US Navy even named a ship, the USS Hopper, after her.

Clearly an impressive woman!

CODE made me think about my own experience with programming and engineering. My dad is an electrical engineer, and works at a company that builds solar chips and panels. My mom is the head of a computer support unit. As a kid, they enrolled me in Virtual Ventures, a summer camp run by Carleton University that taught me how to use basic HTML to build colourful websites. In high school, they signed me up for a year-long set of courses that encouraged me to create photo essays using Microsoft PowerPoint, and to use HTML coding in place of traditional essays.

Looking at my history, I can see there was a strong backing for me to do something computer-related. But (and this is not a criticism of my parents in any way), I never really felt that a career in computing was an option. The sign never lit up in my head telling me “I can do this.”

Now, maybe this is because I was just “average” when it came to computing. Maybe if I had excelled, I would have been encouraged. Or maybe it’s because I didn’t really like the camp, or the course (I can’t really remember anymore). But even if I was just average or uninterested, I don’t remember anyone else ever coming into my classes, or pulling me aside, to talk about careers in programming.

If I start to think of this as connected to gender, I see some patterns. First, there’s the running joke in my family that my dad always wanted an engineer, and that my two male cousins (both of whom did engineering at Queen’s) fulfilled this desire. Knowing my family, I suspect that if one of my female cousins had become an engineer, he’d still be making the joke, considering I have an uncle who complains that I never became a basketball player, despite having two nephews and a niece who play competitively (one of whom is 6’7″).

But in light of CODE, I feel a bit more suspicious about this. If I had been a boy, would I have been pushed more?

A clearer example of this gender bias comes again from my family. Even though my mom has more computer knowledge than my dad, both her family and his family always turn to him for computing advice… after which (once he’s hung up the phone), he always asks her. They never call and ask for her when there is an issue. This makes her pretty crazy.

All of this is to say that I think there is a lot of truth to what CODE has to say. We (and I’m including myself in this one) have internalized the notion that only men do science. Which is pretty ludicrous, considering there are a ton of women in my life who are interested in (and good at) programming and engineering.

And I should include myself in that category! I am by no means an expert, but I shouldn’t downplay what I can do. I do all the tech support at the company I work for. While this mostly revolves around email and account setup, I can still do it. I still have some of the skills I picked up from my high school course, including HTML coding. And in my Digital History class alone I’ve learned so many great tools that I can use going forward. It’s been difficult, but eventually I got there. And I’m pretty certain that this won’t stop when the class ends.

Anyways, all in all, CODE is well worth the watch!

And I think I’m going to suggest a screening at Carleton. Maybe it’s worth seeing if some of the other women in my class might be interested in holding a working group once a month where we try out different tutorials/programs…

Learn more about CODE here.

Why am I taking this class: What I learned from this class

by emilykkeyes (04-02-2016)

This post is going to take a look at the first blog post I did for this class. I thought it would be useful to look back and reflect on some of the things I learned, the challenges, etcetera.

I’m interested in taking this class because I’m hoping to learn more about Digital History. While this seems like a generic answer, I’m interested in the issues that are being discussed. On our first day Dr. Graham raised a good point, which was that historians often get so excited about the digital that they forget to look at it like any other source.

This lesson has definitely stayed with me throughout the course. We’ve worked with a lot of interesting and innovative tools, and I think with the majority of them there was an excitement over the things they could do, and how it could impact our research. It was easy to get caught up in this excitement, and forget that each fulfills a specific purpose, one that is never objective or impartial.

This doesn’t necessarily mean that we should ignore them. Instead, the products of these tools (whether a word jumble or a graph) need to be evaluated the same way we would evaluate a primary source document, considering the context, the objectives of the creators, the function, etcetera.

Not only am I guilty of this, I can see parallels in my own research on performance history. Often an audience forgets that costumes, lighting, casting, etc. are done deliberately, designed to present the events in a particular way, whether to make them more dramatic, relatable, and so on. A helpful example is the most recent adaptation of Macbeth. The three witches (classic and memorable characters in the story), rather than looking like this:


Macbeth meets the three witches; scene from Shakespeare’s ‘Macbeth’. Wood engraving, 19th century, after William Shakespeare. Credit: Wellcome Library, London (Wellcome Images, images@wellcome.ac.uk, http://wellcomeimages.org). Available under Creative Commons Attribution licence CC BY 4.0, http://creativecommons.org/licenses/by/4.0/

look more like this:

This isn’t because the director thought himself better than Shakespeare, but because creepy children provide the contemporary audience with the same cringe factor that witches provided Shakespeare’s audience.

I’m also interested in learning more about Digital History for future career opportunities. Just from going over the syllabus, I can see many ways in which the skills I’ll be learning could help my work as a historical researcher. The most obvious is a better understanding of SNA (social network analysis), something we use a lot in our genealogy work.

This has also proven true. Consistently over the course I’ve been sending emails to my employer telling them about some of the tools I’m learning about (wget, Palladio, SNA programs, Voyant) and how they could be used for our research. One of these actually got put to use on a project we are working on, as a direct result of my email. This was wget, which I talked about in this post, and which I had to talk about as part of my seminar leadership in the class.

Part of me still slips into thinking that this is a “wonder tool”, forgetting again that there are implications to its use. For our project it had positive implications, including that we were able to access and download primary source documents really quickly. However, we did encounter problems with it, such as realizing that the images were compressed (meaning they were pixelated and blurry), and we had to stop and use some creative group problem solving.

This creative group problem solving is probably the second most important thing I’ve picked up from the course. There were a lot of instances (especially at the beginning) when I was really frustrated. In some cases this was because I hadn’t taken the time to read a tutorial properly, being too accustomed to speed reading, and in other cases it was because a tutorial may have been written using overly complex language and relied on previous knowledge (which I talked about here). This frustration at times was very isolating. While in the real world group work is a necessity, we don’t often do this in university. Until I started reaching out to my classmates (who to my surprise were often struggling with the same things I was), I felt very isolated. The old saying that two heads are better than one held true in these cases. And even if we weren’t able to figure it out, there was a comfort in knowing that it wasn’t “just me.”

Sitting down to think about my experience with Digital History, I actually have a bit more experience than I gave myself credit for (not that this is a lot). My lovely and forward-thinking parents enrolled me in Virtual Ventures (a summer camp run by the Faculty of Engineering and Design at Carleton University), and I remember learning how to create my own website using HTML (complete with garish colours and images). It seems I couldn’t escape HTML, as I later had to deal with it in high school when I was enrolled in a special course where we created things like photographic essays using PowerPoint, built websites using HTML, etc.

It surprises me, looking back on this last paragraph, how much my writing came full circle. I started off giving myself some credit for the experience I did have, and ended up writing this post towards the end on a similar subject. I think somewhere in the middle I lost a lot of this confidence, especially when I felt frustrated, but it makes me feel sort of proud that the Emily of 12 weeks ago believed in herself.

Other than those two examples, my experience is relatively small. I am the go-to “tech” person at my office, which generally means I am in charge of fixing the printer and setting up laptops. I also manage the company’s online presence, including running the Twitter, Facebook, and Instagram accounts.

I’d like to come away from the course having a better understanding of the nitty-gritty behind Digital History, and how these tools and programs can help my research. I can tell already that this can be done, and that the more I put in the more I will get out.

Coming away from the course, I think I accomplished most of these things. I can’t say I always understood how the tools I was learning might help my research, but when I did I was really excited. And while I can’t say that I have a full understanding of the nitty-gritty of Digital History either, I think I have at least an ankle-deep understanding. I understand that the field is large, but it isn’t as complex as I had feared. There are a lot of assumptions that circulate about Digital History, and within the field, especially around things like gender. Rather than looking at “digital” as something scary and unknowable, I see it more as a toolbox.
