On the Subject of Digital Literacy

August 7, 2013 § 1 Comment

Hello, folks.

Lately I’ve witnessed an onslaught of irate professors coming into the office asking for help with Canvas, a new utility that has replaced Blackboard on our campus. I’m not dealing with these professors directly; that’s the job of the students in the lab I happen to work in. Our jobs are separate, but we share the same office. The initial idea was to create a more Web 2.0-friendly interface and wipe out the oft-complained-about Blackboard. The result: professors lamenting their oft-complained-about-yet-we-retrospectively-loved-it Blackboard and hating Canvas. I heard about all this second-hand through the desktop support people who, as I mentioned, share our office.

What worries me more than all of the above is how professors appear to understand and refer to digital tools, and this seems to spread to their students as well. They reach for special modifiers and nouns to describe things as simple as a laptop or a computer. Data in Canvas is stored in a shared cloud, and professors love to invent their own terminology and metaphors for it. In a philosophy or literature classroom I think this would be interesting and thought-provoking, but in a tech-support context it can be frustrating. It frustrates me just to hear, through a cubicle wall, a professor spend fifteen minutes explaining how he or she can’t wrap their brain around saving data.

Well, so what? Professors shouldn’t be expected to wrap their heads around anything new; they shouldn’t have to adapt. Except that adapting is exactly what we all do in any job in the 21st century (or did back in the late 20th century, for that matter). Internet technology has made a staggering impact on our society and on the way we work, live, and communicate. When an archivist gave a keynote at the North Carolina Librarians’ Association back in 2007 about how “this Internet fad” might go away (paraphrasing, but seriously, that’s what the dude was getting at), I couldn’t believe it. The Internet is here, people, and it’s here to stay. Some humanists seem to disregard that even today and shun anything new, particularly if it stems from the computer software world. Adapting to the Internet way of things isn’t a choice anymore; in many ways it’s a necessity. Just as cars, telephones, and the mail system have changed our overall structure, so has the Internet.

Schools need to implement courses that counteract this ignorance. By the end of high school I think it is reasonable for students to have been exposed to:

  1. How the Internet works (servers, client-side vs. server-side, IP addresses, network nodes)
  2. How e-mail works
  3. Internet security
  4. Cloud data
  5. Basic HTML
  6. Basic computer terminology (what a CPU is, what drivers are, etc.)
  7. Basic computer programming (e.g. assembly, JavaScript)

What I’m getting at is that we’re withholding a bulk of information from our progeny. An 18-year-old, for example, should know that posting images of themselves online makes that data completely and totally public, and should know exactly why that is. Someone entering the workforce shouldn’t be a blank slate when it comes to how a website is structured or the basics of how it works. Furthermore, everyone should be able to discuss a technical issue with a computer intelligently and without vagueness. In driver’s ed we were told how an engine works and what to say to a mechanic; why can’t the same be true for the Internet and computers, two things everyone is bound to come into contact with if they enter the workforce? Internet and computer technology isn’t relegated to “nerds” or enthusiasts anymore; everyone has to deal with it at some time or another. Such an education would make communication with IT staff far more efficient and give the next generation a huge leg up.

But that’s me! What do you think?

Drupal vs. WordPress

April 5, 2013 § 3 Comments

At my new job, I’m working 100% (sometimes it feels like 200%) with Drupal. My overall first impression of Drupal is that it’s a huge, complex system. One opinion I’ve run into (ref: taken from this site) is that the real divide is between enterprise systems and light-weight, user-friendly systems. I hear this a lot and wonder what it means in practice; what are the pros and cons?

Some ideas off the top of my head:

Enterprise

  • Pro: You can do practically anything. If there’s something it doesn’t let you do, you can write a module for that application or functionality (see the sketch after this list).
  • Con: You can do practically anything. A system built to address multiple, enterprise-level needs tends to be huge and bulky. In Drupal’s case it’s a severe memory hog: every time I set it up I am REQUIRED to manually edit my local machine’s php.ini and my.cnf just to keep the beast tame.
  • Pro: Security. For every item, there are security features to protect that item. In Drupal’s case, you can build a Unix-like hierarchy of users with distinct permissions, right down to which node(s) in the datastore they can view.
  • Pro: It’s so huge, you tend to respect it more. When people buy enterprise systems or get trained in them, they tend to respect either (a) the price or (b) the complexity. With Drupal, most of my clients at work are either afraid of what’s happening or confused; either way, they listen carefully when I help them, and only the bravest coders say “Hey, let me make a module for that!”, which cuts down on code creep.
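
Since “you can do practically anything” mostly means “write a module,” here’s a minimal sketch of what that looks like, assuming Drupal 7; the module name and page path are hypothetical, but the hooks are the standard ones:

```php
<?php
// hello_glue.module: a hypothetical, minimal Drupal 7 module sketch.
// (Assumes a matching hello_glue.info file declaring name, description, core = 7.x.)

/**
 * Implements hook_menu() to register a new page at /hello.
 */
function hello_glue_menu() {
  $items = array();
  $items['hello'] = array(
    'title' => 'Hello',
    'page callback' => 'hello_glue_page',
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
  );
  return $items;
}

/**
 * Page callback: the content Drupal renders at /hello.
 */
function hello_glue_page() {
  return t('Hello from a tiny custom module.');
}
```

Two short functions and you’ve bolted a whole new page onto the system, which is exactly why “let me make a module for that!” is so tempting.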

Light-weight

  • Pro: It works out of the box. WordPress, at least, does.
  • Pro: People are less intimidated by the system. Light-weight systems tend to be more minimalist, with fewer menus and buttons, so the people I’ve worked with in WordPress tend to “get it” sooner and can work on their own faster.
  • Con: Expandability. Sometimes with WordPress I find I have to write a new function in PHP or go hunting for a plugin (see the sketch after this list). These tend to have minimal consequences and I need very few of them, but they do build up over time.
  • Con: Security. This tends to be limited: you can create several users with a handful of permission levels, but there’s none of the “User X cannot see Node Y” granularity.
  • Pro: Community. This may be unique to WordPress, but they seem to have built a very user-friendly, open community. Every time I visit the Codex (codex.wordpress.org) or read blog posts about WordPress, they tend to be clear-cut and easy to read. Stack Overflow also has more WordPress questions than Drupal ones.
    • As a side note, Drupal’s forums are a mess (drupal.org/forum). When I Google any error or issue I have with Drupal, their forum links pop up at the top of the results. I complain because they often contain good advice, but it’s buried under mounds of comments. I wish they’d adopt a Stack Overflow-style design that bubbles the most relevant answers to the top of the pile. Rant over.
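
About those one-off functions from the Expandability item: here’s a minimal sketch of the sort of micro-plugin I end up writing. The plugin name, function name, and footer text are made up; add_filter() and the the_content filter are stock WordPress.

```php
<?php
/**
 * Plugin Name: Post Footer Note (hypothetical example)
 * Description: Appends a short note to single posts; the kind of tiny one-off I end up writing.
 */

// Hook into 'the_content' and append a note, but only on single-post pages.
function pfn_append_footer_note( $content ) {
    if ( is_single() ) {
        $content .= '<p><em>Thanks for reading!</em></p>';
    }
    return $content;
}
add_filter( 'the_content', 'pfn_append_footer_note' );
```

Each one of these is harmless on its own; the build-up I mentioned is when a site quietly accumulates a few dozen of them.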

Well, that’s about it for now; everything else gets too technical. In a nutshell (I say that phrase way too often), I prefer WordPress over Drupal. It just makes life easier when you don’t have to finagle every single node and view in order to create a site. Plus, WordPress looks better out of the box, probably thanks to the large number of designers working in the community. Drupal I reserve for anything with a LOT of requirements; it’s perfect for government and universities, since they tend to have many security needs and constraints.

…And we’re back!

August 22, 2012 § Leave a comment

Hello everyone:

I’m back online again after some technical hiccups over at Amazon AWS. Well, not actual hurdles of any kind; it was more that they wanted to charge me for services, and I’d rather go with a free, no-fuss solution here at WordPress.com. A story for another time, maybe.

It’s been too long since I’ve posted. In the time between my last post and now, I accepted a new position over at the College of Arts and Humanities in College Park. I’m a Web Support Specialist (fancy!) and work with professors, grad students, and the administrative staff here on their web projects. It’s a good opportunity for me to engage more with the web-using public on campus, and I’m looking forward to a good year of it.

This blog will hopefully soon be full of me regaling you with work projects, side projects (building an Android/iPhone app soon!), and other paraphernalia. Stay tuned!

Cheers,
Grant

Post on MITH Blog about Interedition

April 11, 2012 § Leave a comment

There’s a post from me on the MITH blog about my experience at the Interedition Symposium (March 19–20, 2012). A lot of what I have to say there goes into the reasoning behind hackathons from the perspective of a career, non-tenure-track coder. I should also say that a lot of my words paraphrase or borrow from the inspiring speech Doug Reside gave at the Symposium.

Here’s a link to my post: mith.umd.edu/looking-back-and-looking-ahead-interedition-symposium-2012/

Interedition Symposium: Tools for Digital Scholarship

March 25, 2012 § Leave a comment

“Here’s a depository of data … what can interoperable tools add to this … what new questions can arise? [sic]”

– H.A.G. Houghton, Institute for Textual Scholarship and Electronic Editing (ITSEE)

One of the oft-repeated arguments in Digital Humanities is that there is a lot of data out there and not enough tools to disseminate it. The counter-argument has been that coders don’t do enough dissemination of their own work: we don’t promote what we do, our documentation is often a joke, and what we make doesn’t connect to the data types or formats that scholars currently use. This rings true for me: on the TILE project, there were numerous snags in the aforementioned areas alone.

Interedition serves as a gathering of like minds to create tools for disseminating data. From Monday, March 19th, to midday Tuesday, March 20th, the administrative team of Interedition hosted a Symposium at their headquarters, the Huygens ING in Den Haag, the Netherlands. Speakers represented a wide range of textual scholarly interests and development projects. The program for the event and the speaker list can be found here (as of this writing, there are promises to put up PDFs of the speaker slides, so stay tuned if they are not already there): http://www.interedition.eu/?page_id=212.

Text repositories presented by the speakers ranged from the usual suspects (HathiTrust) to the more unusual, such as ancient Greek witnesses of the Bible. The University of Goettingen presented ideas very similar to those of the Bamboo project, hoping to create tool workflows for crunching the large amounts of data already sitting in the cloud. Some bold ideas and calls to action came from Doug Reside on how agile development camps such as THATCamp, Interedition, and the XML Barnraising project he organized can produce better experimental results than the more regimented “waterfall” style of project management. Comparisons with larger initiatives such as Bamboo were made, with Doug arguing for the smaller groups working together to experiment while the larger groups take those results and initiate code releases. Clearly, the arguments at the Symposium were in favor of diversity, interaction, and experimentation.

Our group, consisting of Moritz Wissenbach (Faust Edition, University of Wuerzburg), Marco Petris (CATMA, University of Hamburg), and me, presented our work on developing services and interfaces for annotation (slides here). Part of our discussion was about the OAC Beta specifications, which people politely ignored in favor of Moritz’s amazing browser-integrated annotation engine. I did, however, find a fellow convert to the OAC ideals in Dirk Roorda from DANS, who was interested in developing a method to connect stored queries as Linked Data.

The final day saw a presentation from Joris van Zundert, sometimes called master of slides, always known as the head of Interedition. It was a 30-minute call for the continuation of Interedition and of any and all efforts that share its bootcamp methodology, and for institutions to open up to developers experimenting rather than being assigned rote tasks. As usual this spoke to me and my colleagues wholly, but this time it had a sad down-note: funding for Interedition’s COST action is running dry, and Joris is looking for new sources to replenish the project. This was a huge topic on the following day, when the regular bootcampers stayed on after the large Symposium crowd had left. While the day was planned for informal coding, most of it was spent somberly discussing how to continue our regular meet-ups. One positive outcome was a pile of good ideas we will try to act on in order to persuade the scholarly community to support informal code camps. I even promised to join the effort to apply for a DFG-NEH start-up grant in order to continue the annotation services and interfaces our team worked so hard on.

Yet that great cloud of corpora, websites ripe for annotation, and stacks upon stacks of digital editions of classic works still sits there in the digital sphere, without any real way to access it interoperably or, in some cases, usefully. While efforts such as Corpora Space and DARIAH may offer solutions, I have a deep, positive feeling that code camps such as ours will play a vital role in piercing through the existing wall.

Lunch Note

February 14, 2012 § Leave a comment

Thought I’d breathe some fun into a workday lunch…

[Image: "Grant's: Don't Eat"]

GLAMCamp Day 3

February 13, 2012 § Leave a comment

Today was the wrap-up for the individual group projects, as well as farewells to the GLAMCamp participants. Three days now seems a lot shorter than it did when we started early Friday morning. Regardless of the time crunch, we were able to produce quality work and ideas.

What is both exhilarating and a little overwhelming about this conference is how many interesting people are here. For example, I was able to hear a bit of a presentation on starting a Wiki Loves Monuments movement in the US by Kaldari, a Wikimedia employee from the Bay Area. Kaldari spoke with me later and gave me more details. Wiki Loves Monuments has been a long-standing contest in the European Union to document where historic landmarks and scenes are located (http://www.wikilovesmonuments.eu/). Kaldari’s idea is to take the US historic register and, on a single day, have as many people in the US as possible upload wiki pages on the listed historic sites. Participants will work from a master list with sites broken up by state and county. The result would be a large Wikimedia Commons data store for historic sites in the United States. So, something exciting to look forward to.

Asaf, Danny B., and I continued slogging through our code. We were able to document our work within the Wiki Etherpad system, so I’ll just copy what we have about the features here:

  • Given OPAC pages, scrape data (from MARC) and generate a Wikipedia-style citation
  • Support Firefox and Chrome (via light-weight HTML+JavaScript browser extensions)
  • Configurable and updatable without requiring frequent software updates
  • Major moving parts:
      • Logic:
          • The user is at an item page of a supported OPAC system.
          • The user clicks the Wikicite (or whatever) button.
          • The extension calls Sitos to look up the current domain, checking whether it’s supported and, if so, how to scrape data from it. (If not supported, notify the user and stop.)
          • Using the list of element keys to scrape, retrieved from Sitos, the extension fetches the MARC tags from the OPAC (via an AJAX request), scrapes them for the given keys, and populates a hash with Wikipedia-meaningful field names as keys and values taken from the MARC tags. Optionally, some minor manipulations (see Platform support below) are applied to scraped values to populate the hash with partial values or to split one MARC value into more than one.
          • From the populated hash, the extension generates a Wikipedia citation in the desired language by retrieving a template string from Sitos and making a series of substitutions from the populated hash (see the sketch after this list).
      • Platform support: given OPAC system X, identify fields N1..Nn, including small manipulations from a stock of known manipulations (e.g. split a value before/after the first comma).
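
About that last substitution step in the Logic list: it’s simpler than it sounds. The extension itself is HTML+JavaScript, but the idea boils down to something like the sketch below, written in PHP since that’s the language I live in these days; the placeholder syntax, field names, and template string here are all hypothetical.

```php
<?php
// Hypothetical sketch of the citation-building step. The real extension is
// client-side JavaScript; this only illustrates the substitution logic.

function build_citation(array $fields, $template) {
  // $fields maps Wikipedia-meaningful names to values scraped from MARC tags.
  $replacements = array();
  foreach ($fields as $name => $value) {
    $replacements['{' . $name . '}'] = $value;
  }
  // strtr() swaps every placeholder for its scraped value in one pass.
  return strtr($template, $replacements);
}

// The kind of template string Sitos might hand back for English Wikipedia:
$template = '{{cite book |title={title} |last={author} |year={year}}}';

echo build_citation(
  array('title' => 'Moby-Dick', 'author' => 'Melville', 'year' => '1851'),
  $template
);
// Output: {{cite book |title=Moby-Dick |last=Melville |year=1851}}
```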

At about 3:30 pm, everyone reported back to the group on their progress, which has been dutifully documented here: http://etherpad.wikimedia.org/GLAMcampDC-Outcomes. Some highlights:

One group reported having worked on the idea of a Bulk Uploader for getting massive amounts of content into Wikimedia Commons. Bulk uploading has plenty of obvious use cases and just as many hurdles to get through, so while the group didn’t present software, they were able to present specifications, templates for data upload, and general comments and ideas on bulk uploading.

The GLAM US Portal site was reworked over the weekend to make it more accessible and informative: http://en.wikipedia.org/wiki/Wikipedia:GLAM/US

One section of the GLAM portal project created a one-page description and information sheet on their project and how others can get involved. All I can say is that it looked very snazzy and sharp, due to the participation of Wiki-master Andrew Lih.

A final and touching piece of the event was the farewell award ceremony. The conference organizers Pete, Sarah, and Lori awarded outstanding participants T-shirts and keychains. This group may be serious about editing and strategizing, but they are also serious about being warm and welcoming. I received an award myself, specifically for my “smiling face” in the tech room.

Most of us ended up lingering in the conference room after the bootcamp was officially over, which left me some time to catch up with people. Asaf and I agreed to continue working on the wiki citation service as a possible entry for Wikimania. Sarah Stierch and I talked about the DCC program and its Women’s Studies component, so hopefully she’ll be showing up at some of our Digital Dialogues in the future.

What a great conference. I loved the energy and passion of this group. Plus, I got to learn about some cool features of Wikimedia software I had no idea about, among them Wikimedia’s instance of Etherpad, the Wikidata initiative, and some behind-the-scenes knowledge of how Wikimedia Commons works. More importantly, I got to know the people behind the Wikimedia sites and am looking forward to seeing them again.