
Monday, 12 November 2012

Reflections on teaching Neatline

I've called this post 'Reflections on teaching Neatline' but I could also have called it 'when new digital humanists meet new software'. Or perhaps even 'growing pains in the digital humanities?'.

A few months ago, Anouk Lang at the University of Strathclyde asked me to lead a workshop on Neatline, software from the Scholars' Lab that plots 'archives, objects, and concepts in space and time'. It's a really exciting project, built especially for humanists - the interfaces and processes are designed to express complexity and nuance through handcrafted exhibits that link historical materials, maps and timelines.

The workshop was on Thursday, and judging from the evaluation forms, most people found it useful, but a few really struggled, and teaching it was also slightly tough going. I've been thinking a lot about the possible reasons for that, and I'm sharing them partly as a request for others to share their experiences in similar circumstances and partly in the hope that they'll be useful to someone else.

The basic outline of the workshop was an intros round (who I am, who they are and what they want to learn); information on what Neatline is and what it can do; time to explore Neatline and discover what the software can and can't do (e.g. login, follow the steps at neatline.org/plugins/neatline to create an item based on a series of correspondence Anouk had been working on, decide whether you want to transcribe or describe the letter, tweak its appearance or link it to other items); and a short period for reflection and discussion (e.g. 'What kinds of interpretive decisions did you find yourself making? What delighted you? What frustrated you?') to finish. If you're curious, you can follow along with my slides and notes or try out the Neatline sandbox site.

The first half was fine, but some people really struggled with the hands-on section. Some of that was to do with the software itself - as a workshop, it was a brilliant usability test of the admin interfaces for audiences outside the original set of users. Neatline was only launched in July this year and isn't even at version 2 yet, so it's entirely understandable that it appears to have a few functional or UX bugs. The documentation isn't integrated into the interface yet (and sometimes lacks information that is probably part of the shared tacit knowledge of people working on the project), but they have a very comprehensive page about working with Neatline items. Overall, the process of handcrafting timelines and maps for a Neatline exhibit is still closer to 'first, catch your rabbit' than making a batch of ready-mix cupcakes. Neatline is also designed for a particular view of the world, and as it's built on top of other software (Omeka) with another very particular view of the world (and hello, Dublin Core), the mental model underlying its content-creation processes is foreign to many of its potential users, including some at the workshop.

But it was also partly because I set the bar too high for the exercises and didn't provide enough structure for some of the group. If I'd designed it so they created a simple Neatline item by closely following detailed instructions (as I have done for other, more consciously tech-for-beginners workshops), at least everyone would have achieved a nice quick win and had something they could admire on the screen. From there, some could have tried customising the appearance of their items in small ways, and the more adventurous could have tried a few of the potential ways to present the sample correspondence they were working with to explore the effects of their digitisation decisions. An even more pragmatic but potentially divisive solution might have been to start with the background and demonstration as I did, but then do the hands-on activity with a smaller group of people who were up for exploring uncharted waters. On a purely practical level, I also should have uploaded the images of the letters used in the exercise to my own host so that they didn't have to faff with Dropbox and Omeka records to get an online version of the image to use in Neatline.

And finally, it was also because the group had really mixed ICT skills. Most were fine (bar the occasional bug), but some were not. It's always hard teaching technical subjects when participants have varying levels of skill and aptitude, but when does it go beyond aptitude and into attitude - into how you respond to being pushed out of your comfort zone? I'd warned everyone at the start that it was new software, but if you've never used beta software before, I guess you don't have the context for understanding what that actually means.

I should make it clear here that I think the participants' achievements outshine any shortcomings - Neatline is a great tool for people working with messy humanities data who want to go beyond plonking markers on Google Maps; I think everyone got that, and most people enjoyed the chance to play with it.

But more generally, I also wonder if it has to do with changing demographics in the digital humanities - increasingly, not everyone interested in DH is an early, or even a late adopter, and someone interested in DH for the funding possibilities and cool factor might not naturally enjoy unstructured exploration of new software, or be intrigued by trying out different combinations of content and functionality just 'to see what happens'.

Practically, more information for people thinking of attending would help in future - something like 'if you know x already, you'll be fine; if you know y already, you'll be bored'. Describing an event as 'if you like trying new software, this is for you' would probably help, but it looks like the digital humanities might also now be attracting people who don't particularly like working things out as they go along - are they to be excluded? If using software like this is the onboarding experience for people new to the digital humanities, they're not getting the best first impression, but how do you balance the need for fast-moving innovative work-in-progress to be a bit hacky and untidy around the edges with the desires of a wider group of digital humanities-curious scholars? Is it ok to say 'here be dragons, enter at your own risk'?

Monday, 5 November 2012

The ever-morphing PhD

I wrote this for the NEH/Polis Summer Institute on deep mapping back in June but I'm repurposing it as a quick PhD update as I review my call for interview participants. I'm in the middle of interviews at the moment (and if you're an academic historian working on British history 1600-1900 who might be willing to be interviewed I'd love to hear from you) and after that I'll no doubt be taking stock of the research landscape, the findings from my interviews and project analyses, and updating the shape of my project as we go into the new year. So it doesn't quite reflect where I'm at now, but at the very least it's an insight into the difficulties of research into digital history methodologies when everything is changing so quickly:

"Originally I was going to build a tool to support something like crowdsourced deep mapping through a web application that would let people store and geolocate documents and images they were digitising. The questions that are particularly relevant for this workshop are: what happens when crowdsourcing or citizen history meet deep mapping? Can a deep map created by multiple people for their own research purposes support scholarly work? Can a synthetic, ad hoc collection of information be used to support an argument or would it be just for the discovery of spatio-temporarily relevant material? How would a spatial narrative layer work?

I planned to test this by mapping the lives and intellectual networks of early scientific women. But after conducting a big review of related projects I eventually realised that there's too much similar work going on in the field, and that inevitably something comparable would have been created by someone with more resources by the time I was writing up. So I had to rethink my question and my methods.

So now my PhD research seeks to answer 'how do academic and family/local historians evaluate, use and contribute to crowdsourced resources, especially geo-located historical materials?', with the goal of providing some insight into the impact of digitality on research practices and scholarship in the humanities. ... How do trained and self-taught historians cope with changes in place names and boundaries over time, and the many variations and similarities in place names? Does it matter if you've never been to the place and don't know that it might be that messy and complex?

I'm interested in how living in a digital culture affects how researchers work. What does it mean to generate as well as consume digital data in the course of research? How does user-created content affect questions of authorship, authority and trust for amateur historians and scholarly practice? What are the characteristics of a well-designed digital resource, and how can resources and tools for researchers be improved? It's a very Human-Computer Interaction/Informatics view of the digital humanities, but it addresses the issues around discoverability and usability that are so important for people building projects.

I'm currently interviewing academic, family and local historians, focusing on those working on research on people or places in early modern England - very loosely defined, as I'll go 1600-1900. I'm asking them about the tools they currently use in their research; how they assess new resources; if or when they might use a resource created through crowdsourcing or user contributions (e.g. Wikipedia or ancestry.com); how they work out which online records to trust; and how they use place names or geographic locations in their research.

So far I've mostly analysed the interviews for how people think about crowdsourcing; I'll be focusing on the responses to place when I get back.

More generally, I'm interested in the idea of 'chorography 2.0' - what would it look like now? The abundance of information is as much of a problem as an opportunity: how to manage that?"

Tuesday, 26 June 2012

Halfway through 'deep maps and spatial narratives' summer institute

I'm a week and a bit into the NEH Institute for Advanced Topics in the Digital Humanities on 'Spatial Narrative and Deep Maps: Explorations in the Spatial Humanities', so this is a (possibly self-indulgent) post to explain why I'm over in Indianapolis and why I only seem to be tweeting with the #PolisNEH hashtag.  We're about to dive into three days of intense prototyping before wrapping things up on Friday, so I'm posting almost as a marker of my thoughts before the process of thinking-through-making makes me re-evaluate our earlier definitions.  Stuart Dunn has also blogged more usefully on Deep maps in Indy.

We spent the first week hearing from the co-directors David Bodenhamer (history, IUPUI), John Corrigan (religious studies, Florida State University), and Trevor Harris (geography, West Virginia University) and guest lecturers Ian Gregory (historical GIS and digital humanities, Lancaster University) and May Yuan (geonarratives, University of Oklahoma), and also from selected speakers at the Digital Cultural Mapping: Transformative Scholarship and Teaching in the Geospatial Humanities at UCLA. We also heard about the other participants' projects and backgrounds, and tried to define 'deep maps' and 'spatial narratives'.

It's been pointed out that as we're at the 'bleeding edge', visions for deep mapping are still highly personal. As we don't yet have a shared definition I don't want to misrepresent people's ideas by summarising them, so I'm just posting my current definition of deep maps:
A deep map contains geolocated information from multiple sources that convey their source, contingency and context of creation; it is both integrated and queryable through indexes of time and space.  
Essential characteristics: it can be a product, whether as a snapshot static map or as layers of interpretation with signposts and pre-set interactions and narrative, but is always visibly a process.  It allows open-ended exploration (within the limitations of the data available and the curation processes and research questions behind it) and supports serendipitous discovery of content. It supports curiosity. It supports arguments but allows them to be interrogated through the mapped content. It supports layers of spatial narratives but does not require them. It should be compatible with humanities work: it's citable (e.g. provides URL that shows view used to construct argument) and provides access to its sources, whether as data downloads or citations. It can include different map layers (e.g. historic maps) as well as different data sources. It could be topological as well as cartographic.  It must be usable at different scales:  e.g. in user interface  - when zoomed out provides sense of density of information within; e.g. as space - can deal with different levels of granularity.
Essential functions: it must be queryable and browseable.  It must support large, variable, complex, messy, fuzzy, multi-scalar data. It should be able to include entities such as real and imaginary people and events as well as places within spaces.  It should support both use for presentation of content and analytic use. It should be compelling - people should want to explore other places, times, relationships or sources. It should be intellectually immersive and support 'flow'.
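To make the 'queryable through indexes of time and space' idea a little more concrete, here's a minimal sketch in Python - the field names and structure are entirely my own, not a schema from any existing deep mapping tool - of what a single deep-map record and a simple space-and-time query over a collection of them might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DeepMapRecord:
    """One geolocated item in a deep map, carrying its own provenance."""
    title: str
    source: str                   # citation or URL for the original material
    contributor: str              # who digitised or added the record
    confidence: str               # e.g. 'located via an 1860 street directory'
    point: tuple                  # (latitude, longitude); could equally be a polygon
    date_range: tuple             # (start_year, end_year) rather than a single date
    layers: list = field(default_factory=list)  # e.g. ['1827 historic map', 'narrative: arches']
    notes: str = ''               # descriptive, contextual information

def records_in(records, bbox, start_year, end_year):
    """Filter records by a bounding box and a year range - the 'indexes of time and space'."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [r for r in records
            if min_lat <= r.point[0] <= max_lat
            and min_lon <= r.point[1] <= max_lon
            and r.date_range[0] <= end_year
            and r.date_range[1] >= start_year]
```

The only point of the sketch is that every record carries its provenance with it, and that space and time are both first-class query dimensions rather than afterthoughts.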

Looking at it now, the first part is probably pretty close to how I would have defined it at the start, but my thinking about what this actually means in terms of specifications is the result of the conversations over the past week and the experience everyone brings from their own research and projects.

For me, this Institute has been a chance to hang out with ace people with similar interests and different backgrounds - it might mean we spend some time trying to negotiate discipline-specific language but it also makes for a richer experience.  It's a chance to work with wonderfully messy humanities data, and to work out how digital tools and interfaces can support ambiguous, subjective, uncertain, imprecise, rich, experiential content alongside the highly structured data GIS systems are good at.  It's also a chance to test these ideas by putting them into practice with a dataset on religion in Indianapolis and learn more about deep maps by trying to build one (albeit in three days).

As part of thinking about what I think a deep map is, I found myself going back to an embarrassingly dated post on ideas for location-linked cultural heritage projects:
I've always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. ... The nice thing about local data is that there are lots of people making content; the not nice thing about local data is that it's scattered all over the web, in all kinds of formats with all kinds of 'trustability', from museums/libraries/archives, to local councils to local enthusiasts and the occasional raving lunatic. ... Location-linked data isn't only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren't 'notable' or 'historic' enough for recording officially, whether that's grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.
Amusingly, four years ago my obsession with 'open sourcing history' was apparently already well-developed and I was asking questions about authority and trust that eventually informed my PhD - questions I hope we can start to answer as we try to make a deep map.  Fun!

Finally, my thanks to the NEH and the Institute organisers and the support staff at the Polis Center and IUPUI for the opportunity to attend.

Sunday, 13 May 2012

'...and they all turn on their computers and say 'yay!'' (aka, 'mapping for humanists')

I'm spending a few hours of my Sunday experimenting with 'mapping for humanists' with an art historian friend, Hannah Williams (@_hannahwill).  We're going to have a go at solving some issues she has encountered when geo-coding addresses in 17th and 18th Century Paris, and we'll post as we go to record the process and hopefully share some useful reflections on what we found as we tried different tools.

We started by working out what issues we wanted to address. After some discussion we boiled it down to two basic goals: a) to geo-reference historical maps so they can be used to geo-locate addresses and b) to generate maps dynamically from a list of addresses. This also means dealing with copyright and licensing issues along the way and thinking about how geospatial tools might fit into the everyday working practices of a historian. (i.e. while a tool like Google Refine can easily generate maps, is it usable for people who are more comfortable with Word than relying on cloud-based services like Google Docs? And if copyright is a concern, is it as easy to put points on an OpenStreetMap as on a Google Map?)

Like many historians, Hannah's use of maps fell into two main areas: maps as illustrations, and maps as analytic tools.  Maps used for illustrations (e.g. in publications) are ideally copyright-free, or can at least be used as illustrative screenshots.  Interactivity is a lower priority for now as the dataset would be private until the scholarly publication is complete (owing to concerns about the lack of an established etiquette and format for citation and credit for online projects).

Maps used for analysis would ideally support layers of geo-referenced historic maps on top of modern map services, allowing historic addresses to be visually located via contemporaneous maps and geo-located via the link to the modern map.  Hannah has been experimenting with finding location data via old maps of Paris in Hypercities, but manually locating 18th Century streets on historic maps then matching those locations to modern maps is time-consuming and she suspects there are more efficient ways to map old addresses onto modern Paris.

Based on my research interviews with historians and my own experience as a programmer, I'd also like to help humanists generate maps directly from structured data (and ideally to store their data in user-friendly tools so that it's as easy to re-use as it is to create and edit).  I'm not sure if it's possible to do this from existing tools or whether they'd always need an export step, so one of my questions is whether there are easy ways to get records stored in something like Word or Excel into an online tool and create maps from there.  Some other issues historians face in using mapping include: imprecise locations (e.g. street names without house numbers); potential changes in street layouts between historic and modern maps; incomplete datasets; using markers to visually differentiate types of information on maps; and retaining descriptive location data and other contextual information.
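To give a rough sense of how small the gap between a spreadsheet and a web map can be, here's a hedged sketch in Python using pandas and folium. The file name and column names are invented, and it assumes the addresses have already been geocoded into latitude/longitude columns (which, for 17th and 18th century Paris, is of course the hard part):

```python
import pandas as pd
import folium

# Hypothetical spreadsheet: one row per address, already geocoded, with
# columns 'name', 'address', 'lat', 'lon' and a free-text 'notes' column.
records = pd.read_excel('paris_addresses.xlsx')

# folium draws on OpenStreetMap tiles by default and writes a plain HTML file,
# which helps with the copyright question above.
m = folium.Map(location=[48.8566, 2.3522], zoom_start=13)  # central Paris

for _, row in records.iterrows():
    folium.Marker(
        [row['lat'], row['lon']],
        popup=f"{row['name']}<br>{row['address']}<br>{row['notes']}",
    ).add_to(m)

m.save('paris_addresses.html')  # open in any browser, or screenshot for print
```

Imprecise locations - a street name without a house number - could be drawn with a folium.Circle of an appropriate radius rather than a point marker, which keeps the uncertainty visible on the map.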

Because the challenge is to help the average humanist, I've assumed we should stay away from software that needs to be installed on a server, so to start with we're trying some of the web-based geo-referencing tools listed at http://help.oldmapsonline.org/georeference.

Geo-referencing tools for non-technical people

The first bump in the road was finding maps that are re-usable in technical and licensing terms so that we could link or upload them to the web tools listed at http://help.oldmapsonline.org/georeference.  We've fudged it for now by using a screenshot to try out the tools, but it's not exactly a sustainable solution.  
Hannah's been trying georeferencer.org, Hypercities and Heurist (thanks to Lise Summers @morethangrass on twitter) and has written up her findings at Hacking Historical Maps... or trying to. Thanks also to Alex Butterworth @AlxButterworth and Joseph Reeves @iknowjoseph for suggestions during the day.

Yahoo! Mapmixer's page was a 404 - I couldn't find any reference to the service being closed, but I also couldn't find a current link for it.

Next I tried MetaCarta Labs' Map Rectifier. Any maps uploaded to this service are publicly visible, though the site says this does 'not grant a copyright license to other users' and that '[t]here is no expectation of privacy or protection of data', which may be a concern for academics negotiating the line between openness and protecting work-in-progress, or anyone dealing with sensitive data. Many of the historians I've interviewed for my PhD research feel that some sense of control over who can view and use their data is important, though the reasons why and how this is manifested vary.

Screenshot from http://labs.metacarta.com/rectifier/rectify/7192

The site has clear instructions - 'double click on the source map... Double click on the right side to associate that point with the reference map' but the search within the right-hand side 'source map' didn't work and manually navigating to Paris, then the right section of Paris was a huge pain.  Neither of the base maps seemed to have labels, so finding the right location at the right level of zoom was too hard and eventually I gave up.  Maybe the service isn't meant to deal with that level of zoom?  We were using a very small section of map for our trials.

Inspired by MetaCarta's Map Rectifier, Map Warper was written with OpenStreetMap in mind, which immediately helps us get closer to the goal of images usable in publications. Map Warper is also used by the New York Public Library, which described it as a 'tool for digitally aligning ("rectifying") historical maps ... to match today's precise maps'. Map Warper also makes all uploaded maps public: 'By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points', but also offers 'Map visibility' options 'Public (default)' and 'Don't list the map (only you can see it)'.
Screenshot showing 'warped' historical map overlaid on OpenStreetMap at http://mapwarper.net/
Once a map is uploaded, it zooms to a 'best guess' location, presumably based on the information you provided when uploading the image.  It's a powerful tool, though I suspect it works better with larger images with more room for error.  Some of the functionality is a little obscure to the casual user - for example, the 'Rectify' view tells me '[t]his map either is not currently masked. Do you want to add or edit a mask now?' without explaining what a mask is.  However, I can live with some roughness around the edges because once you've warped your map (i.e. aligned it with a modern map), there's a handy link on the Export tab, 'View KML in Google Maps' that takes you to your map overlaid on a modern map.  Success!

Sadly not all the export options seem to be complete (they weren't working on my map, anyway) so I couldn't work out if there was a non-geek friendly way to open the map in OpenStreetMap.
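For completeness, here's roughly what using an exported, warped map in your own page could look like - a Python/folium sketch in which the tile URL is a placeholder to be swapped for whatever endpoint the Export tab actually gives you for your map:

```python
import folium

m = folium.Map(location=[48.8566, 2.3522], zoom_start=13)  # modern Paris as the base map

# Placeholder URL - substitute the tile endpoint that the georeferencing
# tool's Export tab provides for your warped map, plus its real attribution.
folium.raster_layers.TileLayer(
    tiles='https://example.org/my-warped-map/{z}/{x}/{y}.png',
    attr='Georeferenced historic map (source and licence details here)',
    name='Historic Paris',
    overlay=True,
    opacity=0.7,
).add_to(m)

folium.LayerControl().add_to(m)  # lets you toggle the historic layer on and off
m.save('paris_overlay.html')
```

The semi-transparent overlay is what makes the visual comparison between historic and modern street layouts possible without constantly flipping between two windows.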

We have to stop here for now, but at this point we've met one of the goals - to geo-reference historical maps so locations from the past can be found in the present, but the other will have to wait for another day.  (But I'd probably start with openheatmap.com when we tackle it again.  Any other suggestions would be gratefully received!)

(The title quote is something I heard one non-geek friend say to another to explain what geeks get up to at hackdays. We called our experiment a 'hackday' because we were curious to see whether the format of a hackday - working to meet a challenge within set parameters within a short period of time - would work for other types of projects. While this ended up being almost an 'anti-hack', because I didn't want to write code unless we came across a need for a generic tool, the format worked quite well for getting us to concentrate solidly on a small set of problems for an afternoon.)

Thursday, 28 July 2011

Quick PhD update from InterFace 2011

It feels like ages since I've posted, and since I've had to put together a two-minute lightning talk for the InterFace 2011 conference at UCL (for people working at the intersection of humanities and technology), I thought I'd post it here as an update. I'm a few months into the PhD but am still very much working out the details of the shape of my project, and I expect that how my core questions around crowdsourcing, digitisation, geolocation, researchers and historical materials fit together will change as I get further into my research. [Basically I'm acknowledging that I may look back at this and cringe.]

Notes for 2 minute lightning talk, Interface 2011
'Crowdsourcing the geolocation of historical materials through participant digitisation' 
Hi, I'm Mia, I'm working on a PhD in Digital Humanities in the History department at the Open University.
I'm working on issues around crowdsourcing the digitisation and geolocation of historical materials. I'm looking at 'participant digitisation' so I'll be conducting research and building tools to support various types of researchers in digitising, transcribing and geolocating primary and secondary sources.
I'll also create a spatial interface that brings together the digitised content from all participant digitisers. The interface will support the management of sources based on what I've learned about how historians evaluate potential sources.
The overall process has three main stages: research and observation that leads to iterative cycles of designing, building and testing the interfaces, and finally evaluation and analysis on the tools and the impact of geolocated (ad hoc) collections on the practice of historical research.

Thursday, 23 June 2011

Notes from a preview of the updated Historypin

The tl;dr version: inspiring project, great enhancements; yay!

Longer version: last night I went to the offices of We Are What We Do for a preview of the new version of Historypin. Nick Poole has already written up his notes, so I'm just supplementing them with my own notes from the event (and a bit from conversations with people there and the reading I'd already done for my PhD).

Screenshot with photo near WAWWD office (current site)
Historypin is about bridging the intergenerational divide, about mass participation and access to history, about creating social capital in neighbourhoods, and about conserving and opening up global archival resources (at this stage that's photographs, not other types of records). There's a focus on events and activities in local communities. [It'd be great to get kids to do quick oral history interviews as they worked with older people, though I think they're doing something like it already.]

New features will include a lovely augmented reality-style view in streetview; the ability to upload and explore video as well as images; a focus on telling stories - 'tours' let you bring a series of photos together into a narrative (the example was 'the arches of New York', most of which don't exist anymore).  You can also create 'collections', which will be useful for institutions.  They'll also be available in the mobile apps (and yes, I did ask about the possibility of working with the TourML spec for mobile tours).

The mobile apps let you explore your location, explore the map and contribute directly from your phone. You can use the augmented reality view to overlay old photos onto your camera view so that you can take a modern version of an old photo. This means they can crowdsource better modern images than those available in streetview, as well as getting indoor shots. This could be a great treasure hunt activity for local communities or tourists. You can also explore collections (as slideshows?) in the app.

They're looking to work with more museums and archives and have been working on a community history project with Reading Museum. Their focus on inclusion is inspiring, and I'll be interested to see how they work to get those images out into the community. While there are quite a few 'then and now' projects focused on geo-locating old images around, I think that just shows that it's an accessible way of helping people make connections between their lives and those in the past.

A quick correction to Nick's comments - the Historypin API doesn't exist yet, so if you have ideas for what it should do, it's probably a good time to get in touch.  I'll be thinking hard about how it all relates to my PhD, especially if they're making some of the functionality available.

Wednesday, 18 February 2009

The location-aware future is here (and why cities suck but are good)

Thought-provoking article in Wired on the implications of location-aware devices for our social relationships, privacy concerns, and how we consume and publish geo-located content:

I Am Here: One Man's Experiment With the Location-Aware Lifestyle
The location-aware future—good, bad, and sleazy—is here. Thanks to the iPhone 3G and, to a lesser extent, Google's Android phone, millions of people are now walking around with a gizmo in their pocket that not only knows where they are but also plugs into the Internet to share that info, merge it with online databases, and find out what—and who—is in the immediate vicinity. That old saw about how someday you'll walk past a Starbucks and your phone will receive a digital coupon for half off on a Frappuccino? Yeah, that can happen now.

Simply put, location changes everything. This one input—our coordinates—has the potential to change all the outputs. Where we shop, who we talk to, what we read, what we search for, where we go—they all change once we merge location and the Web.
The article neatly finishes with a sense of 'the more things change, the more things stay the same', which seems to be one of the markers of the moments when technologies are integrated into our lives:
I had gained better location awareness but was losing my sense of place. Sure, with the proper social filters, location awareness needn't be invasive or creepy. But it can be isolating. Even as we gradually digitize our environment, we should remember to look around the old-fashioned way.
Found via Exporting the past into the future, or, "The Possibility Jelly lives on the hypersurface of the present" which in turn came via a tweet.

I also recently enjoyed 'How the city hurts your brain... and what you can do about it'. It's worth learning how you can alleviate the worst symptoms, because it seems cities are worth putting up with:
Recent research by scientists at the Santa Fe Institute used a set of complex mathematical algorithms to demonstrate that the very same urban features that trigger lapses in attention and memory -- the crowded streets, the crushing density of people -- also correlate with measures of innovation, as strangers interact with one another in unpredictable ways. It is the "concentration of social interactions" that is largely responsible for urban creativity, according to the scientists.

Tuesday, 15 July 2008

Portable mapping applications make managers happy

This Webmonkey article, Multi-map with Mapstraction, about an 'open source abstracted JavaScript mapping library' called Mapstraction, is perfectly on target for organisations that worry about relying on one mapping provider.

How many of these have you heard as possible concerns about using a particular mapping service?
  • Current provider might change the terms of service
  • Your map could become too popular and use up too many map views
  • Current provider quality might get worse, or they might put ads on your map
  • New provider might have prettier maps
  • You might get bored of current provider, or come up with a reason that makes sense to you
They're all reasonable concerns. But look what the lovely geeks have made:
The promise of Mapstraction is to only have to change two lines of code. Imagine if you had a large map with many markers and other features. It could take a lot of work to manually convert the map code from one provider to another.
And functionality is being expanded. I liked this:
One of my favorite Mapstraction features is automatic centering and zooming. When called on a map with multiple markers, Mapstraction calculates the center point of all markers and the smallest zoom level that will contain all the markers.
Open source rocks! Not only can you grab the code and have someone maintain it for you if you ever need to, but it sounds like a labour of geek love:
Mapstraction is maintained by a group of geocode lovers who want to give developers options when creating maps.

Monday, 22 January 2007

Minding your Ts and Qs

Purely because I always forget the details of how to do it when I need to mind my Ts and Qs (see what I did there?): How to plot a National Grid Reference.
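And since I'll only forget again, here's the arithmetic as a small Python sketch - my own transcription of the standard Ordnance Survey scheme (check it against the official instructions before relying on it) for converting a grid reference like 'TQ 30000 80000' into full eastings and northings in metres:

```python
def parse_grid_ref(grid_ref):
    """Convert an OS National Grid reference like 'TQ 30000 80000'
    into full eastings and northings in metres."""
    ref = grid_ref.replace(' ', '').upper()
    letters, digits = ref[:2], ref[2:]

    def index(letter):
        # Grid letters run A-Z with 'I' omitted, numbered 0-24
        i = ord(letter) - ord('A')
        return i - 1 if i > 7 else i  # shift letters after 'H' down by one

    l1, l2 = index(letters[0]), index(letters[1])
    # The first letter picks the 500 km square, the second the 100 km square within it
    e100km = ((l1 - 2) % 5) * 5 + (l2 % 5)
    n100km = (19 - (l1 // 5) * 5) - (l2 // 5)

    # Split the digits evenly into eastings and northings and pad to metres
    half = len(digits) // 2
    easting = e100km * 100_000 + int(digits[:half].ljust(5, '0'))
    northing = n100km * 100_000 + int(digits[half:].ljust(5, '0'))
    return easting, northing

print(parse_grid_ref('TQ 30000 80000'))  # (530000, 180000) - central London
```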