To an unprecedented degree, market research about the needs, wants, fears, and anxieties of visitors is shaping how museums are designed. "We got a lot of comments that it's just overwhelming to come to museums," says Lori Fogarty, director of the Oakland Museum of California, which inaugurates a complete reinstallation of its art, natural history, and science collections this fall. So the new galleries will feature "loaded lounges" where visitors can relax, read catalogues, or do hands-on activities, along with open spaces that accommodate up to 25 people for concerts, storytelling, or other such programs.
But a bigger change in her plan is connecting people who might never have visited art museums with the people who curate them. Fogarty calls it transparency—"breaking the fourth wall"—having curators answer questions about how and why they choose works. Visitor feedback will be encouraged, and the exhibitions, in turn, will be based on the "wiki model," with curators representing only one voice in a mix that includes conservators, community members, and artists. "We can't count on the fact that potential visitors were brought to museums as kids," Fogarty says. "Many have no cultural or experiential reference; they don't think of the museum as a place that welcomes them or has anything of interest to them."
At the Walker Art Center in Minneapolis, director Olga Viso is also using a major reinstallation as an opportunity to remake the museum into a more civic space. "We want to be in dialogue with the audience instead of in the place of authority," as she puts it. Such efforts may mean involving the community in the organization of shows or asking people to vote on the selection of artworks. When the new installation opens in November, says chief curator Darsie Alexander, curators will hold in-gallery office hours—giving visitors insights into the way exhibitions happen, and giving the staff a chance to find out "how visitors encounter work in space—the kinds of questions they ask about art, what they find interesting, and how long they stay."
And for all the innovations in programming, marketing, and education, Campbell argues, the core mission remains the same. "We can make ourselves more user-friendly, but ultimately one of the key experiences of visiting a museum is that moment of standing in front of an object," he says. "Suddenly you're responding to something physical, real, that changes your own perspective. And great museums will always do that, as long as we get people through the doors."
Thursday, 21 May 2009
Wednesday, 13 May 2009
I went because it tied in really well with some work projects (like the museum metadata mashup competition we're running later in the year or the attempt to get a critical mass of vaguely compatible museum data available for re-use) and stuff I'm interested in personally (like modern bluestocking, my project for this summer - let me know if you want to help, or just add inspiring women to freebase).
I'm also interested in creating something like a Dopplr for museums - you tell it what you're interested in, and when you go on a trip it makes you a map and list of stuff you could see while you're in that city.
Like: I like Picasso, Islamic miniatures, city museums, free wine at contemporary art gallery openings, [etc]; am inspired by early feminist history; love hearing about lived moments in local history of the area I'll be staying in; I'm going to Barcelona.
The 'list of cultural heritage stuff I like' could be drawn from stuff you've bookmarked, exhibitions you've attended (or reviewed) or stuff favourited in a meta-museum site.
(I don't know what you'd call this - it's like a personal butlr or concierge who knows both your interests and your destinations - curatr?)
The talks on RDFa (and the earlier talk on YQL at the National Maritime Museum) have inspired me to pick a 'good enough' protocol, implement it, and see if I can bring in links to similar objects in other museum collections. I need to think about the best way to document any mapping I do between taxonomies, ontologies, vocabularies (all the museumy 'ies') and different API functions or schemas, but I figure the museum API wiki is a good place to draft that. It's not going to happen instantly, but it's a good goal for 2009.
These are the last of my notes from the weekend's Open Hack London event, my notes from various talks are tagged openhacklondon.
He's since posted his links and queries - excellent links to endpoints you can test queries in.
The semantic web is often thought of as a long-promised magical elixir; he's here to say it can be used now, by showing examples of queries that can be run against semantic web services. He'll demonstrate two different online datasets and one database that can be installed on your own machine.
First - dbpedia - scraped lots of wikipedia, put it into a database. dbpedia isn't like your average database - you can't draw a UML diagram of wikipedia. It's done in RDF and Linked Data. Can be queried in a language that looks like SQL but isn't: SPARQL, a W3C standard - they're currently working on SPARQL 2.
Go to dbpedia.org/sparql - submit query as post. [Really nice - I have a thing about APIs and platforms needing a really easy way to get you to 'hello world' and this does it pretty well.]
[Line by line comments on the syntax of the queries might be useful, though they're pretty readable as it is.]
'select thingy, wotsit where [the slightly more complicated stuff]'
Can get back results in xml, also HTML, 'spreadsheet', JSON. Ugly but readable. Typed.
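As a rough illustration of the 'submit query as post' flow (my sketch, not from the talk - the endpoint is the dbpedia.org/sparql one mentioned above, and the helper names are mine), something like this Python builds the POST body and flattens the JSON results format:

```python
# A minimal sketch (not from the talk) of posting a SPARQL query to the
# dbpedia.org/sparql endpoint and reading the JSON results format.
import json
from urllib.parse import urlencode

ENDPOINT = "http://dbpedia.org/sparql"

def build_request(query, fmt="application/sparql-results+json"):
    """Return the endpoint URL and the urlencoded POST body."""
    body = urlencode({"query": query, "format": fmt})
    return ENDPOINT, body

def bindings(results_json):
    """Flatten SPARQL JSON results into plain dicts of variable -> value."""
    data = json.loads(results_json)
    return [
        {var: cell["value"] for var, cell in row.items()}
        for row in data["results"]["bindings"]
    ]

QUERY = """
SELECT ?name WHERE {
  ?person a <http://dbpedia.org/ontology/Person> ;
          <http://xmlns.com/foaf/0.1/name> ?name .
} LIMIT 5
"""

url, body = build_request(QUERY)
# To run it for real:
#   urllib.request.urlopen(url, body.encode()).read()
```

The JSON results shape (results → bindings → one dict per row) is the same whatever variables your SELECT asks for, which makes it easy to consume generically.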
[Trying a query challenge set by others could be fun way to get started learning it.]
One problem - fictional places are in Wikipedia e.g. Liberty City in Grand Theft Auto.
Libris - how library websites should be
[I never used to appreciate how much most library websites suck until I started back at uni and had to use one for more than one query every few years]
Has a query interface through SPARQL
Comment from the audience: the BBC now have a SPARQL endpoint [as of the day before? Go BBC guy!].
Playing with mulgara, an open source Java triple store. [mulgara looks like a kinda faceted search/browse thing] Has its own query language called TQL which can do more interesting things than SPARQL. Why use it? Schemaless data storage. Is to SQL what dynamic typing is to static typing. [did he mean 'is to SPARQL'?]
Question from the audience: how do you discover what you can query against?
Answer: the dbpedia website should list the concepts they have in there. Also some documentation of categories you can look at. [Examples and documentation are so damn important for the uptake of your API/web service.]
Coming soon [?]: SPARUL, an update language; SPARQL 2, with new features.
[These are more (very) rough notes from the weekend's Open Hack London event - please let me know of clarifications, questions, links or comments. My other notes from the event are tagged openhacklondon.
Quick plug: if you're a developer interested in using cultural heritage (museums, libraries, archives, galleries, archaeology, history, science, whatever) data - a bunch of cultural heritage geeks would like to know what's useful for you (more background here). You can comment on the #chAPI wiki, or tweet @miaridge (or @mia_out). Or if you work for a company that works with cultural heritage organisations, you can help us work better with you for better results for our users.]
There were other lightning talks on Pachube (pronounced 'patchbay', about trying to build the internet of things, making an API for gadgets because e.g. connecting hardware to the web is hard for small makers) and Homera (an open source 3d game engine).
Tuesday, 12 May 2009
Update: some of the criticism rumbling on twitter yesterday has been neatly summarised by Ian Davis in 'Google's RDFa a Damp Squib':
However, a closer look reveals that Google have basically missed the point of RDFa. The RDFa support is limited to the properties and classes defined on a hastily thrown together site called data-vocabulary.org. There you will find classes for Person and Organization and properties for names and addresses, completely ignoring the millions of pieces of data using well established terms from FOAF and the like. That means everyone has to rewrite all their data to use Google's schema if they want to be featured on Google's search engine. Its like saying you have to write your pages using Google's own version of html where all the tags have slightly different spellings to be listed in their search engine!
The result is a hobbled implementation of RDFa. They've taken the worst part – the syntax – and thrown away the best – the decentralized vocabularies of terms. It's like using microformats without the one thing they do well: the simplicity.
Further, in the comments:
the point of decentralization is not to encourage fragmentation and isolation, but to allow people to collaborate without needing permission from a middleman. Google's approach imposes a centralized authority.
There's also a (slightly disingenuous, IMO) response from Google:
For Rich Snippets, Google search need to understand what the data means in order to render it appropriately. We will start incorporating existing vocabularies like FOAF, but there's no way for us to have a decent user experience for brand-new vocabularies that someone defines. We also need a single place where a webmaster can come and find all the terms that Google understands. Which is why we have data-vocabulary.org.
Isn't the point of Google that it can figure stuff out without needing to be told?
Mashups made of messages, Matt Biddulph (Dopplr)
Systems architecture at Dopplr lets them combine 3rd party systems with their stuff without tying their servers up in knots.
At a rough count, Dopplr uses about 25 third party web APIs.
If you're going to make a web service, site, concentrate on the stuff you're good at. [Use what other people are good at to make yours ace.]
But this also means you're outsourcing part of your reliability to other people. Each bit of service you add puts another bit of network latency and risk into your web architecture. Use messaging systems to make server-side stuff asynchronous.
'&' is his favourite thing about Linux. It's fundamental in Unix that work is divided into small pieces, each doing the thing it does well, and not even very tightly coupled. Anything that can be run on the command line: stick & on the end and it runs in the background. You can forget about things running in the background - you don't have to manage the processes; it's not tightly coupled.
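The 'stick & on the end' idea translates roughly like this in Python (my sketch, not from the talk): start a child process and carry on without waiting for it.

```python
# A rough Python parallel (my sketch, not from the talk) of the Unix
# `command &` idea: launch work and carry on immediately, only
# synchronising later if you care about the result.
import subprocess
import sys

# Popen returns straight away -- the child runs in the background,
# loosely coupled to us, just like appending & in a shell.
worker = subprocess.Popen(
    [sys.executable, "-c", "print('background work done')"],
    stdout=subprocess.PIPE,
)

print("main process carries on immediately")

# Only if/when we want the result do we block on it:
out, _ = worker.communicate()
print(out.decode().strip())
```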
Nothing in web apps is simple these days - lots of interconnected bits.
In the physical world, big machines use gearing - having different bits of system run at different speeds. Also things can freewheel then lock in to system again when done.
When building big systems, there's a worry that one machine, one bit it depends on can bring down everything else.
[Slide of a] Diagram of all the bits of the system that don't run because someone has sent an HTTP request - [i.e. background processes]
Flickr is doing less database work up front to make pages load as quickly as possible. They queue other things in the background. e.g. photos load, tags added slightly later. (See post 'Flickr engineers do it offline'.)
Enterprise Integration Patterns (Hohpe et al) is a really good book. Banks have been using messaging for years to manage the problems. Atomic packets of data can be sent on a channel - 'Email for applications'.
Designing - think about what needs to be done now, what can be done in the background? Think of it as part of product design - what has instant effect, what has slower effect? Where can you perform the 'sleight of hand' without people noticing/impacting their user experience?
Example using web services 1: Dopplr and AMEE. What happens when someone asks to see their carbon impact? A request for carbon data goes to Ruby on Rails (memory hungry, not the fastest thing in the world, try to take things off that and process elsewhere). Refresh user screen 'check back soon', send request to message broker (in JSON). Worker process connected to message broker sends request to AMEE. Update database.
Keeps open connection, a way to push messages to the client while it's waiting to do something.
When processing lots of stuff, worker processes write to memcache as a form of progress bar, but the process is actually disconnected from the webserver so load/risk is outsourced.
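A toy version of the queue-and-worker flow described above might look like this in Python (my sketch: the broker is a stand-in in-process queue, and all the function names and the fake AMEE response are invented):

```python
# Toy sketch of the Dopplr/AMEE flow described above (names hypothetical,
# broker replaced by an in-process queue): the web tier enqueues a JSON
# job and replies straight away; a worker picks the job up, calls the
# slow external service, and writes the result to the database.
import json
import queue
import threading

broker = queue.Queue()   # stand-in for the real message broker
database = {}            # stand-in for the real database

def fetch_carbon_data(user_id):
    # Stand-in for the slow call out to AMEE.
    return {"user": user_id, "carbon_kg": 1234}

def worker():
    while True:
        message = broker.get()
        if message is None:          # shutdown sentinel
            break
        job = json.loads(message)
        database[job["user_id"]] = fetch_carbon_data(job["user_id"])
        broker.task_done()

def handle_web_request(user_id):
    # The Rails-equivalent tier: enqueue (as JSON) and respond at once.
    broker.put(json.dumps({"user_id": user_id}))
    return "check back soon"

threading.Thread(target=worker, daemon=True).start()
reply = handle_web_request("alice")
broker.join()   # in real life the browser polls (or is pushed to) instead
```

The point of the sketch is the shape, not the parts: the web tier never blocks on the slow service, so it can scale independently of the workers.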
'Sites built with glue and string don't automatically scale for free.' You can have many webservers, but the bottleneck might be in the database. Splitting work into message queues is a way of building so things can scale in parallel.
Slide of services, companies that offer messaging stuff. [Did anyone get a photo of that?]
Because of abstraction and with things happening in the background, it's a different flow of control than you might be used to - monitoring is different. You can't just sit there with a single debugger.
[Slide] "If you can't see your changes take effect in a system your understanding of cause and effect breaks down" - not just about it being hard to debug, it's also about user expectations.
I really liked this presentation - it's always good to learn from people who are not only innovating, but are also really solid on performance and reliability as well as the user experience.
[Update: a version of this talk is on the Dopplr blog with slides and notes.]
Saturday, 9 May 2009
Hacking with PHP, Rasmus Lerdorf
Goal of talk: copy and pastable snippets that just work so you don't have to fight to get things that work [there's not enough of this to help beginners get over that initial hump]. The slides are available at http://talks.php.net/show/openhack and these notes are probably best read as commentary alongside the code examples.
[Since it's a hack day, some] Hack ideas: fix something you use every day; build your own targeted search engine; improve the look of search results; play with semantic web tools to make the web more semantic; tell the world what kind of data you have - if a resume, use hResume or other appropriate microformats/markup; go local - tools for helping your local community; hack for good - make the world a better place.
SearchMonkey and BOSS are blending together a little bit.
What we need to learn
parsing XML: simplexml_load_file() - can load an entire URL or a local file.
Attributes on a node show up as an array. For namespaced attributes, call children() on the node, naming the namespace as the argument.
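For non-PHP readers, the same pattern in Python's ElementTree looks roughly like this (my sketch, not from the slides; the sample XML is invented):

```python
# Rough Python equivalent (not from the slides) of the simpleXML pattern:
# load XML, read plain attributes, and reach namespaced children by
# naming the namespace explicitly.
import xml.etree.ElementTree as ET

SAMPLE = """
<feed xmlns:dc="http://purl.org/dc/elements/1.1/">
  <item id="42">
    <dc:title>Example record</dc:title>
  </item>
</feed>
"""

root = ET.fromstring(SAMPLE)   # ET.parse(path) for a file on disk
item = root.find("item")

# Plain attributes show up as a dict on the node.
item_id = item.get("id")

# Namespaced children: qualify the tag name with the namespace URI.
title = item.find("{http://purl.org/dc/elements/1.1/}title")
```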
Now know how to parse XML, can get lots of other stuff.
Context extraction service, Yahoo - doesn't get enough attention. Post all your text and it gives you back four or five key terms - you can then do an image search off them, or match ads to webpages.
Can use GET or POST (curl) - the text is usually too much for GET.
If you can figure out these six lines of code, you can write anything in the world. How every modern web application works.
'There's nothing to building web applications, you just have to break everything down into small enough chunks that it all becomes trivial'.
AJAX in 30 seconds.
[Inline comments in code would help for people reading it without hearing the talk at the same time.]
load maps API, create container (div) for the map, then fill it.
Form - on submit call return updateMap(); with new location.
YGeoRSS - if have GeoRSS file... can point to it.
GeoPlanet - assigns a WOE ID to a place. Locations are more than just a lat long - carry way more information. Basically gives you a foreign key. YQL is starting to make the web a giant database. Can make joins across APIs - woeid works as fk.
YQL - 'combines all the APIs on the web into a single API'.
Add a cache - nice to YQL, and also good for demos etc. Copy and paste the cache function from his slides - it does a local cache keyed on the URL, hashed with md5, using PHP streams. Adding a cache speeds up development when hacking (especially as you won't be waiting for the wifi). [This is a pretty damn good tip cos it's really useful and not immediately obvious.]
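Here's my rough Python take on the same trick (not Rasmus's PHP - the function names are mine): cache the body of each URL in a local file keyed on the md5 of the URL, and only hit the network on a cache miss.

```python
# My Python sketch (not Rasmus's PHP) of the slide's cache trick: key a
# local cache file on the md5 of the URL; serve from disk when present.
import hashlib
import os
import tempfile

CACHE_DIR = tempfile.gettempdir()

def cached_get(url, fetch):
    """fetch is whatever actually does the HTTP GET, e.g. a urllib call."""
    path = os.path.join(CACHE_DIR, hashlib.md5(url.encode()).hexdigest())
    if os.path.exists(path):           # cache hit: no network at all
        with open(path) as f:
            return f.read()
    body = fetch(url)                  # cache miss: fetch and store
    with open(path, "w") as f:
        f.write(body)
    return body
```

Injecting the `fetch` callable also makes the function trivial to test offline, which is rather the point of the tip.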
XPath on URL using PHP's OAuth extension
SearchMonkey - social engineering people into caring about semantic data on the web. For non-geeks, search plug-in mechanism that will spruce up search results page. Encourages people to add semantic data so their search result is as sexy as their competitors - so goal is that people will start adding semantic data.
'If you're doing web stuff, and don't know about microformats, and your resume doesn't have hResume, you're not getting a job with Yahoo.'
Question: how are microformats different to RDFa?
Answer: there are different types of microformats - some very specific ones, e.g. hResume, hCal. RDFa is for adding arbitrary tags to a page, even if there's no specific way to describe your data. But there's a standard set of mark-up for a resume, so use that; if your data doesn't match anything at microformats.org then use RDFa or eRDF (?).
I'm putting my rough and ready notes online so that those who couldn't make it can still get some of the benefits. Apologies for any mishearings or mistakes in transcription – leave me a comment with any questions or clarifications.
One of the reasons I was going was to push my thinking about the best ways to provide API-like access to museum information and collections, so my notes will reflect that but I try to generalise where I can. And if you have thoughts on what you'd like cultural heritage institutions to do for developers, let us know! (For background, here's a lightning talk I did at another hack event on happy museums + happy developers = happy punters).
RDFa - now everyone can have an API.
Going to cover some basic mark-up, and talk about why RDFa is a good thing. [The slides would be useful for the syntax examples, I'll update if they go online.]
RDFa is a new syntax from W3C - a way of embedding metadata (RDF) in HTML documents using attributes.
e.g. <span property="dc:title"> - value of property is the text inside the span.
Because it's inline you don't need to point to another document to provide source of metadata and presentation HTML.
One big advance is that you can provide metadata for other items, e.g. images, so you can attach licence info to the image rather than the page it's in – e.g. <img src="" rel="licence" resource="[creative commons licence]">
Putting RDFa into web pages means you've now got a feed (the web page is the RSS feed), and a simple static web page can become an API that can be consumed in the same way as stuff from a big expensive system. 'Growing adoption'.
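To make the 'page as API' point concrete, here's a bare-bones sketch (mine, and nothing like a complete RDFa parser) that walks a page and pulls out property/value pairs like the `<span property="dc:title">` example above:

```python
# A bare-bones sketch (mine -- a real RDFa parser does far more) of the
# "static page as API" idea: walk the HTML and collect the text inside
# any element carrying a property="..." attribute.
from html.parser import HTMLParser

class PropertyExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._current = None
        self.triples = []   # (property, literal value) pairs

    def handle_starttag(self, tag, attrs):
        prop = dict(attrs).get("property")
        if prop:
            self._current = prop

    def handle_data(self, data):
        if self._current:
            self.triples.append((self._current, data.strip()))
            self._current = None

PAGE = '<p>A book: <span property="dc:title">Down and Out in Paris and London</span></p>'
extractor = PropertyExtractor()
extractor.feed(PAGE)
```

Any consumer that can do this much can treat the page itself as the feed - no separate XML endpoint needed.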
Government department Central Office of Information [?] is quite big on RDFa, have a number of projects with it. [I'd come across the UK Civil Service Job Service API while looking for examples for work presentations on APIs.]
RDFa allows for flexible publishing options. If you're already publishing HTML, you can add RDFa mark-up then get flexible publishing models - different departments can keep publishing data in their own way, a central website can go and request from each of them and create its own database of e.g. jobs. Decentralised way of approaching data distribution.
Can be consumed by: smarter browsers; client-side AJAX, other servers such as SearchMonkey.
RDFa might be going into Drupal core.
Example of putting an ISBN in RDFa in a page; a parser can then go through the page, pull out the triples [some explanation of them as a mini database?], and pull back more info about the book from other APIs, e.g. Amazon - full title, thumbnail of the cover - e.g. in Pipes.
Example of FOAF - twitter account marked up in page, can pull in tweets. Could presumably pull in newer services as more things were added, without having to re-mark-up all the pages.
Example of chemist writing a blog who mentions a chemical compound in blog post, a processor can go off and retrieve more info - e.g. add icon for mouseover info - image of molecule, or link to more info.
Next plan is to link with BOSS. Can get back RDFa from search results - augment search results with RDFa from the original page.
Search Monkey (what it is and what you can do with it)
Neil Crosby (European frontend architect for search at Yahoo).
SearchMonkey is (one of) Yahoo's open search platforms (along with BOSS). Uses structured data to enhance search results. You get to change stuff on Yahoo search results page.
SearchMonkey lets you: style results for certain URL patterns; brand those results; make the results more useful for users.
[examples of sites that have done it to see how their results look in Yahoo? I thought he mentioned IMDb but it doesn't look any different - a film search that returns a wikipedia result, OTOH, does.]
Make life better for users - not just what Yahoo thinks results should be, you can say 'actually this is the important info on the page'
Three ways to do it [to change the SERP (search engine results page)]: mark up data in a way that Yahoo knows about - 'just structure your data nicely', e.g. video mark-up; enhance a result directly; make an infobar.
Infobar - doesn't change the result you see immediately on the page, but it opens on the page. Example of an auto-enhanced result: Playcrafter. Link to developer start page - how to mark it up, with examples, and what it all means.
User-enhanced result - Facebook profile pages are marked up with microformats - you can add as friend, poke, send message, view friends, etc, from the search results page. Can change the title and abstract, add an image, favicon, quicklinks, key/value pairs. Create at [link I can't see but is on slides]. Displayed on screen; you fill it out in a template.
Infobar - dropdown in grey bar under results. Can do a lot more, as it's hidden in the infobar and doesn't have to worry people.
Data from: microformats, RDF, XSLT, Yahoo's index, and soon, top tags from delicious.
If no machine data, can write an XSLT. 'isn't that hard'. Lots of documentation on the web.
Examples of things that have been made: a tool that exposes all the metadata known for a page [URL on slide] - you can install it on the Yahoo search page. Use location data to make a map - any page on the web with metadata about locations on it - Map Monkey. Get Qype results for anything you search for.
There's a mailing list (people willing and wanting to answer questions) and a tutorial.
Question: do you need to use a special doctype [for RDFa]?
Answer: it was added to the spec that 'you should use this doctype', but the spec allows for RDFa to be used in situations where you can't change the doctype, e.g. RDFa embedded in a Blogger blog post. Most parsers walk the DOM rather than relying on the doctype.
Jim O'D - excited that SearchMonkey supports XSLT - if you have a website with correctly marked-up tables, could you expose those as key/value pairs?
Answer: yes. XSLT fantastic tool for when don't have data marked up - can still get to it.
Frankie - question I couldn't hear. About info out to users?
Answer: if you've built a monkey, up to you to tell people about it for the moment. Some monkeys are auto-on e.g. Facebook, wikipedia... possibly in future, if developed a monkey for a site you own, might be able to turn it auto-on in the results for all users... not sure yet if they'll do it or not.
Frankie: plan that people get monkeys they want, or go through gallery?
Answer: would be fantastic if could work out what people are using them for and suggest ones appropriate to people doing particular kinds of searches, rather than having to go to a gallery.