Saturday, 30 June 2007
And in case you didn't know what all the fuss is about, here's a BBC report on the first purchases.
In less than three years' time, more than half of UK GDP will be generated by people who create something from nothing, according to the 2007 Developing the Future (DtF) report launched today at the British Library.
The report, commissioned by Microsoft and co-sponsored by Intellect, the BCS and The City University, London, sets out the key challenges facing the UK as it evolves into a fully-fledged knowledge-based economy. The report also sets out a clear agenda for action to ensure the UK maintains its global competitiveness in the face of serious challenges.
The report identifies a number of significant challenges that the technology industry needs to address if these opportunities are to be grasped. Primarily, these are emerging markets and skills shortages:
- At current rates of growth, China will overtake the UK in the knowledge economy sector within five years.
- The IT industry faces a potential skills shortage: The UK’s IT industry is growing at five to eight times the national growth average, and around 150,000 entrants to the IT workforce are required each year. But between 2001 and 2006 there was a drop of 43 per cent in the number of students taking A-levels in computing.
- The IT industry is only 20 per cent female and currently only 17 per cent of those undertaking IT-related degree courses are women. In Scotland, only 15 per cent of the IT workforce is female.
BCS: Developing the future.
The report also suggests that the 'IT industry should look to dramatically increase female recruitment' - I won't comment for now but it will be interesting to see how that issue develops.
Over the last two weeks I've reviewed eight British newspaper web sites in depth, trying to identify where and how they are using the technologies that make up the so-called "Web 2.0" bubble. I've examined their use of blogs, RSS feeds, social bookmarking widgets, and the integration of user-generated content into their sites.
Friday, 29 June 2007
ZDNet's David Berlind got some time with Sir Tim Berners-Lee, the inventor of the World Wide Web. Topics covered include the semantic Web (see also: Microformats), mashups, and the benefits of open standards versus proprietary development environments such as Flash and Silverlight.
Thursday, 28 June 2007
Disclosure: I have a vested interest because it's a work project, but I'm also enjoying this way too much not to share it. We've been working with the LAARC (London Archaeological Archive Resource Centre, part of the Museum of London Group) on pilots for increasing user interaction and engagement.
Tuesday, 26 June 2007
Sunday, 24 June 2007
It's a good place to start if you're not sure what people are saying about your institution, exhibitions or venues or whether they might already be creating content about you. Don't forget to search Flickr and YouTube too.
So this post is me thinking aloud about the possible next steps - what might be required; what might be possible; and what might be desired but would be beyond the scope of any of those groups to resolve so must be worked around. I'll probably say something stupid but I'll be interested to see where these conversations go.
I might be missing lots of the subtleties, but it seems to me that there are a few basic things we need: shared technical and semantic data standards, or the ability to map between institutional standards consistently and reliably; and shared data, whether in a central repository or a service (or services) like federated search capable of bringing individual repositories together into a virtual shared repository. Either way, the implementation details should be hidden from the end user - it should Just Work.
My preference is for shared repositories (virtual or real) because the larger the group, the better the chance that it will be able to provide truly permanent and stable URIs; and because we'd gain efficiencies when introducing new partners, as well as enabling smaller museums or archaeological units who don't have the technical skills or resources to participate. One reason I think stable and permanent URIs are so important is that they're a requirement for the semantic web. They also mean that people re-using our data, whether in their bookmarks, in mashup applications built on top of our data or on a Flickr page, have a reliable link back to our content in the institutional context.
As new partners join, existing tools could often be re-used if a new partner has a collections management system or database already in use by a current partner. Tools like those created for project partners to upload records to the PNDS (People's Network Discovery Service, read more at A Standards Framework For Digital Library Programmes) for Exploring 20th Century London could be adapted so that organisations could upload data extracted from their collections management, digital asset or excavation databases to a central source.
But I also think that each (digital or digitised) object should have a unique 'home' URI. This is partly because I worry about replication issues with multiple copies of the same object used in various places and projects across the internet. We've re-used the same objects in several Museum of London projects and partnerships, but the record for that object might not be updated if the original record is changed (for example, if a date was refined or location changed). Generally this only applies to older projects, but it's still an issue across the sector.
Probably more importantly for the cultural heritage sector as a whole, a central, authoritative repository or shared URL means we can publish records that should come with a certain level of trust and authority by virtue of their inclusion in the repository. It does require playing a 'gate keeper' role but there are already mechanisms for determining what counts as a museum, and there might also be something for archaeological units and other cultural heritage bodies. Unfortunately this would mean that the Framley Museum wouldn't be able to contribute records - maybe we should call the whole thing off.
If a base record is stored in a central repository, it should be easy to link every instance of its use back to the 'home' URI, or to track discoverable instances and link to them from the home URI. If each digital or digitised object has a home URI, any related content (information records, tags, images, multimedia, narrative records, blog posts, comments, microformats, etc) created inside or outside the institution or sector could link back to the home URI, which would mean the latest information and resources about an object are always available, as well as any corrections or updates which weren't replicated across every instance of the object.
Obviously the responses to Michelangelo's David are going to differ from those to a clay pipe, but I think it'd be really interesting to be able to find out how an object was described in different contexts, how it inspired user-generated content or how it was categorised in different environments.
I wonder if you could include the object URL in machine tags on sites like Flickr? [Yes, you could. Or in the description field]
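As a very rough sketch of what that might look like: Flickr machine tags take the form namespace:predicate=value, so an object's home URI could travel with any photo of it. Note that the `mol` namespace and the `objectid` and `uri` predicates here are my own inventions for illustration, not any agreed standard.

```python
def object_machine_tags(object_id, home_uri, namespace="mol"):
    """Compose Flickr machine tags (namespace:predicate=value) that point
    a photo back to an object's home URI.

    The 'mol' namespace and the 'objectid'/'uri' predicates are
    hypothetical - there's no agreed sector standard for this (yet)."""
    return [
        f"{namespace}:objectid={object_id}",
        # Tags containing punctuation like '/' and ':' should be quoted
        # when submitted so Flickr keeps them as a single raw tag.
        f'"{namespace}:uri={home_uri}"',
    ]

print(object_machine_tags("A1234", "http://example.org/objects/A1234"))
```

Anyone (inside or outside the institution) could then search for photos carrying those tags and gather up the scattered instances of an object.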
There are obviously lots of questions about how standards would be agreed, where repositories would be hosted, how the scope of each are decided, blah blah blah, and I'm sure all these conversations have happened before, but maybe it's finally time for something to happen.
[Update - Leif has two posts on a very similar topic at HEIR tonic and News from the Ouse.
Also I found this wiki on the business case for web standards - what a great idea!]
[Update - this was written in June 2007, but recent movements for Linked Open Data outside the sector mean it's becoming more technically feasible. Institutionally, on the other hand, nothing seems to have changed in the last year.]
Saturday, 23 June 2007
"Sharing authorship and authority: user generated content and the cultural heritage sector" online now
On a personal note, I realised that I've used 'extensible, re-usable and interoperable' in every paper I've given in the past two years. I guess you can take the geek out of Open Source but you can't take the Open Source out of the geek.
Thursday, 21 June 2007
I'm having a go at generating dynamic XML data sources from our collections data for the Simile Timeline widget; it'll be interesting to see if we can link this with the Google Maps mash-ups that other people are producing.
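The transformation itself is fairly simple: the Timeline widget loads an XML document of <event> elements with start dates, titles and links. Here's a minimal sketch of the idea - the record fields and URLs are invented for illustration, not our actual collections schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical collection records; the field names and URLs are
# assumptions for illustration, not the real collections schema.
records = [
    {"title": "Roman clay pipe", "date": "Jan 01 0120 00:00:00",
     "url": "http://example.org/objects/1234"},
    {"title": "Medieval jug", "date": "Jan 01 1350 00:00:00",
     "url": "http://example.org/objects/5678"},
]

def records_to_timeline_xml(records):
    """Build the <data><event .../></data> XML the Simile Timeline widget loads."""
    data = ET.Element("data")
    for rec in records:
        event = ET.SubElement(data, "event", {
            "start": rec["date"],   # Timeline parses dates like "Jan 01 1350 00:00:00"
            "title": rec["title"],
            "link": rec["url"],     # the event bubble links back to the object record
        })
        event.text = rec["title"]   # body text of the pop-up bubble
    return ET.tostring(data, encoding="unicode")

print(records_to_timeline_xml(records))
```

In practice this would run server-side against the collections database, so the XML stays in sync with the records.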
Wednesday, 20 June 2007
Monday, 18 June 2007
We've been having more conversations about how we can use tagging to publish more information about LAARC photos. This would allow us and our users to explore, use and compare site photos in lots of different ways. Eventually it may also be applied to related data streams such as archaeological finds/museum objects and related media, but for the moment we're just exploring what we can do with an existing platform like Flickr.
Using machine tags to add latitude and longitude looks simple enough, and seems there's a de facto standard for geo:lat and geo:lon. But is there a similar de facto (or proper) namespace standard for machine tags for UK National Grid references? Leave a comment or email me if you know of any, or even if you're just using them yourself.
Some relevant links:
Flickr: Discussing Machine tags in Flickr API
geobloggers » Advanced Tagging and TripleTags
geobloggers » Flickr Ramps up Triple Tag (Machine Tags) Support.
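To make the tagging idea concrete, here's a minimal sketch of composing those tags. The geo:lat/geo:lon pair follows the de facto convention mentioned above; the osgb:gridref namespace is purely my own assumption, since no standard for National Grid references seems to exist.

```python
def location_machine_tags(lat, lon, grid_ref=None):
    """Build Flickr machine tags (namespace:predicate=value) for a photo's location."""
    tags = [f"geo:lat={lat}", f"geo:lon={lon}"]  # de facto geo namespace
    if grid_ref:
        # No de facto (or official) namespace for UK National Grid
        # references seems to exist; 'osgb:gridref' is an assumption
        # for illustration only.
        tags.append(f"osgb:gridref={grid_ref}")
    return tags

# A point near the Museum of London, with a space-free grid reference
# so the tag survives as a single token.
print(location_machine_tags(51.5176, -0.0963, "TQ32168165"))
```

If a shared namespace for grid references did emerge, photos tagged this way could be searched and mapped consistently across institutions.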
Members of Antiquist, Digital Classicist, the Text Encoding Initiative, and Digital Medievalist believe that the AHDS's services play a vital role within the Digital Arts and Humanities. We are concerned that the consequences of this decision could be severe unless it forms part of a larger strategy of support, and we have issued the following request for information...
Leading User-Generated Content Sites See Exponential Growth in UK Visitors During the Past Year
“Web 2.0 is clearly architected for participation, as it attempts to harness the collective intelligence of Web users,” commented Bob Ivins, managing director of comScore Europe. “Many of the sites experiencing the fastest growth today are the ones that understand their audience’s need for expression and have made it easy for them to share pictures, upload music and video, and provide their own commentary, thus stimulating others to do the same. It is the classic network effect at work.”
While uniformly demonstrating strong traffic growth, UGC sites are also adept at keeping users engaged.
Friday, 15 June 2007
On the other hand, Introduction to Abject-Oriented Programming is a very quick read, and laugh-out-loud funny (if you're a tragic geek like me).
Wednesday, 13 June 2007
And, "The comScore study revealed that many of the sites with particular appeal to the 15 to 24 age segment fall into the Social Networking category, including Facebook.com, Bebo.com and Tagged.com. Other properties with strong teen and young adult appeal include ARTISTdirect Network and Alloy, which are news and entertainment sites."
Tuesday, 12 June 2007
Government must do more to embrace Web 2.0 tools and communities, says a report.
The report said that some public data, such as post codes, was already widely used but much more could be done to open up access to official information.
It said public data should be published in open formats to encourage use.
The review, called The Power of Information, looked at Web 2.0 tools and communities, and was intended to "explore the role of government in helping to maximise the benefits for citizens from this new pattern of information creation and use".
The report encouraged the government to do more to ensure a good fit between web communities and official information to "grasp the opportunities that are emerging in terms of the creation, consumption and re-use of information".
The authors recommended that the government work more closely with existing sites and communities that share official aims; do more to help innovators use public data; and work to ensure people know what to do with public data and how to get at it.
Among 15 specific recommendations the report said the government should not set up its own sites if existing web communities do a good job of getting information to people.
It also said it should speed up efforts to put data in open formats and publish under terms that let people freely use it.
They've linked to a PDF of the report at Power of Information report.
Thursday, 7 June 2007
"People will pay more for goods if a website does a good job of protecting their privacy, a study shows."
The report is in the context of e-commerce but the findings probably apply to the cultural heritage sector.
Tuesday, 5 June 2007
And an article from an Australian newspaper on the possibilities of Web 2.0 for business: "Australian companies are starting to twig that Web 2.0 isn't just the latest trend for designing web pages - it can be a vital business tool."
Speaking of Web 2.0 business models, I noticed that Rough Guides have made free audio downloads available for some of their phrasebooks so you can practise with words and phrases recorded by native speakers. The audio files work best when you've got a phrasebook in front of you, so they're probably not losing much business by giving away the audio files; in fact they're probably gaining.
Friday, 1 June 2007
Computational thinking could be considered a manifesto for computer science: it is what every computer scientist carries within them, independent of their equipment. It might be seen as a common language for solving problems.
Computational thinking helps iron out problems through abstraction - determining what can be computed. Some felt it was a form of intellectual property: a way of thinking that helps the 'user' solve problems and tap into their constructive imagination.
Computational thinking carries an obligation to find a solution, and is sometimes used to crystallise natural phenomena by naming things that previously had no names.
It was also thought to help us deal with systems that generate too much data, complete with false positives and negatives, and to better understand the constraints of a problem.