
Sunday, 21 November 2010

Museums and iterative agility: do your ideas get oxygen?

Re-visiting the results of the survey I ran about issues facing museum technologists has inspired me to gather together some great pieces I've read on museum projects moving away from detailed up-front briefs and specifications toward iterative and/or agile development.

In 'WaterWorx – our first in-gallery iPad interactive at the Powerhouse Museum', Seb Chan writes:
"the process by which this game was developed was in itself very different for us. ... Rather than an explicit and ‘completed’ brief be given to Digital Eskimo, the game developed using an iterative and agile methodology, begun by a process that they call ‘considered design‘. This brought together stakeholders and potential users all the way through the development process with ‘real working prototypes’ being delivered along the way – something which is pretty common for how websites and web applications are made, but is still unfortunately not common practice for exhibition development."
I'd also recommend the presentation 'Play at Work: Applying Agile Methods to Museum Website Development' given at the 2010 Museum Computer Network Conference by Dana Mitroff Silvers and Alon Salant for examples of how user stories were used to identify requirements and prioritise development, and for an insight into how games can be used to get everyone working in an agile way.  If their presentation inspires you, you can find games you can play with people to help everyone understand various agile, scrum and other project management techniques and approaches at tastycupcakes.com.

I'm really excited by these examples, as I'm probably not alone in worrying about the mis-match between industry-standard technology project management methods and museum processes. In a 'lunchtime manifesto' written in early 2009, I hoped the sector would be able to 'figure out agile project structures that funders and bid writers can also understand and buy into' - maybe we're finally at that point.

And from outside the museum sector, a view on why up-front briefs don't work for projects where user experience design is important.  Peter Merholz of Adaptive Path writes:
"1. The nature of the user experience problems are typically too complex and nuanced to be articulated explicitly in a brief. Because of that, good user experience work requires ongoing collaboration with the client. Ideally, client and agency basically work as one big team.
2. Unlike the marketing communications that ad agencies develop, user experience solutions will need to live on, and evolve, within the clients’ business. If you haven’t deeply involved the client throughout your process, there is a high likelihood that the client will be unable to maintain whatever you produce."

Finally, a challenge to the perfectionism of museums.  Matt Mullenweg (of WordPress fame), writes in '1.0 Is the Loneliest Number': 'if you’re not embarrassed when you ship your first version you waited too long'.  Ok, so that might be a bit difficult for museums to cope with, but what if it was ok to release your beta websites to the public?  Mullenweg makes a strong case for iterating in public:
"Usage is like oxygen for ideas. You can never fully anticipate how an audience is going to react to something you’ve created until it’s out there. That means every moment you’re working on something without it being in the public it’s actually dying, deprived of the oxygen of the real world.
...
By shipping early and often you have the unique competitive advantage of hearing from real people what they think of your work, which in best case helps you anticipate market direction, and in worst case gives you a few people rooting for you that you can email when your team pivots to a new idea. Nothing can recreate the crucible of real usage.
You think your business is different, that you’re only going to have one shot at press and everything needs to be perfect for when Techcrunch brings the world to your door. But if you only have one shot at getting an audience, you’re doing it wrong."
* The Merholz article above is great because you can play a fun game with the paragraph below - in your museum, what job titles would you put in place of 'art director' and 'copywriter'?  Answers in a comment, if you dare!  I think it feels particularly relevant because of the number of survey responses that suggested museums still aren't very good at applying the expertise of their museum technologists.
"One thing I haven’t yet touched on is the legacy ad agency practice where the art director and copywriter are the voices that matter, and the rest of the team exists to serve their bidding. This might be fine in communications work, but in user experience, where utility is king, this means that the people who best understand user engagement are often the least empowered to do anything about it, while those who have little true understanding of the medium are put in charge. In user experience, design teams need to recognize that great ideas can come from anywhere, and are not just the purview of a creative director."

--
If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and A call for agile museum projects (a lunchtime manifesto) (10 March 2009).

Sunday, 24 January 2010

Performance testing and Agile - top ten tips from Thoughtworks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post summarises a talk on Top ten secret weapons for performance testing in an agile environment, organised by the BCS's SPA (software practice advancement) group with Patrick Kua from ThoughtWorks.

His slides from an earlier presentation are online so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of criticisms from a requirements engineering perspective that agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but how it does it and the qualities it has - usability, security, performance, etc), and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments - much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing - probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or given story cards of their own and managed alongside the stories for the functional requirements.  He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale] as [person or role] I want [natural language description of the requirement].]
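[To make that concrete, here's my own invented example of a performance requirement written in that format - not one from the talk:

```text
So that the site stays usable on our busiest open day,
as a museum visitor,
I want search results to appear within two seconds
when a hundred people are browsing the collection at once.
```
]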

2. One team.
Team dynamics are important - performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' - working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience - testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories - pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view - identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down - try the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe test results - what do you see? Be data driven.
Formulate hypothesis - why is it doing that?
Design an experiment - how can I prove that's what's happening? Lightweight, should be able to run several a day.
Run experiment - take time to gather and examine evidence
Is hypothesis valid? If so -
Change application code

Like all good experiments, you should change only one thing at a time.

Don't panic, stay disciplined.

5. Play performance early
Scheduling around iterative builds makes it more possible. A few tests during build is better than a block at the end.  Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure - iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario.
Use visualisations to help digest and communicate test results. Build them in iterations too, e.g. colour to show the number of http requests before you get error codes. If slicing by scenario, test by going through a whole scenario for one persona.

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation - continuous integration good.
Automated tests
Automated packaging
Automated deployment [yes please - it should be easy to get different builds onto an environment]
Automated test orchestration - playing with scenarios, put load generators through profiles.
Automated analysis
Automated scheduling - part of pipeline. Overnight runs.
Automated result archiving - can check raw output if discover issues later

Why automate? Reproducible and constant; faster feedback; higher productivity.
Can add automated load generation e.g. JMeter, which can also run in distributed agent mode.
Ideally run sanity performance tests for show stoppers at the end of functional tests, then a full overnight test.

8. Continuous performance testing
Build pipeline.
Application level - compilation and test units; functional test; build RPM (or whatever distribution thingy).
Into performance level - 5 minute sanity test; typical day test.

Spot incremental performance degradation - set tests to fail if the percentage increase is too high.
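[A minimal sketch of that kind of check - the function and threshold names are my own, the talk didn't prescribe an implementation. Each run's timing is compared against a stored baseline, and the build fails if the percentage increase is too high:

```python
def check_regression(baseline_ms, current_ms, max_increase_pct=10.0):
    """Fail if the current response time has degraded more than
    max_increase_pct relative to the stored baseline."""
    increase_pct = (current_ms - baseline_ms) / baseline_ms * 100
    if increase_pct > max_increase_pct:
        raise AssertionError(
            f"Performance regressed by {increase_pct:.1f}% "
            f"(baseline {baseline_ms}ms, current {current_ms}ms)"
        )
    return increase_pct

# An increase within the threshold passes silently; a larger
# one raises and so fails the automated pipeline run.
check_regression(200, 210)  # 5% slower - within the threshold
```
]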

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.
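[My own illustration of the idea, not code from the talk: an analysis helper like a percentile calculation is easy to unit test in isolation, so a bug in it fails fast rather than silently skewing an overnight run's results:

```python
import math

def percentile(samples, pct):
    """Return the pct-th percentile (nearest-rank method) of a
    list of response times, e.g. pct=95 for the 95th percentile."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest rank: the smallest value with at least pct% of
    # the samples at or below it.
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

def test_percentile():
    times = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
    assert percentile(times, 50) == 500
    assert percentile(times, 95) == 1000
    assert percentile(times, 100) == 1000

test_percentile()
```
]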

V model of testing - performance testing sits at the top right-hand edge of the V.

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase - here's what we learned and what we changed as a result - show the benefits of on-going performance testing.

General comments from Q&A: can do load generation and analyse session logs of user journeys. Testing is risk mitigation - can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice - I'd definitely go to another SPA event.

Tuesday, 28 April 2009

Christian Heilmann on Yahoo!'s YQL, open data tables, APIs

My notes from Christian Heilmann's talk on 'Reaching those web folk' with Yahoo!'s new-ish YQL, open data tables and APIs at the National Maritime Museum [his slides]. My notes are a bit random, but might be useful for people, especially the idea of using YQL as an easy way to prototype APIs (or implement APIs without too much work on your part).

For him it's about data on the web, not just technology.

Number of users is a crap metric, [should consider the user experience].

Stats should be what you use to discover where the problems are, not to pat yourself on the back.

People with BlackBerrys have no Javascript, no CSS. Don't give them front-loaded navigation they have to scroll through - cos they won't.

If you think of your site as content, then visitors can become 'broadcasting stations' and relay your message. Information flows between readers and content. They're passing it on through distribution channels you're not even aware of.

Content on the web is validated with links and quotes from other sources e.g. Wikipedia. People mix your information with other sources to prove a point or validate it. eg. photos on maps.

How can you be part of it?
Make it easy to access. Structure your websites in a semantic manner (plain old semantic HTML). Title is important, etc. Add more semantic richness with RDF and microformats. Provide data feeds or RSS. Consider the Rolls Royce of distribution - an API. Help other machines make sense of your content - search engines will love you too.

Yahoo index via BOSS API - Yahoo do it because they know 'search engines are dying'. Catch-all search engines are stupid. Apples are not the same apples for everyone. Build a cleverer web search.

http://ask-boss.appspot.com/ - nlp analysis of search results. Try 'who is batman in the dark knight' - amazing.

BOSS provides mainstream channel for semantic web and microformats. Microformats are chicken and egg problem. Using searchmonkey technology, BOSS lists this information in the results. BOSS can return all known information about a page, structured.

Key terms parameter in BOSS - what did people enter to find a site/page? http://keywordfinder.org/ - what successful websites have for a given keyword.

Clean HTML is the most important thing, semantic and microformats are good.

If your data is interesting enough, people will try to get to it and remix it.

[Curl has grown up since I last used it! Can be any browser, do cookies, etc.]

Now the web looks like an RSS reader.

Include RSS in your stats.

Guardian - any of their content websites put out RSS through CMS. They then provided an API so end users can filter down to the data they need.

Programmable Web - excellent resource but can be overwhelming.

The more data sources you use, the more time you spend reading API documentation, as every API is different. Terms, formats, etc. The more sources you connect to, the more chances of error. The more stuff you pull in, the slower the performance of your website.

So you need systems to aggregate sources painlessly. Yahoo Pipes is one: a visual interface, but changes have to be made by hand.

You can't quickly use a pipe in your code and change it on the fly. e.g. change a parameter for one implementation. No version control.

So that's one of the reasons for YQL: Yahoo Query Language. SQL style interface to all yahoo data (all Yahoo APIs) and the web. Yahoo build things with APIs cos it's the only way to scale. Book: 'scalable websites', all about APIs.

Build queries to Yahoo APIs, try them out in YQL console. Provides diagnostics - which URLs, how long it took, any problems encountered. Allows nesting of API calls.

Outputs XML or JSON, consistent format so you know how to use that information.

YQL also helped internally because of varying APIs between departments.

Gives access to all Yahoo services, any data sources on the web, including html and microformats, and can scrape any website.
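[For the record - the YQL service was later retired, so this is illustrative only: a query looked like SQL and was sent to a public REST endpoint as a URL parameter. A sketch of building such a request URL (the feed URL is a placeholder):

```python
from urllib.parse import urlencode

# The public YQL endpoint as it existed at the time; the service
# has since been shut down, so this is for illustration only.
YQL_ENDPOINT = "http://query.yahooapis.com/v1/public/yql"

def yql_url(query, fmt="json"):
    """Build the request URL for an SQL-style YQL query."""
    return YQL_ENDPOINT + "?" + urlencode({"q": query, "format": fmt})

# Pulling item titles out of an arbitrary RSS feed via YQL's
# 'rss' table - the query is URL-encoded into the request.
url = yql_url('select title from rss where url="http://example.org/feed.xml"')
print(url)
```
]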

Open tables
Easy way to add your own information to YQL. Tell Yahoo the end point where it can get the info.

Jim wanted to allow people to access data without building an API. All it needed was a simple XML file.

[Though you do need RSS results from a search engine to point to - I'm going to see what we can output from our Google Mini and will share any code - or would appreciate some time-saving pointers if anyone has any. Yes, hello, lazyweb, that's my coat, thanks.]

Basically it's a way of providing an API without having to develop one.

Concluding: you can piggyback on people's social connections with other people by making data shareable. [Then your data is shared, yay. Assuming your institution is down with that, and no copyrights or puppies were hurt in the process.]

APIs are a commitment - they have to be available all the time, with a lot of traffic, but it's hard to measure that traffic and its benefits. Making APIs scale is a pain and you have to be clever to do it. Pointing a YQL open data table at the search engine on your own site also works.

Saves documenting API? [??]

YQL handles the interface, caching and data conversion for you. Also limits the access to sensible levels - 10,000 hits/hour.

Jim - 'images from collection' displayed on the page as a badge thing, with YQL as an RSS browser. Can just create an RSS feed for an exhibition, then make a new badge for each new exhibition.

Using YQL protects against injection attacks.

Comment from audience - YQL as meta-API.

Registering is basically making the XML file. You need a Yahoo ID to use the console. [The console is cool, basically like a SQL 'enterprise' system console, with errors and transaction processing costs.]

We had questions about adding in metrics, stats, to use both for reporting and keeping funders/bosses happy and for diagnostics - to e.g. find out which areas of the collection are being queried, what people are finding interesting.

github repository as place to register open tables to make them discoverable.

There's a YQL blog.

[So, that's it - it's probably worth a play, and while your organisation might not want to use it in production without checking out how long the service is likely to be around, etc, it seems like an easy way of playing with API-able data. It'd be really interesting to see what happened if a few museums with some overlap in their collections coverage all made their data available as an open table.]

Monday, 6 April 2009

Woohoo!

The results of JISC's dev8D 'developer happiness' prize have been announced - congratulations to List8D and their "web 2.0-friendly reading lists" - it's something I'd love to see in my own uni course.

And yay the Three Lazy Geeks, because we came second! That was a lovely surprise, and really the glace cherry on the icing of the cake because the whole event was a great experience, and I really enjoyed working with Ian and Pete, as rushed as the whole thing was.

Tuesday, 10 March 2009

Agile development presentation, dev8D

These are my very rough notes on Grahame Klyne's talk on Agile Development at JISC's dev8D event in February. Grahame works with bioinformatics and the semantic web at the zoology department of Oxford University. He is interested in how people can make useful things out of semantic web technologies.

Any mistakes are mine, obviously, and any comments or corrections are welcome. (I should warn that they're still rough, even though it's taken me a month to get them online.) My notes in [square brackets] below.

He started by asking people in the room to introduce themselves, which was a nice idea as most people hadn't met.

Agile and their project
Agile seemed to be particularly appropriate for development in a research context, where the final outcomes are necessarily uncertain. [I'm particularly interested in how they managed to build Agile development into the JISC project proposal, as reflected in 'A call for agile museum projects (a lunchtime manifesto)'. Even getting agile working in a university environment seems like an impressive achievement to me, though universities tend to be more up-to-date with development models than museums.]

They had real users, and wanted to apply semantic web technologies to help them do their research.

At the start of the project, all the developers went on a week-long training course on agile development, which was really important for the smooth running of the project as they all came out with a common view on how they might go forward. Everyone worked with a 'how can we make this work best for us' view of agile development.

Agile - what's it about?
Agile values individuals and interactions over processes and tools. It values responding to change over following a plan - this is particularly interesting when writing proposals for funders like JISC.

From a personal perspective, the key principles became: what we do is (end) user led; spend a lot of time communicating; build on existing working system code (i.e. value code over documentation); develop incrementally. It's not in all the textbooks but they found it's important - they 'retrospect'.

User-led: you need a real user as a starting point. He wasn't sure how to advise someone working on a project without real users [I didn't ask, but why would you be doing a project without real users?]. It's far easier to elicit requirements from users if you have a working system.

Build and maintain trust in the team. [Important.]

Building on working code: start with something simple that works. Build in small steps - it's easy to say, but the discipline of forcing yourself to take tiny steps is quite hard. The temptation is to cram in a bit more functionality and think you'll sort it out later. When you get used to the discipline of small steps, it gets so much easier to maintain flow of development.

Minimise periods of time when you don't have working system.

Don't sacrifice quality.

Always look for easy ways of implementing what you need to do now. Don't bring in more complex solution because you think you might need it later.

Retrospection - 'the one indispensable practice of agile'? As a team, take time regularly to review and adjust.

More random points: Test-led development is often associated with agile development. Think of test cases as specification - it's also a useful mindset for standards development groups. Test cases are particularly useful when working with external code bases or applications - even more so than when working with your own code. [There was quite a bit of discussion about this; I think whether it made sense to you depended on your experiences commissioning or reviewing possible applications for purchase for institutional use. I can think of occasions when it would have been a very useful approach for dealing with vendor oversell - it sounds like a sensible way to test the fit of third-party applications for your situation.]

Keep refactoring separate from the addition of new functionality.

Card planning: for e.g. user stories, tasks. It's a good solution in an environment with very strong characters: it allows everyone to be confident that their input is being noted, which means they don't hijack the session to make sure their points have been heard... the team can then review and decide which are most important in the next small block of work.

Outcomes: progress had been steady and continuous rather than meteoric. What seems like a slow pace at the time actually gets you quite far. It produces a sense of continuous progress.

Unexpected architectural choices: he had a particular view about how web interactions were going to work in the project, e.g. the choice between server-side or client-side JavaScript. He was sceptical, but knew he could change his mind later, so he could follow one path and change if necessary. This actually resulted in architectural choices that would never have been made upfront but which were best for the situation.

Discussion
Never refactor until you have to. Don't make stuff you *might* need, make it when you need it.

A call for agile museum projects (a lunchtime manifesto)

Yet another conversation on twitter about the NMOLP/Creative Spaces project led to a discussion of the long lead times for digital projects in the cultural heritage sector. I've worked on projects that were specced and goals agreed with funders five years before delivery, and two years before any technical or user-focussed specification or development began, and I wouldn't be surprised if something similar happened with NMOLP.

Five years is obviously a *very* long time in internet time, though it's a blink of an eye for a museum. So how do we work with that institutional reality? We need to figure out agile, adaptable project structures that funders and bid writers can also understand and buy into...

The initial project bid must be written to allow for implementation decisions that take into account the current context, and ideally a major goal of the bid writing process should be finding points where existing infrastructure could be re-used. The first step for any new project should be a proper study of the needs of current and potential users in the context of the stated goals of the project. All schema, infrastructure and interface design decisions should have a link to one or more of those goals. Projects should be built around usability goals, not object counts or interface milestones set in stone three years earlier.

Taking institutional parameters into account is of course necessary, but letting them drive the decision making process leads to sub-optimal projects, so projects should have the ability to point out where institutional constraints are a risk for the project. Constraints might be cultural, technical, political or collections-related - we're good at talking about the technical and resourcing constraints, but while we all acknowledge the cultural and political constraints it often happens behind closed doors and usually not in a way that explicitly helps the project succeed.

And since this is my lunchtime dream world, I would like plain old digitisation to be considered sexy without the need to promise funders more infrastructure they can show their grandkids.

We also need to work out project models that will get buy-in from contractors and 3rd party suppliers. As Dan Zambonini said, "'Usability goals' sounds like an incredibly difficult thing to quantify", so existing models like Agile/sprint-esque 'user stories' might be easier to manage.

We, as developers, need to create a culture in which 'failing intelligently' is rewarded. I think most of us believe in 'failing faster to succeed sooner', at least to some extent, but we need to think carefully about the discourse around public discussions of project weaknesses or failures if we want this to be a reality. My notes from Clay Shirky's ICA talk earlier this year say that the members of the Invisible College (a society which aimed to 'acquire knowledge through experimental investigation') "went after alchemists for failing to be informative when they were wrong" - " it was ok to be wrong but they wanted them to think about and share what went wrong". They had ideas about how results should be written up and shared for maximum benefit. I think we should too.

I think the MCG and Collections Trust could both have a role to play in advocating more agile models to those who write and fund project bids. Each museum also has a responsibility to make sure projects it puts forward (whether singly or in a partnership) have been reality checked by its own web or digital specialists as well as other consultants, but we should also look to projects and developers (regardless of sector) that have managed to figure out agile project structures that funders and bid writers can also understand and buy into.

So - a blatant call for self-promotion - if you've worked on projects that could provide a good example, written about your dream project structures, know people or projects that'd make a good case study - let me know in the comments.

Thanks also to Mike, Giv and Mike, Daniel Evans (and the MCG plus people who listened to me rant at dev8D in general) for the conversations that triggered this post.


--
If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and Museums and iterative agility: do your ideas get oxygen? (21 November 2010).

Wednesday, 9 July 2008

20% time - an experiment (with some results)

A company called Atlassian have been experimenting with allowing their engineers 20% of their time to work on free or non-core projects (a la Google). They said:
You see, while everyone knows about Google's 20% time and we've heard about all the neat products born from it (Google News, GMail etc) - we've found it extremely difficult to get any hard facts about how it actually works in practice.
So they started with a list of questions they wanted to answer through their experiment, and they've been blogging about it at http://blogs.atlassian.com/developer/20_percent_time/. It makes for interesting reading, and it's great to see some real evidence starting to emerge.

Hat tip: Tech-Ed Collisions.

Tuesday, 15 April 2008

How I do documentation: a column of bumph and a column of gold

All programmers hate documentation, right? But I've discovered a way to make it less painful and I'm posting in case it helps anyone else.

The first trick is to start documenting as soon as you start thinking about a project - well before you've written any code. I keep a running document of the work I've done, including the bits I'm about to try, information about links into other databases or applications, issues I need to think about or questions I need to ask someone, rude comments (I know, I look like such a nice girl), references, quick use cases, bits about functions, summary notes from meetings, etc.

Mostly I record by date, blog style. Doing it by date helps me link repository files, paper notes and emails with particular bits of work, which can otherwise be tricky if it's a while since you worked on a project or if you have lots of projects on the go. It's also handy if you need to record the time spent on different projects.

I just did it like this for a while, and it was ok, but I learnt the hard way that it takes a while to sort through it when I need to send someone else some documentation. Then I made a conscious decision to separate the random musings from the decisions and notes on the productive bits of code.

So now my document has two columns. This first column is all the bumph described above - the stuff I'd need if I wanted to retrace my steps or remind myself why I ended up doing things a certain way. The second column records key decisions or final solutions. This is your column of gold.

This way I can quickly run down the items in the second column, organise it by area instead of by date and come up with some good documentation without much effort. And if I ever want to write up the whole project, I've got a record of the whole process in the column of bumph.

You could add a third column to record outstanding tasks or questions. I tend to mark these up with colour and un-colour them when they're done. It just depends how you like to work.

It's amazingly simple, but it works. I hope it might be useful for you too. Or if you have any better suggestions (or a better title for this post), I'd love to hear them.

Friday, 22 February 2008

Open Source Jam (osjam) - designing stuff that gets used by people

On Thursday I went to Google's offices to check out the Open Source Jam. I'd been meaning to go for a while, so when I was finally free on the right night and the topic was 'Designing stuff that gets used by people', the timing was perfect. A lot of people spoke about API design issues, which was useful in light of the discussions Jeremy started about the European Digital Library API on the Museums Computer Group email list (look for subject lines containing 'APIs and EDL' and 'API use-cases').

These notes are pretty much just as they were written on my phone, so they're more pointers to good stuff than a proper summary, and I apologise if I've got names or attributions wrong.

I made a note to go read more of Duncan Cragg on URIs.

Paul Mison spoke about API design antipatterns, using Flickr's API as an example. He raised interesting points about which end of the API provider-user relationship should have the expense and responsibility for intensive relational joins, and designing APIs around use cases.

Nat Pryce talked about APIs as UIs for programmers. His experience suggests you shouldn't build what programmers ask for, but find out what they're ultimately trying to do and work with that. Other points: avoid scope creep for your API based on feature lists. Naming decisions are important, and there can be multilingual and cultural issues with understanding names and functionality. Have an open dialogue with your community of users, but don't be afraid to respond selectively to requests. [It sounds like you need to look for the most common requests, as no one API can do everything. If the EDL API is extensible or plug-in-able, is the issue of the API as the only interface to that service or data more tenable?] Design so that code using your API can be readable. Your API should be extensible cos you won't get it right first time. (In discussion someone pointed out that this can mean you should provide points to plug in as well as designing so it's extensible.) Error messages are part of the API (yes!).
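
A couple of Nat's points lend themselves to a tiny illustration. This is purely hypothetical code - the names, records and error codes are all invented - but it shows descriptive errors as part of the contract, and a 'fields' parameter left open for later extension:

```python
# Hypothetical sketch only: the API, names and data are invented.

class ApiError(Exception):
    """Errors callers can act on: a machine-readable code plus a message."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def get_object(object_id, fields=("title", "date")):
    """Fetch an object record; 'fields' leaves room to extend the API later."""
    records = {"obj-1": {"title": "Loom", "date": "1890", "maker": "unknown"}}
    if object_id not in records:
        # The error message is part of the API: say what went wrong
        # and what a valid request looks like.
        raise ApiError("not_found",
                       f"No object with id {object_id!r}; ids look like 'obj-1'.")
    record = records[object_id]
    return {f: record[f] for f in fields if f in record}
```

Code calling it stays readable, too: `get_object("obj-1", fields=("title", "maker"))` says what it's doing.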

Christian Heilmann spoke on accessibility and made some really good points about accessibility as a hardcore test and incubator for your application/API/service. Build it in from the start, and the benefits go right through to general usability. Also, provide RSS feeds etc as an alternative method for data access so that someone else can build an application/widget to meet accessibility needs. [It's the kind of common sense stuff you don't think someone has to say until you realise accessibility is still a dirty word to some people]
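
Christian's RSS point is cheap to act on: a feed can be generated from data you already have with nothing but the standard library. A bare-bones sketch - the item data and function name are invented, and a real feed would need more elements (link, description):

```python
# Bare-bones RSS generation from existing data, standard library only.
import xml.etree.ElementTree as ET

def to_rss(channel_title, items):
    """Build a minimal RSS 2.0 document from a list of item dicts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for item in items:
        el = ET.SubElement(channel, "item")
        ET.SubElement(el, "title").text = item["title"]
        ET.SubElement(el, "link").text = item["link"]
    return ET.tostring(rss, encoding="unicode")
```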

Jonathan Chetwynd spoke on learning disabilities (making the point that it includes functional illiteracy) and GUI schemas that would allow users to edit the GUI to meet their accessibility needs. He also mentioned the possibility of wrapping microformats around navigation or other icons.

Dan North talked about how people learn and the Dreyfus model of skill acquisition, which was new to me but immediately seemed like something I need to follow up. [I wonder if anyone's done work on how that relates to models of museum audiences and how it relates to other models of learning styles.]

Someone whose name I didn't catch talked about behaviour-driven development, which was also new to me and tied in with Dan's talk.
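
For anyone else meeting it for the first time: behaviour-driven development expresses requirements as concrete examples of behaviour, usually in 'given/when/then' form. Frameworks like Cucumber parse that text properly; this bare Python sketch (with an invented scenario) just mirrors the shape:

```python
# An invented scenario written in given/when/then style, as plain Python.

def test_visitor_can_tag_an_object():
    # Given an object with no tags
    tags = set()
    # When a visitor adds the tag "textile"
    tags.add("textile")
    # Then the object's tags include "textile"
    assert "textile" in tags
```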

Monday, 10 December 2007

'The BBC's Fifteen Web Principles'

An old post (February this year), but one worth noting: The BBC's Fifteen Web Principles.

(I've been in Japan and was busy in that 'pre-holiday' way beforehand, so haven't updated recently. Naughty me.)

Thursday, 27 September 2007

Two unrelated posts I've liked recently: one on Navigators, Explorers, and Engaged Participants as user models, and one on going back to basics for digital museum content:

"We don't need new technologies to attract teen audiences, we need, if anything, to revisit how we (and others) interpret our collections and ideas and then decide what new technologies can best convey the information"

Thursday, 23 August 2007

In a post titled 'What is Web 3.0?', Nicholas Carr said:

"Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people."

And recently I went to a London Geek Girl Dinner, where Paul Amery from Skype (who hosted the event) said:
"the next big step forward in software is going to be providing the plumbing, to provide people what they want, where they want ... start thinking about plumbing all this software together, joining solutions together ... mashups are just the tip of the iceberg".

So why does that matter to us in the cultural heritage sector? Without stretching the analogy too far, we have two possible roles: one is to provide the content that flows through the pipes, ensuring we use plumbing-compatible tubes so that other people can plumb our content into new applications; the other is to build applications ourselves, using our data and others'. I think we're brilliant content producers, and we're getting better at providing re-usable data sources - but we often don't have the resources to do cool things with them ourselves.
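
A toy version of the plumbing idea, with in-memory lists standing in for a museum feed and a third-party geographic source (all the data and names are invented): join them and you have a map mashup in miniature.

```python
# Invented example data: a 'museum feed' and a geographic lookup
# standing in for two re-usable, plumbing-compatible sources.
museum_objects = [
    {"title": "Spinning jenny", "place": "Lancashire"},
    {"title": "Tea bowl", "place": "Kyoto"},
]
geo_lookup = {"Lancashire": (53.8, -2.5), "Kyoto": (35.0, 135.8)}

def mash_up(objects, lookup):
    """Attach coordinates to each object - plumbing two sources together."""
    return [dict(obj, coords=lookup.get(obj["place"])) for obj in objects]
```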

Maybe what I'm advocating is giving geeks in the cultural heritage sector the time to spend playing with technology and supplying the tools for agile development. Or maybe it's just the perennial cry of the backend geek who never gets to play with the shiny pretty things. I'm still thinking about this one.

Friday, 15 June 2007

File under 'fabulous resources that I doubt I'll ever get time to read properly': the Journal of Universal Computer Science, D-Lib Magazine ('digital library research and development, including but not limited to new technologies, applications, and contextual social and economic issues') and transcripts from the Research Library in the 21st Century symposium.

On the other hand, Introduction to Abject-Oriented Programming is a very quick read, and laugh-out-loud funny (if you're a tragic geek like me).

Wednesday, 7 March 2007

Interesting BBC article on the philosophy behind Craigslist:

"Initially it was mostly coming in via email which we would reply to, but we've grown so much that now the more common thing is you set up a series of discussion forums in which users bring up various things that they think are important to change or modify in some way.

Users talk amongst themselves about things we're doing poorly or could be doing better, and then we're able to observe that interaction. It proves to be a very kind of efficient and interesting and useful way, nowadays, of digesting that feedback.

The other important aspect that you might not imagine initially is that all of the feedback is coming in as 'voting with their feet'. We just watch how people are using particular categories.

If we see that, 'oh users want to do this and we're not currently enabling this', then we try to code up some changes to better enable them to do whatever that is."