
Sunday, 21 September 2014

These are a few of my favourite (audience research) things

On Friday I popped into London to give a talk at the Art of Digital meetup at the Photographers' Gallery. It's a great series of events organised by Caroline Heron and Jo Healy, so go along sometime if you can. I talked about different ways of doing audience research. (And when I wrote the line 'getting to know you' it gave me an earworm and a 'lessons from musicals' theme.) It was a talk of two halves: in the first, I outlined different ways of thinking about audience research; in the second, I went into a little more detail about a few of my favourite (audience research) things.

There are lots of different ways to understand the contexts and needs different audiences bring to your offerings. You probably also want to test to see if what you're making works for them and to get a sense of what they're currently doing with your websites, apps or venues. It can help to think of research methods along scales of time, distance, numbers, 'density' and intimacy. (Or you could think of it as a journey from 'somewhere out there' to 'dancing cheek to cheek'...)

'Time' refers to both how much time a method asks from the audience and how much time it takes to analyse the results. There's no getting around the fact that nearly all methods require time to plan, prepare and pilot, sorry! You can run 5 second tests that ask remote visitors a single question, or spend months embedded in a workplace shadowing people (and more time afterwards analysing the results). On the distance scale, you can work with remote testers located anywhere across the world, ask people visiting your museum to look at a few prototype screens, or physically locate yourself in someone's office for an interview or observation.

Numbers and 'density' (or the richness of communication and the resulting data) tend to be inversely linked. Analytics or log files let you gather data from millions of website or app users, one-question surveys can garner thousands of responses, you can interview dozens of people or test prototypes with 5-8 users each time. However, the conversations you'll have in a semi-structured interview are much richer than the responses you'll get to a multiple-choice questionnaire. This is partly because it's a two-way dialogue, and partly because in-person interviews convey more information, including tone of voice, physical gestures, impressions of a location and possibly even physical artefacts or demonstrations. Generally, methods that can reach millions of remote people produce lots of point data, while more intimate methods that involve spending lots of time with just a few people produce small datasets of really rich data.

So here are a few of my favourite things: analytics, one-question surveys, 5 second tests, lightweight usability tests, semi-structured interviews, and on-site observations. Ultimately, the methods you use are a balance of time and distance, the richness of the data required, and whether you want to understand the requirements for a site or tool, or measure its performance.

Analytics are great for understanding how people found you, what they're doing on your site, and how this changes over time. They can help you work out which bits of a website need tweaking, and measure the impact of any changes you make. But that only gets you so far - how do you know which trends are meaningful and which are just noise? To understand why people are doing what they do, you need other forms of research to flesh out the numbers.
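
(As a rough illustration of the signal-versus-noise question: a minimal sketch that flags weekly visit counts sitting well outside the recent trend. The file and column names are invented for illustration; any analytics export with a date and a count would do.)

```python
# Rough-and-ready check: flag weekly visit counts that fall well outside
# the recent trend, so you know which changes are worth investigating.
# File and column names are hypothetical.
import csv
import statistics

with open("weekly_visits.csv") as f:  # hypothetical export from your analytics tool
    weeks = [(row["week"], int(row["visits"])) for row in csv.DictReader(f)]

WINDOW = 8  # compare each week against the previous 8 weeks
for i in range(WINDOW, len(weeks)):
    history = [visits for _, visits in weeks[i - WINDOW:i]]
    mean, stdev = statistics.mean(history), statistics.stdev(history)
    label, visits = weeks[i]
    if stdev and abs(visits - mean) > 2 * stdev:  # >2 standard deviations: probably not noise
        print(f"{label}: {visits} visits (recent average {mean:.0f}) - worth a closer look")
```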

One-question surveys are a great way of finding out why people are on your site, and whether they've succeeded in achieving their goals for being there. We linked survey answers to analytics for the last Let's Get Real project so we could see how people who were there for different reasons behaved on the site, but you don't need to go that far - any information about why people are on your site is better than none!
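
(For the curious, the mechanics of that kind of linking are roughly as follows - a sketch, assuming your survey tool and analytics export can share some visitor identifier. All file, column and id names here are hypothetical, not what we actually used on Let's Get Real.)

```python
# Sketch: join one-question survey answers to analytics sessions via a
# shared visitor id, then compare behaviour by stated motivation.
# All file and column names are hypothetical.
import csv
from collections import defaultdict
from statistics import mean

with open("survey.csv") as f:       # columns: visitor_id, reason ("research", "visit planning", ...)
    reasons = {row["visitor_id"]: row["reason"] for row in csv.DictReader(f)}

pages_by_reason = defaultdict(list)
with open("sessions.csv") as f:     # columns: visitor_id, pages_viewed
    for row in csv.DictReader(f):
        reason = reasons.get(row["visitor_id"])
        if reason:
            pages_by_reason[reason].append(int(row["pages_viewed"]))

for reason, pages in sorted(pages_by_reason.items()):
    print(f"{reason}: {mean(pages):.1f} pages per session ({len(pages)} visitors)")
```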

5 second tests and lightweight usability tests are both ways to find out how well a design works for its intended audiences. 5 second tests show people an interface for 5 seconds, then ask them what they remember about it, or where they'd click to do a particular task. They're a good way to make sure your text and design are clear. Usability tests take from a few minutes to an hour, and are usually done in person. One of my favourite lightweight tests involves grabbing a sketch, an iPad or laptop and asking people in a cafĂ© or other space if they'd help by testing a site for a few minutes. You can gather lots of feedback really quickly, and report back with a prioritised list of fixes by the end of the day. 

Semi-structured interviews use the same set of questions each time to ensure some consistency between interviews, but they're flexible enough to let you delve into detail and follow any interesting diversions that arise during the conversation. Interviews and observations can be even more informative if they're done in the space where the activities you're interested in take place. 'Contextual inquiry' goes a step further by observing the tasks you're interested in as they're actually performed. If you can 'apprentice' yourself to someone, it's a great way to have them explain to you why things are done the way they are. However, it's obviously a lot more difficult to find someone willing and able to let you observe them in this way, it's not appropriate for every task or research question, and the data that results can be so rich and dense with information that it takes a long time to review and analyse.

And one final titbit of wisdom from a musical - always look on the bright side of life! Any knowledge is better than none, so if you manage to get any audience research or usability testing done then you're already better off than you were before.

[Update: a comment on twitter reminded me of another favourite research thing: if you don't yet have a site/app/campaign/whatever, test a competitor's!]

Tuesday, 27 March 2012

Geek for a week: residency at the Powerhouse Museum

I've spent the last week as 'geek-in-residence' with the Digital, Social and Emerging Technologies team at the Powerhouse Museum. I wasn't sure what 'geek-in-residence' would mean in reality, but in this case it turned out to be a week of creativity, interesting constraints and rapid, iterative design.

When I arrived on Monday morning, I had no idea what I'd be working on, let alone how it would all work. By the end of the first day I knew how I'd be working, but not exactly what I'd focus on. I came in with fresh questions on Tuesday, and was sketching ideas by lunchtime. I spent the next few days focusing on specific issues within that problem space: I turned initial ideas into wireframes and basic copy, and put them through two rounds of quick-and-dirty testing with members of the public and Powerhouse volunteers. By the time I left on Friday I was able to hand over wireframes for a site called 'conversations about collections', which aims to record people's memories of items from the collection. (I ran out of time to document the technical aspects of how the site could be built in WordPress, but given the skills of the team I think they'll cope.)

The first day and a half were about finding the right-sized problem. In conversations with Paula (Manager of the Visual & Digitisation Services team) and Luke (Web Manager), we discussed what each of us was interested in exploring, looking for the intersection between what was possible in the time available and with the material to hand.

After those first conversations, I went back to Powerhouse's strategy document for inspiration. If in doubt, go back to the mission! I was looking for a tie-in with their goals - luckily their plan made it easy to see where things might fit. Their strategy talked about ideas and technology that have changed our world, and the stories of people who create and inspire them; about being open to 'rich engagement, to new conversations about the collections'.

I also considered what could be supported by the existing API, what kinds of activities worked well with their collections, and what could be usefully built and tested as paper or on-screen prototypes. Like many large collections, most of the objects lack the types of data that support deeper engagement for non-experts (though the significance statements that do exist are lovely).

Two threads emerged from the conversations: bringing social media conversations and activity back into the online collections interfaces to help provide an information scent for users of the site; and crowdsourcing games based around enhancing the collections data.

The first was an approach to the difficulty of surfacing interesting objects in very large collections. Could you create a 'heat map' based on online activity about objects to help searchers and browsers spot objects that might be more interesting?
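
(A sketch of what such a heat map score might look like, with invented signals and weights; the real choice of signals would depend on what the analytics and social platforms could actually provide.)

```python
# Sketch: combine a few activity signals into a single 0-1 'heat' score
# per collection object, which search results could use for highlighting.
# The signals, weights and object ids are invented for illustration.
activity = {
    # object_id: (pageviews, social_mentions, comments)
    "obj-101": (5400, 12, 3),
    "obj-102": (90, 0, 0),
    "obj-103": (2100, 30, 8),
}

WEIGHTS = (1.0, 50.0, 20.0)  # a social mention or comment is rarer, so it counts for more

raw = {oid: sum(w * s for w, s in zip(WEIGHTS, signals))
       for oid, signals in activity.items()}
top = max(raw.values())
heat = {oid: score / top for oid, score in raw.items()}  # normalise to 0-1

for oid, h in sorted(heat.items(), key=lambda kv: -kv[1]):
    print(f"{oid}: {h:.2f}")
```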

At one point Nico (Senior Producer) and I had a look at Google Analytics to see which social media sites were sending traffic to the collections and to suss out how much data could be gleaned. Collection objects are already showing up on Pinterest, and I had wild thoughts about screen-scraping Pinterest (they have no API) to display related boards on the OPAC search results or object pages...

I also thought about building a crowdsourcing game that would use expert knowledge to create data that would, in turn, make better games possible for the general public – an interesting challenge, as open-ended activities are harder to score automatically, so you need to design meaningful rewards and ensure there's an audience to help provide them. However, it was probably a bigger task than I had time for, especially with most of the team already busy on other tasks, though I've been interested in that kind of dual-phase project since my MSc project on crowdsourcing games for museums.

But in the end, I went back to two questions: what information is needed about the collections, and what's the best way to get it? We decided to focus on conversations, stories and clues about objects in the collections, with a site aimed at collecting 'living memories' by asking people what they remember about an object and how they'd explain it to a kid. The name, 'Conversations about collections', came directly from the strategy doc and was just too neat a description to pass up, though 'memory bank' was another contender.

I ended up with five wireframes (clickable PDF at that link) to cover the main tasks of the site: to persuade people (particularly older people) that their memories are worth sharing, and to get the right object in front of the right person. Explaining more about the designs would be a whole other blog post, so in the interests of getting this post out before I head out, I'll save that for another day... I'll update in response to questions (and generally flesh things out when I have more time).

My week at the Powerhouse was a brilliant chance to think through the differences between history of science/social history objects and art objects, and between history and art museums, but that's for another post (perhaps if I ever get around to posting my notes from the MCN session on a similar topic).

It also helped me reflect on my interests, which I would summarise as 'meaningful audience participation' - activities that are engaging and meaningful for the audience and also add value for the museum, activities that actually change the museum in some way (hopefully for the better!), whether that's through crowdsourcing, co-curation or other types of engagement.

Finally, I owe particular thanks to Paula Bray and Luke Dearnley for running with Seb Chan's original suggestion and for their time and contributions to shaping the project; to Nicolaas Earnshaw for wireframe work and Suse Cairns for going out testing on the gallery floor with me; and to Dan Collins, Estee Wah, Geoff Barker and everyone else in the office and on various tours for welcoming me into their space and their conversations.

Saturday, 1 October 2011

Usability: the key that unlocks geeky goodness

This is a quick pointer to three posts about some usability work I did for the JISC-funded Pelagios project, and a reflection on the process. Pelagios aims to 'help introduce Linked Open Data goodness into online resources that refer to places in the Ancient World'. The project has already done lots of great work with its various partners to bring different data sources together, but they wanted to find out whether the various visualisations (particularly the graph explorer) let users discover the full potential of the linked data sets.

I posted on the project blog about how I worked out a testing plan to encourage user-centred design and set up the usability sessions in 'Evaluating Pelagios' usability'; set out how a test session runs (with sample scripts and tasks) in 'Evaluating usability: what happens in a user testing session?'; and finally posted some early Pelagios usability testing results. The results are from a very small sample of potential users, but they were consistent in the issues and positive results uncovered.

The wider lesson for LOD-LAM (linked open data in library, archives, museums) projects is that user testing (and/or a strong user-centred design process) helps general audiences (including subject specialists) appreciate the full potential of a technically-led project - without thoughtful design, the results of all those hours of code may go unloved by the people they were written for. In other words, user experience design is the key that unlocks the geeky goodness that drives these projects. It's old news, but the joy of user testing is that it reminds you of what's really important...

Wednesday, 12 November 2008

WCAG 2.0 is coming!

That'd be the 'Web Content Accessibility Guidelines 2.0' - a 'wide range of recommendations for making Web content more accessible' with success criteria 'written as testable statements that are not technology-specific' (i.e. possibly including JavaScript or Flash as well as HTML and CSS, but the criteria are still sorted into A, AA and AAA).

Putting that in context, a blog post on webstandards.org, 'WCAG 2 and mobileOK Basic Tests specs are proposed recommendations', says:
It's possible that WCAG 2 could be the new accessibility standard by Christmas. What does that mean for you? The answer: it depends. If your approach to accessibility has been one of guidelines and ticking against checkpoints, you'll need to rework your test plans, as the priorities, checkpoints and surrounding structures have changed from WCAG 1. But if your site was developed with an eye to real accessibility for real people rather than as a compliance issue, you should find that there is little difference.
How to Meet WCAG 2.0 (currently a draft) provides a 'customizable quick reference to Web Content Accessibility Guidelines 2.0 requirements (success criteria) and techniques', and there are useful guidelines on Accessible Forms using WCAG 2.0, with practical advice on e.g., associating labels with form inputs. More resources are listed at WCAG 2.0 resources.

I'm impressed with the range and quality of documentation - they are working hard to make it easy to produce accessible sites.

Tuesday, 14 October 2008

UKOLN's one-stop shop 'Cultural Heritage' site

I've been a bad blogger lately (though I do have some good excuses*), so to make up for it here's an interesting new resource from UKOLN - their Cultural Heritage site provides a single point of access to 'a variety of resources on a range of issues of particular relevance to the cultural heritage sector'.

Topics currently include 'collection description, digital preservation, metadata, social networking services, supporting the user experience and Web 2.0'. Usefully, the site includes IntroBytes - short briefing documents aimed at supporting the use of networked technologies and services in the cultural heritage sector - and an events listing. Most sections seem to have RSS feeds, so you can subscribe and get updates when new content or events are added.
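
(If you'd rather poll a feed from a script than use a feed reader, here's a minimal sketch using the Python feedparser library; the feed URL is a placeholder.)

```python
# Minimal feed-polling sketch using the feedparser library
# (pip install feedparser). The URL is a placeholder, not UKOLN's real feed.
import feedparser

feed = feedparser.parse("https://example.org/cultural-heritage/events.rss")
for entry in feed.entries[:5]:
    print(entry.title, "-", entry.link)
```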

* Excuses include: (offline) holidays, Virgin broadband being idiots, changing jobs (I moved from the Museum of London to an entirely front-end role at the Science Museum), and starting a part-time MSc in Human-Centred Systems at City University's School of Informatics.

Wednesday, 7 May 2008

Let's help our visitors get lost

In 'Community: From Little Things, Big Things Grow' on ALA, George Oates from Flickr says:

It's easy to get lost on Flickr. You click from here to there, this to that, then suddenly you look up and notice you've lost hours. Allow visitors to cut their own path through the place and they'll curate their own experiences. The idea that every Flickr visitor has an entirely different view of its content is both unsettling, because you can't control it, and liberating, because you’ve given control away. Embrace the idea that the site map might look more like a spider web than a hierarchy. There are natural links in content created by many, many different people. Everyone who uses a site like Flickr has an entirely different picture of it, so the question becomes, what can you do to suggest the next step in the display you design?

I've been thinking about something like this for a while, though the example I've used is Wikipedia. I have friends who've had to ban themselves from Wikipedia because they literally lose hours there after starting with one innocent question, then clicking onto an interesting link, then onto another...

That ability to lose yourself as you click from one interesting thing to another is exactly what I want for our museum sites: our visitor experience should be as seductive and serendipitous as browsing Wikipedia or Flickr.

And hey, if we look at the links visitors are making between our content, we might even learn something new about our content ourselves.

Saturday, 19 April 2008

Nielsen on 'should your website have concise or in-depth content?'

Long pages with all the text, or shorter pages with links to extended texts - this question often comes up in discussions about our websites. It's the kind of question that can be difficult to answer by looking at the stats for existing sites because raw numbers mask all kinds of factors, and so far we haven't had the time or resources to explore this with our different audiences.

In Long vs. Short Articles as Content Strategy Jakob Nielsen says:
  • If you want many readers, focus on short and scannable content. This is a good strategy for advertising-driven sites or sites that sell impulse buys.
  • If you want people who really need a solution, focus on comprehensive coverage. This is a good strategy if you sell highly targeted solutions to complicated problems.
...

But the very best content strategy is one that mirrors the users' mixed diet. There's no reason to limit yourself to only one content type. It's possible to have short overviews for the majority of users and to supplement them with in-depth coverage and white papers for those few users who need to know more.

Of course, the two user types are often the same person — the one who's usually in a hurry, but is sometimes in thorough-research mode. In fact, our studies of B2B users show that business users often aren't very familiar with the complex products or services they're buying and need simple overviews to orient themselves before they begin more in-depth research.

Hypertext to the Rescue
On the Web, you can offer both short and long treatments within a single hyperspace. Start with overviews and short, simplified pages. Then link to long, in-depth coverage on other pages.

With this approach, you can serve both types of users (or the same user in different stages of the buying process).

The more value you offer users each minute they're on your site, the more likely they are to use your site and the longer they're likely to stay. This is why it's so important to optimize your content strategy for your users' needs.


So how do we adapt commercial models for a cultural heritage context? Could business-to-business users who start by familiarising or orienting themselves before beginning more in-depth research be analogous to the 'meaning making modes' for museum visitors - browsers and followers, searchers or researchers - identified by the consultants Morris Hargreaves McIntyre?

Is a 'read more' link on shorter pages helpful, or does it disrupt the visitor's experience? Can the shorter text be written to suit browsers and followers, and the 'read more' link crafted to tempt the searchers?

I wish I could give the answer in the next paragraph, but I don't know it myself.

Sunday, 24 February 2008

Museum technology project repository launched

MCN have announced the launch of MuseTech Central, a project registry where museum technologists can 'share information about technology-related museum projects'. It sounds like a fabulous way to connect people and share the knowledge gained during project planning and implementation, hopefully saving other museum geeks some resources (and grey hairs) along the way.

I'd love to see something like that for user evaluation reports, so that institutions with similar audiences or collections could compare the results of different approaches, or organisations with limited resources could learn from previous projects.

Friday, 22 February 2008

Open Source Jam (osjam) - designing stuff that gets used by people

On Thursday I went to Google's offices to check out the Open Source Jam. I'd been meaning to go for a while, and since I was finally free on the right night and the topic was 'designing stuff that gets used by people', it was perfect timing. A lot of people spoke about API design issues, which was useful in light of the discussions Jeremy started about the European Digital Library API on the Museums Computer Group email list (look for subject lines containing 'APIs and EDL' and 'API use-cases').

These notes are pretty much just as they were written on my phone, so they're more pointers to good stuff than a proper summary, and I apologise if I've got names or attributions wrong.

I made a note to go read more of Duncan Cragg on URIs.

Paul Mison spoke about API design antipatterns, using Flickr's API as an example. He raised interesting points about which end of the API provider-user relationship should have the expense and responsibility for intensive relational joins, and designing APIs around use cases.

Nat Pryce talked about APIs as UIs for programmers. His experience suggests you shouldn't do what programmers ask for but find out what they want to do in the end and work with that. Other points: avoid scope creep for your API based on feature lists. Naming decisions are important, and there can be multilingual and cultural issues with understanding names and functionality. Have an open dialogue with your community of users but don't be afraid to selectively respond to requests. [It sounds like you need to look for the most common requests as no one API can do everything. If the EDL API is extensible or plug-in-able, is the issue of the API as the only interface to that service or data more tenable?] Design so that code using your API can be readable. Your API should be extensible cos you won't get it right first time. (In discussion someone pointed out that this can mean you should provide points to plug in as well as designing so it's extensible.) Error messages are part of the API (yes!).
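
(On that last point - error messages being part of the API - here's a small sketch of an error payload designed as a UI for programmers: a stable code for client code to branch on, a human-readable message, and a pointer to the docs. The field names are invented for illustration, not from any particular API.)

```python
# Sketch: an error response designed as part of the API's user interface.
# Field names are invented for illustration.
import json

def error_response(status, code, message, docs_url):
    body = {
        "error": {
            "code": code,        # stable identifier that client code can branch on
            "message": message,  # explains what went wrong and how to fix it
            "docs": docs_url,    # where to read more
        }
    }
    return status, json.dumps(body)

status, body = error_response(
    400,
    "missing_parameter",
    "The 'object_id' parameter is required for this method.",
    "https://example.org/api/docs#object_id",
)
print(status, body)
```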

Christian Heilmann spoke on accessibility and made some really good points about accessibility as a hardcore test and incubator for your application/API/service. Build it in from the start, and the benefits go right through to general usability. Also, provide RSS feeds etc. as an alternative method of data access so that someone else can build an application/widget to meet accessibility needs. [It's the kind of common sense stuff you don't think someone has to say until you realise accessibility is still a dirty word to some people.]

Jonathan Chetwynd spoke on learning disabilities (making the point that it includes functional illiteracy) and GUI schemas that would allow users to edit the GUI to meet their accessibility needs. He also mentioned the possibility of wrapping microformats around navigation or other icons.

Dan North talked about how people learn and the Dreyfus model of skill acquisition, which was new to me but immediately seemed like something I need to follow up. [I wonder if anyone's done work on how that relates to models of museum audiences and how it relates to other models of learning styles.]

Someone whose name I didn't catch talked about Behaviour driven design which was also new to me and tied in with Dan's talk.

Thursday, 21 February 2008

Another way to find out what's being said about your organisation

If you're curious to know what's being said about your institution, collections or applications, Omgili might help you discover those conversations. If you don't have the resources for formal evaluation programs, it can be a really useful way of finding out how and why people use your resources, and of figuring out how you can improve your online offerings. From their 'about' blurb:

Omgili finds consumer opinions, debates, discussions, personal experiences, answers and solutions. ... [it's] a specialized search engine that focuses on "many to many" user generated content platforms, such as, Forums, Discussion groups, Mailing lists, answer boards and others. ... Omgili is a crawler based, vertical search engine that scans millions of online discussions worldwide in over 100,000 boards, forums and other discussion based resources. Omgili knows to analyze and differentiate between discussion entities such as topic, title, replies and discussion date.

Monday, 3 September 2007

Usability articles at Webcredible

The article on 10 ways to orientate users on your site is useful because more and more users arrive at our sites via search engines or deep links. Keeping these tips in mind when designing sites helps us give users a sense of the scope, structure and purpose of a website, no matter whether they start from the front page or three levels down.

How to embed usability & UCD internally "offers practical advice of what a user champion can do to introduce and embed usability and user-centered design within a company" and includes 'guerrilla tactics' or small steps towards getting usability implemented. But probably the most important point is this:

The most effective method of getting user centered design in the process is through usability testing. Invite key stakeholders to watch the usability testing sessions. Usability testing is a real eye-opener and once observed most stakeholders find it difficult to ignore the user as part of the production process. (The most appropriate stakeholders are likely to be project managers, user interface designers, creative personnel, developers and business managers.)


I would have emphasised the point above even if they hadn't. The difference that usability testing makes to the attitudes of internal stakeholders is amazing and can really focus the whole project team on usability and user-centred design.

Friday, 10 August 2007

Final diary entry from Catalhoyuk

I'm back in London now but here goes anyway:

August 1
My final entry of the season as I'm on the overnight train from Cumra to Istanbul tonight. After various conversations on the veranda I've been thinking about the intellectual accessibility of our Catalhoyuk data and how that relates to web publication and this entry is just a good way to stop these thoughts running round my head like a rogue tune.

[This has turned into a long entry, and I don't say anything trivial about the weather or other random things so you'd have to be pretty bored to read it all. Shouldn't you be working instead?]

Getting database records up on the website isn't hard - it's just a matter of resources. The tricky part is providing an interesting and engaging experience for the general visitor, or a reliable, useable and useful data set for the specialist visitor.

At the moment it feels like a lot of good content is hidden within the database section of the website. When you get down to browsing lists of features, there's often enough information in the first few lines to catch your interest. But when you get to lists of units, even the pages with some of the unit description presented alongside the list, you start to encounter the '800 lamps' problem.

[A digression/explanation - I'm working on a website at the Museum of London with a searchable/browsable catalogue of objects from Roman Londinium. One section has 800 Roman oil lamps - how on earth can you present that to the user so they can usefully distinguish between one lamp and another?]

Of course, it does depend on the kind of user and what they want to achieve on the Londinium site - for some, it's enough to read one nicely written piece on the use of lamps and maybe a bit about what different types meant, all illustrated with a few choice objects; specialist users may want to search for lamps with very particular characteristics. Here, our '800 lamps' are 11846 (and counting) units of archaeology. The average user isn't going to read every unit sheet, but how can they even choose one to start with? And how many will know how to interpret and create meaning from what they read about the varying shades of brown stuff? Being able to download unit sheets that match a particular pattern - part of a certain building, ones that contain certain types of finds, units related to different kinds of features - is probably of real benefit to specialist visitors, but are we really giving those specialist visitors (professional or amateur) and our general visitors what they need? I'm not sure a raw huge list of units or flotation numbers is of much help to anyone - how do people distinguish between one thumbnail of a lamp or one unit number and another in a useful and meaningful way? I hope this doesn't sound like a criticism of the website - it's just the nature of the material being presented.
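
(One mechanical way into the '800 lamps' problem is faceting: counting records by the attributes people might actually browse by, so a long list offers entry points rather than a wall of numbers. A minimal sketch, with invented unit records and field names:)

```python
# Sketch: turn a flat list of excavation units into faceted entry points.
# The unit records and field names are invented for illustration.
from collections import Counter

units = [
    {"unit": 10001, "building": 49, "finds": ["obsidian", "phytoliths"]},
    {"unit": 10002, "building": 49, "finds": ["pottery"]},
    {"unit": 10003, "building": 33, "finds": ["obsidian"]},
]

by_building = Counter(u["building"] for u in units)
by_find = Counter(find for u in units for find in u["finds"])

print("Units by building:", dict(by_building))
print("Units by find type:", dict(by_find))

# A 'download matching unit sheets' feature is then just a filter:
obsidian_units = [u["unit"] for u in units if "obsidian" in u["finds"]]
print("Units with obsidian:", obsidian_units)
```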

The variability of the data is another problem - it's not just about data cleaning (though the 'view features by type' page shows why data cleaning is useful) but about the difference between the beautiful page for Building 49 and the rather less interesting page for Building 33 (to pick one at random). If a user lands on one of the pages with minimal information, they may never realise that some pages have detailed records with fantastic plans and photos.

So there are the barriers to entry that we might accidentally perpetuate by 'hiding' the content behind lists of numbers; and there is the general intellectual accessibility of the information to the general user. Given limited resources, where should our energies be concentrated? Who are our websites for?

It's also about matching the data and website functionality to the user and their goals - the excavation database might not be of interest to the general user in its most raw form, and that's ok because it will be of great interest to others. At a guess, the general public might be more interested in finds, and if that's the case we should present information about the objects with appropriate interpretation and contextualisation - not just to convey facts, but to help people have a more meaningful experience on the site.

I wonder if 'team favourite' finds or buildings/spaces/features could be a good way into the data, a solution that doesn't mean making some kinds of finds or some buildings into 'treasure' and more important than others. Or perhaps specialists could talk about a unit or feature they find interesting - along the way they could explain how their specialism contributes to the archaeological record (written as if to an intelligent thirteen year old). For example, Flip could talk about phytoliths, or Stringy could talk about obsidian, and what their finds can tell us.

Proper user evaluation would be fabulous, but in the absence of resources, I really should look at the stats and see how the site is used. I wonder if I could do a surveymonkey thing to get general information from different types of users? I wonder what levels of knowledge our visitors have about the Neolithic, about Anatolian history, etc. What brings them to the website? And what makes them stick around?

Intellectual accessibility doesn't only apply to the general public - it also applies to the accessibility of other teams' or labs' content. There are so many tables hidden behind the excavation and specialist database interfaces - some are archived, some had a very particular usage, some are still actively used but have the names of long-gone databases. It's all very well encouraging people to use the database to query across specialisms, but how will they know where to look for the data they need? [And if we make documentation, will anyone read it?]

It was quite cool this morning but now it's hot again. Ha, I lied about not saying anything trivial about the weather! Now go do some work.
(Started July 29, but finally posted August 1)

Wednesday, 4 July 2007

Collected links and random thoughts on user testing

First, some links on considerations for survey design and quick accessibility testing.

Given the constraints of typical museum project budgets, it's helpful to know you can get useful results with as few as five testers. Here's everybody's favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let's face it - almost any testing is better than none. After all, you can't do user-centred design if you don't know what your users want.
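
(Nielsen's argument rests on a simple model: if each tester independently uncovers a given problem with probability of about 31%, the proportion of problems found after n testers is 1 - (1 - 0.31)^n. A quick sketch of how fast that curve flattens:)

```python
# Nielsen & Landauer's model: proportion of usability problems found
# after n testers, assuming each tester finds ~31% of the problems.
L = 0.31
for n in (1, 2, 3, 5, 8, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} testers: ~{found:.0%} of problems found")
```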

There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in "What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public" by Hiromi Sekiguchi and Hirokazu Yoshimura.

To quote from their abstract: "There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users' point of view, including opinions of people who do not want to use the system". [my emphasis]

They asked people to write down "as many ideas as they could - doubts, worries, feelings, and expectations" about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.

One surprising side-effect of good user testing is that it helps get everyone involved in a project to 'buy into' accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc., and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice, and helps everyone re-focus on what the people who'll be using the content actually want to do with it.

I know I'm probably preaching to the converted here, but during Brian Kelly's talk on Accessibility and Innovation at UKMW07 I realised that for years I've had an unconscious test for how well I'll work with someone, based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the 'constraints' of accessibility help create innovations. As 37signals say, "let limitations guide you to creative solutions".

One of the points raised in the discussion that followed Brian's talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate an intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor's commitment to accessibility makes such a difference to the development process and outcomes. I don't think user testing alone will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are at the core of the project from the outset.

Tuesday, 1 May 2007

I'm blogging this post on Twenty Usability Tips for Your Blog so I can find it again, and because it's a useful summary. We're in the process of finding a LAMP host, which we'll be able to use for our OAI-PMH repositories as well as blogs and forums.

While on a Web 2.0 buzzword-ish kick, check out the LAARC's photos on Flickr and the Museum of London photo pool. The first LAARC sets are community excavations at Bruce Castle and Shoreditch Park.

Thursday, 18 January 2007

Notes on usability testing

Further to my post about the downloadable usability.gov guidelines, I've picked out the bits from the chapter on 'Usability Testing' that are relevant to my work, but it's worth reading the whole chapter if you're interested. My comments or headings are in square brackets below.

"Generally, the best method is to conduct a test where representative participants interact with representative scenarios.
...
The second major consideration is to ensure that an iterative approach is used.

Use an iterative design approach

The iterative design process helps to substantially improve the usability of Web sites. One recent study found that the improvements made between the original Web site and the redesigned Web site resulted in thirty percent more task completions, twenty-five percent less time to complete the tasks, and sixty-seven percent greater user satisfaction. A second study reported that eight of ten tasks were performed faster on the Web site that had been iteratively designed. Finally, a third study found that forty-six percent of the original set of issues were resolved by making design changes to the interface.

[Soliciting comments]

Participants tend not to voice negative reports. In one study, when using the ’think aloud’ [as opposed to retrospective] approach, users tended to read text on the screen and verbalize more of what they were doing rather than what they were thinking.

[How many user testers?]

Performance usability testing with users:
– Early in the design process, usability testing with a small number of users (approximately six) is sufficient to identify problems with the information architecture (navigation) and overall design issues. If the Web site has very different types of users (e.g., novices and experts), it is important to test with six or more of each type of user. Another critical factor in this preliminary testing is having trained usability specialists as the usability test facilitator and primary observers.
– Once the navigation, basic content, and display features are in place, quantitative performance testing ... can be conducted

[What kinds of prototypes?]

Designers can use either paper-based or computer-based prototypes. Paper-based prototyping appears to be as effective as computer-based prototyping when trying to identify most usability issues.

Use inspection evaluation [and cognitive walkthroughs] results with caution.
Inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. It is a common practice to conduct an inspection evaluation to try to detect and resolve obvious problems before conducting usability tests. Inspection evaluations should be used cautiously because several studies have shown that they appear to detect far more potential problems than actually exist, and they also tend to miss some real problems.
...
Heuristic evaluations and expert reviews may best be used to identify potential usability issues to evaluate during usability testing. To improve somewhat on the performance of heuristic evaluations, evaluators can use the ’usability problem inspector’ (UPI) method or the ’Discovery and Analysis Resource’ (DARe) method.
...
Cognitive walkthroughs may best be used to identify potential usability issues to evaluate during usability testing.
...
Testers can use either laboratory or remote usability testing because they both elicit similar results.

[And finally]

Use severity ratings with caution."

Monday, 15 January 2007

Useful background on usability testing

I came across www.usability.gov while looking for some background information on usability testing to send to colleagues I'm planning some user evaluation with. It looks like a really useful resource for all stages of a project, from planning to deployment.

Their guidelines are available to download in PDF form, either as the entire book or as individual chapters.