Usability Methods for Product Development:
This is a practice area. When I have the money for it, I've employed researchers who do this full time. Teams of researchers, no less. It is a scientifically pursued discipline, and therefore there are piles of books on it. But I'll boil it down as much as I can to buzzwords and oversimplifications, because one key thing is that:
Pretty good usability methods are vastly better than well-done marketing methods.
There's probably some other name for them, but I call all the half-assed research methods "marketing methods," since trying to get someone to give you money by calling their processes "half-assed" is not as effective as you'd think. These methods are things like:
- Focus groups
- Surveys
- Demos
Their downfalls are that:
- They are prone to bias
- They try to measure preference
There are LOTS of types of bias. If anyone simply has to make a survey, run a demo, or do a focus group, ask me and we'll talk through ways to reduce bias and gather information more effectively. Often, these sorts of methods are best used not for end-user feedback but for gathering stakeholder information and buy-in, and you should set up a workshop instead.
Anyway, we're talking about product development getting information from end users, not stakeholders, leaders, etc. So that's first. You have to identify:
- Audience – Or who the end users are.
- Environment – Or the context of use.
Note that you refine all of these with the research methods I'm going to outline below. Ideally you do this iteratively. Do a bit of research, develop a product concept, do a bit of research, refine the concept, etc. Everything can change during this. If, four iterations in, you find a new constituency that has different needs, you have to try to address it, or bring it up and explicitly decide you won't meet their needs.
(Design principle: you cannot meet everyone's needs, so you need to focus, but you must know what the limits are and explicitly address them.)
I use both of these, audience and environment, very heavily. While you may have to present the same info to leadership for politeness, chain-of-command, or political reasons, you are only really gathering info from the identified end users. And you do it in the field. You don't gather the info in labs, conference rooms, or hotel lobbies. You do it where people work, whether cube farms, maintenance centers, vehicles on the move, engine rooms of boats, or the woods under fire. Go there, and watch.
Watch, measure, video, photograph, and take notes. Because we listen, sure. But most of all we care about:
Performance, not preference
I don't (much) care if you like my new website, app, control panel, wrench, body armor, or rifle as much as I want to know if it works well. Also, preference as measured by those marketing methods is first impressions. People grow to love products that work well for them over time. So we measure how fast, how well, and how effectively something works.
We do this because:
People lie
Not on purpose, but they do. Why? Stupid brains, and biases again. But overall, you cannot assume that what people say is true in the ways that matter for creating new products.
There are several classes of such testing, but we'll cover two and a half basic types here (these categories are mine; others break the methods down in other ways):
The first, observational (or ethnographic) research, you do very, very early in the process, or whenever you don't have enough baseline knowledge of how people really use a process or the existing tools.
You might confuse this with anthropology, as one way to do it is to simply sit there in the environment, ideally not impacting the end users at all, and watch what they do. The difference is that we want to know how people do tasks, and don't much care about their society.
In practice, very few people get money to go off to sub-Saharan Africa for 6 months and gather data (not none, but very, very few). Instead we do more intrusive and short-term methods. For example, earlier this year I rode around in test cars (like the ones wrapped in funny vinyl) to see what the drivers actually DID. It was not especially like what the bosses said they did, so we were able to come up with different requirements and design a better digital product to replace the paper processes they use today.
The second type is usability testing. Okay, technically there are two of these: Formative, which you do with prototypes, and Summative, which you do with functional products. But the test methods are the same.
You let people use your product (or make them use it) in the most realistic environment you can. "You can" is a sliding scale of cost, safety, and plausibility. Early prototypes may not be live, so you fake things. You test armor, packs, and rifles on ranges and in FTXs, not battlefields, because it's hard to find usability testers who want to go to war zones and can spend time taking notes instead of ducking.
You observe actual use and measure performance on important metrics. For things with switches and buttons and info displays (from rifles, to radios, to mobile apps) you measure:
- Time on task – How long they take with each step; how long it takes to pick the right button
- Completion rates – What percentage completed the task at all, how many tries it took, and if they needed assistance or to read instructions to do it the first time
- Accuracy – Did they do it right? Did they understand the info properly or misread/misunderstand what they read?
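The three metrics above fall out of the same session records. A minimal sketch of the bookkeeping, using invented participant data (none of these numbers are from a real study):

```python
# Hypothetical usability-test session records; participants and numbers
# are invented for illustration.
sessions = [
    # (participant, seconds on task, completed?, errors made)
    ("P1", 42.0, True, 0),
    ("P2", 75.5, True, 2),
    ("P3", 31.2, True, 0),
    ("P4", 120.0, False, 4),
    ("P5", 58.3, True, 1),
]

# Time on task: mean seconds among participants who finished.
finished = [s for s in sessions if s[2]]
mean_time = sum(s[1] for s in finished) / len(finished)

# Completion rate: what fraction completed the task at all.
completion_rate = len(finished) / len(sessions)

# Accuracy: error-free completions as a share of all attempts.
error_free = sum(1 for s in sessions if s[2] and s[3] == 0)
accuracy = error_free / len(sessions)

print(f"mean time on task: {mean_time:.2f}s")     # → 51.75s
print(f"completion rate:   {completion_rate:.0%}")  # → 80%
print(f"error-free rate:   {accuracy:.0%}")         # → 40%
```

In a real test you'd also log which step each error happened on and whether the participant needed assistance, so you can tell a slow-but-successful design from a confusing one.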
For armor, you'd (I'm guessing here; I don't do this) test whether they put it on right (as wrong is bad) without assistance, over time, in various environments. Whether they can get it off. Whether they can do other tasks (go prone, get out of vehicles, don and doff packs) at the same speed as with the baseline armor.
Comfort and fatigue can also be measured, which gets into complex routines like testing all day, or on repeated days, to see if performance changes over time as people become familiar with the gear or (a risk) learn to use it in unexpected ways.
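Spotting that change over repeated days is just comparing per-day averages. A tiny sketch with invented timings (assumed data, not a real study):

```python
# Invented example: task times (seconds) per repeated test day, to see
# whether performance changes as participants get familiar with the gear.
from statistics import mean

times_by_day = {
    1: [130, 118, 142],
    2: [101, 95, 110],
    3: [88, 84, 90],
}

for day, times in sorted(times_by_day.items()):
    print(f"day {day}: mean {mean(times):.0f}s")
# A falling mean suggests learning; a mean that drops then plateaus well
# below day 1 is the familiarity effect you're trying to separate out.
```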
Okay, we do gather opinion as well. At the end of each test I like to have the participants (individually) fill out a SUS. The System Usability Scale http://www.measuringu.com/sus.php is a specific set of questions, asked in a specific way, that helps to eliminate bias and gives a single number, applicable to any product, that tells you whether people find it basically acceptable and so will try to use it again.
It's useful partly because you can do it with each test and prove to your leadership that you are improving with each iteration.
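The SUS scoring rule is standard: each of the 10 items is answered 1-5, odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to give a 0-100 score. A sketch, with made-up example responses:

```python
def sus_score(responses):
    """Score one participant's System Usability Scale form.

    `responses` is a list of 10 answers, each 1-5, in questionnaire order.
    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    The summed contributions are multiplied by 2.5 for a 0-100 score.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Invented responses for three participants (not real data).
forms = [
    [4, 2, 5, 1, 4, 2, 4, 1, 5, 2],
    [3, 3, 4, 2, 3, 2, 4, 2, 4, 3],
    [5, 1, 5, 2, 4, 1, 5, 1, 4, 2],
]
scores = [sus_score(f) for f in forms]
print(scores)                                  # → [85.0, 65.0, 90.0]
print(f"study mean: {sum(scores) / len(scores):.1f}")  # → 80.0
```

Tracking that study mean across iterations is the "prove you're improving" number.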
I have an article on field methods here: https://www.uxmatters.com/mt/a...lean-ethnography.php
You can do this!
I learned it, and do it as a side job. You can too! Anyone who does this sort of work and wants to do it better can PM me, and I'll secretly help you out as much as I can remotely, without knowing about your secret project.
If your org is interested and has budget, I also have a company to do the work, or train you all in how to do it.