Guerilla Competitor Analysis

Who’s the competition, and how good are their products? Everybody needs to know that. But how do you do it from a Product Owner perspective?

The Product Management community have their own ideas about this, but we’re not going to recycle that knowledge here. Instead, we’ll look at a guerilla style of doing it that’s more focused on the Product Owner. The focus here is on understanding what the competitors’ products actually do and comparing that with what our product does. It could be about copying them, or it could be about understanding the differences. Really, it’s both.

Before we look at a way of doing guerilla (i.e., kind of a hack) competitor analysis, let me tell you a story.

Back when I was doing my research, I got some funding from the Health and Safety Executive in England to help them figure out what software to buy to manage their research portfolio. In the past, they’d had Cap Gemini build bespoke software for them, but building their own solution had proved too slow and too expensive. They’d been thinking about what to do for about a year and a half. When I got there, they were a little demoralized by the whole experience, because managing these research projects was being done in spreadsheets and it wasn’t working very well.

This time, they wanted to buy something ‘off the shelf’. (And in those days, the early 2000s, there was even a term for this: ‘COTS’ – Commercial Off-the-Shelf software. Seems a silly name now.) So, we figured out the top five things we wanted to do with the software.

Here’s what we came up with:

  • Create a new research opportunity (let’s add to that edit, view and search for all opportunities)
  • Publish a research opportunity
  • Invite people to apply for a research opportunity
  • Award a research opportunity to a contractor
  • Allocate funds to a research opportunity

Now, there wasn’t any software we could find specifically for managing research opportunities. It’s a fairly specialised domain. So, we identified generic tools that we thought could do the job, and we made a shortlist of five. We sent each supplier a letter inviting them to come and give us a presentation, along with the list of tasks we wanted to accomplish. We asked them to show us how we could do what we wanted to do with their software.

On the day of the presentations, we had a room full of HSE people. They all knew the criteria upon which we were judging the software, and they played an active role. One supplier didn’t come. One ignored our instructions. Three gave us a presentation based around our described needs. When it was done, the HSE staff had a meeting and made a decision.

It worked well because we already knew what we wanted to do and had described it in enough detail. We didn’t get distracted by all the bells and whistles of software that we didn’t need. Everybody in the room making the decision felt they understood the process. A decision was made painlessly, and it was unanimous. And that’s the story. (I wrote the whole thing up in my Ph.D. – you can read it if you want.)

So, how can you use this approach to do guerilla competitor analysis? Let’s say you are a Product Owner, and you want to know how good the competition really is.

Let’s say you want to make an assessment based on:

  • ease of use (how intuitive it is) and
  • functionality

You already know the class of product you build. As an example, I did this with a learning management system for high schools. We made a list of what our software did, or was scheduled to do. It included:

As a teacher:

  • set up an account
  • set up a class
  • create a piece of homework
  • distribute the homework
  • give homework a mark

As a student:

  • set up an account
  • view tasks
  • ask a question
  • submit homework assignment

As a parent:

  • set up an account
  • ask a question
  • view response
  • view student progress

You can guess what we did next. We made a list of the top 8 products in the ‘learning management’ space. We put a week aside. We went into each product and set up demo accounts and then we tried to do all the functionality we’d previously defined.
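The week of testing boiled down to a simple grid: for each product, for each role, try each task we’d defined and record an impression. Here’s a minimal sketch of what that record-keeping could look like in code. The product names, verdicts, and `record` helper are all hypothetical illustrations, not anything we actually built; only the task lists come from the article.

```python
# A sketch of the qualitative evaluation grid, assuming we score each
# (product, role, task) attempt as "ok", "confusing", or "missing".
from collections import defaultdict

# The tasks defined up front, grouped by role (from the lists above).
tasks = {
    "teacher": ["set up an account", "set up a class", "create a piece of homework",
                "distribute the homework", "give homework a mark"],
    "student": ["set up an account", "view tasks", "ask a question",
                "submit homework assignment"],
    "parent":  ["set up an account", "ask a question", "view response",
                "view student progress"],
}

# Impressions recorded as we try each task in each product.
results = defaultdict(dict)

def record(product, role, task, verdict, note=""):
    """Store a qualitative verdict and an optional free-text note."""
    results[product][(role, task)] = (verdict, note)

# Example entries (hypothetical products and verdicts).
record("Product A", "teacher", "create a piece of homework", "ok")
record("Product A", "parent", "view student progress", "confusing",
       "buried three menus deep")
record("Product B", "student", "submit homework assignment", "missing")

# Summarise: of the tasks we defined, how many went smoothly per product?
total = sum(len(role_tasks) for role_tasks in tasks.values())
for product, entries in results.items():
    ok = sum(1 for verdict, _ in entries.values() if verdict == "ok")
    print(f"{product}: {ok} of {total} defined tasks went smoothly")
```

The point of keeping even an informal grid like this is that every product gets measured against the same predefined tasks, which is what stops the ‘cool features’ from skewing the comparison.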

Sometimes it was fun; sometimes it was frustrating. Sometimes we couldn’t figure out how the software worked. Sometimes we struggled with a UX that wasn’t intuitive. Sometimes we didn’t understand the language the software designers were using. And sometimes we got a good idea, and we put it in our own backlog.

This is not quantitative analysis; it’s very much qualitative. And you might say that we would finish up with just another copycat product. But that’s not what happened. What happened is we collated our impressions, and we presented the results of what we’d done to the team. The team loved it. They loved being told about the competitive environment they faced and getting a look at the competitor products.

It’s not scientific. It’s a guerilla approach. But I like it because it gives a real sense of how important the look and feel of a product is. If we couldn’t understand how it worked, even if it did everything and more, we wouldn’t use it, and we thought it was probably a barrier to others using it too.

Most of all, though, we had the power to evaluate the products because we knew what we wanted to achieve. And we did the same thing with all the products. At no time did we get bogged down in all their ‘cool features’, which, actually, we didn’t think were very important to the core mission of ‘helping mature learning relationships to grow’.

Here’s a link to the product we were building. You can try it out if you want. You’ll notice that the UX is very clean. It’s in German by default, but you can change the language.

