
Driven by a strong desire to involve our customers across the entire customer journey, at AT Internet we’ve recently taken a user-centric turn with our product strategy. This means user experience was a particularly important factor when designing the new version of our solution.
It wasn’t just a question of creating a tool whose usability and ergonomics were thoughtfully approached – we also wanted to involve the entire company and our customers in our product design process, from A to Z. Explorer – and the Analytics Suite 2 in general – represented a great opportunity to take our first steps in this direction.

 

First steps toward ‘user-centricity’

Throughout the different phases of reflecting on our vision for Explorer, we involved our colleagues via workshops and presentations. These initial exchanges enabled us to obtain the necessary material to build a prototype.

Of course, we wanted to go beyond these internal exchanges and put the product in the hands of a few customers as quickly as possible. Our goal was to get feedback very early in the development process, enabling us to test several points: Did the proposed HCI respond to customer needs? Was our product vision correctly aligned with our users’ expectations? Were the features useful and intuitive?

To find out, we decided to test the early versions of our products with clients for feedback that would allow us to:

  • Quickly prioritise features, thanks to our future end users expressing their tangible, detailed needs
  • Make user feedback the #1 source to feed our backlogs
  • Build a product roadmap completely aligned with user expectations

 

Introducing a co-design programme

This project became a reality at the 8th edition of our Digital Analytics Forum (DAF) event, during which we presented our vision for Explorer, our new tool for data analysis and reporting. This event was the perfect occasion to involve our clients in this new project: After presenting Explorer, we invited clients to participate in one-on-one demos of the prototype and “getting started” sessions.

But we wanted to involve our customers in the longer term, beyond these momentary exchanges. Our goal was twofold: give customers the opportunity to see how our products evolve throughout the development phase, and also give ourselves the opportunity to readjust developments along the way based on customer feedback, without having to wait for the release of an official version for everyone.

To achieve this, we decided to implement a “co-design” process to involve interested customers who were open to answering questionnaires, participating in workshops and more.

Following the DAF event, 50 customers volunteered to participate in this programme.

 

Leading the process

The co-design process can only work efficiently with a “give-and-take” approach. Co-design activity should be regular (without being too frequent) and alternate between the software creator providing new material and the participating users providing feedback, to keep the flame alive.

We therefore treated our “guinea pig” clients to exclusive opportunities:

  • A presentation video of the product’s initial developments
  • A pre-version available for use in their user account
  • A “getting started” webinar
  • Newsletter updates on the product’s progress

Based on this information, our co-designer customers shared their feedback via several channels:

  • Voting for certain features via a “fake doors” system in the interface
  • A questionnaire based on the Kano model to establish priority amongst the main features
  • A preference test to validate functional approaches, notably regarding access to graphing tools
  • A feedback form within the tool’s interface to directly share feedback in real conditions
  • One-on-one interviews

 

The tools

Prioritising features: Kano & fake doors

Kano questionnaire

Our goal in using the Kano questionnaire was to present a set of major features and let our co-designers share how they felt about each one. Among the proposed features, the following were rated as those that would bring the greatest satisfaction:

“Must-have” features:

  • Export an analysis
  • Combine dimensions in a table, like in Data Query

“Attractive” features:

  • Send an analysis from Explorer to a report, dashboard, or Data Query

EXAMPLE OF A QUESTION BASED ON THE KANO MODEL


EXAMPLE OF RESPONSE TO KANO QUESTIONNAIRE
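For readers unfamiliar with the method, the scoring behind a Kano questionnaire can be sketched in a few lines. This is a minimal illustration using the standard Kano evaluation table; the feature names and responses are invented for the example and are not AT Internet’s actual data or implementation.

```python
# Minimal sketch of Kano-questionnaire scoring, based on the standard
# Kano evaluation table (illustrative only).
from collections import Counter

# The five Kano answer options, in a fixed order.
SCALE = ["like", "expect", "neutral", "tolerate", "dislike"]

# Standard Kano evaluation table.
# Rows = answer to the functional question ("if the feature IS present"),
# columns = answer to the dysfunctional question ("if it is ABSENT").
# M = Must-have, O = One-dimensional, A = Attractive,
# I = Indifferent, R = Reverse, Q = Questionable.
TABLE = [
    # like expect neutral tolerate dislike   <- dysfunctional answer
    ["Q",  "A",   "A",    "A",     "O"],    # functional: like
    ["R",  "I",   "I",    "I",     "M"],    # functional: expect
    ["R",  "I",   "I",    "I",     "M"],    # functional: neutral
    ["R",  "I",   "I",    "I",     "M"],    # functional: tolerate
    ["R",  "R",   "R",    "R",     "Q"],    # functional: dislike
]

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's pair of answers to a Kano category code."""
    return TABLE[SCALE.index(functional)][SCALE.index(dysfunctional)]

def dominant_category(responses) -> str:
    """Tally all respondents' answer pairs; return the most frequent category."""
    counts = Counter(classify(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

# Invented responses for a feature such as "Export an analysis":
responses = [
    ("expect",  "dislike"),   # -> M
    ("expect",  "dislike"),   # -> M
    ("like",    "dislike"),   # -> O
    ("neutral", "dislike"),   # -> M
    ("expect",  "neutral"),   # -> I
]
print(dominant_category(responses))  # "M": a must-have feature
```

Each feature is then labelled with its dominant category across all respondents, which is how “must-have” and “attractive” features like those above emerge from the raw answers.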

Fake doors

In parallel to the Kano questionnaire, and to go beyond having users rank features in a list (an exercise that could bias their answers), we wanted to test their reactions when they encountered each of these features during direct usage.

To do this, from our very earliest developments, we integrated “fake door” buttons that seemed to enable access to certain features. This system of “fake doors” enabled us to measure clicks on these buttons (representing user interest in these features), as well as on the vote buttons available behind each fake door.


VOTING POP-UP DISPLAYED WHEN USERS CLICK ON “SEND THIS ELEMENT TO A DASHBOARD” FAKE DOOR BUTTON


VOTING RESULTS
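The measurement side of a fake-doors system is simple: log a click event for each fake-door button (implicit interest), then log an explicit yes/no answer from the vote pop-up. Here is a minimal in-memory sketch; the class and feature names are hypothetical, not the actual Analytics Suite 2 code.

```python
# Minimal sketch of fake-door tracking (hypothetical names, illustrative only).
# Clicking a fake-door button logs an interest event; the vote pop-up then
# records an explicit yes/no answer.
from collections import defaultdict

class FakeDoorTracker:
    def __init__(self):
        self.clicks = defaultdict(int)  # feature -> click count
        self.votes = defaultdict(lambda: {"yes": 0, "no": 0})  # feature -> tallies

    def record_click(self, feature: str) -> None:
        """User clicked a fake-door button: an implicit signal of interest."""
        self.clicks[feature] += 1

    def record_vote(self, feature: str, wants_it: bool) -> None:
        """User answered the pop-up ('Would you use this feature?')."""
        self.votes[feature]["yes" if wants_it else "no"] += 1

    def interest_ranking(self):
        """Features ordered by click count, most clicked first."""
        return sorted(self.clicks.items(), key=lambda kv: kv[1], reverse=True)

tracker = FakeDoorTracker()
tracker.record_click("send_to_dashboard")
tracker.record_vote("send_to_dashboard", True)
tracker.record_click("segments_on_the_fly")
tracker.record_click("send_to_dashboard")
print(tracker.interest_ranking())
# [('send_to_dashboard', 2), ('segments_on_the_fly', 1)]
```

In production, events like these would of course be sent to an analytics backend rather than held in memory, but the two signals (clicks and votes) are the same ones compared against the Kano results below.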

The results obtained via these two approaches were fully consistent with each other, and allowed us to validate the three major features we had prioritised:

  • Sending an analysis to a dashboard or to Data Query
  • Creating segments on the fly in Explorer
  • Cross-combining dimensions

The first two features were launched along with the Explorer beta, and we are working on launching the third in the coming weeks.

 

Preference tests

When adding new features or working on a redesign for greater usability, the UX designer often suggests several different ideas and concepts. Each concept is tested internally with different user profiles (ranging from technical to business users), and if possible, with clients during one of our after-work events. We rework the concept accordingly until we reach an optimal feature.

In certain cases, we ask our co-designers to state their preference between two different mock-ups and, if they wish, explain their reasoning. We faced exactly this question when studying how to present access to graphs in Explorer.

Unfortunately, in that case, neither mock-up emerged as the clear winner… opinions were unexpectedly and perfectly split 50-50. You can’t win them all!
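A perfect split is, in fact, exactly what “no preference” predicts, and a quick exact binomial test makes that concrete. The sample size below is illustrative (the article does not give the exact number of voters):

```python
# Sanity-checking a split preference vote with an exact two-sided binomial
# test (illustrative numbers; null hypothesis: no preference, p = 0.5).
from math import comb

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k preferences for mock-up A among n voters."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def two_sided_p(k: int, n: int) -> float:
    """Exact two-sided test: sum the probabilities of every outcome
    at most as likely as the observed one."""
    obs = binom_pmf(k, n)
    return sum(binom_pmf(i, n) for i in range(n + 1) if binom_pmf(i, n) <= obs + 1e-12)

# A perfect 25-25 split among 50 voters:
print(round(two_sided_p(25, 50), 3))  # 1.0 -- no evidence of a preference
```

With a p-value of 1.0 there is no statistical basis for picking either mock-up, which is why keeping the initial approach and waiting for broader feedback was a reasonable call.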


We therefore maintained the initial approach used in our prototypes, and will adjust if necessary based on the feedback from all our customers after the product is fully launched.

 

Feedback given directly in the solution

In addition to these occasional interactions, we also wanted to offer our co-designers the option of sharing their thoughts on the fly while using the solution. We therefore integrated a simple form enabling customers to share their experience and impressions in the moment.

In doing so, we collected about a hundred pieces of feedback, which fed part of our backlog and helped us prioritise the features to include in the Explorer beta launch.

THE EXPLORER DASHBOARD

Early results

The selection and prioritisation of features was a tricky exercise – we had to strike a thoughtful balance between strong user expectations related to the existing product, new features that would bring undeniable value to the new product, and, of course, our own vision.

Despite this (necessary) difficulty, this first experience in user-centric development was more than satisfactory for us. Our co-designers were willing, engaged participants, and we were therefore able to develop, very early on, what we can proudly call a user-centric product.

We wish to maintain this kind of dynamic with all our customers, notably by keeping the feedback form integrated in the public version of Explorer, and then by rolling it out, little by little, across the rest of the Analytics Suite 2. These feedback tickets are regularly reviewed and followed up on to keep our backlogs full. We’ll talk more about this in an upcoming article.

As always, to involve our customers in product development as early as possible, all our next-generation products will have their own beta programmes.

 

Thank you to all our customers who participated (and continue to participate) in improving our solution! Keep sharing your feedback!

Author

Product Marketing Manager

Having studied marketing, Mélanie has worn the product manager hat at AT Internet for nearly 10 years. She is responsible for introducing innovative dataviz products while keeping the user-centric approach front and centre.
