A great UX means more sales

The secret quadrant of high conversion rates (and poor UX)

What is the true and primary goal of a company? Sales? Growth? Added value? No. The principle works the other way around: sales and growth are the result of the actual goal, and that goal is customer benefit. So the simple formula is: satisfied customers order - and that ensures sales and growth, right?

No, unfortunately it's not that simple. I remember a workshop in which our client's employees reported on an A/B test.


(Who doesn't know them, these annoying pests? Found on thebestofemail.com)

The result was very simple:

A (disruptive) layer generated 4.3% more orders.

How can that be? Something like that annoys everyone! The UX colleague was outraged, too, when it came to putting this affront online.

"We can't do that!"

(His last vowel trailed off in an echo of despair ...)

Can we do it?

I remember a case study that I presented at eMetrics some time ago. In one of our MotivationLabs® we found out that an auto-playing video (with sound) annoys 100% of all users - really all of them.

In the A/B test, however, it turned out that this video had increased the conversion rate significantly, by a double-digit percentage.

What to do?

We also tested videos on product pages for an e-commerce client; one of the variants again started automatically. The result was no longer surprising: this variant won significantly, both in sales conversion rate and in revenue. Unfortunately, the phones in the call center would not stop ringing with angry customers.

So what to do?

Are we on the path of "vice"? Is it evil to let videos start automatically when it brings more sales?

No, it is not evil.

It is just, quite possibly, stupid and short-sighted.

Because it may damage the brand in the long run. And this damage is likely to be significantly higher than the short-term gain in conversion rate.

It would therefore be simply unwise to define the conversion rate as the primary goal.

SO WHAT TO DO ???

(I know I haven't really answered the question yet ...)

If we suspect that our changes could actually reduce user or customer satisfaction, we have to measure exactly that satisfaction alongside the conversion rate. Only then can we carefully weigh up, after an experiment, which variant really is the better one.
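To make that concrete, here is a minimal sketch (in Python) of what such a two-metric evaluation could look like. The data structure and numbers are invented for illustration - the only assumption is that the test tool exports one record per visitor with the variant shown, whether they ordered, and an optional satisfaction rating from a short survey.

```python
# Minimal sketch of a two-metric A/B evaluation (illustrative only).
# Assumed input: one record per visitor with the variant shown, whether
# they ordered, and an optional satisfaction rating (e.g. 0-10 from a
# short "How satisfied were you?" survey). All names and numbers are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Visit:
    variant: str                 # "A" (control) or "B" (e.g. with autoplay video)
    converted: bool              # did the visitor order?
    satisfaction: Optional[int]  # 0-10 survey answer, None if the survey was skipped

def summarize(visits: List[Visit], variant: str):
    rows = [v for v in visits if v.variant == variant]
    rated = [v.satisfaction for v in rows if v.satisfaction is not None]
    cr = sum(v.converted for v in rows) / len(rows)
    avg_sat = sum(rated) / len(rated) if rated else float("nan")
    return cr, avg_sat

# Toy data: variant B converts better but annoys people.
visits = (
    [Visit("A", i < 30, 9 if i % 3 == 0 else 8) for i in range(1000)]
    + [Visit("B", i < 43, 4 if i % 3 == 0 else 6) for i in range(1000)]
)

for variant in ("A", "B"):
    cr, sat = summarize(visits, variant)
    print(f"Variant {variant}: CR = {cr:.1%}, avg. satisfaction = {sat:.1f}/10")
```

In this toy data, variant B wins on conversion rate but clearly loses on satisfaction - exactly the kind of trade-off the quadrant model below is meant to make visible.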

I have summarized the possibilities in a quadrant model:

Top right is the "normal area" in which almost all optimizers work - true to the motto: what the user likes better also converts better. Anyone who works intensively with A/B tests will notice that the uplifts that can be achieved depend mainly on how bad the page was before.

Unfortunately, not many sites are really bad. (Except maybe Dell's).

Bottom left is exactly the area where we find sites like Dell's, with simply poor usability. These sites do so much wrong that the result is a poor user experience and a low CR. (But there are fewer and fewer of them - even the Deutsche Bahn website has by now almost made it out of this quadrant.)

Top left is an area many optimizers know from bad test results: too many trust seals, an unnecessary note about data security, a section with frequently asked questions and - whoops - the conversion rate is gone. Why? Because the "well-intentioned" content only triggered concerns in users that weren't there before. Or because the extra elements distracted them. There are many reasons why "well-intentioned" tweaks backfire.


(Well meant, but too many questions about (in)security in the shopping cart distract users and reduce the CR)

Bottom right, finally, is the harmful area of dark patterns, manipulation, deception and lies. Here, the site works against its own customers - and that cannot go well in the long term.

(On darkpatterns.org you can find examples like this one from Ryanair, where the "No Insurance" option is cleverly camouflaged inside a country selection in order to earn additional money by deception)

OK, so the choice is between small effects, (rare) low-hanging fruit, zero or negative effects, and damage to the brand?

Sounds like plague, cholera, whooping cough and measles.

I am sure there is another field:

This little green field is the area that everyone involved argues about over and over again. For example, when it comes to sites like this:

8 rooms left.

Booked 24 times today.

2 hotels are already sold out.

There are 9 people looking at this hotel

IN GREAT DEMAND!

SPARE ME YOUR PANIC-MONGERING, YOU &%$!#!!!

Unfortunately, booking.com knows pretty much exactly where the "red line of bad user experience" lies. In contrast to many other websites, booking.com constantly asks how satisfied the user was - for example here, directly on the confirmation page:

Booking.com also constantly seeks feedback on the hotel itself:

Particularly clever: when booking, I am asked when I will be checking in. About an hour after that time, I get a message asking me to rate the check-in.

Also important: all surveys on booking.com are kept very simple:

If you really want to optimize effectively, you have to know exactly how far you can go.

booking.com is doing nothing other than searching for the "sweet spot" between optimal user experience and maximum conversion rate. And they don't do this on the basis of assumptions, but with the help of data. This saves a lot of time otherwise spent on unnecessary discussions.

Above all, however, it spares them optimizations that damage the user experience and thus the brand in the long term.

This is how booking.com reaches the sensible area of this additional quadrant - one that should not be entered without measuring user satisfaction ...

Conclusion:

  • Optimizations can be dangerous if they damage the user experience and thus the brand
  • Unfortunately, many discussions about this are conducted subjectively
  • It is better to measure user satisfaction, as booking.com does, for example in the form of an NPS score (a short calculation sketch follows below)
  • Experiments should compare both key figures (CR / satisfaction) in order to find the “sweet spot”
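
For reference, the NPS mentioned above is simple arithmetic: the share of promoters (ratings of 9 or 10 on the "Would you recommend us?" question) minus the share of detractors (ratings of 0 to 6), giving a score between -100 and +100. A minimal sketch with invented ratings:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
# on a scale from -100 to +100. The ratings below are invented.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(round(nps([10, 9, 9, 8, 7, 6, 3]), 1))  # 3 promoters, 2 detractors -> 14.3
```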