According to an Accenture study (available here in French), 44% of French consumers switched to competitors in 2017 because their shopping experience was not personalised enough, and 66% are more likely to buy from companies that personalise the customer experience (based on their location, past interactions or preferences). This demand for ultra-personalisation, combined with highly volatile Internet users, is pushing digital marketing into the era of hyper-relevance. Consumer loyalty to a brand is becoming almost obsolete: online shoppers increasingly take advantage of the best opportunity, wherever it comes from, and expect companies to offer them exactly what they need, at the right price and at the right time. So how do companies approach this modern conundrum? Welcome to hyper-relevance… And the most effective way to achieve it is to collect and analyse analytics data of impeccable quality.

In practice…

From experience, we have seen that many companies are still light years away from grasping the extent of these shifts in online marketing. Internet users often find themselves chased by advertising for a product they have already bought, and whose price keeps dropping over time. This kind of inconsistency fuels customers’ mistrust. It is often caused by poor data quality, which undermines a brand’s credibility and harms its business.

It’s all about trust

In practical terms, marketers need to restore relevance to every action, every campaign and the relationship they build with the consumer. The challenge is to increase the level of trust in the brand over time. Only quality data (accurate, complete, clean, timely, consistent and compliant) can keep errors to a minimum when it is used (there is no such thing as zero risk).

Paradoxically, 47% of French people are concerned that new digital services know too much about them and their family… and 82% say it is extremely important for companies to protect the confidentiality of their personal information.

In addition to advanced personalisation, corporate transparency and ethics have become persuasive arguments in their own right, and therefore a necessary condition for restoring trust.

Hyper-relevance analytics

What can you offer, for example, to a customer affected by a natural disaster? Or to a traveller whose flight has been significantly delayed? These situations demand a highly relevant response. With an advanced knowledge of user expectations, it is possible to target precisely and at the right time. You can achieve this by:

  • Obtaining a complete understanding of the customer’s entire digital ecosystem, not just their site-centric performance, which tends to average behaviour out across all their devices (mobile activity often reveals specific, distinct behaviour).
  • Detecting a strong interest in particular content by analysing the queries the visitor types into a search engine, or by studying the frequency of their visits (a simple version of these signals is sketched after this list).
  • Measuring underperformance: visitors’ exposure to error pages, queries without results, etc.
  • Identifying consumer trends and behaviours, the best buyers or the most volatile.
  • Anticipating traffic variations influenced by external factors such as news, weather, etc.
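To make this concrete, here is a minimal sketch (in Python, with pandas) of how the first of these signals could be computed from a hit-level export. The column names, the sample data and the thresholds are illustrative assumptions, not an actual AT Internet schema.

```python
import pandas as pd

# Hypothetical hit-level export: visitor ID, page, HTTP status, on-site
# search query and timestamp (illustrative columns, not a vendor schema).
hits = pd.DataFrame({
    "visitor_id":   ["a", "a", "a", "b", "b", "c"],
    "page":         ["/shoes", "/shoes", "/cart", "/home", "/404", "/search"],
    "status":       [200, 200, 200, 200, 404, 200],
    "search_query": [None, None, None, None, None, "running shoes"],
    "timestamp":    pd.to_datetime([
        "2019-05-01 10:00", "2019-05-02 11:30", "2019-05-02 11:35",
        "2019-05-01 09:00", "2019-05-01 09:05", "2019-05-03 14:00"]),
})

per_visitor = hits.groupby("visitor_id").agg(
    hit_count=("page", "count"),                              # rough proxy for visit frequency
    error_hits=("status", lambda s: int((s >= 400).sum())),   # exposure to error pages
    searches=("search_query", lambda s: int(s.notna().sum())),
)

# Flag visitors whose repeat hits or on-site searches suggest a strong interest,
# and those who ran into errors (underperformance to fix).
per_visitor["high_interest"] = (per_visitor["hit_count"] >= 3) | (per_visitor["searches"] > 0)
per_visitor["saw_errors"] = per_visitor["error_hits"] > 0
print(per_visitor)
```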

Machine learning – the key to value

The most mature companies already use augmented analytics technologies. They use machine learning algorithms to detect and anticipate anomalies, or to segment customer profiles precisely according to various parameters. Investing in predictive and prescriptive technologies is one way to address the challenge of hyper-relevance. But without data quality, all these efforts are meaningless, and above all risky.
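To illustrate the segmentation side, here is a minimal sketch using k-means on a few per-visitor features. The features, the synthetic data and the choice of four clusters are assumptions made for the example, not a description of any specific vendor’s model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-visitor features computed upstream:
# visits per month, average basket value, days since last visit.
rng = np.random.default_rng(0)
features = np.column_stack([
    rng.poisson(5, 500),          # visits per month
    rng.gamma(2.0, 40.0, 500),    # average basket value
    rng.integers(0, 90, 500),     # days since last visit
]).astype(float)

# Standardise so no single feature dominates the distance, then cluster.
scaled = StandardScaler().fit_transform(features)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# Inspect the average profile of each segment (e.g. "best buyers" vs "volatile").
for k in range(4):
    print(k, features[segments == k].mean(axis=0).round(1))
```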

Gartner names augmented analytics as the number one trend and priority for CDOs in 2020. Built on mathematical algorithms and models, these tools describe and predict the behaviour of Internet users. One application of machine learning to data quality is an automatic anomaly-detection service: modelling the time trends of each metric and flagging any suspicious or abnormal fluctuations, something human beings cannot do by themselves at scale. These analyses also help to explain the probable causes of the anomalies.
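The core idea can be sketched very simply: model the recent trend of a metric and flag values that deviate too far from it. The rolling window, the 3-sigma threshold and the synthetic traffic data below are illustrative assumptions, not the actual algorithm used by any particular tool.

```python
import numpy as np
import pandas as pd

# Synthetic daily page-view metric with one injected traffic spike.
rng = np.random.default_rng(1)
days = pd.date_range("2019-01-01", periods=120, freq="D")
page_views = pd.Series(10_000 + rng.normal(0, 500, len(days)), index=days)
page_views.iloc[90] *= 2.5   # suspicious spike to be detected

# Compare each day with its recent trend (rolling mean and standard deviation).
rolling = page_views.rolling(window=14, min_periods=7)
zscore = (page_views - rolling.mean()) / rolling.std()

# Flag days that deviate by more than 3 standard deviations.
anomalies = page_views[zscore.abs() > 3]
print(anomalies)  # the injected spike should be flagged automatically
```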

For example: if a bot crawls a site and causes a significant peak in traffic, an anomaly is detected on the number of pages viewed. By automatically exploring a whole set of related dimensions (source, device, browser, etc.), a causal analysis can be run, concluding for instance that the anomaly was caused by an abnormal increase in direct traffic from Canada on Chrome 55. This type of tool delivers a preliminary analysis, helps you better understand behaviours and guarantees the reliability of your analyses.
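The drill-down itself can be sketched as a comparison of each dimension segment against its recent baseline; the segment with the largest excess traffic is the most likely explanation of the spike. The figures and column names below are invented for illustration.

```python
import pandas as pd

# Hypothetical page views per segment: recent baseline vs the anomalous day.
baseline = pd.DataFrame({
    "source":     ["direct", "direct", "seo", "seo"],
    "country":    ["Canada", "France", "Canada", "France"],
    "browser":    ["Chrome 55", "Chrome 74", "Chrome 55", "Firefox 66"],
    "page_views": [1_200, 8_000, 900, 6_500],   # average of the previous days
})
anomalous_day = baseline.assign(page_views=[14_000, 8_100, 950, 6_400])

# Compare each segment with its baseline and rank by excess traffic.
dims = ["source", "country", "browser"]
merged = baseline.merge(anomalous_day, on=dims, suffixes=("_baseline", "_today"))
merged["excess"] = merged["page_views_today"] - merged["page_views_baseline"]

# The segment with the largest excess is the most likely cause of the spike.
print(merged.sort_values("excess", ascending=False).head(1)[dims + ["excess"]])
```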

If you are keen to find out more about Data Quality, please don’t hesitate to download our latest free guide:

Data Quality in Digital Analytics (updated in 2019)
Author

Editorial Manager. Bernard was responsible for the content strategy of the AT Internet brand. He has almost 10 years’ experience in technical and marketing writing in the software industry. A word specialist, Bernard works on many different media, including blogs, white papers, interviews, business cases, press, infographics and videos. His specialist fields? Marketing and digital analytics content, of course!
