Rethinking Your Measurement: The Case Against Clicks

May 12, 2022 | Adobe Analytics, Analytics Implementation

The measurement plan process is a gold standard in the analytics world. We live and die by our measurements, as they help us refine our KPIs and answer the business “why.” Why should we care about X being faster? Why does it matter that users complete the funnel path? A good measurement plan is not always easy to spot, but a bad one always is.

Hot take: You know what “measurement” I hate seeing most when I’m building requirements?

“Number of times a user clicked CTA button”

Any analyst I’ve worked with is probably tired of my follow-up joke in response to that measurement:

“So if I build a bot that just clicks on that button all day long and prevents navigation, that’s infinite success, correct?”

Of course not! What they really want is probably something along the lines of one of the below (take your pick / mix & match):

  • “Number of times a user enters the targeted funnel flow through CTA navigation”
  • “Percentage of users interacting with CTA button”
  • “User pathing from home page”

THOSE items are measurements; the “number of clicks” or “percentage of users who clicked” isn’t. Often, a “click” is an abstraction of the event you really want to capture.

What if I really need to capture clicks?

I’ll give you the tl;dr first:

“You don’t.”

Longer and more nuanced answer:

“You’re likely blurring the line between measurement and requirement, and making your life harder.”

In the recent past, the line between analyst and implementation specialist was almost non-existent. I have worked with team members who managed both in previous occupations, and the need to be agile and to simplify meant that this distinction was often unnecessary. Taking the time and effort to clarify that we actually wanted “navigation” when talking about clicking a link was an unnecessary breakout point when, as the analyst, you had to tag the link with a tag manager anyway. Compare the below:


Analyst / Implementation Combined View

  • Analyst / Implementation
    • Measurement: Understand why the user is clicking a menu option
    • Build a CSS rule / work with developers to build a direct call rule onClick
    • Verify it’s implemented correctly and build the report
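To make the combined view concrete, here’s a minimal sketch of that onClick tagging, assuming Adobe Experience Platform Tags (Launch) exposes window._satellite on the page; the “.cta-button” selector and “cta-click” rule name are hypothetical:

```typescript
// Minimal sketch of the combined view: the analyst tags the click directly.
// Assumes Adobe Experience Platform Tags (Launch) provides window._satellite;
// the ".cta-button" selector and "cta-click" rule name are hypothetical.
declare global {
  interface Window {
    _satellite?: { track: (ruleName: string, detail?: object) => void };
  }
}

document.querySelector<HTMLButtonElement>('.cta-button')?.addEventListener('click', () => {
  // Fires a direct call rule on every click. Note this counts the click
  // itself, not whether the user actually navigated anywhere.
  window._satellite?.track('cta-click');
});

export {};
```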

Analyst / Implementation Separated View

  • Analyst
    • Understand what business cares about (user navigation, interaction with features, etc.)
    • Build measurements that help guarantee the success of a KPI (e.g., how many users are navigating to the new feature? how many users run into an error?)
  • Implementation
    • Determine what is needed to capture the measurement
      • Clicks on the link that successfully navigate
      • Automatic navigation / redirects that may occur
      • Right-clicking the link to navigate (e.g., “open in new tab”)
    • Work with development to account for all case scenarios
    • Meet with analyst to verify working as expected
  • Analyst
    • Build reporting of the stated measurements and meet with implementation to verify it’s capturing as intended.
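And here’s a minimal sketch of what “account for all case scenarios” can look like on the implementation side; the selector and the recordFunnelEntry helper are hypothetical stand-ins for your real analytics call:

```typescript
// Sketch of the separated view: implementation enumerates every way the
// "navigation" can happen. The selector and recordFunnelEntry helper are
// hypothetical placeholders for your real analytics library.
function recordFunnelEntry(method: string): void {
  console.log('funnel entry via', method); // forward to your analytics call
}

const cta = document.querySelector<HTMLAnchorElement>('a.cta-button');

// Left clicks that trigger normal navigation.
cta?.addEventListener('click', () => recordFunnelEntry('click'));

// Middle clicks ("open in new tab") fire auxclick, not click.
cta?.addEventListener('auxclick', () => recordFunnelEntry('auxclick'));

// Automatic redirects never touch the link at all, so those would be
// captured on the destination page (e.g., via a landing-page rule)
// rather than from any listener here.
```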

Looking at the above examples, I completely understand why some choose not to be as granular or to treat these as separate functions, especially since in this scenario, what implementation is doing appears to be “common sense.”

However, analytics implementation is becoming its own specialty, and the benefits of breaking this out into its own function, even if you manage both in your current role, cannot be overstated. Most often, if we’re given a measurement about “clicks,” analytics implementation specialists will infer what you mean measurement-wise and build a satisfactory implementation that gets you what you need (i.e., accounts for all of the above scenarios). The problem arises when we don’t.

How it Falls Apart

Taking the measurement we presented earlier, “Number of times a user clicked CTA button,” at face value, what does it mean in the following situations?

  • Mobile, where a “click” is really a tap
  • The bottom of a form, where the button shows validation errors and prevents anything from happening if you try to submit a blank form
  • A button that is an anchor link to another section of the same page
  • A button where only a small section is actually linked (i.e., clicking the text inside the button navigates, clicking the edge doesn’t)
  • Right-clicking the button
  • Situations where the button might not appear at all for some reason
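The form case alone shows how much the “click” abstraction hides. In this rough sketch (the “#signup-form” id is hypothetical), a click listener counts every press of the button, while the form’s submit event only fires once the browser’s built-in validation passes:

```typescript
// Sketch of the form-validation case: clicks on the submit button get
// counted even when validation blocks the submission. The "#signup-form"
// id is hypothetical.
const form = document.querySelector<HTMLFormElement>('#signup-form');
const submitButton = form?.querySelector<HTMLButtonElement>('button[type="submit"]');

// Naive measurement: fires on every press, including blank-form attempts.
submitButton?.addEventListener('click', () => {
  console.log('submit button clicked'); // inflates the "click" count
});

// Closer to the real measurement: the submit event only fires after the
// browser's built-in constraint validation passes.
form?.addEventListener('submit', () => {
  console.log('form actually submitted');
});
```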


Implementation specialists most often work with developers and their applications, and developers are extremely literal, as their line of work requires them to be. We’ve introduced a lot of ambiguity with that “measurement,” as shown above, and when a developer asks us for the business reason, “Number of times a user clicked CTA button” sounds more like a direct request for a type of event than a business metric we are trying to solve for. It also allows some abstraction / obfuscation of the feature’s actual function. What if there is a flow in the app that prevents this button from being loaded? Are we accounting for that in reporting? Not with the above measurement. Or an alternate flow that loads a similar button elsewhere in the app? What happens if a user clicks that and the event fires? When the analyst steps into the implementation role by specifying “clicks,” they risk not having accounted for all development scenarios, which can result in implementation missing those scenarios because the measurement isn’t specific enough.

In the age of data layers (both W3C and event-driven ones), we need to banish clicks to the past as a metric we want to capture and focus on the event(s) we DO want to capture. We don’t care about the user clicking a link; we care about the user navigating to the page the link points to. We don’t care about the user clicking a menu option in a modal; we care about the feature they’re engaging with and how many users are selecting that option.
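Here’s a minimal sketch of what that shift looks like with an event-driven data layer, following the Adobe Client Data Layer array-push pattern; the event and property names are illustrative:

```typescript
// Sketch of describing the business event rather than the DOM gesture.
// Follows the Adobe Client Data Layer array-push pattern; the event and
// property names below are illustrative.
declare global {
  interface Window {
    adobeDataLayer?: object[];
  }
}

window.adobeDataLayer = window.adobeDataLayer || [];

// Pushed when the user actually engages the feature, however they got
// there: click, tap, keyboard, redirect, or deep link.
window.adobeDataLayer.push({
  event: 'featureEngaged',
  feature: { name: 'quick-checkout', entryPoint: 'home-cta' },
});

export {};
```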

The Golden Question

If you are unable to think past “clicks” as a measurement, try the question below to pin down the KPI behind a measurement:

What does knowing the answer to the measurement give me the power to do?

You can go as many layers deep with it as you want until you get to the core KPI:

Yes, but why does the business care about that? Or: How can the team make changes to further improve that once it’s identified?

If you apply the above question to your measurement and the answer for you or your product manager is “nothing,” then it’s not a good or relevant measurement. In fact, if you format your measurements as questions, you can answer them directly with your dashboards & reports and include the questions to help guide readers through your workspace! For example:

  • How many users navigate from this feature to checkout?
    • Include funnel events in a freeform table
  • What is the most common entry point for users?
    • Graph showcasing most common entry pages
  • etc.


Moving Beyond “Element Clicks” & “Modal Loads” into “Actionable Events”

If there is one takeaway I can stress for this piece, it’s that the time is ripe to make a distinction between the events that happen on the website and the events that the business cares about, and to recognize that distinction even when it is slight. Doing this will help you avoid the blinders that hinder a scalable implementation and enable better event tracking and, as a direct result, better metrics. Tagging a button click for a “submit” event in your reporting might seem like a good option, but a better option might be hooking into an API call that fires at the same time, which could send complete event data directly and avoid any pain points with button tagging (button greyed out, data limitations, etc.).
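As a sketch of that idea (the /api/orders endpoint, payload shape, and orderSubmitted event name are all hypothetical), the analytics event fires only when the underlying call actually succeeds:

```typescript
// Sketch of hooking the measurement to the API call instead of the button.
// The "/api/orders" endpoint and "orderSubmitted" event are hypothetical.
declare global {
  interface Window {
    adobeDataLayer?: object[];
  }
}

async function submitOrder(payload: object): Promise<void> {
  const response = await fetch('/api/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });

  if (response.ok) {
    // Complete, trustworthy event data: the submission really happened,
    // regardless of how the button looked or behaved in the UI.
    (window.adobeDataLayer = window.adobeDataLayer || []).push({ event: 'orderSubmitted' });
  }
}

export {};
```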

So go forth, and make your measurements even better!