Website Event Tagging vs. Auto-Capture vs. Auto-Capture with Auto-Tagging

Daniel Bashari
Founder and CEO
11 July 2019

To track specific user activities on webpages – button clicks, video plays, form field entries and other interactive page actions – most analytics tools require inserting JavaScript event tracking code for each element the marketer or product manager wants to track. This is often done manually by web developers, though more and more companies are implementing tag managers (such as Google Tag Manager or Tealium) that let business-level employees manage event tracking themselves, once a single code snippet is added to each webpage.
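For example, with Google Tag Manager a developer typically pushes a record onto the page's dataLayer array for each element the team wants to measure. The sketch below is illustrative only – the event and variable names are invented, not a required schema – and the guards let it run outside a browser:

```javascript
// Manual, per-element tracking: each interaction the team wants to
// measure needs its own explicit push. Event/variable names here
// ('cta_click', 'ctaLabel') are hypothetical examples.
// In the browser, GTM defines window.dataLayer; fall back to a plain
// array so the sketch also runs outside a page.
const dataLayer =
  (typeof window !== 'undefined' && window.dataLayer) || [];

function trackSignupClick() {
  dataLayer.push({
    event: 'cta_click',        // custom event name a GTM trigger listens for
    ctaLabel: 'Sign Up Free',  // hypothetical variable read by a GTM tag
    pageSection: 'hero'
  });
}

// Wiring the handler to one specific button – and this step has to be
// repeated, by hand, for every element that should be tracked.
if (typeof document !== 'undefined') {
  document.getElementById('signup-btn')
    .addEventListener('click', trackSignupClick);
}
```

The key point is the repetition: every new button, video or form field needs its own push and its own wiring (or its own trigger/variable setup in the tag manager UI).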

Either way, there are four significant downsides to these manual approaches to website event tracking:

1. Tracking is piecemeal

Deciding which events to track is no simple matter; it requires extensive planning and forethought. Marketers or product managers need to think ahead about what they want to measure and test in order to decide which specific events to track. It is easy to miss important elements, resulting in incomplete datasets. Over time, more and more elements may be added to the tracking plan, but their data timeframes won't be aligned. Since it is impossible to track everything this way, and very difficult to anticipate everything that might need to be tracked in the future, it is virtually impossible to get a complete picture of what users are doing on a website.

2. Data is only collected in the future

Obviously, event data is only collected from the moment tracking codes are added to a website element. This means that it is impossible to examine past user activity data, no matter how valuable it may be. For example, if you are analyzing drop-offs between two stages of a funnel and want to drill down into what’s going on between those two stages, you are out of luck until you add additional event tracking. And, even once new event tracking is added, it may still take days or weeks to accumulate enough data to provide any insights. Because not every tracking scenario can be thought of in advance, this aspect of manual, per-event tracking is a huge downside for analytics-driven marketers and product managers.

3. Complex, time-consuming and error-prone

Implementing the tracking codes themselves is a big pain. Relying on developers to manually code tracking scripts often involves time lags and misunderstandings. While tag managers make this easier than writing JavaScript for each individual event, anyone who has gone beyond the most rudimentary tagging in a tag manager knows that things get very complicated, very quickly. Working with tags, triggers, variables and containers can be very confusing, especially when you're trying to run A/B tests, or when multiple users are tagging areas of the same website.

4. Ongoing maintenance requirements

Large websites are constantly growing and changing, and very few organizations manage to keep their event tagging in step with those changes. Tracked elements may be changed or removed, silently dropping them from the user activity data stream. Important new elements may be added but go untracked until someone notices and adds tracking for them. Keeping event tracking in sync with the continual changes on almost every large website involves a lot of overhead.

Auto-Capture

The next generation of website event tracking is often referred to as “auto-capture.” This method eliminates the need to add tracking codes or tags to individual elements; instead, the technology records every user interaction with no prior setup required (beyond adding a single code snippet to each page). There’s no need to decide what to track, or what triggers or tag combinations are required. This approach essentially avoids all the disadvantages described above:
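Conceptually, an auto-capture snippet replaces all of those per-element tags with a single delegated listener that records whatever element the user interacted with. The sketch below is a simplified illustration of the idea only – not any vendor's actual implementation – and buildEventRecord is an invented stand-in for the richer extraction (selectors, DOM paths, timestamps) a real library would do:

```javascript
// Simplified stand-in for what an auto-capture library extracts from
// the element the user interacted with (invented for illustration).
function buildEventRecord(el) {
  return {
    type: 'click',
    tag: (el.tagName || '').toLowerCase(),
    id: el.id || null,
    text: (el.textContent || '').trim().slice(0, 64),
    capturedAt: Date.now()
  };
}

const capturedEvents = [];

// In the browser, ONE capture-phase listener on the document sees every
// click on the page – no per-element tags, triggers or containers.
if (typeof document !== 'undefined') {
  document.addEventListener(
    'click',
    (e) => capturedEvents.push(buildEventRecord(e.target)),
    true // capture phase: fires even if a handler stops propagation later
  );
}
```

Because the listener is attached once at the document level, elements added to the page later are captured automatically – which is what makes the "no ongoing maintenance" property possible.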

  1. Tracking is comprehensive: Every single user interaction is captured.
  2. User activity data is available retroactively: Whenever a marketer or product manager decides to analyze a particular event, the historical data is already there and available.
  3. Much less effort is required: There’s no need to decide which events to track, or to add the actual tracking codes/tags, although manual naming and grouping effort is still required for the captured data to be useful (more about this below).
  4. There is much less ongoing maintenance: Because every user event is captured, no matter how often the website changes, the only maintenance efforts required are to name and group new and changed events (more about this below).

While much better than manual event tracking (with or without a tag manager), auto-capture still requires marketers and product managers to manually select which events they want to analyze, to name them and to group/structure them. Until this is done for each individual event of possible interest, the collected data remains hidden beneath the surface and inaccessible for analytical purposes.

Auto-Capture with Auto-Tagging

The state of the art in website user activity tracking is event auto-capture with auto-tagging. While it also automatically captures every single website event via a single code snippet on each page (as with generic auto-capture), this technology takes a huge additional step forward: auto-capture-and-tag solutions use artificial intelligence (AI) and machine learning techniques to understand the meaning and context of each webpage element. This allows the software to:

  • automatically assign relevant event properties to each and every captured event, and to
  • automatically group all events in logical hierarchical structures, allowing straightforward analysis of different levels of clustered events (e.g., clicked Buy Now in any category vs. in a particular category vs. a particular brand vs. a particular product).
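To make the "levels of clustered events" idea concrete, here is a toy illustration. The property names (action, category, brand, product) are invented for the example; the point is that once each captured event carries auto-assigned properties, a Buy Now click can be counted at any level of the hierarchy without any extra tagging:

```javascript
// Toy events, as an auto-tagging engine might emit them
// (all property names and values invented for illustration).
const events = [
  { action: 'buy_now_click', category: 'shoes',  brand: 'Acme', product: 'Runner 2' },
  { action: 'buy_now_click', category: 'shoes',  brand: 'Zoom', product: 'Sprint X' },
  { action: 'buy_now_click', category: 'shirts', brand: 'Acme', product: 'Tee Classic' }
];

// Count events matching an arbitrary subset of properties, so the same
// data answers questions at any level of the hierarchy.
function countWhere(evts, filter) {
  return evts.filter((e) =>
    Object.entries(filter).every(([key, val]) => e[key] === val)
  ).length;
}

const anyCategory = countWhere(events, { action: 'buy_now_click' });                          // 3
const shoesOnly   = countWhere(events, { action: 'buy_now_click', category: 'shoes' });       // 2
const acmeShoes   = countWhere(events, { action: 'buy_now_click', category: 'shoes',
                                         brand: 'Acme' });                                    // 1
```

With manual tagging, each of those three questions would typically require its own pre-planned tag; here the same captured stream answers all of them after the fact.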

The enormous benefit of this approach is that all website behavior data is immediately available for analysis. Unlike the plain-vanilla auto-capture described above, auto-capture-and-tag technology does not burden marketers or product managers with deciding which events to analyze and then manually adding their properties. Instead, the complete, granular website behavior data stream is ready for analysis from day one.

Having this high-resolution data stream opens highly valuable new possibilities in the world of website behavior analytics, including much more granular funnel analysis, UX analysis and user journey analysis.

Since generating this new type of website behavior data stream for analysis by data scientists involves nothing more than adding a JavaScript snippet to every page in your website, you can be up and running in minutes. Contact us now to get started!
