2023-11-27

Authors

Johannes Hemminger
studied philosophy and modern history in Tübingen and worked in marketing, community management and project management in the video game industry. He is an editor at Kultur Management Network / Arts Management Network.
Digital metrics for the cultural sector

Data-driven?

For as long as there have been digital cultural formats, there has been data on how these formats are used. But understanding digital metrics and integrating them into your own work is anything but trivial, because any serious attempt requires cultural organizations to ask themselves what they actually want to know - even if it hurts.
Clicks, impressions, video views, retention rate, average time spent on a website: the variety of metrics collected in the digital space is hard to keep track of. At the same time, these variables move the digital world. From the billions in advertising revenue generated by tech giants to personal blogs, there's no getting around the numbers. Arts and cultural institutions, too, find it hard to resist these digital metrics. Some may even be toying with the idea of reinventing themselves as "data-driven" in order to become more efficient, have greater reach, be more in keeping with the times, and perhaps even (I apologize for the bad word) more relevant.
 
But where to begin? "What do we want to measure?" sounds like a reasonable starting question for implementing digital metrics and setting out to become a data-driven cultural institution. But the question is too fuzzy to be of any help, because it can just as easily be answered with a non-specific "our success!" as with the highly specific "the time users spend on the website in the last step of the ticket booking process." Both are legitimate interests, but they lead to entirely different next steps. 
 
This may sound trivial, but it is easy to get lost in details and technical minutiae. These aspects matter in practice, of course, but it is not enough to more or less randomly pick tracking and analytics software such as Matomo or Google Analytics for your own digital offerings. These tools need to be chosen deliberately, set up and understood. And even then, the exact metrics to measure, the time periods to look at, and much more can be discussed and evaluated endlessly. If your institution has not settled the fundamental questions beforehand, the effort might be wasted, because data is a raw material that is meaningless without further processing, i.e., analysis and interpretation.
 
What do we actually want?
 
Unfortunately, this is where things get uncomfortable, because the analysis and interpretation of digital data has to get down to the nitty-gritty. "What do we want to know?" must be answered before "What do we want to measure?". Lurking in the background is the wicked question of what you actually want to do - with the collection of digital metrics, but even more fundamentally with your work, your project, or even your entire institution. And how can you translate those goals into data: what information from what data do you need to assess whether you are achieving your goals? Or even: what would you need to change in order to do so?
 
No one can really tell arts and cultural institutions what they should measure and what they should know, because - unlike many other actors in the digital realm - pure profit is often not their main goal. Instead, there is a colorful bouquet of missions, whether it is education, cultural participation, empowerment, or one of the numerous and much-discussed other impacts of arts and culture. Such goals are more difficult to put into metrics than economic profit. And even with the latter, digital metrics can rarely be put in simple relation to profit, unless one is willing to ignore side effects and complex dynamics such as fluctuating advertising revenues or the way the intensity of user interaction influences the intensity of use. Even the seemingly simple question of how much revenue or profit a user generates can be answered at very different levels of complexity. Nevertheless, with sophisticated tools, statistical methods and, as I can say from my own experience, a bit of guesswork disguised as necessary simplification, one can arrive at fairly reliable figures regarding profitability.
 
Trying to do the same for social added value or impact on people or society, on the other hand, seems futile. How much cultural education or social reflection is generated by a half-hour visit to an online exhibition? Even asking the question seems strange. Impact research can certainly provide important and interesting insights and should not be neglected. In the end, however, arts and cultural institutions must set specific goals for themselves and their digital formats - in consultation with higher-level authorities or funding bodies, if necessary. 
 
And that is the unpleasant part. Before digital metrics can be meaningfully collected and interpreted, strategic questions must be clarified, being aware that the answers have implications far beyond mere measurement. 
 
For example, if your venue prioritizes analog in-person visits, you should align your digital metrics with that priority, for instance by examining the impact your digital offerings have on ticket sales, and then adjust their content based on these findings to best achieve this strategically defined goal. If, on the other hand, a digital visit is considered a valuable goal in itself, you should not only measure your digital offerings differently; the measurements should also lead to completely different recommendations for the further development of content. 
 
Now, if you're wondering what's unpleasant about this, I recommend a thought experiment: Take a post on social media or a section of your website and pinpoint a single thing it should accomplish. "This and that" or "kind of everything" is not allowed! Evaluate your offering according to your goal. In doing so, be honest with yourself, commit to measuring that goal precisely, and don't look for excuses to not change the digital format or service, but actively continue to work on it. This is fundamental: digital metrics should not just be a status report, but the starting point for regular revisions that pertain to the predefined goal.
 
Then, with the help of this objective, define an endpoint to which your digital visitors should be led. This endpoint is usually called a conversion. Virtually anything can be defined as a conversion, such as visiting a specific subpage of your website, creating an account, downloading a podcast, or booking a ticket. At this endpoint, the digital visitors "convert" to what you want them to be, be it viewers of the digital exhibition, podcast listeners or paying customers.
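To make this tangible, here is a minimal sketch in Python. The event names and the tiny journey are invented for illustration; analytics tools such as Matomo or Google Analytics let you define such goals in their own interfaces, so the snippet only illustrates the underlying idea: a conversion is simply the event you declare to be your goal and then check for in a visitor's recorded journey.

```python
# A conversion is simply the event you declare to be your goal.
# Event names and the tiny journey below are invented for illustration.

journey = [
    "visit_homepage",
    "open_concert_announcement",
    "start_ticket_booking",
    "complete_ticket_booking",
]

# The goal you decided on - it could just as well be "download_podcast"
# or "create_account", depending on your strategic objective.
CONVERSION_EVENT = "complete_ticket_booking"

def has_converted(events, goal):
    """A visitor counts as converted if the goal event appears in their journey."""
    return goal in events

print(has_converted(journey, CONVERSION_EVENT))  # True
```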
 
A practical example: funnel analysis of a ticket booking
 
If, for example, you have decided that booking a ticket for a summer concert is your conversion, a good next step is to set up a so-called conversion path analysis - or, shorter and catchier, a funnel analysis. The term path already implies it: with this analysis, the journey of users to the chosen endpoint can be traced. The goal is to identify stumbling blocks and exit points along the way. To do this, you measure how many people reach each waypoint of a path that has to be traversed step by step, which makes it possible to identify the stations at which users drop out.
 
Now you have defined an endpoint, but you still need a start and intermediate waypoints for your path. To stay with the ticket booking example, you might select the landing page of the website of a fictitious concert hall as the starting point. In doing so, you have already asked an implicit question: "What is the path from the landing page to the completion of a ticket booking?" 
 
The next step is to identify the waystations and make them measurable. So, in the case of your fictional summer concert, you would go through the ticket booking process and note each step: 
 
  • You start on the homepage, 
  • open the announcement of the concert, 
  • click on ticket booking, 
  • choose your seat and thus the price, 
  • enter your data for payment, 
  • check the order overview 
  • and confirm it, which brings you to the endpoint: the completed ticket booking.
If you're having trouble defining a clear path from a reasonable starting point to a desired conversion, you've already gained an important insight: if you don't see a clear path, your users won't see one either and will have trouble reaching the endpoint!
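To make the path concrete, here is a small Python sketch. The step names are invented for the fictitious concert hall; the helper simply works out how far along the path a given visitor got, which is exactly the information a funnel analysis aggregates over all visitors.

```python
# The booking path encoded as an ordered list of steps (names invented for the
# fictitious concert hall), plus a helper that tells you how far along it a visitor got.

FUNNEL_STEPS = [
    "homepage",
    "concert_announcement",
    "ticket_booking",
    "seat_and_price_selection",
    "payment_details",
    "order_overview",
    "booking_confirmed",  # the endpoint, i.e. the conversion
]

def furthest_step_reached(visited_steps):
    """Return the last funnel step this visitor reached, or None if they never entered it."""
    reached = None
    for step in FUNNEL_STEPS:
        if step in visited_steps:
            reached = step
        else:
            break  # the path is strictly sequential: a missing step ends the journey
    return reached

# Example: a visitor who dropped out after choosing a seat
print(furthest_step_reached(
    ["homepage", "concert_announcement", "ticket_booking", "seat_and_price_selection"]
))  # -> seat_and_price_selection
```

In a real analysis, you would run something like this over all recorded visits and count how many of them reach each step.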
 
Especially when you are starting out with such an analysis, it is highly recommended to pay attention to the clarity of the path. If - again in the ticket booking example - you also have a social media post that links directly to the summer concert announcement, you should exclude from the first analysis the visitors who came to the announcement via social media and run a separate analysis for them. Or, instead of the home page, set the announcement as the first stop and analyze separately how many users get there from the home page and how many from the social media post. This gives you a better overview and makes your work easier.
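A minimal sketch of this kind of segmentation, again in Python with invented session records; real analytics tools expose the same information as the referrer or traffic source of a visit:

```python
# Splitting sessions by their entry point before running the funnel analysis.
# The session records below are invented for illustration.

sessions = [
    {"entry": "homepage",          "steps": ["homepage", "concert_announcement", "ticket_booking"]},
    {"entry": "social_media_post", "steps": ["concert_announcement"]},
    {"entry": "homepage",          "steps": ["homepage", "concert_announcement"]},
]

from_homepage = [s for s in sessions if s["entry"] == "homepage"]
from_social = [s for s in sessions if s["entry"] == "social_media_post"]

print(len(from_homepage), "sessions started on the homepage")
print(len(from_social), "sessions came in via the social media post")
```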
 
Similarly, you should make sure that the waystations do not branch out. For example, if people can jump back and forth between selecting a seat and entering payment information, this should not be counted as multiple visits to each waypoint. This ensures that the funnel remains funnel-shaped: it can only stay the same width or get narrower from one station to the next, because the number of digital visitors traversing the path cannot grow along the way; either they all stick around or some leave the path. The best possible outcome at each station is that no one leaves (though this is not a realistic objective).
 
If we fill our example with fictitious data and plot it as a graph, we get a shape like this:
 
[Figure: funnel chart of the fictitious ticket booking data]
The percentages for each step refer to the population of digital visitors at the starting point. For example, you can see from the fictitious data that 40% of the start page visitors reached the ticket shopping cart in the period we are examining, but only 10% completed an order.
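For readers who prefer numbers to pictures, here is a short Python sketch that computes such a funnel. The raw counts are invented so that they roughly match the percentages above; the assertion simply enforces the rule that a funnel can only stay the same width or get narrower.

```python
# Fictitious visitor counts per funnel step, invented so that they roughly match the
# percentages mentioned above (40% reach the ticket booking, 10% complete an order).

funnel_counts = {
    "homepage":                 1000,
    "concert_announcement":      700,
    "ticket_booking":            400,  # 40% of the starting population
    "seat_and_price_selection":  350,
    "payment_details":           150,
    "order_overview":            120,
    "booking_confirmed":         100,  # 10% completed an order
}

start = next(iter(funnel_counts.values()))
previous = start
for step, count in funnel_counts.items():
    # A funnel can only stay the same width or get narrower from step to step.
    assert count <= previous, "funnel counts must not grow along the path"
    share = 100 * count / start
    print(f"{step:<26}{count:>6}  {share:5.1f}% of start  (-{previous - count} vs. previous step)")
    previous = count
```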
 
In the diagram, two points at which the funnel narrows significantly stand out. The first is the step from the announcement of the concert to the ticket booking, the second from the seat and price selection to the entry of payment data. These are the points at which a noticeable number of users leave the path.
 
In the next step, we should take a closer look at these problematic waypoints and search for reasons. The reasons can be manifold. Every click, every unnecessary text field and every loading time is an obstacle that you place in the path of your users. And with every obstacle, a few users will inevitably jump off - or churn, to use the common term. That's why it's your job to make these obstacles as small and easy to overcome as possible, or to remove them altogether. With the help of a funnel analysis, you can find out where you need to take a closer look.
 
The easiest way to proceed is to take a close look at the waystations and ask yourself: Is the link to the ticket booking implemented in the concert announcement as a clearly visible button or is it hidden as a hard-to-see text link? Does the shopping cart take what feels like an eternity (at website speeds, that's more than 1-2 seconds!) to open? Does it maybe not work at all for people who use certain browsers or operating systems? Is the form for seat and price selection easy to understand and navigate?
 
A little more intricate, but often very rewarding, is to have an uninvolved person walk the path and observe at which points problems and ambiguities occur. This can be a new employee, for example, or friends or regular visitors who have never used the digital booking system before. If a more or less randomly selected person has problems at a certain point and the funnel becomes narrower at the same station, you have a good indication of what you should improve.
 
Another easy-to-implement metric can also help. If you measure not only the number of users at each waystation in your funnel, but also the average time spent at each step, this can tell you a lot. Let's take another look at a diagram for our fictitious concert hall:
 
[Figure: average time spent per funnel step for the fictitious concert hall]
Here, the seat and price selection clearly stands out with an average of five minutes of time spent. While a long dwell time in, for example, a digital exhibition could be a positive signal, during the booking process it is an indicator of problems. Of course, one can spend a long time mulling over the desired seat, but together with the high churn rate this value shows that at this point there is a large obstacle in the users' path. Comparative data or the possibility of running experiments can help you look more closely at how targeted changes affect this metric. For example, if the same long dwell time does not occur at regular concerts, the summer concert's pricing structure may be unattractive or too difficult to understand. If the time spent is similar for both, it is more likely to indicate a problem with the user experience on the website or with the technology.
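If you want to compute this figure yourself rather than read it off a dashboard, a rough sketch looks like this (in Python, with a deliberately tiny, invented sample; analytics tools usually report the same thing as average time on page):

```python
# Average time spent per funnel step, derived from timestamped step entries.
# The sample is deliberately tiny and invented for illustration.

from collections import defaultdict
from datetime import datetime
from statistics import mean

# (visitor, step, entered_at) - the next timestamp of the same visitor marks
# the end of the previous step.
events = [
    ("anna", "seat_and_price_selection", datetime(2023, 7, 1, 18, 0, 0)),
    ("anna", "payment_details",          datetime(2023, 7, 1, 18, 6, 30)),
    ("ben",  "seat_and_price_selection", datetime(2023, 7, 1, 19, 10, 0)),
    ("ben",  "payment_details",          datetime(2023, 7, 1, 19, 13, 45)),
]

by_visitor = defaultdict(list)
for visitor, step, timestamp in events:
    by_visitor[visitor].append((timestamp, step))

durations = defaultdict(list)
for visits in by_visitor.values():
    visits.sort()
    for (entered, step), (left, _next_step) in zip(visits, visits[1:]):
        durations[step].append((left - entered).total_seconds())

for step, seconds in durations.items():
    print(f"{step}: {mean(seconds) / 60:.1f} minutes on average")
```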
 
Decide, then measure
 
As the practical example shows, digital metrics can be used to identify problems and successes of digital offerings. On such a basis, arts and cultural institutions can work in a data-driven manner and attempt to optimize themselves on a regular basis. The funnel analysis presented here is just one technique among many that can provide important insights, depending on the situation and objective. For example, if you want clues about how successful an offering of digital content is, scroll depth can be a useful metric. Scroll depth measures how far down a page an average user scrolls. If you have a page with an image of an exhibit at the top and a longer text at the bottom, the average scroll depth can tell you how digital visitors are actually using the page, and you can compare this with other pages on your site. Similarly, you can measure how many users come back within a chosen period of time (retention rate), which transitions from one subpage to another occur frequently (screen flow), or unique visits, which are most often counted per day (daily active users, DAU) and per month (monthly active users, MAU). So, the tools of digital metrics are manifold and can provide you with many important insights.
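As a final sketch, here is how unique daily and monthly visitors can be counted from a simple visit log (again with invented data; any analytics tool reports these figures out of the box):

```python
# Counting unique daily and monthly visitors (DAU / MAU) from a simple visit log.
# The log entries below are invented for illustration.

from datetime import date

visits = [  # (user_id, date of visit)
    ("u1", date(2023, 7, 1)),
    ("u2", date(2023, 7, 1)),
    ("u1", date(2023, 7, 2)),
    ("u3", date(2023, 7, 15)),
    ("u1", date(2023, 7, 15)),
]

def dau(log, day):
    """Unique visitors on one day."""
    return len({user for user, visit_date in log if visit_date == day})

def mau(log, year, month):
    """Unique visitors in one month."""
    return len({user for user, visit_date in log
                if (visit_date.year, visit_date.month) == (year, month)})

print("DAU on 2023-07-01:", dau(visits, date(2023, 7, 1)))  # 2
print("MAU for 2023-07:", mau(visits, 2023, 7))             # 3
```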
 
But what digital metrics can't do is tell arts and cultural institutions what is or should be important to them. You have to decide for yourself how much a click, a download, or a digital exhibition visit is worth to you. But you should be aware that such decisions have to have consequences. If your institution sees digital offerings as a means to increase in-person visits, make sure your digital offerings are designed to meet that goal. If the in-depth use of your digital formats is a strategic goal, invest in them and develop success metrics that work independently of your analog offerings. In the constant competition for attention on the internet, half measures are usually punished with disregard.