Audience research is the process of gathering information about the people who visit you or see your work, use your services, or engage with your organisation.
In this article
This article offers cultural and creative organisations advice and information to help you answer the following questions:
How can we communicate the value of our surveying?
Survey targets: how should we decide on our sample size?
How can we make our surveys more representative?
Of course, every organisation is different, so the answers to these questions might be different depending on where you work.
If this article leaves you with more questions, the Digital Culture Network offers free, unlimited one-to-one support, resources and advice, and you can chat with me directly about your audience surveys and your approach to data collection. It could be a one-off chat, or you can come back as many times as you like. You can book a one-to-one call with me using our Ask a Tech Champion form.
The Digital Culture Network provides advice and support with digital skills. However, we are aware that many reading this article will have Arts Council England’s reporting requirements to consider.
In particular, many Arts Council National Portfolio organisations will be free to decide how many audience surveys they should aim to collect.
This article provides guidance which will support your organisation to set a realistic and useful target. Organisations funded by the Arts Council should direct questions about funding or mandatory audience surveying requirements to the Arts Council’s customer service, and technical support or specific questions about using the Illuminate platform to the Illuminate helpdesk by email or by telephone on 0800 031 8671.
Getting buy-in and advocacy from everyone you work with is important in helping your organisation collect audience surveys more effectively. Effective surveys provide better data to inform programme and service improvement. It’s most effective when leaders and those designing surveys encourage a robust approach to audience research. Front-of-house staff, volunteers, facilitators and marketers should understand the value of capturing audience feedback, and be able to explain how audience data capture supports the work of the organisation.
The audience surveys you collect not only provide useful information to funders and the government to understand the value of public funding and support to the cultural and creative sector, but they give your audience a voice.
When we carry out research, it’s usually not practical to collect a survey from every single member of our audience. Instead, we collect a representative ‘sample’ from some of the audience. Ideally, the answers and data from the smaller ‘sample’ would be a reasonably accurate representation of the whole audience.
When we talk about the ‘sample size’ of a survey, we mean the number of complete responses to the survey that we get. So, when you set yourself a target number of survey responses, that’s the ‘sample size’ you’ll hopefully end up with.
This number is important for several reasons. Data with a low sample size isn’t necessarily useless, but it should be treated with caution – and I’ll explain why in a moment.
Firstly, it’s always a compromise. More data is more reliable, but we must be realistic and practical about what we can actually do. Well-funded commercial, government or academic research studies might have the budget to buy fieldwork services and pay hundreds or thousands of respondents to fill in surveys. In our sector, very few organisations have that luxury, and we usually rely on audiences to fill in our surveys for free, and on our artists, practitioners, front-of-house staff and volunteers to make the pitch.
When we don’t have much budget, time or resource to collect large numbers of surveys, we need to aim for a ‘sweet spot’ – a point of best value – a number of surveys that gives us a reasonable level of accuracy while remaining achievable and practical to collect.
In statistics, we use something called the ‘margin of error’ to measure the accuracy of data. It’s shown as a percentage – for example, let’s say that the margin of error of my survey data is 4%.
A margin of error of 4% means that when my survey responses give me some proportional data, the real answer could be anywhere within 4% in either direction. So, if 80% of people reported visiting my gift shop, the real answer could be anywhere between 76% and 84%.
A lower margin of error means that data is more accurate. If I had a lower 2% margin of error, my score of 80% might really be anywhere between 78% and 82%.
The accuracy of other types of survey data, like numbers and ratings, isn’t calculated in exactly the same way, but the same general idea holds true.
There are many simple calculators available on the web that will do this for you – here’s one that tells you the margin of error from a specific number of surveys. I would recommend having a brief play around with this using your organisation’s numbers.
All you need to do is type in:
Your audience size (the ‘population size’)
The Confidence Level (we usually leave this at 95%)
The number of survey responses you expect to collect (your sample size)
A screenshot of the margin of error calculator. Follow the link to go to the calculator.
We usually leave the Confidence Level at 95% – this is the level typically used for most market research studies. Broadly speaking, it means that if you repeated your survey many times, around 95% of the time the true answer would fall within the margin of error. Having an engaging, clear and concise survey with carefully considered and comprehensive answer options does help people to answer more accurately and is a key part of good practice.
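If you’re curious about the arithmetic behind calculators like this, here’s a minimal sketch in Python. It assumes the standard formula for a proportion at a 95% confidence level, with a ‘finite population correction’ for smaller audiences – the function name and example figures are just for illustration.

```python
import math

def margin_of_error(sample_size: int, population_size: int, z: float = 1.96) -> float:
    """Approximate margin of error (as a proportion) for a survey result,
    at 95% confidence (z = 1.96), assuming the worst case of a 50% answer."""
    p = 0.5  # worst-case proportion: gives the widest margin of error
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    # Finite population correction: the margin shrinks slightly when your
    # sample is a large share of a small audience.
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * standard_error * fpc

# Example: 1,000 completed surveys from an audience of 40,000
print(f"{margin_of_error(1000, 40000):.2%}")  # roughly 3.06%
```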
In the table below, let’s see how many surveys you’d need to collect to get different margins of error. It changes a little bit depending on the size of your audience (the ‘population size’) so I’ve shown three different audience sizes:
| Margin of error | Sample size – audience of 5,000 | Sample size – audience of 25,000 | Sample size – audience of 100,000 |
| --- | --- | --- | --- |
| 8.00% | 146 | 149 | 150 |
| 7.00% | 189 | 195 | 196 |
| 6.00% | 254 | 264 | 267 |
| 5.00% | 357 | 379 | 383 |
| 4.00% | 536 | 587 | 597 |
| 3.00% | 880 | 1,024 | 1,056 |
| 2.00% | 1,622 | 2,191 | 2,345 |
| 1.00% | 3,289 | 6,938 | 8,762 |
As you can see, the more surveys you can collect, the smaller the margin of error. However, it takes increasingly large numbers of surveys to improve accuracy once you get down towards 3%, 2% and 1%. For this reason, in research we’ll often aim for a ‘sweet spot’ between 3% and 6%, depending on the audience size, the survey methods we have, and our available resources and tools.
The other thing to notice is that the audience size doesn’t seem to make a massive difference to the number of surveys required, except at a 1% margin of error. This means that if you’re a smaller organisation, you need to survey a much higher proportion of your audience to achieve a given margin of error. An organisation with an audience of 5,000 would need 880 surveys – 18% of their audience – to achieve a 3% margin of error.
Even if you’re doing well and sending attractive, well-written emails with the survey link, it’s often hard to get more than a 10% response rate. So, it would be difficult for a smaller organisation with an audience of 5,000 to get 18% of their audience to fill in a survey (without dedicating a lot of time and resources).
A larger organisation with 100,000 visitors, on the other hand, only needs just over 1% of its audience to fill in a survey to reach 1,056 surveys and a 3% margin of error. If they can email the survey to bookers, this should be fairly easy, and they could probably collect more surveys and achieve an even lower margin of error.
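If you’d like to sanity-check the table above yourself, here’s a hedged sketch of the usual sample-size calculation – the same assumptions as before (a proportion at 95% confidence, worst case of a 50% answer, with a finite population correction). The figures it produces line up with the table, including the 880 surveys and roughly 18% of the audience mentioned above.

```python
import math

def surveys_needed(target_margin: float, population_size: int, z: float = 1.96) -> int:
    """Surveys needed to hit a target margin of error (e.g. 0.03 for 3%)
    at 95% confidence, assuming the worst case of a 50% answer."""
    p = 0.5
    # Sample size needed for an effectively infinite audience...
    n_infinite = (z ** 2) * p * (1 - p) / (target_margin ** 2)
    # ...then adjusted down for a finite audience.
    n_finite = n_infinite / (1 + (n_infinite - 1) / population_size)
    return math.ceil(n_finite)

for audience in (5_000, 25_000, 100_000):
    needed = surveys_needed(0.03, audience)
    share = needed / audience
    print(f"Audience of {audience:,}: {needed} surveys ({share:.0%} of the audience)")
```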
So, what does this all mean? Organisations with smaller audiences need to set a realistic, achievable survey target, and this generally means making do with a higher margin of error (and less accurate data). Similarly, you might have a larger audience, but if you have a lot of walk-in audiences and you’re not able to email the survey link to lots of bookers, it’s going to be a lot harder to get larger numbers of surveys.
Your audience itself makes a difference too. Maybe you get a lot of families who don’t have time to stop for an in-person survey, or there might be barriers in place for some of your audience (for example, hard-to-reach communities, or lots of younger people who are less likely to help with a survey). In this case, a lower survey target might be more realistic, although as we’ll explore later, it’s important to try to represent the diversity of your audience accurately.
The critical thing to be aware of is that small fluctuations in scores, answers or responses that fall within your margin of error might not be real changes – or, to put it into research language, they may not be statistically significant.
Let’s say my survey data has a 5% margin of error. Last year, 80% of survey respondents said they visited my gift shop. This year, 84% said they visited the shop. So, this year, 4% more people visited the shop. What’s the problem here?
Because the 5% margin of error means that the real answer could be anywhere 5% higher or lower than the answer suggests, the issue is that I can’t be sure that the shop really did 4% better, or if this is just my survey data fluctuating normally within the margin of error.
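As a very rough illustration of this point (not a formal significance test), you can compare the ranges the two results could plausibly sit in. If the ranges overlap, the apparent improvement may just be noise. The names and figures below simply follow the gift shop example.

```python
def plausible_range(score: float, margin: float) -> tuple[float, float]:
    """The range a survey result could plausibly sit in, given its margin of error."""
    return (score - margin, score + margin)

margin = 0.05                              # 5% margin of error
last_year = plausible_range(0.80, margin)  # 80% said they visited the shop
this_year = plausible_range(0.84, margin)  # 84% said they visited the shop

# If the two ranges overlap, the apparent improvement could just be normal
# fluctuation within the margin of error rather than a real change.
overlap = last_year[1] >= this_year[0] and this_year[1] >= last_year[0]
print(f"Last year: {last_year[0]:.0%} to {last_year[1]:.0%}")  # 75% to 85%
print(f"This year: {this_year[0]:.0%} to {this_year[1]:.0%}")  # 79% to 89%
print(f"Ranges overlap: {overlap}")                            # True
```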
Another thing to be aware of is that you might want to look at specific groups within your survey data in more detail – for example, people who attended a particular exhibition, event or programme, or particular demographic groups.
However, if you’re looking at a smaller number of respondents within your data, the margin of error for this group will be higher, because you have fewer surveys from them.
Let’s say I get 40,000 visitors a year to my art gallery, and I collected 1,000 surveys. For the whole audience, I have a margin of error of 3.06% – not bad.
My big summer exhibition was attended by 10,000 visitors – a quarter of all visitors. And of my 1,000 surveys, 250 of those were from people who visited that exhibition. So, the survey data I have from my exhibition visitors alone has a much higher margin of error of 6.12% – meaning that it’s much less accurate than the much larger set of data for all visitors – and I should be more careful about the inferences I make from the exhibition-visitor subset of the data.
I need to be even more careful with smaller subsets of the data. Let’s say 10% of my 40,000 visitors identified as disabled – 4,000 people. If 100 of my 1,000 surveys were from disabled people, then my data for disabled audiences alone would have a margin of error of 9.68%. That’s getting high – the real answer to a question for this specific group of my audience could be nearly 10% different from my result in either direction, covering a spread of almost 20% in total. I would want to see some overwhelmingly positive or negative data on a question before drawing any definitive conclusions from these responses, or else invest in capturing more survey data in general to reduce the margins of error.
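Reusing the same margin-of-error sketch from earlier, here’s how those subset figures can be reproduced – the group names and numbers simply follow the worked example above.

```python
import math

def margin_of_error(sample_size: int, population_size: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion at 95% confidence,
    with a finite population correction (worst case of a 50% answer)."""
    p = 0.5
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * standard_error * fpc

groups = {
    "All visitors":        (1_000, 40_000),
    "Exhibition visitors": (250, 10_000),
    "Disabled visitors":   (100, 4_000),
}
for name, (surveys, audience) in groups.items():
    print(f"{name}: {margin_of_error(surveys, audience):.2%}")
# Roughly 3.06%, 6.12% and 9.68% respectively.
```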
So, what are the main things to take away here? A higher margin of error (less accurate data) makes it harder to tell whether changes in your results are real, and harder to draw reliable conclusions about smaller groups within your audience.
In the research sector, data that is less accurate or not statistically significant is sometimes euphemistically referred to as ‘indicative’ – meaning that it could be true, but that we can’t be sure – and it can be dangerous to make big bets on data that might not be sound.
Now that you’ve read about the margin of error and how it works, let’s go back to the calculator.
How many surveys do you think would be realistic to collect this year using the methods, resources, and time that you have at your disposal?
Could you change the way that you collect surveys using some of the above approaches to increase your sample size?
Now that you have a target number of surveys in mind, have a look at the margin of error. Are you happy with it? While 3% to 6% is generally considered to be reasonable, if there are a lot of barriers in place to you collecting enough surveys, then you may be happy to settle for less accurate data.
On the other hand, if you have a large audience, you have good methods for collecting surveys, and you want to be able to look at smaller groups within your survey respondents more closely (e.g. people who attended specific programming, or particular demographics), then collecting more surveys and getting more accurate data will help with this.
In the section above, we covered how the size of the sample makes the data more or less accurate and reliable. There’s something else we should consider – how representative is the data of our actual, total audience? As well as being happy with the number of surveys we collect for our sample, we must also think about the ways we collect that sample of our audience.
Surely collecting a large enough number of surveys will ‘smooth out’ the data and make it more accurate? Well, it does – but only if the people who fill in the survey accurately represent the full range of motivations, personalities, opinions, and demographics within our audience. If we survey 10% of our audience, then we would hope that those 10% have roughly the same characteristics as the audience as a whole.
This can be a bit of a problem. Think about how you collect your surveys. Mainly we do this by asking people to help us, either in person or in an email. The problem lies in the way that those methods can be biased towards certain types of people.
The first problem is around self-selection. Some people are simply more or less likely to agree to help with your survey, and this often has a demographic component. In general, we find that men and younger people are less likely to agree to help with a survey, regardless of how familiar they are with creativity and culture, or how often they attend.
The methods that we use to distribute the survey might cause a bias too. If you’re mainly collecting the survey by emailing it to those who booked tickets, then you’ll often only be surveying the ‘lead booker’. This skews the data – in heterosexual couples or families, it can bias the responses towards more female respondents. Especially for theatre or family trips to heritage attractions, it’s often mum who books the tickets.
Even without this dynamic, you might find that the lead booker tends to be more culturally active and better informed – which can skew responses to any questions about marketing channels or awareness. Crucially, they are more likely to have visited you previously – hiding any first-time visitors they may have brought along in their group.
If you have both ticket bookers and walk-in audiences, and if you survey only with the emailed link to bookers, you might find there are differences in the profile and characteristics of those two groups, and your survey data only accurately represents the bookers.
The best way to mitigate these problems – as well as to boost your survey numbers generally – is to approach people on their visit, during or after the show, or at the event (if possible) and try to recruit them to do the survey. How they complete the survey is up to you: that could be handing them a QR code to scan and do the survey on their own device later, or completing it with them.
While you’ll still find that some people are less likely to do the survey, having a human being make the appeal directly – and explain how important the survey is to you and your organisation – does have value. It means that not only might you get a broader range of respondents generally, but you’ll also pick up some of the people who didn’t book the ticket.
Some organisations are setting up steering or advisory groups made up of people from their audiences – for example, organisations looking to attract more young people to their audience or represent the experience of disabled audience members. These steering groups can be valuable for your surveying and evaluation approaches too – in helping you to find approaches, messaging and communication channels that are more attractive or more accessible for groups within your audience who may be less likely to take your survey.
However, the face-to-face approach comes with potential sampling problems of its own. The first thing we can do is make sure we sample well, so that our results reflect our whole audience as accurately as possible.
It’s almost impossible to do this perfectly, so what we can do is try to work with two key principles:
Finally, if you’re asking people the questions – you should be careful to read the questions as they are written, consistently, and avoid paraphrasing them.
It’s important to use neutral language and tone of voice to avoid ‘leading’ the respondent to be more positive or negative, or to suggest answers for them – let them decide for themselves!
It’s also important to avoid commenting on people’s answers, as this might ‘lead’ them as well – for example, supporting them and agreeing, or disagreeing with their answers and opinions. You don’t want to sound like a robot, so you can still be chatty, funny and friendly, but try to be discreet and careful about your own opinions.
Everyone who collects surveys finds it scary and difficult at first. We’re not all naturally outgoing – and many professional interviewers aren’t either. It’s also important to know why the survey is being used, so you can be confident in what you’re doing. Be kind to yourself, and don’t be disheartened if you struggle to start with. It gets easier, you’ll get much more comfortable with practice, and with perseverance you will get people to stop!
People are generally nice. They might not be keen to do a survey, but if you’re genuine and tell them that it helps you, usually 25% or more are happy to help.
You should consider accessibility requirements and offer people a chance to sit down if they’d like – but reassure them about the length of the survey so they don’t think they’ll be there forever.
Families with young children might not be able to stop for more than a couple of minutes so if you have a QR code or link to the survey that they can scan and take away, this might be the best way for parents to participate at a more convenient moment.
Some people may be confused about why you’re asking questions about them, or may find the questions intrusive or suspicious. To some, these questions may feel unrelated to the experience they’ve had – the sort of thing you shouldn’t be asking about.
One of the changes to the Arts Council England mandatory questions in April 2024 is that these more personal questions are moving towards the end of the survey. This means that respondents get a chance to warm up a bit answering more general questions first, so it might make them a bit more comfortable answering the personal questions later.
To reassure people, you can touch on a few things:
You can acknowledge that these questions might be annoying and reassure people that they don’t have to answer them – but that they do help make sure good things happen.
Thank you for reading this article – hopefully you’ve found it useful, and it’s given you some insight into sample sizes, accuracy of data, and ways to make your surveying more representative. If you’d like to discuss this in more detail and get some more advice or support, I’m here to help. The Digital Culture Network is a programme funded by Arts Council England to provide completely free, unlimited one-to-one support for you and your colleagues.
You can book in for a one-to-one call with me using our Ask a Tech Champion form.