
How to decide your survey targets and improve your sampling


This article aims to help cultural and creative organisations answer the following questions:

How can we communicate the value of our surveys?

Survey targets: how should we decide on our sample size?

How can we make our surveys more representative?

Of course, every organisation is different, so the answers to these questions might be different depending on where you work.

If this article leaves you with more questions, you can chat with me directly about your audience surveys and your approach to data collection. The Digital Culture Network offers free, unlimited one-to-one support, resources and advice – it could be a one-off chat, or you can come back as many times as you like. You can book in for a one-to-one call with me using our Ask a Tech Champion form.

The Digital Culture Network provides advice and support with digital skills. However, we are aware that many reading this article will have Arts Council England’s reporting requirements to consider.

In particular, many Arts Council National Portfolio organisations will be free to decide how many audience surveys they should aim to collect.

This article provides guidance to help your organisation set a realistic and useful target. Organisations funded by the Arts Council should direct questions about funding or mandatory audience surveying requirements to the Arts Council’s customer service team, and technical support or specific questions about using the Illuminate platform to the Illuminate helpdesk by email or by telephone on 0800 031 8671.

How can we communicate the value of our surveys?

Getting buy-in and advocacy from everyone you work with is important to help your organisation be more effective at collecting audience surveys. Effective surveys provide better data to inform programme and service improvement. It works best when leaders and those designing surveys encourage a robust approach to audience research. Front-of-house staff, volunteers, facilitators and marketers should understand the value of capturing audience feedback, and be able to explain how audience data capture supports the work of the organisation.

The audience surveys you collect not only provide useful information that helps funders and the government understand the value of public funding and support to the cultural and creative sector – they also give your audience a voice.

Survey targets: how should we decide on our sample size?

When we carry out research, it’s usually not practical to collect a survey from every single member of our audience. Instead, we collect a representative ‘sample’ from some of the audience. Ideally, the answers and data from the smaller ‘sample’ would be a reasonably accurate representation of the whole audience.

When we talk about the ‘sample size’ of a survey, we mean the number of complete responses to the survey that we get. So, when you set yourself a target number of survey responses, that’s the ‘sample size’ you’ll hopefully end up with.

This number is important for several reasons:

  • In general, collecting more surveys makes the data you get more accurate, robust and reliable.
  • Collecting too few surveys means that the data is less reliable, and there could be quite a big difference between what your survey data says, and what the whole audience would say if you asked them.

It doesn’t necessarily mean that data with a low sample size is useless, but it should be treated with caution – and I’ll explain why in a moment.

What is a ‘good’ sample size?

Firstly, it’s always a compromise. More data is more reliable, but we must be realistic and practical about what we can actually do. Well-funded commercial, government or academic research studies might have the budget to buy fieldwork services and pay hundreds or thousands of respondents to fill in surveys. In our sector, very few organisations have that luxury: we usually rely on audiences to fill in our surveys for free, and on our artists, practitioners, front-of-house staff and volunteers to make the pitch.

When we don’t have much budget, time or resources to collect large numbers of surveys, we need to aim for a ‘sweet spot’ – a point of best value – a number of surveys that gives us a reasonable level of accuracy, but is achievable and practical to collect.


How accurate is my survey data?

In statistics, we use something called the ‘margin of error’ to measure the accuracy of data. It’s shown as a percentage – for example, let’s say that the margin of error of my survey data is 4%.

A margin of error of 4% means that when my survey responses give me some proportional data, the real answer could be anywhere within 4% in either direction. So, if 80% of people reported visiting my gift shop, the real answer could be anywhere between 76% and 84%.

A lower margin of error means that data is more accurate. If I had a lower 2% margin of error, my score of 80% might really be anywhere between 78% and 82%.

The accuracy of other types of survey data, like numbers and ratings, isn’t calculated in exactly the same way, but the same general idea holds true.

How can I calculate how accurate my data is?

There are many simple calculators available on the web that will do this for you – here’s one that tells you the margin of error from a specific number of surveys. I would recommend having a brief play around with this using your organisation’s numbers.

All you need to do is type in:

  • ‘Population Size’ – which is the number of people in your audience per year
  • ‘Sample Size’ – which is how many surveys you expect to get per year.


We usually leave the Confidence Level at 95% – this is the level typically used for most market research studies. Roughly speaking, a 95% confidence level means that if you ran the same survey many times, about 95% of those surveys would give a result within the margin of error of the true answer. Having an engaging, clear and concise survey with carefully considered and comprehensive answer options does help people to answer more accurately and is a key part of good practice.
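
If you’d like to see what these calculators do under the hood, here’s a minimal sketch in Python of the formula they typically use – the worst-case margin of error for a proportion (assuming a 50/50 split) at 95% confidence, with a finite population correction. The function name and example figures are mine, for illustration only:

```python
import math

def margin_of_error(population_size, sample_size, z=1.96):
    """Worst-case (p = 0.5) margin of error for a proportion at 95%
    confidence (z = 1.96), with a finite population correction."""
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * math.sqrt(0.25 / sample_size) * fpc

# An audience of 25,000 and 379 completed surveys:
print(f"{margin_of_error(25_000, 379):.2%}")  # ~5.00%
```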

In the table below, let’s see how many surveys you’d need to collect to get different margins of error. It changes a little bit depending on the size of your audience (the ‘population size’) so I’ve shown three different audience sizes:

Margin of error | Sample size (audience of 5,000) | Sample size (audience of 25,000) | Sample size (audience of 100,000)
8% | 146 | 149 | 150
7% | 189 | 195 | 196
6% | 254 | 264 | 267
5% | 357 | 379 | 383
4% | 536 | 587 | 597
3% | 880 | 1,024 | 1,056
2% | 1,622 | 2,191 | 2,345
1% | 3,289 | 6,938 | 8,762

As you can see, the more surveys you can collect, the smaller the margin of error. However, it takes increasingly large numbers of surveys to improve accuracy once you get down to 3%, 2% and 1%. For this reason, in research we’ll often aim for a ‘sweet spot’ between 3% and 6%, depending on the audience size, the survey methods we have, and our available resources and tools.

The other thing to notice is that the audience size doesn’t make a massive difference to the number of surveys required, except at a 1% margin of error. Because the required number stays roughly the same, smaller organisations need to survey a much higher proportion of their audience to achieve a given margin of error: an organisation with an audience of 5,000 would need 880 surveys – 18% of its audience – to achieve a 3% margin of error.
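
If you’re curious where the table’s figures come from, here’s a short Python sketch (names again illustrative) that inverts that same worst-case formula to give the number of surveys needed for a target margin of error:

```python
import math

def required_sample_size(population_size, target_moe, z=1.96):
    """Surveys needed for a target margin of error at 95% confidence,
    worst case p = 0.5, with a finite population correction."""
    n0 = (z ** 2 * 0.25) / target_moe ** 2   # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))

for audience in (5_000, 25_000, 100_000):
    print(audience, required_sample_size(audience, 0.03))
# Prints 880, 1024 and 1056 - the 3% row of the table above
```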

Even if you’re doing well – sending attractive, well-written emails with the survey link – it’s often hard to get more than a 10% response rate. So it would be difficult for a smaller organisation with an audience of 5,000 to get 18% of its audience to fill in a survey (without dedicating a lot of time and resources).

But for larger organisations the picture is different: one with 100,000 visitors needs just over 1% of its audience to fill in a survey to reach 1,056 surveys and a 3% margin of error. If they can email the survey to bookers, this should be fairly easy, and they could likely collect even more surveys and achieve an even lower margin of error.


So, what does this all mean? Organisations with smaller audiences need to set a realistic, achievable survey target, and this generally means making do with a higher margin of error (and less accurate data). Similarly, you might have a larger audience, but if you have a lot of walk-in audiences and you’re not able to email the survey link to lots of bookers, it’s going to be a lot harder to get larger numbers of surveys.

Your audience itself also makes a difference. Maybe you get a lot of families who don’t have time to stop for an in-person survey, or there might be barriers for some of your audience (for example, hard-to-reach communities, or lots of younger people, who are less likely to help with a survey). In that case, a lower survey target might be more realistic – although, as we’ll explore later, it’s important to try to represent the diversity of your audience accurately.

What do the margin of error and the accuracy of data mean in practice?

The critical thing to be aware of is that small fluctuations in scores, answers or responses that fall within your margin of error might not be real changes – or, to put it into research language, they may not be statistically significant.

Let’s say my survey data has a 5% margin of error. Last year, 80% of survey respondents said they visited my gift shop. This year, 84% said they visited the shop. So, this year, 4% more people visited the shop. What’s the problem here?

Because the 5% margin of error means that the real answer could be up to 5% higher or lower than the figure suggests, I can’t be sure whether the shop really did 4% better, or whether my survey data is just fluctuating normally within the margin of error.
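
As a rough sketch of that rule of thumb – not a formal statistical significance test, which is stricter – you could flag a year-on-year change as trustworthy only when it is bigger than your margin of error:

```python
def looks_like_a_real_change(last_year, this_year, margin_of_error):
    """Rule of thumb from the text: only treat a shift as meaningful
    if it exceeds the margin of error of the survey data."""
    return abs(this_year - last_year) > margin_of_error

# 80% -> 84% with a 5% margin of error: within normal fluctuation
print(looks_like_a_real_change(0.80, 0.84, 0.05))  # False
```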

Looking at smaller sections of your audience

Another thing to be aware of is that you might want to look at specific groups within your survey data in more detail:

  • People who attended a particular venue, show, exhibition, or event
  • People who fit a particular demographic profile (for example: younger people, audiences from global majority ethnic backgrounds, or people who are disabled)

However – if you’re looking at a smaller number of respondents within your data, then the margin of error for this group will be higher, because you have fewer surveys from them.

Let’s say I get 40,000 visitors a year to my art gallery, and I collected 1,000 surveys. For the whole audience, I have a margin of error of 3.06% – not bad.

My big summer exhibition was attended by 10,000 visitors – a quarter of all visitors. And of my 1,000 surveys, 250 of those were from people who visited that exhibition. So, the survey data I have from my exhibition visitors alone has a much higher margin of error of 6.12% – meaning that it’s much less accurate than the much larger set of data for all visitors – and I should be more careful about the inferences I make from the exhibition-visitor subset of the data.

I need to be even more careful with smaller subsets of the data. Let’s say 10% of my 40,000 visitors identified as being disabled – 4,000 people. If 100 of my 1,000 surveys were from disabled people, then my data for disabled audiences alone would have a margin of error of 9.68%. That’s getting high: the real answer to a question for this specific group of my audience could be nearly 10% different from my result in either direction – a spread of almost 20% in total. I would want to see some overwhelmingly positive or negative data on a question before drawing any definitive conclusions from these responses – or else invest in capturing more survey data overall to reduce the margins of error.
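
Here are those three calculations as a self-contained Python sketch, using the same worst-case formula as earlier – notice how the margin of error grows as the subset shrinks:

```python
import math

def moe(population, sample, z=1.96):
    # Worst-case (p = 0.5) margin of error at 95% confidence,
    # with a finite population correction
    fpc = math.sqrt((population - sample) / (population - 1))
    return z * math.sqrt(0.25 / sample) * fpc

print(f"All visitors (1,000 of 40,000):      {moe(40_000, 1_000):.2%}")  # ~3.06%
print(f"Exhibition visitors (250 of 10,000): {moe(10_000, 250):.2%}")    # ~6.12%
print(f"Disabled visitors (100 of 4,000):    {moe(4_000, 100):.2%}")     # ~9.68%
```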


So, what are the main things to take away here? A higher margin of error (less accurate data) makes it:

  • Harder to compare small differences in survey scores or results
  • Harder to get accurate data from subsets (smaller groups of people) within your survey data

In the research sector, data that is less accurate or not statistically significant is sometimes euphemistically referred to as ‘indicative’ – meaning that it could be true, but that we can’t be sure – and it can be dangerous to make big bets on data that might not be sound.

Putting this all together, how do you choose your survey target for the year?

Now that you’ve read about the margin of error and how it works, let’s go back to the calculator.

How many surveys do you think would be realistic to collect this year using the methods, resources, and time that you have at your disposal?

  • Do you collect all your surveys on paper – is this holding you back?
  • Could you collect more surveys by using an emailed link to bookers, collecting on a tablet, or using a QR code or link at your events/venue?
  • Do you have staff and volunteers, or performers, artists and facilitators mentioning the survey to audiences and making an appeal for them to complete it?

Could you change the way that you collect surveys using some of the above approaches to increase your sample size?

Now that you have a target number of surveys in mind, have a look at the margin of error. Are you happy with it? While 3% to 6% is generally considered reasonable, if there are a lot of barriers to collecting enough surveys, then you may be happy to settle for less accurate data.

On the other hand, if you have a large audience, you have good methods for collecting surveys, and you want to be able to look at smaller groups within your survey respondents more closely (e.g. people who attended specific programming, or particular demographics), then collecting more surveys and getting more accurate data will help with this.


How can we make our surveys more representative?

In the section above, we covered how the size of the sample makes the data more or less accurate and reliable. There’s something else we should consider: how representative is the data of our actual, total audience? As well as being happy with the number of surveys we collect, we must also think about how we collect that sample of our audience.

Surely collecting a large enough number of surveys will ‘smooth out’ the data and make it more accurate? Well, it does – but only if the people who fill in the survey represent the full range of motivations, personalities, opinions and demographics within our audience. If we survey 10% of our audience, we would hope that those 10% have roughly the same characteristics as the audience as a whole.
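
If you have an independent picture of your audience’s make-up – from box office or visitor data, say – a quick comparison like the Python sketch below can show where your sample is out of step. All the figures here are invented for illustration:

```python
# Assumed known audience profile (e.g. from box office data) vs
# the make-up of 500 collected survey responses - invented figures
audience_profile = {"under 35": 0.30, "35-64": 0.50, "65+": 0.20}
survey_counts = {"under 35": 45, "35-64": 280, "65+": 175}

total_responses = sum(survey_counts.values())
for group, audience_share in audience_profile.items():
    sample_share = survey_counts[group] / total_responses
    gap = sample_share - audience_share
    print(f"{group}: audience {audience_share:.0%}, "
          f"sample {sample_share:.0%} ({gap:+.0%})")
```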

Achieving this can be a bit of a problem. Think about how you collect your surveys: mainly, we do this by asking people to help us, either in person or by email. The problem lies in the way that those methods can be biased towards certain types of people.


Self-selection

The first problem is around self-selection. Some people are just more or less likely to agree to help with your survey. This often has a demographic component:

  • We might find that regular cultural audiences, or older, more middle-class people are more likely to agree to help with a survey.
  • People who attend fewer events, or people less familiar with cultural attendance might be less comfortable answering a survey or feel less inclined to help with the research.
  • Taken further, that might mean that some communities who are generally ‘harder to reach’ could be under-represented in the research. That might be people who are less well-off, or from areas of the country with less cultural and creative provision. In some communities, people may be more wary of, or uncomfortable with, data collection and answering questions.

In general, we also find that men and younger people are less likely to agree to help with a survey, regardless of how familiar they are with creativity and culture or how often they attend.

Sampling bias – emailed survey to bookers

The methods we use to distribute the survey can cause bias too. If you mainly collect responses by emailing the survey to those who booked tickets, you’ll often only be surveying the ‘lead booker’. This skews the data: in heterosexual couples or families, it can bias the responses towards more female respondents – especially for theatre or family trips to heritage attractions, where it’s often mum who books the tickets.

Even without this dynamic, you might find that the lead booker tends to be more culturally active and better informed – which can skew responses to any questions about marketing channels or awareness. Crucially, they are more likely to have visited you previously – hiding any first-time visitors they may have brought along in their group.

If you have both ticket bookers and walk-in audiences, and if you survey only with the emailed link to bookers, you might find there are differences in the profile and characteristics of those two groups, and your survey data only accurately represents the bookers.

Ways to reduce self-selection and sampling bias

The best way to mitigate these problems – and to boost your survey numbers generally – is to approach people on their visit, during or after the show, or at the event (if possible) and try to recruit them to do the survey. How they complete the survey is up to you: you could show them a QR code to scan so they can do the survey on their own device later, or complete it with them there and then.

While you’ll still find that some people are less likely to do the survey, having a human being appeal directly – and explain how important the survey is to you and your organisation – does have real value. Not only might you get a broader range of respondents generally, but you’ll also pick up some of those people who didn’t book the ticket.

Some organisations are setting up steering or advisory groups made up of people from their audiences – for example, organisations looking to attract more young people to their audience or represent the experience of disabled audience members. These steering groups can be valuable for your surveying and evaluation approaches too – in helping you to find approaches, messaging and communication channels that are more attractive or more accessible for groups within your audience who may be less likely to take your survey.


More representative face-to-face approaches

However, the face-to-face approach comes with potential sampling problems too. The first thing we can do is make sure we sample well, so that our results reflect our whole audience as accurately as possible.

It’s almost impossible to do this perfectly, so what we can do is try to work with two key principles:

  • Different types of people visit at different times and on different days. You should spread out your survey collection as much as possible, in proportion to the number of visitors or audiences. So, if one particular event, time or day has more visitors than others, you should try to collect more surveys there, keeping things roughly in proportion (see the sketch after this list). Getting a mix of weekends, school holidays and times of day is important too.
  • We can sometimes also have a bias in who we approach. When you’re conducting a survey, you might approach people who you think look more likely to help you. This might reflect subconscious biases and preferences based on what people look like, their age, gender or demographics. To remedy this, professional interviewers will usually use a ‘counting method’: draw an invisible square on the ground and choose a number – say, three. Then count the people who cross it, and always approach every third person. This makes it a random choice and removes any element of selection by the interviewer.
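
To illustrate the first principle, here’s a tiny Python sketch that splits a survey target across sessions in proportion to expected visitor numbers – every session name and figure below is invented for illustration:

```python
# Hypothetical expected visitor numbers per session
expected_visitors = {
    "Saturday matinee": 600,
    "Saturday evening": 900,
    "Sunday matinee": 400,
    "Weekday evenings": 1_100,
}

target_surveys = 150
total_visitors = sum(expected_visitors.values())

# Give each session a quota proportional to its share of visitors
for session, visitors in expected_visitors.items():
    quota = round(target_surveys * visitors / total_visitors)
    print(f"{session}: aim for ~{quota} surveys")
```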

Finally, if you’re asking people the questions, you should be careful to read them exactly as they are written, consistently, and avoid paraphrasing them.

It’s important to use neutral language and tone of voice to avoid ‘leading’ the respondent to be more positive or negative, or to suggest answers for them – let them decide for themselves!

It’s also important to avoid commenting on people’s answers, as this might ‘lead’ them as well – for example, supporting them and agreeing, or disagreeing with their answers and opinions. You don’t want to sound like a robot, so you can still be chatty, funny and friendly, but try to be discreet and careful about your own opinions.

How should I approach people?

Everyone who collects surveys finds it scary and difficult at first – we’re not all naturally outgoing, and many professional interviewers aren’t either. It also helps to know why the survey is being used, so you can be confident in what you’re doing. Be kind to yourself, and don’t be disheartened if you struggle to start with. It gets easier, and you’ll get much more comfortable with practice – with perseverance, you will get people to stop!

People are generally nice. They might not be keen to do a survey, but if you’re genuine and tell them that it helps you, usually 25% or more will be happy to help.

  • Keep it simple: Just say hi, you’re doing a short survey, and ask if they want to help.
  • Don’t sound like you’re reading a script – be yourself! This will help you come across much better and you’ll sound more genuine and human.
  • You can add that it really helps your organisation and that you need to collect the surveys to put on good programming and get funding to continue your work.

You should consider accessibility requirements and offer people a chance to sit down if they’d like – but reassure them about the length of the survey so they don’t think they’ll be there forever.

Families with young children might not be able to stop for more than a couple of minutes so if you have a QR code or link to the survey that they can scan and take away, this might be the best way for parents to participate at a more convenient moment.


How do I explain any personal questions?

Some people may be confused about why you’re asking questions about them, or find them intrusive or suspicious. To some, these questions may feel unrelated to the experience they’ve had – as if you shouldn’t be asking about such things at all.

One of the changes to the Arts Council England mandatory questions in April 2024 is that these more personal questions are moving towards the end of the survey. This means that respondents get a chance to warm up a bit answering more general questions first, so it might make them a bit more comfortable answering the personal questions later.

To reassure people, you can touch on a few things:

  • It’s all anonymous and confidential – people can’t and won’t be identified. Their answers are usually looked at as a group – not individually.
  • The reason that personal questions are asked is to find out who is in the current audience. Then, we can work out if anyone isn’t attending and is missing out – so that we can do better for them.
  • When charity funders or public money is used to support an organisation or its programming, funders and the government need to make sure that it’s reaching the people who need it most and having positive impacts for everyone in society.

You can acknowledge that these questions might be annoying and reassure people that they don’t have to answer them – but that they do help make sure good things happen.

Further support

Thank you for reading this article – hopefully you’ve found it useful, and it’s given you some insight into sample sizes, the accuracy of data, and ways to make your surveying more representative. If you’d like to discuss this in more detail and get some more advice or support, I’m here to help. The Digital Culture Network is a programme funded by Arts Council England to provide completely free, unlimited one-to-one support for you and your colleagues.

You can book in for a one-to-one call with me using our Ask a Tech Champion form.

