Official Statistics

Community Life Survey: Technical report

Updated 3 May 2023

Applies to England

1. Introduction

This technical report covers the 2021/22 Community Life Survey. Technical reports for previous years are available on the UK Data Service.

1.1 Background to the survey

The Community Life Survey has been conducted by Kantar Public on behalf of the Department for Digital, Culture, Media and Sport since 2012.[footnote 1] The dataset from every survey year between 2012/13 and 2020/21 is available from the UK Data Service. The 2021/22 dataset will be available from the UK Data Service in Spring 2023, and a data user guide can be found in Chapter 9 of this report.

1.2 Survey Objectives

The Community Life Survey provides Official Statistics on issues that are key to encouraging social action and empowering communities, including volunteering, charitable giving, community engagement, well-being and loneliness. The key objectives of the survey are to:

  • Provide robust, nationally representative data on behaviours and attitudes within communities, to inform and direct policy and action in these areas.
  • Provide a key evidence source for policy makers in government, public bodies, third sector organisations and other external stakeholders.
  • Underpin further research and debate on building stronger communities.

1.3 Survey Design

The Community Life Survey 2021/22 was conducted via an online and paper survey method called Address Based Online Surveying (ABOS): an affordable method of surveying the general population that still employs random sampling techniques. ABOS is a type of “push to web” methodology. In brief, this methodology takes the following approach:

  1. A stratified random sample of addresses is drawn from the Royal Mail’s postcode address file and an invitation letter is sent to ‘the residents’ of each one, containing username(s) and password(s) plus the URL / QR code of the survey website.
  2. Respondents can log on using this information and complete the survey as they might any other online survey, with the option to stop and restart their survey when it suits them.
  3. All non-responding households receive up to two reminder letters, with some households receiving paper questionnaires alongside the second reminder letter.
  4. A third reminder mailing is also sent to a targeted subsample of 33% of households: those for which, based on Kantar Public’s ABOS field data from previous studies, a third reminder is deemed likely to have the most significant impact (mainly deprived areas and addresses with a younger household structure).
  5. Once the questionnaire is submitted, the specific username and password cannot be used again, ensuring data confidentiality from others with access to this information.
  6. Paper questionnaires are also available on request for those who are unable to take part online. The paper questionnaire represents a slightly cut down version of the online survey (around 50% of online questions are included in the paper questionnaire), with some questions omitted to reduce respondent burden or because they have complex filtering which is unsuitable for a paper questionnaire.

For further details about ABOS please see the Social Research Practice article ‘An introduction to address-based online surveying’.[footnote 2]

1.4 Questionnaire

Respondents can complete the survey online or by completing a paper questionnaire. The survey asks about a range of topics including identity, social networks, sense of community, civic engagement, volunteering, social action, charitable giving, subjective wellbeing and loneliness. The paper questionnaire covers the same topics as the online survey but is reduced in length. More information about the survey used in both modes can be found in Chapter 4.

1.5 Survey Fieldwork

Fieldwork for the Community Life Survey 2021/22 was conducted between October 2021 and September 2022, with samples issued on a quarterly basis. Overall, 10,126 interviews were achieved over the year, which represents a population-representative overall (online or paper) individual response rate of 23%.

1.6 Weighting

The survey data is weighted to compensate for variations in sampling and response probability. The inferential population is ‘all adults in England aged 16+ and living in private residence’.

The data is calibrated to ensure that the weighted sample matches estimated population totals for several dimensions: sex by age group, degree-level education by age group, housing tenure, region, ethnic group, number of co-resident adults, and internet usage by age group. As respondents can complete the survey online or via a paper questionnaire, there are separate weights for online-only questions and for questions asked in both online and paper modes. More information about the weighting process can be found in Chapter 7.
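
The calibration step can be illustrated with a minimal raking (iterative proportional fitting) sketch. This is a toy example with two hypothetical margins (sex and age group) and invented population totals; the production weighting uses more dimensions and a different implementation.

```python
import numpy as np

def rake(margins, categories, weights, tol=1e-9, max_iter=100):
    """Iteratively scale weights so each weighted margin matches its
    population target (classic raking / iterative proportional fitting)."""
    w = weights.astype(float).copy()
    for _ in range(max_iter):
        max_shift = 0.0
        for dim, targets in margins.items():
            cats = categories[dim]
            for cat, target in targets.items():
                mask = cats == cat
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:          # all margins within tolerance
            break
    return w

# Toy sample of 6 respondents with two calibration dimensions.
categories = {
    "sex": np.array(["F", "F", "F", "M", "M", "M"]),
    "age": np.array(["16-34", "35-64", "65+", "16-34", "35-64", "65+"]),
}
# Hypothetical population totals (both margins sum to 600).
margins = {
    "sex": {"F": 310, "M": 290},
    "age": {"16-34": 180, "35-64": 250, "65+": 170},
}
w = rake(margins, categories, np.ones(6))
print(round(w[categories["sex"] == "F"].sum()))  # → 310, the F target
```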

1.7 Coronavirus (COVID-19)

It should be noted that fieldwork, particularly during the first quarter of the 2021/22 survey, took place during the COVID-19 pandemic. It is unclear what effect the pandemic, associated lockdown measures and media coverage may have had on relevant public behaviours, attitudes and perceptions. This should be taken into consideration when interpreting these results.

2. Development

The Community Life Survey first took place in 2012/13 and was conducted via face-to-face interview. The survey was designed to provide an efficient continuation of the Citizenship Survey (commissioned by the Department for Communities and Local Government [footnote 3] and run from 2001 to 2011), replicating much of the methodology as well as incorporating many of the same measures. The face-to-face survey was effective at providing a robust and nationally representative Official Statistic. However, the method was expensive and resource intensive. As a result, Kantar Public was commissioned to carry out development work to explore the feasibility of incorporating online/paper self-completion methods of data collection, which cost significantly less than face-to-face interviews.

The programme of methodological development work was carried out in parallel to the face-to-face survey and conducted between 2012 and 2015.[footnote 4] It comprised four separate stages:

  • Stage 1: Testing and refining an initial field model for online/paper survey delivery
  • Stage 2: Larger scale online/paper pilot conducted alongside the face-to-face survey
  • Stage 3: Testing the feasibility of sampling all adults in the household instead of one selected at random
  • Stage 4: Study to investigate the relative contribution of sample effects and mode effects in explaining estimate differences between face-to-face and online/paper modes.

2.1 Online and Paper Survey Development

As the methodology is relatively new, the survey has continued to evolve as further evidence becomes available (see below for references). Developments by survey year are summarised below; more detail can be found in the relevant year’s technical report.

2.2 ABOS developments by Survey Year

2012/13: The first Community Life Survey took place

The first Community Life Survey was conducted using a face-to-face interview design. The survey replicated much of the methodology of the Citizenship Survey as well as incorporating many of the same measures. Details of the methodology and achieved samples can be found in the 2012-13 technical report.

2012: Initial ABOS test

Kantar Public conducted a large-scale test of the Address Based Online Surveying (ABOS) to explore cost effective methods for future survey years.

The aim of the test was to scope and test a potential data collection model for an online/postal survey. Approximately 6,700 addresses were sampled for the test, with invitations to access the online survey sent out by post. At each address, one adult aged 16+ was invited to take part, with selection based on the adult in the household who had the “last birthday”. Up to two postal reminders were sent to each address to maximize the response rate.

Four different incentive packages were tested in the first letter: £5 shopping voucher conditional on completing the survey; £10 conditional voucher; £5 unconditional voucher; no incentive (the control group).

A random subset of non-responders received a postal questionnaire with the second reminder mailing, which they could complete instead of the online version. The paper version of the questionnaire was an edited version of the online version with a reduced number of questions, as the full survey was too lengthy to accommodate on paper.

For more detailed results of this development work please see the full report for 2012/13.

2013/14: Full scale test

Based on recommendations for an optimal design, a larger-scale ABOS survey was conducted, running concurrently with the standard face-to-face survey from April 2013 to March 2014.

A larger annual sample size of c.10,000 completed online/postal questionnaires over the survey year provided a more robust test of differences in a) sample composition and b) measurement between the ABOS and face-to-face interview surveys.

The larger sample size allowed some initial exploration of the relative contribution of sampling/fieldwork methods and data collection mode in explaining differences between ABOS and face-to-face survey estimates. Alongside the full-scale test, the feasibility of sampling all adults in the household instead of one selected at random was also explored. This involved an additional issued sample of n=1,400 addresses in Quarter 3.

In this variant, the invitation letter asked all adults in the household (up to a maximum of four) to complete the survey with an incentive of a £10 voucher offered to each responding adult. This was proposed as a solution to the problem of non-compliance with within-household sampling instructions identified from the initial test survey.

The study allowed a comparison of the ‘all adults’ vs ‘single adult’ design on several measures including: completion behaviour; sample profile; and data quality. For full details please see the Community Life Web Survey Technical Report 2013-14 (ukdataservice.ac.uk).

2014/15: The ‘all adults’ approach was adopted into the survey

Following the work exploring the feasibility of sampling all adults in the household conducted in 2013/14, the ‘all adults’ design was adopted into the survey design. A study was also conducted to investigate the relative contribution of sample effects and mode effects in explaining differences between face-to-face and ABOS estimates. For details and analysis of these experiments, please see the relevant report.

2015/16: Half of selected addresses received two copies of the paper questionnaire with their second reminder

In 2015/16 the design of the survey was amended, so that up to two copies of the paper questionnaire were included in second reminder mailings, targeted towards more deprived areas. This was done to limit between-strata variance in response rate. Paper questionnaires remained available to all sampled households on request.

2016/17: The Community Life Survey officially switched to the ABOS design

In 2016/17 the Community Life Survey switched to a solely ABOS design. The overall design of the 2016/17 survey remained largely unchanged from the 2015/16 survey. However, the design was modified to achieve minimum usable samples of respondents from minority ethnic groups. For full details, please see the technical report for 2016/17.

2017/18: Overall design remained largely unchanged from 2016/17

Invitations for the 2017/18 survey were sent out to 31,059 addresses, with 7,558 online questionnaires and 2,659 paper questionnaires completed over the course of the year. Full details of sampling and response are covered in detail within the technical report for 2017/18.

2018/19: Overall design remained unchanged; letter experiment conducted

Invitations for the 2018/19 survey were sent out to 31,761 addresses, with 7,902 online questionnaires and 2,725 paper questionnaires completed over the course of the year. Full details of sampling and response are covered in detail in the 2018/19 technical report.

An experiment was conducted in Q2 to test the effectiveness of a new letter design. New letters were developed following a review of best practice and wider literature and issued to half the sample in Q2. Full details of this experiment are available in the 2018/19 technical report.

The new letters were adopted for the full sample in Q3 and Q4.

2019/20: Overall design remained unchanged

Invitations for the 2019/20 survey were sent out to 31,728 addresses, with 7,849 online questionnaires and 2,394 paper questionnaires completed over the course of the year. Full details of sampling and response are covered in the technical report for 2019/20.

2020/21: Overall design remained unchanged; randomised controlled trial of the efficacy of providing paper questionnaires in the second reminder

Invitations for the 2020/21 survey were sent out to 27,568 addresses, with 8,787 online questionnaires and 2,130 paper questionnaires completed over the course of the year. Full details of sampling and response are covered in detail in the technical report for 2020/21.

The 2020-21 Community Life Survey included a randomised controlled trial of the efficacy of providing paper questionnaires in the second reminder. Findings identified that the proactive provision of paper questionnaires displaces online responses in some strata. Consequently, in 2021/22 the survey design was adapted to include two paper questionnaires in the second reminder only for (i) sampled addresses in the most deprived quintile group, and (ii) sampled addresses where every resident was expected to be aged 65 or older (based on CACI data).

Full details are available in Appendix E of the Technical Report for 2020-21.

It should be noted that fieldwork for all four quarters of the 2020/21 survey took place during the COVID-19 pandemic. The effects of the pandemic should be taken into consideration when interpreting the results of the 2020/21 survey.

2021/22: Overall design improvements, advancements in sampling techniques, a focus on accessibility, and a social capital harmonisation experiment

Invitations for the 2021/22 survey were sent out to 25,813 addresses, with 8,343 online questionnaires and 1,783 paper questionnaires completed over the course of the year. Full details of sampling and response are covered in detail in this technical report.

Advancements to the survey design

In 2021-22 two principal advancements were made to the survey design: (i) improving age-group targeting by utilising household-level data provided by data supplier CACI, and (ii) improving the efficiency of the ethnic-group targeting.

Strategic use of a third reminder

To help reduce non-response bias, a third reminder stage was introduced for a targeted subsample of addresses, mainly in deprived areas or where a younger-than-average household structure was expected.

Accessibility enhancements

The 2021/22 survey included several accessibility enhancements to improve the user experience, particularly for those completing the survey using a screen reader. These enhancements included removing, for all respondents, the requirement to click ‘continue’ before certain codes (e.g. ‘don’t know’) appear on screen. For those using a screen reader to complete the survey, an alternative question template was provided for ‘dynamic grid’ questions, which do not work with a screen reader.

Social capital harmonisation

In addition to the survey enhancements, an experiment was conducted in January 2022-June 2022 to identify whether moving to the Government Statistical Service (GSS) harmonised social capital questions would disrupt reported trends. The questions explored were STrustGen2 and SBeNeigh. Other question changes were considered for inclusion in this analysis (FrndSat1-2 and PaffLoc), but an initial assessment deemed the historic and harmonised versions to be too different to merit comparisons.

Overall analysis indicated:

  • There were small systematic differences between the two versions of STrustGen2, with the harmonised version producing slightly lower levels of trust. We recommend publishing the weighted figures for both versions alongside each other for the 2021-22 survey. If trends are reported using the harmonised version in future years of the survey, then a footnote should highlight the change in methodology and refer users to the 2021-22 report.
  • There were significant measurement differences between the two versions of SBeNeigh. Respondents were less likely to say they strongly agreed or agreed that they belonged to their neighbourhood than they were to say they felt fairly/very strongly that they belonged to their immediate neighbourhood. Based on these findings, we do not recommend treating answers to the historic and harmonised versions of SBeNeigh as comparable measures for trend analysis.

Full details are available in Appendix E.

3. Sampling

3.1 Sample design objectives

The 2021/22 Community Life Survey sample design had to achieve several objectives:

  • A responding sample size of at least 10,500 adults in England aged 16+ with the assumption that up to 500 would be edited from the dataset, leaving a total of at least 10,000;
  • A responding sample size of at least 2,000 respondents from ethnic minority groups (after editing);
  • A maximised overall effective sample size given the two constraints above.

The ‘effective’ sample size in this context reflects the statistical value of the data after weighting to compensate for unequal sampling and response probabilities. As a general rule, the more the sample needs to be weighted, the smaller the effective sample size relative to the actual responding sample size.
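
As a rough illustration, the widely used Kish approximation derives an effective sample size from the weights alone: n_eff = (Σw)² / Σw². This is a standard textbook approximation, not necessarily the exact calculation used for this survey.

```python
import numpy as np

def kish_effective_sample_size(weights):
    """Kish approximation: n_eff = (sum of weights)^2 / sum of squared weights.
    Equal weights give n_eff == n; more variable weights shrink n_eff."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

print(kish_effective_sample_size([1, 1, 1, 1]))  # 4.0: no weighting loss
print(kish_effective_sample_size([1, 1, 1, 3]))  # 3.0: unequal weights reduce it
```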

An equal probability sample of addresses would not have allowed all these objectives to be achieved, so a stratified unequal probability sample of addresses was drawn.

The sample frame was the Royal Mail Postcode Address File (PAF) which includes c.99% of all residential addresses in England.

3.2 Address sampling protocol

The sample design for the 2021-22 survey differed slightly from previous years, reflecting internal development work carried out by Kantar Public in early 2021. It comprised two steps: first, a large ‘master’ sample of addresses was drawn and additional data from CACI was appended to each address;[footnote 5] then a smaller, more tailored sub-sample was drawn for issue. The process is described below.

The master sample

A stratified master sample of 44,000 addresses in England was drawn from the PAF ‘small user’ subframe.[footnote 6] This frame was stratified by (i) level of ethnic diversity (2 strata: ‘diverse’ and ‘non-diverse’), (ii) ITL1 region (9 strata), and (iii) deprivation quintile (5 strata). In total, the frame was disaggregated into 90 strata (2 × 9 × 5) before the master sample was drawn. Within each stratum, addresses were sorted by local authority and then by postcode. A random start-point was selected within each stratum before a systematic sample of addresses was drawn with an interval suitable to obtain the target number of addresses for that stratum.

Within each ethnic diversity stratum, a uniform sampling fraction was applied across all sub-strata, but this fraction was nearly twice as high in the ‘diverse’ stratum as in the ‘non-diverse’ stratum. In total, 24,000 addresses were drawn from the ‘diverse’ stratum (55%) and 20,000 from the ‘non-diverse’ stratum (45%).

The ethnic diversity stratum was defined at the Census Output Area (OA) level. OAs were separately ranked with respect to Census 2011 population proportions coded as (i) Indian, (ii) Pakistani, (iii) Bangladeshi, (iv) black, and (v) other (not white) ethnic group (including mixed). The cumulative population coverage was calculated for each OA for each ethnic category. Any OA with a cumulative population coverage rate <=80% for any of the five ethnic categories was classified as ‘diverse’ unless >90% of the population was coded white. In total, 58,705 OAs (34%) were coded as ‘diverse’ while 112,632 OAs (66%) were coded as ‘non-diverse’.

The master sample was then augmented by CACI. For each address in the master sample, CACI added the expected number of resident adults in each of several age bands (18-24, 25-34, 35-44, 45-54, 55-64, 65-74, 75+). Although this auxiliary data is imperfect, Kantar Public’s investigations have shown that it is effective at identifying households that are mostly young or mostly old. Once this data was attached, the master sample was additionally classified by expected household age structure based on the CACI data: (i) all aged 35 or younger (7,795 addresses: 18% of the total); (ii) all aged 65 or older (7,977 addresses: 18% of the total); (iii) all other addresses (28,228 addresses: 64% of the total).

The sample for issue

The master sample was stratified slightly differently from the PAF: (i) level of ethnic diversity (2 strata: ‘diverse’ and ‘non-diverse’), (ii) deprivation quintile (5 strata), and (iii) expected household age structure (3 strata). In total, the master sample was disaggregated into 30 strata (2 × 5 × 3) before the sample for issue was drawn.

Within each master sample stratum, the addresses were sorted by local authority and then by postcode. A random start-point was selected within each master sample stratum before a systematic sample of addresses was drawn with an interval suitable to obtain the target number of addresses to issue from that stratum.
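
The within-stratum procedure (sort, random start-point, fixed interval) can be sketched as follows. This is a simplified illustration: the real frame is sorted by local authority and postcode, and the interval is set per stratum; the names here are invented.

```python
import random

def systematic_sample(addresses, n_target, seed=None):
    """Systematic random sample: sort the frame, choose a random start-point
    within the first interval, then take every interval-th address."""
    rng = random.Random(seed)
    frame = sorted(addresses)            # stand-in for the LA/postcode sort
    interval = len(frame) / n_target     # the '1 in every N' sampling interval
    start = rng.uniform(0, interval)     # random start within the first interval
    return [frame[int(start + i * interval)] for i in range(n_target)]

# Toy frame of 10,000 addresses; the survey samples from the PAF.
frame = [f"ADDR{i:05d}" for i in range(10_000)]
sample = systematic_sample(frame, n_target=250, seed=1)
print(len(sample))  # 250
```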

Every master sample stratum had a predicted conversion rate (number of completed questionnaires as a fraction of the number of addresses issued) determined by both the selected contact strategy and an analysis of Community Life Survey response data from 2018-21.

Table 3.1 summarises the contact strategy for each stratum, showing the number of mailings (three or four) and the type of each mailing: push-to-web without paper questionnaires (W) or the same but with paper questionnaires (P). The 2020-21 Community Life Survey had an embedded trial of the efficacy of providing paper questionnaires in the second reminder. A critical finding from this work was that the proactive provision of paper questionnaires displaced online responses in some strata. Consequently, two paper questionnaires were included in the second reminder only for (i) sampled addresses in the most deprived quintile group, and (ii) sampled addresses where CACI data indicated that every resident was expected to be aged 65 or older. This approach reduced costs by placing the paper questionnaires where they were most needed.

Table 3.1: Contact strategy by area deprivation × expected household age structure stratum (the same in both ethnic diversity strata)

Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived
All <=35 | WWPW | WWWW | WWWW | WWW | WWW
Other | WWPW | WWW | WWW | WWW | WWW
All >=65 | WWPW | WWP | WWP | WWP | WWP
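
The contact strategy in Table 3.1 amounts to a simple lookup from stratum to mailing sequence. A sketch, where the stratum labels are illustrative paraphrases of the table’s row and column headings:

```python
# Mailing sequence per stratum, transcribed from Table 3.1.
# 'W' = push-to-web mailing without paper questionnaires; 'P' = with paper.
# Columns run from the most deprived quintile (index 0) to the least deprived.
CONTACT_STRATEGY = {
    "all_35_or_under": ["WWPW", "WWWW", "WWWW", "WWW", "WWW"],
    "other":           ["WWPW", "WWW",  "WWW",  "WWW", "WWW"],
    "all_65_or_over":  ["WWPW", "WWP",  "WWP",  "WWP", "WWP"],
}

def contact_strategy(age_structure: str, deprivation_quintile: int) -> str:
    """Return the mailing sequence; deprivation_quintile: 1 = most deprived."""
    return CONTACT_STRATEGY[age_structure][deprivation_quintile - 1]

# Most deprived quintile, all residents expected to be 65+: four mailings,
# with paper questionnaires in the second reminder.
print(contact_strategy("all_65_or_over", 1))  # WWPW
```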

A stratified random sample of 23,357 addresses was drawn from the master sample for issue with the unissued addresses forming a reserve pool.

The sampled addresses were systematically allocated (with equal probability) to one of quarters 1, 2, 3 or 4 and then further divided into two batches to be issued six weeks apart. These batches were further subdivided into three: the first subdivision comprising 80% of what was expected to be required and the other two comprising 10% each. The intention was to issue all three subdivisions of each batch but to have the flexibility to hold back one or two of the small subdivisions if prior response rates proved higher than expected.

In the event, multiple adjustments were made to the issued sample over the course of fieldwork in reaction to the response data obtained. Reserve samples were drawn for quarter 1 batch 2 (1,185 addresses), for quarter 2 batches 1 and 2 (1,458 addresses for each batch), and for quarter 3 batches 1 and 2 (1,109 addresses for each batch). An unexpectedly strong response rate in quarter 3 necessitated the subtraction of 1,585 addresses from quarter 4 batch 2. The final number of addresses issued (26,923) was 15% greater than the originally drawn sample, although slightly fewer than were issued for the 2020-21 survey (27,568). Table 3.2 shows how many addresses were issued per stratum, while Table 3.3 shows the sampling intervals applied (i.e. 1 in every N addresses sampled).

Table 3.2 Addresses issued per stratum

(Ethnically diverse stratum)

Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived
All <=35 | 1,097 | 1,182 | 704 | 429 | 258
Other | 2,866 | 3,078 | 1,951 | 1,349 | 984
All >=65 | 519 | 386 | 294 | 213 | 215

(Ethnically non-diverse stratum)

Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived
All <=35 | 326 | 267 | 255 | 234 | 231
Other | 1,211 | 1,528 | 1,639 | 1,583 | 1,508
All >=65 | 410 | 471 | 565 | 576 | 594

Table 3.3 Sampling intervals per stratum (rounded to nearest integer): 1 in every N addresses sampled (1:N)

(Ethnically diverse stratum)

Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived
All <=35 | 1:485 | 1:508 | 1:575 | 1:563 | 1:587
Other | 1:532 | 1:452 | 1:527 | 1:580 | 1:624
All >=65 | 1:544 | 1:599 | 1:623 | 1:792 | 1:720

(Ethnically non-diverse stratum)

Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived
All <=35 | 1:1,148 | 1:1,257 | 1:1,319 | 1:1,486 | 1:1,468
Other | 1:1,338 | 1:1,134 | 1:1,290 | 1:1,437 | 1:1,545
All >=65 | 1:1,181 | 1:1,336 | 1:1,529 | 1:1,701 | 1:1,648

3.3 Within-address sampling protocol

At each address, all permanently resident adults aged 16+ were invited to take part in the survey, although only between two and four login codes were included in the letter (more could be requested for larger households). The number of login codes was one more than the CACI-expected total number of adults aged 18+, limited to a maximum of four. Of the issued addresses, 7,468 (28%) were provided with two login codes, 16,292 (61%) with three, and 3,163 (12%) with four.
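
The allocation rule above can be sketched as a simple bounded formula. The lower bound of two is an assumption consistent with the stated two-to-four range, not an explicit rule from the report.

```python
def login_codes(expected_adults_18_plus: int) -> int:
    """Login codes per invitation letter: one more than the CACI-expected
    number of adults aged 18+, kept within the stated 2-4 range.
    (The lower bound of two is an assumption consistent with the text.)"""
    return max(2, min(4, expected_adults_18_plus + 1))

for n in (1, 2, 3, 5):
    print(n, "->", login_codes(n))  # 1 -> 2, 2 -> 3, 3 -> 4, 5 -> 4
```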

It is worth noting that a small fraction (<3%) of addresses in England contain more than one household. There is no household selection stage so, at these addresses, the selected household is the one that picks up the invitation letter.

4. Questionnaire

4.1 Overview

The questionnaire for the 2021/22 survey was intended to provide comparable data to previous years, so the majority of questions remained the same as in previous years. However, several adaptations were made to the survey script to improve the user experience, particularly for respondents completing the survey using a screen reader.

4.2 Questionnaire development

The online questionnaire was reviewed and updated in September 2021 to:

  • Improve the user experience, particularly for respondents completing the survey using a screen reader
  • Reflect current policy priorities
  • Align the socio-demographic questions with the latest GSS Harmonisation Guidelines, where appropriate

As a result, the following changes were made to the online version of the questionnaire.

Changes to improve the user experience

The online version of the Community Life Survey was originally designed to emulate the interviewer-administered design, and typically used a ‘second screen’ design to display ‘don’t know’ and ‘refused’ options. However, this design is not compatible with screen readers, and anecdotal evidence from usability testing suggests that people do not know the option is available to them, even when prompted on screen, and that it can cause confusion when it appears (respondents may think they are being asked the same question again). With this in mind, in 2021/22 ‘don’t know’ and refusal options were added to the survey on a case-by-case basis at selected questions where they were deemed appropriate. As this differs from the approach used in previous surveys, the proportion of respondents selecting these responses has increased.

In addition to moving away from the ‘second screen’ design, a number of other accessibility improvements were made.

  • Abbreviations and acronyms such as ‘e.g.’ and ‘etc.’ were replaced with more user-friendly text and dashes (-) were replaced with “to” to improve accessibility.
  • Alternative question displays were provided for questions currently displayed in a grid format for respondents using a screen reader. Grid style questions can be particularly disorienting for screen reader users, so it is recommended that each statement is displayed on a separate page, as this reduces clutter and allows respondents to focus on one statement at a time. To facilitate this, respondents were asked at the start of the survey if they were using a screen reader to complete the survey.

Questionnaire changes

ScreenReader: A new question was added to record whether the respondent intends to complete the survey using a screen reader

SEX/SEXSR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added. The third answer code was amended from ‘other’ to ‘Identify in some other way’

AGEIFSR/AGEIF3SR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added

NCHILD2: A new question was added to collect the number of the respondent’s own children aged under 16 currently living in their household, including any children they have formally adopted or fostered

BCAGE3: A new question was added to collect the age/age range of the respondent’s child(ren). Previously the question covered all children living at the address

FrndRel1SR-FrndRel4SR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added

FrndSat1SR-FrndSat2SR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added

Assets2: Some additional text was added to the question to clarify the meaning of ‘Place of Worship’

CharServSR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added

FGroupSR: As this question is displayed in a grid format, respondent-specific, screen reader-accessible questions were added

VYStop: Some pre-codes relating to Coronavirus were removed from the question

LocWant: The routing at this question was amended to include respondents who had taken part in any activities and were aware of other activities they had not taken part in. Previously this question was not asked if the respondent had taken part in any activities, even if they were aware of other activities taking place in their local area that they had not taken part in.

Sld: The question wording was updated to match the sexual orientation question used on the census

QHealthM, QHealthW, QHealthD: Several new questions on mental health were added to the survey

RelBus, Looked, LKTime, Whynlk: Adopted recommendation for GSS harmonisation

IncomePeriod / IncomeA / IncomeM /IncomeW / IncomeXPeriod / IncomeXA / IncomeXM / IncomeXW: An additional question was asked up front asking the respondent whether they would like to answer based on weekly, monthly or annual income

The paper questionnaire was also reviewed, with the following amendments being made across both the web and paper questionnaires:

MARS/Q6: answer codes were re-ordered

STRUST/Q21: Different text now in bold for emphasis

VBarr/Q38: Some pre-codes relating to Coronavirus removed

HLLord/Q57: Adopted recommendation for GSS harmonisation

Relig/Q60: Adopted recommendation for GSS harmonisation

DIII/Q61: Adopted recommendation for GSS harmonisation

4.3 Online Questionnaire Content 2021/22

The final Community Life Online Survey 2021/22 consisted of the following modules:

Demographics – details of the household, including the number of adults and children, the gender and age of people within the household, and details of the relationships within the household.

Identity and Social Networks – basic demographic information on the respondent’s friends, and how often and how they communicate with friends and family that they do not live with.

Your Community – the respondent’s sense of belonging to their immediate neighbourhood, local area, and Britain, as well as their relationships with neighbours and their satisfaction with the local area.

Civic Engagement – involvement in local affairs, community decision making through formal roles or groups, and their ability to influence decisions affecting both the local area and Britain.

Volunteering – involvement with groups, clubs, or organisations, giving help through these groups (known as formal volunteering), volunteering through work, previous lapsed volunteering, giving help as an individual to someone who is not a relative (known as informal volunteering), charitable giving in the past four weeks, and charity use in the past twelve months.

Social Action – awareness of local people getting involved in their local area to either: set up a new service or amenity, stop the closure of a service or amenity, stop something happening, helping to run a local service or amenity, help to organise a community event such as a street party, or helping with any other issues affecting the local area; whether the respondent is personally involved in any of these activities, what they do, how they became involved, why they became involved, and, if they are not involved, why they are not.

Subjective Wellbeing and Loneliness – the respondent’s feelings on aspects of their life, including the extent to which they feel the things they do in life are worthwhile, levels of happiness and satisfaction, and feelings of anxiety and loneliness.

Demographics Part Two – covers other demographic information such as the general health of the respondent and their citizenship, employment status, education, and income. This section ends by asking if the respondent would be happy to be recontacted for any follow-up research.

A copy of the online questionnaire can be found in Appendix A.

4.4 Paper Questionnaire Content 2021/22

The paper version of the questionnaire covered a smaller subset of questions than the online survey, though still covered the same subject areas outlined in section 4.3. The paper questionnaire was reduced in length as the time taken to complete the online survey averaged around half an hour, which was not deemed appropriate for a paper questionnaire.

The question wording used in both the online and the paper versions of the questionnaire was the same. In total, around 50% of the questions included in the online questionnaire were included in the paper questionnaire.

In the paper questionnaire, “Don’t know” and/or “Prefer not to say” answer codes were included where they were also present for the equivalent online question.

A copy of the paper questionnaire can be found in Appendix B.

5. Fieldwork

5.1 Introduction

Fieldwork for the Community Life Survey 2021/22 was conducted between October 2021 and September 2022, with samples issued on a quarterly basis. Each quarter’s sample was split into two batches, the first of which began at the start of the quarter, and the second began midway through the quarter. The specific fieldwork dates for each quarter are shown below in table 5.1.

Table 5.1: Fieldwork dates

Quarter Fieldwork start Fieldwork end
Quarter 1 11 October 2021 31 December 2021
Quarter 2 10 January 2022 31 March 2022
Quarter 3 07 April 2022 30 June 2022
Quarter 4 07 July 2022 30 September 2022

The paper questionnaire was made available to around 40% of respondents at the second reminder stage based on the response probability strata as described in section 3. The paper questionnaire was also available on request to all respondents who preferred to complete the survey on paper or who were unable to complete online.

5.2 Contact procedures

5.2.1 Online letters

All sampled addresses were sent a letter in a white envelope with an On Her Majesty’s Service logo. Between 8 and 30 September 2022, on the advice of No. 10 and DCMS, we adopted an interim measure of using plain envelopes following the passing of Her Majesty Queen Elizabeth II.

The letter invited up to four people aged 16 or over in the household to take part in the survey, directed respondents to www.commlife.co.uk via URL or QR code and provided information on how to log in to the survey. Up to four sets of unique reference numbers and passwords were provided to each address for the respondents to log in with.

The letter informed the resident(s) that they would be able to claim a £10 shopping voucher after completing the survey, as a thank you for taking part (see section 5.4 for details of incentives). The letter also provided an email address and freephone number for resident(s) to contact Kantar Public in case they wanted more information regarding the survey or wanted to request a postal questionnaire.

The back of the letter contained important information including the purpose of the survey, how the addresses were selected, data protection, the voluntary nature of the survey and the importance of taking part. It also included information for those respondents who wished to take part via postal questionnaire, informing them that, if requested, a paper version of the survey would be posted to them along with a pre-paid envelope to allow it to be returned at no extra cost.

All non-responding households [footnote 7] were sent up to two reminder letters, at the end of the second and fourth weeks of fieldwork for each batch. A targeted third reminder letter was sent to households for which, based on Kantar Public’s ABOS field data from previous studies, it was deemed likely to have the most significant impact (mainly deprived areas and addresses with a younger household structure). The information contained in the reminder letters was similar to the invitation letters, with slightly modified messaging to reflect each reminder stage.

As well as the online survey, respondents were given the option to complete a paper questionnaire, which consisted of an abridged version of the online survey. Each letter informed respondents that they could request a paper questionnaire by contacting Kantar Public using the email address or freephone telephone number provided.

In addition, some addresses received up to two paper questionnaires with the second reminder letter. Paper questionnaires were pro-actively provided to (i) sampled addresses in the most deprived quintile group, and (ii) sampled addresses where it was expected that every resident would be aged 65 or older (based on CACI data).

The specific dates for each letter dispatch over the 2021/22 survey year are outlined below in table 5.3.

Table 5.3: Letter dispatch dates

Quarter Batch Initial letter First reminder letter Second reminder letter Third reminder letter
Quarter 1 1 11 October 2021 22 October 2021 03 November 2021 17 November 2021
  2 08 November 2021 19 November 2021 01 December 2021 14 December 2021
Quarter 2 1 10 January 2022 24 January 2022 09 February 2022 21 February 2022
  2 10 February 2022 23 February 2022 10 March 2022 21 March 2022
Quarter 3 1 07 April 2022 12 April 2022 09 May 2022 19 May 2022
  2 10 May 2022 24 May 2022 07 June 2022 21 June 2022
Quarter 4 1 07 July 2022 21 July 2022 04 August 2022 18 August 2022
  2 04 August 2022 18 August 2022 01 September 2022 20 September 2022

Copies of the online letters used during 2021/22 are available in Appendix C.

5.2.2 Confidentiality

Each of the letters assured the respondent of confidentiality, by answering the question “Is this survey confidential?” with the following:

“Yes. The information that is collected will only be used for research and statistical purposes. Your contact details are kept separate from your answers and will not be passed on to any other organisation outside of Kantar Public or supplier organisations who assist in running the survey. Data from the survey will be shared with DCMS for the purpose of producing and publishing statistics. The data shared with DCMS won’t contain your name or contact details, and no individual or household will be identifiable from the results. Your answers will be combined with others that take part in the survey. You will not receive any ‘junk mail’ as a result of taking part. For more information about how we keep your data safe, you can visit www.commlife.co.uk”

5.3 Fieldwork figures

The next section outlines the fieldwork figures and response rates achieved on the 2021/22 survey. Figures from the online survey are outlined first, followed by the paper figures, and then both modes combined.

5.3.1 Online fieldwork

When discussing fieldwork figures in this section, response rates are referred to in two different ways:

Household response rate – This is the percentage of households contacted as part of the survey in which at least one questionnaire was completed.

Individual response rate – This is the estimated response rate amongst all adults that were eligible to complete the survey.

The target number of completed questionnaires required on the online survey over the 2021/22 survey year was 7,900, equating to 1,975 per quarter. In total, 25,813 addresses were sampled,[footnote 8] from which 8,343 interviews were achieved online, after 459 had been removed following validation checks.[footnote 9] At least one online interview was completed in 5,901 households, which represented an online household response rate of 22.86%.

In an online survey of this nature, no information is known about the reason for non-response in each individual household. However, it can be assumed that 8% of the addresses in the sample were not residential and were therefore ineligible to complete the survey.[footnote 10] Once deadwood[footnote 11] addresses are accounted for, the final online household response rate was 24.85%.

The expected number of eligible individuals per residential address was 1.89 on average, so the total number of eligible adults sampled was estimated at 44,884. The online survey was completed by 8,343 people, indicating an online individual response rate of 18.59%.
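The response-rate arithmetic in this section can be reproduced directly from the figures quoted above; a short sketch (the 8% deadwood rate and the 1.89 adults-per-address factor are the survey's own assumptions, not derived here):

```python
# Reproduces the section 5.3.1 response-rate arithmetic using the figures
# quoted above; the deadwood rate and adults-per-address factor are the
# survey's own assumptions.

DEADWOOD_RATE = 0.08        # assumed share of non-residential addresses
ADULTS_PER_ADDRESS = 1.89   # expected eligible adults per residential address

sampled_addresses = 25_813
responding_households = 5_901   # households with at least one online interview
online_completes = 8_343

# Household response rate, before and after excluding deadwood addresses.
hh_rate_raw = responding_households / sampled_addresses
eligible_addresses = sampled_addresses * (1 - DEADWOOD_RATE)
hh_rate_eligible = responding_households / eligible_addresses

# Individual response rate among the estimated pool of eligible adults.
eligible_adults = eligible_addresses * ADULTS_PER_ADDRESS
individual_rate = online_completes / eligible_adults

print(f"{hh_rate_raw:.2%} {hh_rate_eligible:.2%} {individual_rate:.2%}")
# → 22.86% 24.85% 18.59%
```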

The full breakdown of the online fieldwork figures and response rates by quarter are available in table 5.4.

Table 5.4: Online response rates by quarter

Quarter No. of sampled addresses No. of completed questionnaires No. households completed Household response rate (excl. deadwood) Individual response rate (excl. deadwood)
Quarter 1 7,025 1,935 1,398 21.63% 15.84%
Quarter 2 8,755 2,454 1,769 21.96% 16.12%
Quarter 3 6,947 2,684 1,861 29.11% 22.22%
Quarter 4 3,086 1,270 873 30.75% 23.66%
Total 25,813 8,343 5,901 24.85% 18.59%

5.3.2 Paper fieldwork

Over the course of the 2021/22 survey year, 346 paper questionnaires were requested across 256 households. This represented about 1% of the overall sampled households. Paper questionnaires were returned from 205 households, giving a household response rate of 80.07% amongst those who requested a paper version of the questionnaire. The number of paper questionnaires returned over the survey year, including both those requested by respondents and those included within the second reminder, is shown in table 5.5.

Table 5.5: Number of paper questionnaires returned by quarter

Quarter No. of paper questionnaires returned by quarter
Quarter 1 414
Quarter 2 518
Quarter 3 581
Quarter 4 270
Total 1,783

5.3.3 Combined fieldwork figures

By combining the 8,343 completed online surveys and the 1,783 returned paper questionnaires, the total number of interviews completed for the 2021/22 survey stands at 10,126 interviews. The combined household response rate, including online and paper interviews, therefore reached 27.39% and after accounting for deadwood addresses, the overall household response rate was 29.78%. The overall individual response rate, after accounting for deadwood, was 22.56%.

The overall fieldwork figures, including online and paper interviews, are broken down by quarter in table 5.6.

Table 5.6: Combined online and paper fieldwork figures by quarter

Quarter No. of sampled addresses No. of interviews achieved – online + paper No. households completed Household response rate (excl. deadwood) Individual response rate (excl. deadwood)
Quarter 1 7,025 2,349 1,681 26.01% 19.23%
Quarter 2 8,755 2,972 2,117 26.28% 19.52%
Quarter 3 6,947 3,265 2,232 34.92% 27.03%
Quarter 4 3,086 1,540 1,042 36.70% 28.70%
Total 25,813 10,126 7,072 29.78% 22.56%

5.3.4 Combined fieldwork figures – weighted

Due to the sampling approach, which targets certain ethnic groups,[footnote 12] addresses with a lower than average expected response probability were oversampled. As a result, the response rate is not population-representative. However, weighting can rectify this, resulting in a population-representative individual response rate of 18.80% for online only, and an overall (online or paper) individual response rate of 23.40%.

5.4 Incentive system

All respondents that completed the Community Life Survey were given a £10 shopping voucher as a thank you for taking part.

5.4.1 Online incentives

The £10 incentive available to respondents who completed the survey online comprised either online vouchers, which were emailed to respondents within 24 hours, or gift cards, which were sent in the post and arrived within one week of the order. Online survey respondents could choose which voucher they wanted to receive from a choice of up to thirteen different providers.

5.4.2 Paper incentives

Respondents who returned the paper questionnaire were also provided with a £10 shopping voucher. This voucher was sent in the post and could be used at a variety of high street stores. Once the completed questionnaire was returned by the respondent, vouchers were posted to them within five working days.

5.5 Survey length

The median completion length of the online surveys, with outliers excluded, was 24 minutes and 10 seconds, and the mean was 30 minutes and 41 seconds.[footnote 13] This is based on full surveys and does not include partial completions. The median completion length of the 2021/22 online survey was roughly 5 minutes shorter than the median length of the 2020/21 online survey.

5.6 ‘Micro batching’

To enable a faster response to changes in fieldwork performance, a ‘micro batching’ system was adopted for each quarter. Micro batching gave us the ability to quickly adjust the sample size between 80% and 140% of the standard batch size as required. In quarters 2 and 3, additional batches were used to compensate for the underperformance in quarter 1. In quarter 4, fewer batches were used to account for the overperformance in quarter 3.

The allocation of sample to the micro batches is outlined below.

Micro batch Type % of sample Cumulative total sample
Batch .1 Main 80% 80%
Batch .2 Reserve 10% 90%
Batch .3 Reserve 10% 100%
Batch .4 Reserve 10% 110%
Batch .5 Reserve 10% 120%
Batch .6 Reserve 10% 130%
Batch .7 Reserve 10% 140%

The final allocation of micro batches for the 2021/22 survey are outlined below in figure 5.2.

Figure 5.2: Allocation of micro batches in the 2021/22 survey

Batch .1 (80%) Batch .2 (10%) Batch .3 (10%) Batch .4 (10%)
Q1 Batch 1 Batch 11.1 Batch 11.2 Batch 11.3 Not used
Q1 Batch 2 Batch 12.1 Batch 12.2 Batch 12.3 Batch 12.4
Q2 Batch 1 Batch 21.1 Batch 21.2 Batch 21.3 Batch 21.4
Q2 Batch 2 Batch 22.1 Batch 22.2 Batch 22.3 Batch 22.4
Q3 Batch 1 Batch 31.1 Batch 31.2 Batch 31.3 Batch 31.4
Q3 Batch 2 Batch 32.1 Batch 32.2 Batch 32.3 Not used
Q4 Batch 1 Batch 41.1 Not used Not used Not used
Q4 Batch 2 Batch 42.1 Not used Not used Not used

6. Data Processing

6.1 Data management

Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data was collected via the web script and, as such, was much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.

For the final outputs, both sets of interview data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.

6.2 Partial completes

Online respondents can exit the survey at any time before they submit their responses and, while they can return to complete the survey at a later date, some chose not to do so.

Equally, respondents completing the paper questionnaire occasionally leave part of the questionnaire blank, for example if they do not wish to answer a particular question or section of the questionnaire.

Partial data can still be useful, providing respondents have answered the substantive questions in the survey. These cases are referred to as usable partial interviews.

Survey responses were checked at several stages to ensure that only usable partial interviews were included. Upon receipt of returned paper questionnaires, the booking-in team removed obviously blank ones. Following this, during data processing, rules were set for the paper and online surveys to ensure that respondents had provided sufficient data. For the online survey, respondents had to reach a certain point in the questionnaire (just after the wellbeing questions) for their data to count as valid. Paper data was judged complete if the respondent answered at least 50% of the questions.
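As a minimal sketch, the usable-partial rules described above might look like the following; the paper questionnaire length and the checkpoint flag are illustrative assumptions, not the production values:

```python
# Hedged sketch of the usable-partial rules: an online case counts once the
# respondent passes the checkpoint just after the wellbeing questions, and a
# paper return counts if at least half of the questions were answered.

TOTAL_PAPER_QUESTIONS = 60  # hypothetical paper questionnaire length


def online_usable(reached_wellbeing_checkpoint: bool) -> bool:
    """Online data is valid once the respondent reaches the checkpoint."""
    return reached_wellbeing_checkpoint


def paper_usable(answered: int, total: int = TOTAL_PAPER_QUESTIONS) -> bool:
    """Paper data is judged complete at a 50% answer threshold."""
    return answered / total >= 0.5


print(paper_usable(25), paper_usable(30))
# → False True
```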

6.3 Standard paper edits

With paper questionnaires, there are a number of completion errors in the data that need to be resolved. These errors generally arise for the following reasons:

  • Cases where the individual selects more than one response to a single coded question

  • Cases where individuals can select more than one response but select two conflicting answers, such as ‘none of these’ alongside a valid survey response

  • Cases where responses are left blank even though the respondent should have answered the question

  • Cases where the individual fails to select an answer for a filter question but then provides an answer for subsequent questions relating to the filter question.

In these situations, responses were coded as missing (-8 Don’t know).

6.4 Data Quality

With interviewer-based surveys, we can be confident that almost all the data is collected in a controlled manner and from the right individual.

With most self-completion survey methods, there is no interviewer to do this work, so it must be accomplished via other means. With that in mind, an algorithm was used to validate responses post-fieldwork.

The algorithm utilises relevant classic indicators of proxy, careless or fraudulent completion including (i) inconsistencies in household data when multiple completed questionnaires have been received from the same household, (ii) use of the same email address by multiple respondents when providing the necessary details to receive the incentive, (iii) suspiciously short completion times, and (iv) excessive missing data rates.

Other indicators such as flat-lining through question sets with the same response codes were not included as the questionnaire uses very few grid style questions.

This approach led to the removal of about 4.5% of cases from the 2021/22 Community Life Survey, a rate low enough for us to be largely confident of the data’s veracity.
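A sketch of how indicators (ii) to (iv) might be combined into case-level flags; the thresholds and field names are invented for illustration, and indicator (i) is omitted because it requires household-level linkage:

```python
# Illustrative post-fieldwork validation sketch; the cut-offs below are
# assumptions, not the values used on the survey.
from collections import Counter

MIN_MINUTES = 8      # assumed cut-off for suspiciously short completions
MAX_MISSING = 0.4    # assumed cut-off for excessive missing data


def flag_cases(cases):
    """Return ids of cases tripping any of indicators (ii)-(iv)."""
    email_counts = Counter(c["email"] for c in cases if c["email"])
    flagged = set()
    for c in cases:
        if c["email"] and email_counts[c["email"]] > 1:   # (ii) shared email
            flagged.add(c["id"])
        if c["minutes"] < MIN_MINUTES:                    # (iii) too fast
            flagged.add(c["id"])
        if c["missing_rate"] > MAX_MISSING:               # (iv) missing data
            flagged.add(c["id"])
    return flagged


cases = [
    {"id": 1, "email": "a@x.com", "minutes": 24, "missing_rate": 0.05},
    {"id": 2, "email": "a@x.com", "minutes": 22, "missing_rate": 0.10},
    {"id": 3, "email": "b@x.com", "minutes": 3, "missing_rate": 0.02},
]
print(sorted(flag_cases(cases)))
# → [1, 2, 3]
```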

6.5 Coding

Post-interview coding was undertaken by members of the Kantar Public coding department. The code frames were set up to match those used in previous survey years. The coding department coded verbatim responses, recorded for fully open questions and ‘other specify’ questions, as well as occupation classifications.

6.6 Occupation and socio-economic class

Occupation details were collected for the respondent and were coded according to the Standard Occupational Classifications for 2010 and 2020. This was carried out by coders at Kantar Public using the computer-assisted coding software CASCOT. NS-SEC (2020) was also derived from the occupation details.

6.7 Derived variables

A list of the main derived variables is provided in Appendix D.

The following geo-demographic variables were added to the data:

  • Region (formerly Government Office Region)
  • Urban/rural indicator
  • Percentage of households in the ward headed by someone from a non-white ethnic minority group
  • Inner city PSU indicator
  • Police Force Area
  • ACORN classification
  • ONS ward classification
  • Health board
  • Primary Care Organisation
  • LSOA area
  • ONS district level classification
  • Output area classification
  • Indices of multiple deprivation quintile
  • Minority Ethnic Density
  • Index of Multiple Deprivation
  • Income deprivation for England
  • Employment deprivation for England
  • Health deprivation for England
  • Education, Skills and Training deprivation for England
  • Barriers to housing and services deprivation for England
  • Crime and disorder deprivation for England
  • Living environment deprivation for England

6.8 Data outputs

The Department for Digital, Culture, Media and Sport received a full de-identified cumulative SPSS dataset including derived, geo-demographic and weighting variables at the end of the survey year. Non-disclosive data for the 2021/22 online survey will be made available to download through the UK Data Service in spring 2023.

7. Weighting

The Community Life Survey data has been weighted to compensate for variations in sampling probability and also to partially compensate for variations in response probability within the population. A weight has been produced for use with data collected from both the online and paper questionnaires and another weight has been produced for use with data collected only from the online questionnaire. In both cases, the inferential population is ‘all adults in England aged 16+ and living in a private residence’.

Step 1 was to calculate an address sampling weight. This is equal to one divided by the address sampling probability. This sampling probability varied between strata but did not vary within strata.

Step 2a was to model the probability of obtaining any completed questionnaires per sampled address. This was calculated as a function of the variables used to construct sample strata:

  • Ethnic diversity;
  • IMD quintile;
  • Expected household age structure;
  • Region.

Step 2b was to model the expected number of completed questionnaires per sampled address, given at least one had been completed. This model used the same variables as the Step 2a model but also included household-level data provided by the respondent(s):[footnote 14]

  • number of adults in the household;
  • number of children in the household;
  • housing tenure.

The step 2a model was a binary logistic regression model; the step 2b model was a count (Poisson) regression model.

Based on these linked models, the expected number of completed questionnaires (online or paper) was estimated for each sampled address. An address response weight was calculated equal to:

1/expected number of completed questionnaires

For online-only data, this formula was the same except that the expected number of completed questionnaires was replaced by the expected number of completed online questionnaires.

Step 3 was to take the product of the weights produced from steps 1 and 2a/b and use this as a base weight for calibrating the sample to population totals. Because step 2a/b produces a different address response weight for online/paper data than it does for online-only data, there are two base weights – one for online/paper data and one for online-only data. Consequently, step 3 produces two calibration weights as well.
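For one hypothetical address, steps 1 to 3 combine as follows; the sampling probability and the two model outputs are invented numbers, since the fitted models themselves are not published:

```python
# Sketch of the base-weight construction for a single address. The step 2a
# logistic model would supply p_any and the step 2b Poisson model
# mu_given_any; both values here are invented for illustration.

sampling_prob = 1 / 500    # assumed stratum address sampling probability
p_any = 0.25               # P(at least one completed questionnaire) - step 2a
mu_given_any = 1.4         # E[completes | at least one complete] - step 2b

address_sampling_weight = 1 / sampling_prob                       # step 1
expected_completes = p_any * mu_given_any                         # linked 2a/2b
address_response_weight = 1 / expected_completes                  # step 2
base_weight = address_sampling_weight * address_response_weight   # step 3 input

print(round(base_weight, 1))
# → 1428.6
```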

The data was calibrated to ensure the weighted sample matched population totals for seven dimensions:

  • sex*age group;
  • degree level education*age group;
  • housing tenure;
  • region;
  • ethnic group;
  • number of co-resident adults;
  • internet usage*age group.

For the online-only calibration weight, internet usage*age group was replaced with working status*age group. The paper questionnaire did not capture working status, so it could not be included in the calibration matrix for the combined online and paper data.

The population totals were drawn from the ONS Labour Force Survey (LFS) of April to June 2022, which is itself weighted to ONS population estimates for England (for sex, age and region; total population aged 16+ = 45,284,274). The exception was the seventh dimension, internet usage by age group, for which the distributional data comes from the January to March 2021 LFS. Historically, internet usage was only collected in the January to March edition of the LFS, but this has now been discontinued. The research team decided it was fairly ‘safe’ to use the 2021 data for a survey carried out in 2021/22, but a decision will need to be made about this variable for future editions of the Community Life Survey.
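The step 3 calibration is a raking adjustment: weights are iteratively scaled so that weighted sample totals match the population totals on every dimension at once. A toy two-dimension version, with invented categories and totals (the real calibration uses the seven dimensions and LFS totals described above):

```python
# Toy raking (iterative proportional fitting) sketch of calibration weighting.

def rake(weights, dims, targets, iters=50):
    """Scale weights until weighted totals match each dimension's targets."""
    w = list(weights)
    for _ in range(iters):
        for dim, target in zip(dims, targets):
            totals = {}
            for wi, cat in zip(w, dim):           # current weighted totals
                totals[cat] = totals.get(cat, 0.0) + wi
            # Rescale each case by target/current for its category.
            w = [wi * target[cat] / totals[cat] for wi, cat in zip(w, dim)]
    return w


weights = [1.0, 1.0, 1.0, 1.0]
sex = ["m", "m", "f", "f"]
region = ["n", "s", "n", "s"]
targets = [{"m": 10, "f": 10}, {"n": 12, "s": 8}]

calibrated = rake(weights, [sex, region], targets)
print([round(x, 2) for x in calibrated])
# → [6.0, 4.0, 6.0, 4.0]
```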

Table 7.1: Population by age band and sex (online/paper and online-only weight)

Age band LFS April to June 2022 total (males) LFS April to June 2022 total (females)
16-24 2 932 714 2 811 450
25-34 3 810 994 3 726 481
35-44 3 605 900 3 660 000
45-54 3 624 801 3 722 678
55-64 3 469 958 3 603 045
65-74 2 695 396 2 909 005
75+ 2 104 779 2 607 073

Table 7.2: Population within region (online/paper and online-only weight)

Region LFS April to June 2022 total aged 16+
North East 2 151 461
North West 5 847 863
Yorkshire & the Humber 4 401 533
East Midlands 3 894 944
West Midlands 4 725 149
East 5 041 260
London 7 237 907
South East 7 390 886
South West 4 593 271

Table 7.3: Highest educational level crossed by age (16-64 only) (online/paper and online-only weight)

Age group LFS April to June 2022 total with a Degree LFS April to June 2022 total with no Degree
16-24 936 146 4 808 018  
25-34 3 833 926 3 703 549
35-44 3 642 575 3 623 325
45-54 2 776 153 4 571 326
55-64 1 963 562 5 109 441

Table 7.4: Housing tenure (online/paper and online-only weight)

Housing tenure LFS April to June 2022 total aged 16+
Living in property owned outright 14 905 182
Living in property owned with mortgage 15 255 161
Living in property with other tenure 15 123 931

Table 7.5: Number of adults in household (online/paper and online-only weight)

Number of adults in household LFS April to June 2022 total aged 16+
1 7 757 272
2 23 848 050
3+ 13 678 952

Table 7.6: Ethnic group (online/paper and online-only weight)

Ethnic group LFS April to June 2022 total aged 16+
White 38 467 092
Indian 1 430 647
Pakistani/Bangladeshi 1 159 262
Black 1 617 683
Other 2 609 590

Table 7.7: Internet usage crossed by age (online/paper weight)

Internet usage/age LFS April to June 2022 total based on Jan-Mar 2021 distributional data aged 16+
Aged 16-64 34 968 021
Aged 65-74; some internet usage 5 083 270
Aged 65-74; no internet usage 521 131  
Aged 75+; some internet usage 3 106 073
Aged 75+; no internet usage 1 605 779

Table 7.8: Working status crossed by age (online-only weight)

Age group LFS April to June 2022 total working LFS April to June 2022 total not working
16-24 3,020,443 2,723,721
25-34 6,342,325 1,195,150
35-44 6,279,834 986,066
45-54 6,149,018 1,198,461
55-64 4,577,176 2,495,827
65+ 1,192,567 9,123,686

One way of assessing the impact of weighting the data is to estimate the weighting efficiency for each subpopulation in the weighting matrix. In effect, this weighting efficiency illustrates the impact of the other weighting dimensions and reflects the amount of weighting that is required for each subpopulation. The more weighting that is required the less representative the unweighted responding sample is likely to be. This will be partly due to variations in sampling probability within each subpopulation but also due to variations in response probability.

Weighting efficiency is equal to one divided by the design effect due to weighting. The design effect due to weighting is equal to 1 + (sg/mg)², where sg is the standard deviation of the weights within subpopulation g and mg is the mean weight within subpopulation g. Weighting efficiency is also equal to the effective sample size divided by the actual sample size, where the effective sample size accounts only for the weighting and not for other design aspects such as sample stratification and clustering. The overall weighting efficiency was 79%. It was the same for the online-only weight. Both of these weighting efficiencies are significantly higher than they were for the 2020/21 survey (72% and 71% respectively), probably reflecting the more targeted sampling/data collection design.[footnote 15] By subpopulation, 2021/22 weighting efficiencies varied from 76% to 90%.
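The efficiency formula can be checked numerically: the coefficient-of-variation route and the effective-sample-size route give the same answer. The weights below are invented for illustration:

```python
# Numerical check of the weighting-efficiency formulas above, using a small
# invented set of weights.
from statistics import mean, pstdev

weights = [1.0, 1.2, 0.8, 1.5, 0.9, 1.1]

m = mean(weights)
s = pstdev(weights)              # population standard deviation of weights
deff_w = 1 + (s / m) ** 2        # design effect due to weighting
efficiency = 1 / deff_w          # weighting efficiency

# Equivalent route: effective sample size over actual sample size.
n = len(weights)
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
assert abs(efficiency - n_eff / n) < 1e-9

print(f"{efficiency:.0%}")
# → 96%
```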

Table 7.10: Weighting efficiencies for marginal subpopulations defined in the weighting matrix

Age band Gender
  Males Females
16-24 85% 82%
25-34 84% 84%
35-44 85% 86%
45-54 84% 85%
55-64 85% 85%
65-74 88% 88%
75+ 90% 89%
Region    
North East 80%  
North West 78%  
Yorkshire & the Humber 81%  
East Midlands 80%  
West Midlands 76%  
East 78%  
London 86%  
South East 79%  
South West 83%  
     
Age group Degree No degree
16-24 83% 81%
25-34 83% 82%
35-44 84% 83%
45-54 84% 83%
55-64 85% 85%
     
Housing tenure    
Living in property owned outright 84%  
Living in property owned with mortgage 82%  
Living in property with other tenure 79%  
     
Number of resident adults (16+)    
1 86%  
2 83%  
3+ 80%  
Ethnic group    
White 79%  
Indian 81%  
Pakistani/Bangladeshi 84%  
Black 81%  
Other 81%  
Internet usage/age    
Aged 16-64 79%  
Aged 65-74; some internet usage 88%  
Aged 65-74; no internet usage 85%  
Aged 75+; some internet usage 89%  
Aged 75+; no internet usage 88%  

8. Standard Errors

8.1 Introduction

The tables in this chapter show estimates of standard errors for key variables within the survey.

8.2 Sources of error in surveys

Survey results are subject to various sources of error. Error can be divided into two types: systematic and random error.

8.2.1 Systematic error

Systematic error or bias covers those sources of error that will not average to zero over repeats of the survey. Bias may occur, for example, if a part of the population is excluded from the sampling frame or because respondents to the survey are different from non-respondents with respect to the survey variables. It may also occur if the instrument used to measure a population characteristic is imperfect. Substantial efforts have been made to avoid such systematic errors. For example, the sample has been drawn at random from a comprehensive frame, two modes and multiple reminders have been used to encourage response, and all elements of the questionnaire were thoroughly tested before being used.

8.2.2 Random error

Random error is always present to some extent in survey measurement. If a survey is repeated multiple times minor differences will be present each time due to chance. Over multiple repeats of the same survey these errors will average to zero. The most important component of random error is sampling error, which is the error that arises because the estimate is based on a random sample rather than a full census of the population. The results obtained for a single sample may by chance vary from the true values for the population, but the error would be expected to average to zero over a large number of samples. The amount of between-sample variation depends on both the size of the sample and the sample design. The impact of this random variation is reflected in the standard errors presented here.

Random error may also follow from other sources such as variations in respondents’ interpretation of the questions, or variations in the way different interviewers ask questions. Efforts are made to minimise these effects through pilot work and interviewer training.

8.3 Standard errors for complex sample designs

The Community Life Survey employs a systematic sample design, and the data is both clustered by address and weighted to compensate for non-response bias. These features will impact upon the standard errors for each survey estimate in a unique way. Generally speaking, systematic sampling will reduce standard errors while data clustering and weighting will increase them. If the complex sample design is ignored, the standard errors will be wrong and usually too narrow.

The standard errors quoted below have been estimated using the SPSS Complex Samples module, which employs a Taylor series expansion method. The tables include a ‘design factor’ (deft): the ratio of the estimated standard error to the standard error that would be obtained if the sample design were ignored. In general, this averages approximately 1.2-1.3, but varies somewhat between survey variables.
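To make the role of the design factor concrete, the calculation can be sketched in Python. This is an illustration only: the estimate and design factor come from the informal volunteering row of Table 8.1, but the base used here is the overall achieved sample of 10,126 interviews rather than the estimate-specific base, so the result only approximates the published standard error.

```python
import math

# Illustrative figures (informal volunteering at least once a month, Table 8.1)
p = 0.258      # weighted estimate (25.8%)
n = 10126      # overall achieved sample size (an approximation of the true base)
deft = 1.04    # design factor reported for this estimate

# Standard error under simple random sampling
se_srs = math.sqrt(p * (1 - p) / n)

# Complex-design standard error = SRS standard error x design factor
se_complex = se_srs * deft

# Approximate 95% confidence interval for the weighted percentage
ci_low = p - 1.96 * se_complex
ci_high = p + 1.96 * se_complex

print(round(se_complex * 100, 2))                          # → 0.45 percentage points
print(round(ci_low * 100, 1), round(ci_high * 100, 1))     # → 24.9 26.7
```

A deft above 1 means the clustering and weighting inflate the standard error relative to a simple random sample of the same size; a deft below 1 (as for some estimates in the tables below) means stratification gains outweigh those losses.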

Table 8.1: Participation in civic engagement and voluntary activities

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Participation in civic engagement or voluntary activities          
Those taking part at least once a month in:          
Informal volunteering All 25.8 2749 0.5 1.04
Formal volunteering All 16.2 1684 0.4 0.48
Any volunteering All 34.3 3588 0.6 0.71
Those taking part at least once in the last 12 months in:          
Civic participation All 33.9 3435 0.6 0.7
Civic consultation All 17.7 1816 0.4 0.5
Civic activism All 6.8 740 0.3 0.30
Informal volunteering All 45.5 4738 0.6 0.83
Formal volunteering All 26.6 2740 0.5 0.62
Any volunteering All 54.6 5614 0.6 0.92

Table 8.2: Participation in civic engagement and formal volunteering at least once in the last year, by sex, age, ethnicity and disability

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Civic participation Male 31.9 1367 0.8 1.2
  Female 36.1 1947 0.7 1.1          
  16-24 32.2 259 1.9 1.15          
  25-34 36.1 566 1.4 1.16          
  35-49 37.7 887 1.1 1.14          
  50-64 34.9 861 1.1 1.15          
  65-74 34.1 547 1.3 1.14          
  75+ 22.4 268 1.3 1.13          
  White 34.8 2854 0.6 1.19          
  Asian 28.9 245 1.8 1.16          
  Black 29.6 91 3.4 1.32          
  Mixed 41.3 100 3.8 1.17          
  Other 33.8 53 4.4 1.19          
  Limiting Long-Term Illness (LLTI)/Disability*[footnote 16] 35 816 1.2 1.13
  No LLTI/ Disability* 39.3 2049 0.7 1.17          
Civic consultation Male 17.8 785 0.7 1.11          
  Female 17.3 953     0.6 1.09      
  16-24 14.2 105 1.5 1.18          
  25-34 15.3 235 1.0 1.12          
  35-49 20.0 471 0.9 1.12          
  50-64 19.2 475 0.9 1.13          
  65-74 19.7 319 1.1 1.13          
  75+ 14.6 180 1.1 1.10          
  White 17.3 1445 0.5 1.15          
  Asian 17.3 151 1.4 1.12          
  Black 24.7 76 3.2 1.31          
  Mixed 20.4 46 3.1 1.16          
  Other 19.9 33 3.9 1.20          
  LLTI /Disability* 19.5 401 1.0 1.13          
  No LLTI /Disability* 18.8 1083 0.6 1.13          
Civic activism Male 7.1 337 0.4 1.09          
  Female 6.3 363 0.4 1.06          
  16-24 4.4 30 0.9 1.17          
  25-34 5.5 86 0.7 1.13          
  35-49 6.7 173 0.5 1.04          
  50-64 7.9 199 0.6 1.10          
  65-74 7.9 137 0.7 1.07          
  75+ 8.0 101 0.9 1.13          
  White 6.3 558 0.3 1.08          
  Asian 7.4 64 1.1 1.23          
  Black 14.2 40 2.4 1.21          
  Mixed 11.0 28 2.2 1.08          
  Other 9.1 16 2.3 1.02          
  LLTI /Disability* 7.8 164 0.7 1.11          
  No LLTI /Disability* 7.2 419 0.4 1.11          
Formal volunteering Male 26 1140 0.8 1.12          
  Female 27.4 1506 0.7 1.12          
  16-24 28.9 220 1.9 1.15          
  25-34 19.1 299 1.2 1.16          
  35-49 28.3 670 1.1 1.13          
  50-64 27.3 680 1.0 1.15          
  65-74 32.3 523 1.3 1.15          
  75+ 26.2 316 1.4 1.15          
  White 27.0 2259 0.6 1.17          
  Asian 24.5 205 1.8 1.23          
  Black 30.2 88 3.5 1.35          
  Mixed 34.9 76 3.7 1.18          
  Other 20.4 35 3.6 1.13          
  LLTI /Disability* 28.4 574 1.2 1.14          
  No LLTI /Disability* 29.6 1689 0.7 1.17          

Table 8.3: Informal or formal volunteering within the last month and the last 12 months, broken down by age, ethnicity, employment status and region

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
At least once a month                    
Informal volunteering 16-24 24.7 197 1.8 1.16          
  25-34 19.7 315 1.1 1.12          
  35-49 24.2 589 1.0 1.10          
  50-64 27.1 694 1.0 1.14          
  65-74 34.1 554 1.3 1.10          
  75+ 28.9 355 1.5 1.12          
  White 25.8 2225 0.6 1.18          
  Asian 25.6 222 1.7 1.17          
  Black 29.6 99 3.0 1.16          
  Mixed 30.2 69 3.4 1.13          
  Other 28.6 50 3.7 1.06          
  In employment* 24.2 1325 0.7 1.15          
  Unemployed* 25.5 42 3.8 1.09          
  Economically inactive* 31.1 933 1.0 1.18          
  North East 26.9 134 2.3 1.13          
  North West 26.5 339 1.5 1.17          
  Yorkshire and the Humber 21.8 233 1.5 1.15          
  East Midlands 24.8 207 1.8 1.15          
  West Midlands 25.6 313 1.5 1.20          
  East of England 25.0 296 1.6 1.23          
  London 27.0 515 1.2 1.17          
  South East 26.3 439 1.3 1.16          
  South West 27.8 273 1.8 1.22          
Formal volunteering 16-24 18.6 141 1.6 1.15          
  25-34 10.4 161 0.9 1.20          
  35-49 14.3 337 0.8 1.14          
  50-64 16.5 412 0.8 1.13          
  65-74 23.4 381 1.2 1.15          
  75+ 19.1 229 1.3 1.12          
  White 17.1 1454 0.5 1.17          
  Asian 9.7 83 1.1 1.14          
  Black 15.2 44 2.8 1.39          
  Mixed 19.2 41 3.2 1.23          
  Other 10.1 17 2.5 1.06          
  In employment*[footnote 17] 15.3 822 0.6 1.14          
  Unemployed* 17.0 23 3.5 1.17          
  Economically inactive * 20.9 631 0.9 1.20          
  North East 13.5 64 1.8 1.15          
  North West 13.6 182 1.1 1.15          
  Yorkshire and the Humber 14.9 153 1.4 1.18          
  East Midlands 17.2 135 1.6 1.22          
  West Midlands 16.4 173 1.4 1.26          
  East of England 14.8 180 1.2 1.15          
  London 16.5 299 1.1 1.23          
  South East 18.1 304 1.1 1.18          
  South West 19.1 194 1.4 1.12          
At least once in the last year                    
Informal volunteering 16-24 38.1 307 2.0 1.15          
  25-34 41.7 645 1.4 1.13          
  35-49 47.8 1142 1.2 1.12          
  50-64 48.0 1196 1.2 1.14          
  65-74 52.0 836 1.4 1.12          
  75+ 43.9 537 1.6 1.14          
  White 45.8 3849 0.7 1.20          
  Asian 46.3 393 2.1 1.24          
  Black 49.1 158 3.2 1.13          
  Mixed 47.3 114 3.7 1.14          
  Other 47.0 80 4.1 1.06          
  In employment* 46.7 2527 0.8 1.17          
  Unemployed* 44.0 72 4.5 1.14          
  Economically inactive* 47.0 1430 1.1 1.18          
  North East 42.2 208 2.8 1.22          
  North West 45.2 583 1.6 1.17          
  Yorkshire and the Humber 40.3 416 1.9 1.20          
  East Midlands 43.6 356 2.1 1.18          
  West Midlands 43.5 515 1.8 1.27          
  East of England 44.9 517 1.8 1.22          
  London 48.5 900 1.3 1.15          
  South East 47.8 774 1.6 1.25          
  South West 48.4 469 1.9 1.19          
Formal volunteering 16-24 28.9 220 1.9 1.15          
  25-34 19.1 299 1.2 1.16          
  35-49 28.3 670 1.1 1.13          
  50-64 27.3 680 1.0 1.15          
  65-74 32.3 523 1.3 1.15          
  75+ 26.2 316 1.4 1.15          
  White 27.0 2259 0.6 1.17          
  Asian 24.5 205 1.8 1.23          
  Black 30.2 88 3.5 1.35          
  Mixed 34.9 76 3.7 1.18          
  Other 20.4 35 3.6 1.13          
  In employment* 27.5 1480 0.7 1.15          
  Unemployed* 24.9 36 3.9 1.14          
  Economically inactive* 29.3 895 1.0 1.19          
  North East 20.5 94 2.4 1.29          
  North West 23.2 307 1.4 1.16          
  Yorkshire and the Humber 25.6 253 1.7 1.22          
  East Midlands 26.5 213 1.9 1.21          
  West Midlands 25.7 278 1.5 1.20          
  East of England 24.4 299 1.5 1.22          
  London 29.8 534 1.3 1.19          
  South East 28.8 465 1.4 1.19          
  South West 29.9 297 1.7 1.16          

Table 8.4: Any volunteering in the last year broken down by sex, age and region

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Any formal or informal volunteering at least once in last year Male 51.7 2256 0.9 1.15
  Female 57.6 3147 0.8 1.12          
  16-24 51.1 401 2.1 1.16          
  25-34 48.8 758 1.5 1.17          
  35-49 57.1 1349 1.2 1.13          
  50-64 57.5 1419 1.1 1.14          
  65-74 60.7 974 1.4 1.13          
  75+ 51.5 628 1.6 1.15          
  North East 50.7 242 3.0 1.31          
  North West 51.6 666 1.7 1.18          
  Yorkshire and the Humber 51.3 515 2.0 1.24          
  East Midlands 52.0 420 2.2 1.21          
  West Midlands 53.1 607 1.9 1.30          
  East of England 52.3 606 1.8 1.22          
  London 57.8 1057 1.4 1.18          
  South East 58.7 941 1.5 1.21          
  South West 58.0 560 2.0 1.22          

Table 8.5: Whether gave to charity in the last four weeks, broken down by sex, age, ethnicity and region

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Gave to charity in the last 4 weeks Male 61.2 2756 0.9 1.19
  Female 71.8 3938 0.7 1.13
  16-24 47.3 392 2.1 1.16
  25-34 61.7 944 1.5 1.17
  35-49 66.6 1590 1.2 1.2
  50-64 70.7 1750 1.1 1.17
  65-74 78.4 1265 1.2 1.13
  75+ 75.9 905 1.4 1.1
  White 67.5 5705 0.7 1.26
  Asian 65.7 586 2.0 1.24
  Black 61.9 195 3.4 1.23
  Mixed 63.2 149 3.7 1.16
  Other 57.0 94 4.2 1.07
  North East 66.1 322 3.0 1.36
  North West 65.3 843 3.0 1.25
  Yorkshire and the Humber 66.7 675 1.7 1.28
  East Midlands 63.5 519 1.9 1.30
  West Midlands 67.3 800 2.2 1.26
  East of England 65.7 767 1.7 1.30
  London 67.1 1266 1.8 1.20
  South East 67.4 1102 1.3 1.28
  South West 67.2 657 1.5 1.23

Table 8.6: Banded amount given to charity in the four weeks prior to interview

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Banded amount given to charity £0-£4 All 14.1 480 0.7 0.44
  £5-£9 All 14.1 470 0.7 0.44
  £10-£19 All 23.7 821 0.8 0.56
  £20-£49 All 29.3 1045 0.9 0.63
  Over £50 All 18.9 701 0.8 0.51

Table 8.7: Whether aware of or involved in social action broken down by sex and age

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Aware of social action*[footnote 18] Male 22.3 814 0.8 1.10
  Female 24.7 1098 0.7 1.13
  16-24 16.3 128 1.5 1.07
  25-34 18.9 276 1.2 1.17
  35-49 23.6 518 1.0 1.13
  50-64 26.0 545 1.1 1.14
  65-74 28.6 341 1.5 1.11
  75+ 26.7 150 2.1 1.16
  All 23.2 1987 0.5 0.58
Involved in social action Male 11.5 512 0.6 1.13
  Female 11.7 636 0.5 1.1
  16-24 7.5 57 1.1 1.12
  25-34 8.5 133 0.8 1.13
  35-49 12.3 290 0.8 1.14
  50-64 13.0 315 0.8 1.13
  65-74 15.4 245 1.0 1.16
  75+ 11.8 138 1.0 1.2
  All 11.6 1197 0.4 0.4

Table 8.8: The extent to which people agree that people in their neighbourhood pull together to improve the area

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Whether agree or disagree that people in this neighbourhood pull together to improve the neighbourhood Definitely agree 14.4 1468 0.4 0.46
  Tend to agree 43.5 4350 0.6 0.80          
  Tend to disagree 22.7 2313 0.5 0.58          
  Definitely disagree 12.2 1307 0.4 0.44          
                     
  Agree 57.8 5818 0.6 0.96          
  Disagree 35.0 3620 0.6 0.76          

Table 8.9: Whether chat to neighbours at least once a month by age, sex ethnicity and region

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
People who chat to their neighbours at least once a month Male 71.9 3092 0.8 1.21
  Female 73.4 3913 0.7 1.15
  16-24 47.9 341 2.2 1.12
  25-34 60.8 858 1.5 1.17
  35-49 75.2 1686 1.1 1.18
  50-64 78.9 1884 1.0 1.17
  65-74 84.7 1359 1.0 1.12
  75+ 83.6 1012 1.2 1.13
  White 74.5 6074 0.6 1.3
  Asian 66.6 569 2.0 1.22
  Black 54.4 172 3.3 1.15
  Mixed 62.1 144 3.9 1.20
  Other 58.7 91 4.5 1.14
  North East 72.2 353 3.0 1.43
  North West 74.6 919 1.7 1.35
  Yorkshire and the Humber 75.6 710 1.8 1.30
  East Midlands 72.7 564 2.1 1.32
  West Midlands 72.1 842 1.7 1.26
  East of England 72.2 796 1.8 1.32
  London 65.3 1191 1.4 1.22
  South East 73.2 1152 1.5 1.30
  South West 76.4 719 1.8 1.26

Table 8.10: Whether people feel they belong strongly to their neighbourhood

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Whether feel they belong strongly to their neighbourhood Very/fairly strongly 62.5 5838 0.6 0.98
  Not very strongly 25.7 2431 0.5 0.61          
  Not at all strongly 11.7 1151 0.4 0.43          

Table 8.11: Whether people feel they belong strongly to their neighbourhood by sex, age and ethnicity

Characteristic Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Strongly belong to local neighbourhood Male 61.1 2438 0.9 1.17          
  Female 64.3 3198 0.7 1.11          
  16-24 54.2 336 2.3 1.18          
  25-34 53.6 715 1.6 1.21          
  35-49 62.5 1343 1.2 1.19          
  50-64 64.9 1459 1.1 1.14          
  65-74 69.0 1063 1.3 1.13          
  75+ 71.2 827 1.5 1.16          
  White 63.3 4811 0.7 1.24          
  Asian 62.8 500 2.1 1.22          
  Black 55.9 164 3.7 1.25          
  Mixed 60.9 133 3.7 1.14          
  Other 48.8 70 4.7 1.14          

Table 8.12: Satisfaction with local area as a place to live

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Satisfaction with local area Very/fairly satisfied 76.3 7657 0.5 1.06
  Neither satisfied nor dissatisfied 15.4 1532 0.4 0.47          
  Very /Fairly dissatisfied 8.3 923 0.3 0.36          

Table 8.13: Community cohesion by sex, age, ethnicity and region

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Agree that people from different backgrounds get on well together in the local area Male 83.8 3289 0.7 1.17
  Female 84.8 4199 0.6 1.14
  16-24 83.2 597 1.7 1.21
  25-34 82.3 1145 1.2 1.15
  35-49 82.2 1799 0.9 1.13
  50-64 84.7 1886 0.9 1.18
  65-74 86.5 1278 1.0 1.12
  75+ 88.0 927 1.1 1.11
  White 84.6 6242 0.5 1.22
  Asian 84.1 718 1.6 1.25
  Black 80.6 240 2.9 1.26
  Mixed 82.6 182 2.8 1.11
  Other 81.2 127 3.7 1.19
  North East 81.1 341 2.5 1.29
  North West 84.1 950 1.4 1.28
  Yorkshire and the Humber 84.6 725 1.4 1.17
  East Midlands 82.7 586 1.7 1.18
  West Midlands 82.9 882 1.4 1.20
  East of England 84.5 857 1.5 1.31
  London 83.2 1491 1.0 1.18
  South East 86.2 1249 1.1 1.24
  South West 83.1 689 1.7 1.18

Table 8.14: Whether people feel able to influence decisions affecting their local area

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Able to influence decisions affecting their local area Yes, able to influence decisions 27 2771 0.5 0.65

Table 8.15: Whether able to influence decisions affecting their local area by sex, age and ethnicity

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Able to influence decisions affecting their local area
  Male 27.1 1177 0.8 1.16
  Female 25.9 1417 0.7 1.11
  16-24 23.7 185 1.8 1.17
  25-34 26.9 416 1.3 1.17
  35-49 28.0 659 1.1 1.17
  50-64 28.3 705 1.0 1.15
  65-74 27.6 455 1.2 1.12
  75+ 23.4 281 1.4 1.15
  White 25.0 2040 0.6 1.24
  Asian 34.4 310 2.0 1.22
  Black 50.3 162 3.3 1.17
  Mixed 37.9 85 3.8 1.18
  Other 29.9 54 4.1 1.15

Table 8.16: How important it is to be able to influence decisions affecting their local area

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
How important it is to be able to influence decisions affecting the local area Important 53.1 5478 0.6 0.91
  Not important 46.9 4622 0.6 0.84

Table 8.17: Whether people would like to be more involved in decisions made by their local council

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Whether would like to be more involved in decisions made by the local council Yes 28.2 2461 0.6 0.63
  No 20.3 1585 0.5 0.54  
  Depends on the issue 51.5 4297 0.6 0.83  

Table 8.18: How often people feel lonely

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
How often feel lonely
  Often/always 6.2 609 0.3 0.30
  Some of the time 18.8 1830 0.5 0.52
  Occasionally 22.1 2147 0.5 0.54
  Hardly ever 31.7 3000 0.6 0.66
  Never 21.2 2005 0.5 0.57

Table 8.19: Whether people borrow things and exchange favours with their neighbours

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Whether people borrow things and exchange favours with neighbours
  Definitely agree 9.9 1008 0.4 0.39
  Tend to agree 26.3 2663 0.5 0.62
  Tend to disagree 27.9 2781 0.5 0.63
  Definitely disagree 35.8 3581 0.6 0.76

Table 8.20: Whether people think their area has got better or worse over the last two years

Characteristics Population Weighted percent (%) Unweighted base (affirmative response) Standard error (percentage points) Design factor (deft)
Whether area has got better or worse over the last two years
  The area has got better 13.7 1331 0.5 0.49
  The area has got worse 20.5 2246 0.5 0.60
  The area has not changed much 56.1 5587 0.6 0.95

9. Data user guide

This chapter provides a user guide for those conducting analysis of the Community Life Survey dataset. The dataset will be made available on the UK Data Service in SPSS format, and the guide assumes that analysis will be conducted in SPSS.

9.1 Selecting cases for analysis

The sample consists of an unweighted base of 10,126 interviews.

9.1.1 Quarters

The dataset contains data from fieldwork between 11 Oct 2021 and 20 Sep 2022, broken down into four quarters. Data for each quarter is weighted to be representative. To perform analysis on an individual quarter, use the variable ‘Quarter’ and select the appropriate quarter:

  • Quarter 1: 11 Oct 2021 – 14 Dec 2021
  • Quarter 2: 10 Jan 2022 – 21 Mar 2022
  • Quarter 3: 07 Apr 2022 – 21 Jun 2022
  • Quarter 4: 07 Jul 2022 – 20 Sep 2022.

For example, to look at Quarter 4 data only within the SPSS file, select Data, then Select Cases, filter if Quarter = 4, and then run crosstabs and frequencies as normal.
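For users working outside SPSS, the same selection can be sketched in Python with pandas. This is an illustration under stated assumptions: the miniature dataset and the ‘Volunteer’ column are invented, while ‘Quarter’ and ‘finalweight’ are the variable names described in this guide.

```python
import pandas as pd

# A tiny stand-in for the survey dataset; the values are invented.
df = pd.DataFrame({
    "Quarter": [1, 2, 3, 4, 4],
    "finalweight": [0.9, 1.1, 1.0, 0.8, 1.2],
    "Volunteer": [1, 0, 1, 1, 0],  # hypothetical 1 = yes / 0 = no variable
})

# Equivalent of SPSS Select Cases with the condition Quarter = 4
q4 = df[df["Quarter"] == 4]

# Weighted percentage volunteering within Quarter 4 only
pct = 100 * (q4["Volunteer"] * q4["finalweight"]).sum() / q4["finalweight"].sum()
print(round(pct, 1))  # → 40.0
```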

9.2 Variables

The dataset is ordered in the following way:

  • Unique serial numbers (For individuals and Households).
  • Demographic information such as number of adults in the household, age, gender, marital status, and information on children under the age of 16 living in the household
  • The survey question responses in the same order as the questions appear in the questionnaire (please see Appendix A for the questionnaire)*
  • Derived variables (please see Appendix D for a full list)
  • Geo-demographic files
  • Weight variables

*In Q2 & Q3 an experiment took place with the intention of assessing the impact of the following Social Capital Harmonisation questions on the time series data…

New Question Original Question(s)
FrndSATB FrndSat1-2, FrndSat1SR, FrndSat2SR
SBeNeighB SBeNeigh
STrustGen2B STrustGen2
PAffLocB PAffLoc

Both ‘original’ and ‘new’ questions were asked of all respondents, with randomisation applied to determine whether respondents saw the ‘original’ or ‘new’ questions first. Full details of the analysis conducted in relation to these questions can be found in section E of the Appendix.

Any queries on published variables should be sent to [email protected] in the first instance. In some circumstances, DCMS analysts may refer queries to Kantar.

Variables are named exactly to match the questionnaire names. Where the respondent was able to give multiple answers to one question (a multiple response question), the question has been represented in the dataset by a number of variables, one for each possible answer, which are coded as yes or no, depending on whether the respondent chose this response or not. This aids analysis as it avoids the need to recode each multiple response question.
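As a sketch of why this dummy coding aids analysis, the example below tabulates an invented multiple-response question in Python; the ‘Give…’ variable names are hypothetical and not variables from the dataset.

```python
# Hypothetical dummy-coded multiple-response question: each possible answer
# is its own yes(1)/no(0) variable, as described above (names invented).
respondents = [
    {"GiveCash": 1, "GiveGoods": 0, "GiveOnline": 1},
    {"GiveCash": 0, "GiveGoods": 1, "GiveOnline": 0},
    {"GiveCash": 1, "GiveGoods": 1, "GiveOnline": 0},
]

# Because each answer is a separate 0/1 variable, no recoding is needed:
# the percentage choosing an answer is just the mean of its column.
for option in ("GiveCash", "GiveGoods", "GiveOnline"):
    share = 100 * sum(r[option] for r in respondents) / len(respondents)
    print(option, round(share, 1))
# → GiveCash 66.7, GiveGoods 66.7, GiveOnline 33.3
```

Note that the percentages can sum to more than 100 because respondents may select several answers.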

9.3 Mode of completion

All four quarters of interviews were completed by both online and paper questionnaires. The paper questionnaire was shorter than the online survey, only containing roughly 50% of the questions. Data users should be aware that some questions which only appeared in the web survey will have a smaller base size as a result. A variable titled ‘Mode’ is included in the data, which indicates whether each interview was completed online or by post.

The two versions of the questionnaire can be viewed in Appendix A and B.

9.4 Missing Values

For the majority of variables, “Don’t know” and “Prefer not to say” responses are set as missing values within the dataset. “Don’t know” responses are coded as -8 and “Prefer not to say” responses are coded as -9.

In situations where the respondent was not asked the question, either due to the question being added in a later quarter or removed in a previous quarter, or due to routing within the questionnaire, responses are also set as missing values (-1 Item not applicable).

Questions left blank in the paper questionnaire that the respondent should have answered are coded as -8 “Don’t know”.
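A minimal Python sketch of applying these missing-value conventions before analysis; the example response values are invented, while the codes -8, -9 and -1 are as documented above.

```python
import pandas as pd

# Missing-value codes used in the dataset, per this section:
# -8 "Don't know", -9 "Prefer not to say", -1 "Item not applicable".
MISSING_CODES = [-8, -9, -1]

# Invented example responses for a single survey variable
responses = pd.Series([3, -8, 1, -9, 2, -1])

# Recode the special codes to NaN so pandas excludes them from summaries
cleaned = responses.replace(MISSING_CODES, float("nan"))

print(int(cleaned.count()))  # valid responses only → 3
print(cleaned.mean())        # mean of 3, 1, 2 → 2.0
```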

9.5 Screenreaders and Accessibility

To ensure the script is accessible to respondents completing the survey using a screen reader, it was necessary to display certain types of questions in an alternative accessible way (e.g. dynamic grids).

At the start of the questionnaire, the question Screenreader is asked, which says: “A screen reader is a software programme that helps people who are blind or visually impaired to use their computer or smartphone by reading out the content. Do you intend to complete this survey using a screen reader?”

  1. Yes
  2. No

If a respondent answers yes here, they are only asked the “SR” version of questions.

If a respondent answers no here, they are only asked the existing version of the questions.

9.6 Weighting

To analyse the data at the individual level, finalweight should be used to weight the data. Four sets of weights are included within the dataset. Table 9.1 below details the separate weight variables and their use.

Table 9.1: Weights used on the 2021/22 survey

Weight Description
finalweight Scaled-to-sample-size individual weight for the combined online and paper sample for the entire survey year.
rimweightCLS Scaled-to-population-size individual weight for the combined online and paper sample for the entire survey year.
finalweightweb[footnote 19] Scaled-to-sample-size individual weight for the online-only sample for the entire survey year. Use for data collected in the online survey only.
rimweightwebCLS Scaled-to-population-size individual weight for the online-only sample for the entire survey year. Use for data collected in the online survey only.
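The distinction between the two weight scales can be sketched in Python. The records and values below are invented; the point of the sketch is that a population-scaled weight such as rimweightCLS is the sample-scaled weight rescaled by a constant, so weighted percentages agree under either weight and only the weighted bases differ.

```python
# Invented respondent records. Here the population-scaled weight is the
# sample-scaled weight multiplied by a constant (4000) for illustration.
rows = [
    {"finalweight": 0.8, "rimweightCLS": 3200.0, "Lonely": 1},
    {"finalweight": 1.2, "rimweightCLS": 4800.0, "Lonely": 0},
    {"finalweight": 1.0, "rimweightCLS": 4000.0, "Lonely": 0},
]

def weighted_pct(rows, weight_var, outcome):
    """Weighted percentage reporting the outcome under the chosen weight."""
    total = sum(r[weight_var] for r in rows)
    hits = sum(r[weight_var] for r in rows if r[outcome] == 1)
    return 100 * hits / total

# Identical percentages: the two weights differ only in scale...
print(round(weighted_pct(rows, "finalweight", "Lonely"), 1))    # → 26.7
print(round(weighted_pct(rows, "rimweightCLS", "Lonely"), 1))   # → 26.7

# ...but the weighted bases differ: roughly sample size vs population size.
print(sum(r["finalweight"] for r in rows))     # → 3.0
print(sum(r["rimweightCLS"] for r in rows))    # → 12000.0
```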
  1. Between 2012-13 and 2015-16 the survey was conducted on behalf of the Cabinet Office. 

  2. social-research-practice-journal-issue-03-winter-2017.pdf (the-sra.org.uk). 

  3. Now the Department for Levelling Up, Housing and Communities 

  4. For more information on the findings of this development work please see Rebecca Hamlyn, Alice Fitzpatrick and Joel Williams (2015): Investigating the viability of moving from a face-to-face to an online postal mode 

  5. CACI Ltd, business registration number 01649776. 

  6. The ‘small user’ subset of the Postcode Address File is thought to contain nearly all residential addresses, as well as a subset of non-residential addresses that cannot be separately identified as such. 

  7. Non-responding households are households which did not have any interaction with the survey: no individual attempts or completions and no opt-outs. 

  8. For more information on the address sampling protocol, please see section 3.3. 

  9. For more information on the validation checks, please see section 6.2. 

  10. An estimated 8% of ‘small user’ PAF addresses in England are assumed to be non-residential (derived from interviewer administered surveys). 

  11. Deadwood refers to addresses which are not eligible to complete the survey, such as second homes, vacant properties, or business addresses. These addresses were not included in survey response rate calculations. 

  12. For more information on the sample design, please see sections 3.1-3.4. 

  13. This figure is calculated by removing outliers, which were any interviews shorter than 10 minutes or longer than 150 minutes. 

  14. A prioritisation algorithm was employed to derive a uniform value per variable per address even when different respondents provided different answers. In summary, the data from the oldest respondent was prioritised. This prioritisation step was combined with a small number of imputed household-level values for those addresses where at least one questionnaire had been completed but no respondent had supplied the relevant data. 

  15. Any change in design - even an improvement - risks a time series disjuncture but, in this case, the similarity of the data collection method and the weighting procedure to that used in the previous years strongly limits that risk. 

  16. All LLTI/Disability calculations based on online only data. 

  17. All in employment/unemployed/economically inactive calculations based on online only data. 

  18. Based on online only data. 

  19. finalweightweb should be used for any break variables that are only collected in the online data, regardless of whether the question was included on paper.