DESNZ Public Attitudes Tracker: technical report, Winter 2023 to Summer 2024 (accessible webpage)
Published 13 December 2024
Summary
The DESNZ Public Attitudes Tracker (PAT) survey measures public awareness, attitudes and behaviours relating to policies such as renewable energy, nuclear energy, climate change, net zero and energy bills.
Due to departmental restructuring in February 2023, responsibility for the survey switched from the Department for Business, Energy and Industrial Strategy (BEIS) to the Department for Energy Security and Net Zero (DESNZ). From Summer 2023, the survey moved from a quarterly to a triannual design with waves conducted every Spring, Summer and Winter.
This technical report covers methodological information about the three triannual PAT survey waves completed in Winter 2023, Spring 2024 and Summer 2024.
The report includes information on the:
- Data collection model
- Sampling approach
- Questionnaire structure and development process
- Fieldwork method and performance
- Data processing approach
- Weighting design
- Reporting outputs
- Changes between waves
Survey objectives
The DESNZ Public Attitudes Tracker aims to continue building a better understanding of public awareness, attitudes and behaviours relating to DESNZ policies in order to provide robust and reliable evidence for policy development, and to see how these measures shift over time. Data is collected from a representative sample of the UK population, so that the results fairly represent the views of the wider population.
The main objectives of the new PAT series are:
- To monitor changes in public attitudes, behaviours, and awareness over time
- To understand how energy policies affect the UK population and different demographic groups
- To provide robust evidence for early policy development and emerging energy concepts
- To provide departments with attitudinal data on DESNZ priorities and policies, such as Net Zero
Understanding public attitudes and awareness is essential in developing effective and targeted policies. Findings from this work help DESNZ stay abreast of where the public are in relation to the Department’s priorities and can perform a high-level evaluative and communication purpose. Owning public attitudes data also allows the Department to respond effectively to research published by external stakeholders.
Background
This technical report covers three waves of the Public Attitudes Tracker: Winter 2023 (7 November to 11 December 2023), Spring 2024 (18 March to 22 April 2024) and Summer 2024 (11 July to 15 August 2024). These three waves build upon the previous eight waves conducted in the new PAT series delivered by the research agency Verian[footnote 1]. A brief explanation of the differences between the previous tracker surveys and the new tracker survey is provided below.
Previous tracker survey series
The Public Attitudes Tracker began in March 2012 within the Department of Energy and Climate Change (DECC), which became part of the Department for Business, Energy and Industrial Strategy (BEIS) in 2016. The survey, which was conducted quarterly, ran for 37 waves from 2012 to 2021. Until March 2020, data was gathered through in-home interviews on Kantar Public UK’s face-to-face omnibus, using random location quota sampling.
With the onset of Covid-19, the methodology shifted to Kantar Public’s online omnibus from March 2020 to March 2021, making direct comparisons between face-to-face and online data infeasible due to this break in the time series[footnote 2]. The online panel methodology was set up as an interim methodology given its limitations in terms of sample representativeness and potential panel conditioning. More detail can be found in the Autumn 2022 to Summer 2023 technical report.
New tracker survey series
In Summer 2021, BEIS recommissioned the survey with the aim of creating a new time series based on a methodology which would allow more robust tracking of measures over the longer term. This was in the context of continued uncertainty about the feasibility of face-to-face data collection.
The new survey series, beginning in Autumn 2021, uses Address Based Online Surveying (ABOS), a cost-effective method of surveying the general population using random sampling techniques. ABOS is a ‘push to web’ methodology where the primary method of data collection is online, but respondents are also able to complete a paper version of the questionnaire which enables participation among the offline population. Full details of the ABOS methodology are covered in the section ‘Details of the data collection model’.
Comparisons with previous tracker series
It should be noted that changes in methodology can lead to both selection effects (that is, differences due to the different sampling methods employed) and measurement effects (that is, differences due to the different survey modes). Although attempts have been made to reduce the selection effects between surveys, the results from the new time series spanning the eleven waves from Autumn 2021 to Summer 2024 should not be directly compared with previous waves where data was collected either face-to-face (waves 1 to 33) or via an online panel (waves 33 to 37)[footnote 3].
When it comes to measurement effects, differences in results could be caused by a number of factors[footnote 4]. Measurement effects cannot be ameliorated by weighting, although it is sometimes possible to estimate their direction and scale and (at least partially) account for them in analysis.
Results from the Autumn 2021 to Summer 2024 surveys are comparable with one another. While it is possible that the switch from BEIS to DESNZ branding from the Spring 2023 wave onwards could have had some impact on the trends recorded by the survey series, it appears that any such effects are minimal. The ‘Questions added and removed’ section outlines minor differences in the survey in each of the three most recent PAT survey waves, but these are not of a magnitude which would undermine the new time series.
Interpretation of findings and further resources
In the published reports for the new PAT series, differences between groups are only reported at the 95% confidence interval level (that is, the difference is statistically significant at the .05 level). Further information about significance testing is provided later in this Technical Report. For the key research and statistical term definitions, refer to ‘Appendix C - Research and statistical term definitions’.
Details of the data collection model
Address Based Online Surveying (ABOS) is a type of ‘push-to-web’ survey method.
The basic ABOS design is simple: a stratified random sample of addresses is drawn from the Royal Mail’s Postcode Address File and an invitation letter is sent to each one, containing username(s) and password(s) plus the URL of the survey website. Up to four individuals per household can log on using this information and complete the survey as they might any other web survey. Once a questionnaire is complete, the specific username and password cannot be used again, so completed responses cannot be viewed or altered by others with access to this information.
It is usual for at least one reminder to be sent to each sampled address, and for an alternative mode (usually a paper questionnaire) to be offered to those who need it or would prefer it. It is typical for this alternative mode to be available only ‘on request’ at first. However, after non-response to the web survey invitation, this alternative mode may be given more prominence.
Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, paper questionnaires have measurement limitations that constrain the design of the online questionnaire and also add considerably to overall cost. For the DESNZ PAT, paper questionnaires are used in a limited and targeted way, to optimise rather than maximise response.
Sampling
Sample design: addresses
The address sample design was intrinsically linked to the data collection design (see the section ‘Contact procedures’) and was designed to yield a respondent sample that is representative with respect to geography, neighbourhood deprivation level, and age group. This approach limits the role of weights in the production of unbiased survey estimates, narrowing confidence intervals compared with other designs.
Multiple samples were drawn to cover different combinations of waves. The first sample file covered the Winter 2023 wave, while the second covered the Spring 2024 wave. The third sample file covered the Summer 2024 and the Winter 2024 waves. Response expectations for the Spring 2024 and Summer 2024 waves were revised using response data from the Winter 2023 wave.
The principles underpinning the sample design remained the same throughout and are described below.
First, a stratified master sample of addresses in the UK was drawn from the Postcode Address File (PAF) ‘small user’ subframe. Before sampling, the PAF was stratified by International Territorial Level 1 (ITL1) region (12 strata) and, within region, by neighbourhood deprivation level (5 strata). A total of 60 strata were constructed in this way. Furthermore, within each of the 60 strata, the PAF was sorted by (i) local authority, (ii) super output area, and finally (iii) by postcode. This ensured that the master sample of addresses was geographically representative within each stratum.
Each master sample of addresses was augmented by data supplier CACI. For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data will have been imperfect, Verian’s investigations have shown that it is reasonably effective at identifying households that are mostly young or mostly old. Once this data was attached, the master sample was additionally stratified by expected household age structure based on the CACI data: (i) all aged 35 or younger (14% of the total); (ii) all aged 65 or older (22% of the total); (iii) all other addresses (64% of the total).
From each master sample, Verian drew three stratified random sub-samples to cover three waves of the PAT. One in five of these addresses was allocated to a reserve pool, while the remaining addresses in each master sample that were not assigned to any wave formed a wider reserve pool. The conditional sampling probability in each stratum was varied to compensate for (expected) residual variation in response rate that could not be ‘designed out’, given the constraints of budget and timescale. The underlying assumptions for this procedure were updated wave by wave as evidence accumulated.
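Illustratively, the combination of stratification, geographical sorting and systematic selection can be expressed in a few lines of code. The sketch below is a generic stratified systematic draw (the column names and seed are illustrative assumptions, not Verian’s sampling code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2024)  # illustrative seed

def systematic_draw(frame: pd.DataFrame, n: int) -> pd.DataFrame:
    """Systematic random sample of n rows: a random start within the
    sampling interval, then every interval-th address thereafter.
    Because the frame is pre-sorted by local authority, super output
    area and postcode, the draw is spread evenly across geography."""
    interval = len(frame) / n
    start = rng.uniform(0, interval)
    positions = (start + interval * np.arange(n)).astype(int)
    return frame.iloc[positions]

def stratified_sample(paf: pd.DataFrame, allocation: dict[str, int]) -> pd.DataFrame:
    """Draw the allocated number of addresses from each stratum, for
    example the 60 region-by-deprivation strata described above. The
    allocation itself would encode the varied sampling probabilities."""
    return pd.concat(
        systematic_draw(paf[paf["stratum"] == s], n)
        for s, n in allocation.items()
    )
```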
In total, 75,937 addresses were issued across the three waves covered by this report: 22,557 in Winter 2023, 17,601 in Spring 2024 (with an additional reserve sample of 12,885), and 17,612 in Summer 2024 (with an additional reserve sample of 5,282)[footnote 5].
Figure 1 shows the issued sample structure with respect to the major strata, combining all three surveys together[footnote 6].
Figure 1: Address issue by area deprivation and household age structure: Winter 2023, Spring and Summer 2024 surveys
Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
---|---|---|---|---|---|
All <=35 | 4,999 | 4,446 | 3,279 | 2,198 | 1,692 |
Other | 12,223 | 10,688 | 8,664 | 9,594 | 8,493 |
All >=65 | 2,180 | 1,985 | 1,933 | 1,852 | 1,711 |
Sample design: individuals within sampled addresses
All resident adults aged 16 and over were invited to complete the survey. In this way, the PAT avoided the complexity and risk of selection error associated with remote random sampling within households.
However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and other addresses were allocated four logins. The majority of addresses were given either two or three logins. Paper questionnaires were available on request to those who are offline, not confident online, or unwilling to complete the survey this way. Furthermore, some addresses were sent a paper questionnaire at the initial point of contact – further details are provided in the ‘Contact procedures’ section.
Questionnaire
Questionnaire design
The starting point for developing questions at each wave was to review previous questions from the new PAT survey series which ran from Autumn 2021 to Summer 2023, maintaining consistency unless issues with the previous questions had been identified or the policy context had changed. New questions were also developed to address any new policy priorities.
In the current PAT survey series, all questions need to be designed for mixed-mode surveying, with questions suitable for both online and paper-based presentation. The main considerations when designing questions are set out below.
Mixed mode adaptation
The aim is to ensure that questions are presented consistently across modes to avoid mode effects and to ensure that data collected from the two modes can be merged. The starting principle is to design questions to be ‘unimodal’, that is, to use a standard question format across modes. However, in some cases, and especially where the question format was non-standard, an ‘optimode’ approach was taken: a more flexible approach where the design is optimised to suit each mode while ensuring consistency in outputs. The documented questionnaires indicate where question formatting or routing differs by mode.
The main mode-based considerations were as follows:
- Question order and routing were aligned between modes by ordering questions in a way which provided simple navigation on paper. Routing instructions were added explicitly on the paper version. However, in some cases, the filter was widened for postal respondents (for example, widened to ‘ask all’ with an added ‘not applicable’ option), which avoided the need for complex visual routing. Where this occurred, data editing was later applied to ensure equivalence across modes.
- Grid-style questions often required different presentation by mode, with paper-based questions set up as more traditional static grids, while online these were presented more dynamically to better suit navigation on laptop, tablet and smartphone screens.
- Where a question required a long list (for example, more than 12 items), this was retained as a long list on paper, but online it was split into two or more lists to better suit web-based presentation.
- All response lists were presented in a fixed order (as opposed to randomised or rotated) to ensure mode equivalence for online and paper.
Use of scales
Where scales are used across different items in the questionnaires (for example, 5-point scales for knowledge/awareness, agree/disagree and support/oppose) these are standardised to ensure consistent presentation throughout.
Demographic questions
Wherever possible, these were based on ONS harmonised[footnote 7] versions of questions at the time of setting up the first wave of the new survey series.
Cognitive testing
Cognitive interviewing helps to identify problems in question wording and any words or phrases that are open to misunderstanding or misinterpretation. It does this through assessing the thought processes that respondents go through when trying to answer a question.
Cognitive testing was used to test and refine proposed new questions before adding them to the tracker. Cognitive testing was conducted in advance of the Spring 2024 and Summer 2024 waves.
The cognitive testing ahead of the Spring 2024 wave included questions on usage of hydrogen in industrial processes, Small Modular Reactors, nuclear power stations and trust in sources for energy information.
The cognitive testing in advance of the Summer 2024 wave included questions on energy independence, greenhouse gas removals, smart appliances, data access and battery energy storage systems.
Each of these stages of cognitive testing involved 10 interviews with adults aged 16+ spread across relevant demographics such as age, gender, region, education level, and tenure. Interviews were carried out by members of the project team and other researchers trained in cognitive testing techniques.
Questionnaire structure
As far as possible at each wave, repeat questions are included with a similar placement, and with a similar preceding context, to minimise context effects. Triannual (formerly quarterly) questions are always asked at the beginning of the survey (after the opening demographics) to ensure that these are not impacted by other questions which may affect knowledge or attitudes towards these key topics.
A list of survey topics and the waves where these are included is provided in Summary of questions included in the DESNZ/BEIS Public Attitudes Tracker since Autumn 2021. Questions are broken down by theme, in line with the coverage in each topic-based report.
The full questionnaires from each survey wave are published alongside the survey results for each wave.
Questions added and removed
Below is a list of changes to the questionnaire compared to previous versions. These include the removal of certain questions, the addition of new topics, and modifications to existing questions to better align with the evolving needs of DESNZ.
Winter 2023
New questions:
A new set of questions was added on cooking appliances:
- HOBTYPE, ‘What type of hob do you have in your home?’
- OVENTYPE, ‘And what type of oven do you have in your home?’
- HOBREPLACE, ‘Thinking about the next time you need to buy or replace a hob, which of these options would you be most likely to choose?’
- HOBFACTORS, ‘If you were choosing a new hob, what would be the most important factors in your decision? Please select up to three factors’
- OVENREPLACE, ‘Now thinking about the next time you need to buy or replace your oven, which of these options would you be most likely to choose?’
- OVENFACTORS, ‘If you were choosing a new oven, what would be the most important factors in your decision? Please select up to three factors’
A new question was asked about how smart meters are used to monitor energy use:
- SMARTMETD, ‘Which, if any, of the following do you personally use to monitor your energy use?’
New questions were added on sources of information relating to actions to help reduce climate change:
- CCHEARD, ‘Looking at the following sources, from which, if any, of these do you hear or read about actions you can take to tackle climate change? This might include making choices about travel, product purchases, or how to save energy at home.’
- CCINFO, ‘Which, if any, of the following would you like to have more information on?’
Adapted questions:
A revised set of questions was introduced about time of use energy tariffs. These replaced the questions used in Spring 2022 on ‘time of use’ tariffs, which included separate questions for dual-rate tariffs (e.g. day and night) and agile tariffs (e.g. changing every half hour):
- TOUTAWARENEW, ‘Before today, how much, if anything, did you know about these electricity tariffs with different rates depending on time of use?’
- TOUTLIKELY, ‘Think about a tariff where pricing varies at different times (for example daytime, night-time and peak rates). If this was available to you, how likely is it that your household would switch to it?’
- WHYNOSMART, ‘You said you would be unlikely to switch to a tariff with multiple rates for different time periods. Why is this?’
An additional response option was added to the list of reasons why they would be unlikely to install low carbon heating systems at LCNOWHY:
- ‘Concerns about performance/efficiency’
Response options were removed for ‘solid fuel (open fire/enclosed stove) – coal’, and ‘LPG – fixed room heaters’ at HEATMAIN (the main way respondents heat their home), with these covered by the ‘other type of heating’ option in Winter 2023.
An additional response option was added to the list of reasons for not paying attention to the amount of heat used in the home at HEATNOATTWAY:
- ‘I have made energy efficiency improvements so I don’t need to think about this as much now’
Changes were made to the list of trusted sources of advice on which heating system to install at home at TRUSTHEAT to reflect changes to websites since Winter 2022:
- ‘Energy advice websites or helplines such as Find Ways to Save Energy in Your Home’ replaced ‘Simple Energy Advice website or similar energy advice website’
- ‘Any other Gov.uk information source’ replaced ‘Gov.uk’
Removed questions:
Solar thermal panels were removed from the list of low carbon heating systems listed at LCHEATKNOW1-8 and LCHEATINSTALLA-E.
Spring 2024
New questions:
A new set of questions was added on attitudes towards the construction of a nuclear power station in respondents’ local area:
- NUCLOCALSUPP
- NUCWHYSUPP
- NUCWHYNO
- A new code, ‘I’m concerned about the disposal of radioactive nuclear waste and decommissioning nuclear power stations’, was added from the open text data collected in ‘Other reason’.
A new follow up question was introduced about reasons for opposition to the UK developing fusion technology:
- WHYOPPFUS
A new question was introduced about trust in information on new and emerging energy sources:
- NEWTECHTRUST
- The following information source options were included:
- Newspapers or newspaper websites
- TV news such as BBC, ITV, Sky, C4
- Social media (e.g., Facebook, TikTok, Instagram, Twitter, Reddit)
- TV and radio documentaries
- UK Government
- Scientists/Scientific organisations
- Charities, Environmental or Campaign groups
- Other (please specify)
- None of these
- Don’t know
A new demographic variable on self-reported financial hardship was added:
- FINHARD
- The following response options were listed in the questionnaire:
- Living comfortably
- Doing alright
- Just about getting by
- Finding it quite difficult
- Finding it very difficult
- Don’t know
- Prefer not to say
- For reporting purposes these response options were combined to form three summary figures:
- Finding it fine: ‘Living comfortably’ and ‘Doing alright’
- Just about getting by: ‘Just about getting by’
- Finding it difficult: ‘Finding it quite difficult’ and ‘Finding it very difficult’
A new demographic variable on self-reported personal income was added:
- INCOMEBAND
- The following response options were listed in the questionnaire:
- £0-£14,999
- £15,000-£19,999
- £20,000-£29,999
- £30,000-£39,999
- £40,000-£49,999
- £50,000-£59,999
- £60,000-£79,999
- £80,000-£99,999
- £100,000-£149,999
- £150,000 or more
- Don’t know
- Prefer not to say
- Based on the distribution of the data, the income bands were combined into four net income bands:
- £0 - £14,999
- £15,000 - £29,999
- £30,000 - £49,999
- £50,000+
Adapted questions:
Minor wording and/or formatting changes:
- HYDREDKNOW
- SMRKNOW
Addition of new codes:
- SOLWHYNO
- A new code, ‘I’m concerned about the loss in fertile and agricultural land’, was added from the open text data collected in ‘Other reason’.
- SOLARENC
- Two new codes were added: ‘A guarantee scheme to cover any faults or damage (for example, a government backed scheme to cover repairs)’ and ‘Information about trusted, reliable installers’.
Questions moved from the Autumn wave to Spring 2024:
- FRACKKNOW
- FRACKSUPPORT
- SMRKNOW
Questions removed:
The marital status demographic variable was removed as it was not used for weighting or analysis:
- MARSTAT
Summer 2024
New questions:
A new question was added on awareness of engineered greenhouse gas removals:
- GGRKNOW
A new question was added on the perceived importance of different aspects of energy policy:
- ENERG3
- The following options were included:
- The cost of energy
- Having a reliable, uninterrupted supply of energy available
- Energy generated from cleaner, low-carbon sources
- The UK becoming self-sufficient and not buying fuel from other countries
New questions were added on knowledge about, and likelihood of buying, smart appliances. While similar questions had been included in Summer 2022, wording changes do not allow for comparison:
- SMAPPKNOW
- SMAPPLIKWHIT
A further new question was included on the level of comfort with energy supplier involvement in use of smart appliances in the home:
- SMAPPCONTROL2
- The following supplier involvement options were included:
- Collect data on your use of the smart appliance (for example how often the smart appliance is used, when it is used)
- Control the times in which your smart appliance comes on to make use of cheaper periods of electricity (if you give consent)
Adapted questions:
Minor wording and/or formatting changes:
- GENDER
- The open code ‘Prefer to self-describe’ was replaced with ‘Identify in another way’
- CCTRUST
- Examples of social media updated to include TikTok, Instagram, YouTube and Reddit, rather than just Facebook and Twitter
- GOVSUPPORTEN
- Focus changed from ‘in the last year’ to ‘in the last few years’
Removed questions:
Demographic variables associated with occupational status and the coding of NS-SEC were removed from the survey going forwards:
- SUPERVIS
- EMPNO
- SEMPNO
- OCCUPATION
Definitions
For the key terms used within the questionnaires and their corresponding definitions, refer to ‘Appendix A – Definition of terms used in questionnaires’.
Fieldwork
Contact procedures
All sampled addresses are initially sent a letter inviting them to take part in the survey. Letters are sent by 2nd class franked mail in a white C5 window envelope. The envelope has an ‘On His Majesty’s Service’ logo printed on it. As discussed further below, some envelopes also include paper questionnaires, giving the respondent the option either to complete the survey online or by filling in paper questionnaires.
The number of online survey logins provided to each address varied between two and four. Each advance letter included a description of the survey content, which varied between waves as follows:
Winter 2023: The study covers a range of topics including heating in the home, energy tariffs and climate change, and will inform key decisions made by the government and other public sector organisations.
Spring and Summer 2024: The study covers a range of topics including climate change, energy sources, and energy bills, and will inform key decisions made by the government and other public sector organisations.
The letter also contained the following information:
- The URL of the survey website (https://www.patsurvey.co.uk) and details of how to log in to the survey
- A QR code that can be scanned to access the online survey
- Log-in details for the required number of household members (up to four)
- An explanation that participants will receive a £5 gift voucher
- Information about how to contact Verian in case of any queries
- The reverse of the letter featured responses to a series of Frequently Asked Questions[footnote 8]
- Those whose envelopes do not include paper questionnaire(s) are told that they may request paper questionnaires
- Those whose envelopes do include postal questionnaire(s) are told that they may either complete the survey online or by filling in the enclosed paper questionnaire
A privacy notice is also provided for those whose envelopes include a postal questionnaire. For those completing the survey online, the privacy notice can be found by navigating to https://www.patsurvey.co.uk/, clicking ‘Click here to complete the survey’ and viewing the ‘Privacy policy’ hyperlink at the bottom of the page.
For details on past changes to the invitation letters, refer to the Technical Report for Autumn 2022 to Summer 2023 (Waves 5 to 8), under the section ‘Refinements to the invitation letters’ in the Appendix.
Figure 2 summarises the contact design within each stratum, showing the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires included alongside the web survey login information (P). For example, ‘WP’ means an initial push-to-web mailing without any paper questionnaires followed by a second mailing with paper questionnaires included alongside the web survey login information.
The five-week timescale of each wave of the PAT, as well as the available budget, limits the maximum number of mailings to each address to two, a fortnight apart. There was also a limit on the number of mailings that could include a paper questionnaire alternative: paper questionnaires were included in one of the mailings only for sampled addresses where the CACI data indicated that every resident would be aged 65 or older. These addresses comprised 13% of the sampled total.
Figure 2: Data collection design by stratum (Area deprivation quintile group and Expected household age structure)
Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
---|---|---|---|---|---|
All <=35 | WW | WW | WW | WW | WW |
Other | WW | WW | WW | WW | WW |
All >=65 | WP | WP | WP | WP | WP |
A weaker than expected response in the Spring 2024 wave necessitated some revision when the second reserve sample was released. While the initially issued (‘main’) sample and the first reserve sample followed the original plan (see Figure 2), the second reserve sample required slight adjustments, as outlined in Figure 2a. A small proportion of households scheduled to receive paper questionnaires under the contact design (n = 346) were excluded from this reserve because there was insufficient time to post, complete and return the questionnaires before the survey deadline.
Figure 2a: Data collection design by stratum (Spring 2024, second reserve sample issue, n = 3,186)
Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
---|---|---|---|---|---|
All <=35 | W | W | W | W | W |
Other | W | W | W | W | W |
All >=65 | W | W | W | W | W |
Fieldwork performance
Fieldwork dates
Fieldwork for each wave of the survey is run over a period of approximately five weeks. Figure 3 summarises the specific fieldwork dates for each survey wave.
Figure 3: Fieldwork dates
Wave | Fieldwork dates |
---|---|
Winter 2023 | 7 November to 11 December 2023 |
Spring 2024 | 18 March to 22 April 2024 |
Summer 2024 | 11 July to 15 August 2024 |
Fieldwork numbers and response rates
A target of 4,000 completed surveys was set for Winter 2023, with a revised target of 3,250 for each subsequent wave of the PAT due to the new contract starting in 2024. This is sufficient for all whole-population analyses and for most single-dimension subpopulation analyses, and the reduction in precision is relatively modest. Figure 4 shows the maximum confidence intervals for direct estimates from a sample of 3,250 compared with a sample of 4,000, assuming a design factor (deft) of 1.2, the average across waves 1 to 8. There remains some variability in response rate from wave to wave.
Figure 4: Maximum confidence intervals for % estimates based on two sample sizes
Subgroup size (% of full sample) | 4,000 per wave (deft = 1.2) | 3,250 per wave (deft = 1.2) |
---|---|---|
100% (top level) | +/-1.9% pts | +/-2.1% pts |
50% (e.g., by sex) | +/-2.6% pts | +/-2.9% pts |
20% (e.g., by age group) | +/-4.2% pts | +/-4.6% pts |
10% (e.g., by sex/age group) | +/-5.9% pts | +/-6.5% pts |
5% (rare subpopulations) | +/-8.3% pts | +/-9.2% pts |
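For illustration, the Figure 4 values follow from the standard formula for the maximum 95% margin of error of a weighted proportion, deft × 1.96 × √(0.25/n), where n is the number of completes in the (sub)sample. A minimal sketch (ours, for illustration only) that reproduces the table:

```python
import math

def max_margin_of_error(n: float, share: float, deft: float = 1.2) -> float:
    """Maximum 95% margin of error (in percentage points) for a
    proportion estimated from a subsample comprising `share` of the
    n completes. The margin peaks at p = 0.5, hence the 0.25."""
    return deft * 1.96 * math.sqrt(0.25 / (n * share)) * 100

for share in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(f"{share:>4.0%} subsample: "
          f"+/-{max_margin_of_error(4000, share):.1f} pts (4,000 per wave), "
          f"+/-{max_margin_of_error(3250, share):.1f} pts (3,250 per wave)")
```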
Although response rates for the Spring and Summer 2024 waves were slightly lower than expected, this was managed by utilising preselected reserve samples designed for this purpose.
For Spring 2024, fieldwork overlapped with the Good Friday and Easter bank holiday weekend, as well as the Easter school holidays, which may have contributed to the lower response. Similarly, Summer 2024 fieldwork coincided with the summer school holidays, which may have affected response.
As a result of using the sample reserves, both the Spring and Summer 2024 waves exceeded their fieldwork targets, with Spring surpassing the target by 26% and Summer by 12%. In contrast, the Winter 2023 wave achieved 94% of its target sample size.
Figure 5 summarises the sample sizes by data collection method at each wave.
Figure 5: Sample sizes by data collection method
Wave | Total sample sizes (adults aged 16+) | CAWI completes | CAWI completes as percentage of total wave completes | Paper completes | Paper completes as percentage of total wave completes |
---|---|---|---|---|---|
Winter 2023 | 3,743 | 3,081 | 82% | 662 | 18% |
Spring 2024 | 4,087 | 3,543 | 87% | 544 | 13% |
Summer 2024 | 3,644 | 3,129 | 86% | 515 | 14% |
Total | 11,474 | 9,753 | 85% | 1,721 | 15% |
In total, there were 11,474 respondents across all three waves, a conversion rate (responses/issued addresses) of 15%.
This can be converted into an individual-level standardised response rate of 8.7% if it is assumed that (i) 92% of sampled addresses are residential, and (ii) an average of 1.89 adults live in each residential address. These assumptions are well evidenced in general but not known with certainty for the particular sample that was drawn.
Using the same assumptions, the standardised household response rate (at least one response) was 13.4%. On average, 1.37 responses were received from each responding household.
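These headline rates follow arithmetically from the figures above. A worked sketch, using the report’s own assumptions about residential addresses and household size:

```python
# Issued addresses across the three waves, including reserves (footnote 5)
issued = 22_557 + 17_601 + 12_885 + 17_612 + 5_282   # = 75,937
responses = 11_474

residential_rate = 0.92     # assumption (i): share of addresses that are residential
adults_per_address = 1.89   # assumption (ii): average adults (16+) per residential address

conversion_rate = responses / issued
individual_rr = responses / (issued * residential_rate * adults_per_address)

print(f"Conversion rate: {conversion_rate:.1%}")         # ~15.1%
print(f"Individual response rate: {individual_rr:.1%}")  # ~8.7%
```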
Incentives
Each respondent who completes the survey receives a £5 gift voucher incentive. Those who complete a postal questionnaire are mailed a £5 Love2shop voucher. Respondents who complete the survey online are able to claim their voucher via the online incentives platform, serviced by Merit (a third party e-voucher provider), which allows respondents to choose from a range of vouchers.
Survey length
Figure 6 shows the average (median) time taken to complete the survey in each wave, based on those completing the survey online. Timings for those completing the paper version of the survey are not available; however, the questionnaire content for the two data collection methods is largely mirrored and completion lengths are likely to be broadly similar.
Figure 6: Median length of each survey wave for those completing online
Wave | Median survey length |
---|---|
Winter 2023 | 15 minutes and 49 seconds |
Spring 2024 | 13 minutes and 53 seconds |
Summer 2024 | 14 minutes and 41 seconds |
Response burden
The Government Statistical Service (GSS) has a policy of monitoring and reducing statistical survey burden to participants where possible, and the burden imposed should be proportionate to the benefits arising from the use of the statistics.
The response burden of a survey is calculated by multiplying the number of responses to the survey in each wave by the median time spent completing the survey per wave (Figure 6).
The response burden of surveys which fulfilled the quality check standards was estimated at 2,824 hours. This assumes that the median survey completion time for postal questionnaires was the same as the median completion time for online questionnaires. On average per wave, this is a reduction on the first and second years of the survey, when the total response burden was estimated at 4,682 hours and 4,170 hours respectively. This decrease is attributed to the reduced sample size implemented from Spring 2024 onwards as part of the new contract.
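Applying this calculation to the completes in Figure 5 and the median lengths in Figure 6 reproduces the estimate:

```python
# (completes, median length in seconds) per wave, from Figures 5 and 6
waves = {
    "Winter 2023": (3_743, 15 * 60 + 49),
    "Spring 2024": (4_087, 13 * 60 + 53),
    "Summer 2024": (3_644, 14 * 60 + 41),
}

burden_hours = sum(n * seconds for n, seconds in waves.values()) / 3600
print(f"Total response burden: {burden_hours:,.0f} hours")  # ~2,824 hours
```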
Data processing
Data management
Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data were collected via the web script and, as such, were much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.
For the final outputs, both sets of survey data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.
Quality checking
Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure.
Once any structural issues had been corrected, further quality checks were carried out to identify and remove any invalid surveys. To do this, a range of ‘potential invalid survey’ flags were created and applied to the data.
Any cases that were allocated a duplicate flag, a super-speeding flag, an extreme straightlining flag, or a flag which indicates that they were missing the ‘confirmation of accuracy’, were immediately removed.
Any cases allocated three or more of the other flags were also removed. So, for example, a case which had a minimum survey completes flag, plus a missing demographic flag, plus a moderate straightlining flag, would also be removed.
The quality checks are as seen in Figure 7.
Figure 7: Potential invalid survey checks
Type | Process |
---|---|
Duplicate on Individual Serial | Check for duplicate serials. Manually review the flagged cases and decide whether it is a duplicate based on demographics, email addresses used to claim Merit incentives and respondent name. If so, flag for removal. Otherwise attach a new, unique serial. |
Minimum survey completes | Flag households where there are more survey completes than the minimum reported number of people in that household (different respondents from the same household may report a different number of household members). |
Maximum survey completes | Flag households where there are more survey completes than the maximum reported number of people in that household (different respondents from the same household may report a different number of household members). Manually review these flagged households and decide whether there are any duplicates based on demographics, email addresses used to claim Merit incentives and respondent name. Flag any duplicates for removal. |
Super speeding | Allocate a super speeding flag to any survey completes with a length of less than 30% of median time to complete and remove from dataset. |
Moderate Speeding | Allocate a moderate speeding flag to survey completes which took longer than 30% of median time to complete but were still in the lowest 10th percentile of survey length. |
Missing demographic information | Only for PAPI questionnaires. Attach a missing demographic flag if more than one variable is missing from: ageband; gender; numadults; ethnic; and tenure. |
Moderate straightlining of grids | Apply a moderate straightlining flag if more than half of the answered grids have been straightlined (i.e., the same response code is given for each item in the grid). |
Extreme straightlining of grids | Apply an extreme straightlining flag if all answered grids were straightlined and remove from dataset. |
Have not ticked the “confirmation of accuracy” box | Flag for removal if a CAWI respondent has not typed in their name to verify that ‘I confirm that all of my answers were given honestly and represent my own views’. Flag for removal if a PAPI respondent has not signed to verify that ‘I confirm that I answered the questions as accurately as possible and that the answers reflect my own personal views’. |
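Taken together, the removal rules reduce to a simple test per case. The sketch below illustrates the logic (the flag names are ours, not Verian’s variable names):

```python
# Flags that trigger removal on their own
IMMEDIATE_REMOVAL = {
    "duplicate", "super_speeding", "extreme_straightlining",
    "missing_confirmation_of_accuracy",
}
# Flags that only count towards the three-flag threshold
CUMULATIVE = {
    "minimum_survey_completes", "maximum_survey_completes",
    "moderate_speeding", "missing_demographics", "moderate_straightlining",
}

def is_invalid(flags: set[str]) -> bool:
    """A case is removed if it carries any immediate-removal flag,
    or three or more of the other flags."""
    return bool(flags & IMMEDIATE_REMOVAL) or len(flags & CUMULATIVE) >= 3

# The worked example from the text: three cumulative flags -> removed
print(is_invalid({"minimum_survey_completes", "missing_demographics",
                  "moderate_straightlining"}))  # True
```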
The following number of invalid cases was identified in each survey wave:
- Winter 2023: 245 invalid cases (6.1% of all cases)
- Spring 2024: 318 invalid cases (7.2% of all cases)
- Summer 2024: 254 invalid cases (6.5% of all cases)
Data checks and edits
Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the right questions had been answered according to questionnaire routing. Unless a programming error has been made, this is correct for all online completes, as routing is programmed into the scripting software. However, data edits were required for paper completes. Data is also checked against the raw topline data outputs, and further checks verify that any weighting has been correctly applied.
There were three main types of data edits for paper questionnaire data:
- If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was allocated a ‘SYSMIS’ value.
- If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-4: Not answered (Paper)”.
- If a paper questionnaire respondent selected multiple responses to a single-coded question, their answers to that question were excluded from the data and they were instead allocated a response in the data of “-5: Multiple options chosen (Paper)”.
Other minor edits were made on a question-specific basis, to ensure that there were no mutually exclusive combinations of responses for paper completes (for example, ‘none of these’ being recorded alongside a specific response code).
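As an illustration only, these edits could be applied to a single-coded question roughly as follows (pandas, with illustrative column names; the published datasets follow SPSS conventions):

```python
import numpy as np
import pandas as pd

NOT_ANSWERED = -4   # "Not answered (Paper)"
MULTI_CHOSEN = -5   # "Multiple options chosen (Paper)"

def edit_paper_question(df: pd.DataFrame, q: str,
                        routed_in: pd.Series,
                        multi_marked: pd.Series) -> pd.DataFrame:
    """Apply the three paper-questionnaire edits to question `q`.
    `routed_in` is True where the respondent should have answered;
    `multi_marked` is True where more than one box was ticked."""
    df = df.copy()
    df.loc[~routed_in, q] = np.nan                          # edit 1: SYSMIS
    df.loc[routed_in & multi_marked, q] = MULTI_CHOSEN      # edit 3: -5
    df.loc[routed_in & ~multi_marked & df[q].isna(), q] = NOT_ANSWERED  # edit 2: -4
    return df
```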
Coding
Post-survey completion coding was undertaken by the Verian coding department. The coding department coded any verbatim responses recorded in ‘other specify’ questions.
If the open-ended response corresponded to one of the pre-coded categories for a given question, the coding team reallocated the open-ended response to the relevant pre-coded category and removed it from the ‘other’ category.
A new response code is added to the reported data when at least 1% of respondents in a wave provide open-ended answers that can be meaningfully grouped together (approximately 40 responses for a target wave of 4,000 or 33 for a target wave of 3,250). This threshold was not reached in the Winter 2023 and Summer 2024 waves, so no new codes were added. However, in Spring 2024, two new codes were introduced based on the open text responses (see the new additional codes in ‘Spring 2024’).
Data outputs
Once the checks were complete, a final SPSS data file was created that only contained valid survey completes and edited data. Individual SPSS data files were created for each of the three PAT waves from Winter 2023 to Summer 2024.
Based on these SPSS datasets, data tables in an Excel format were produced for each PAT wave. There are no combined wave databases for the current PAT series.
Key sub-group reporting variables
The variables which are the main focus of sub-group reporting in the PAT survey series cover a range of demographic and profiling measures. These are created using a consistent specification in each wave, as outlined in ‘Appendix B – Sub-group reporting variable specification’.
Weighting
PAT data was weighted separately for each survey wave.
The PAT is largely used to collect data at the person-level but there are a small number of questions where the respondent is asked about the household as a whole or is asked to give an opinion on a household-level matter. The details of these two types of weights are provided below.
Individual weight
A three-step weighting process was used to compensate for differences in both sampling probability and response probability:
Step 1: An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.
Step 2: The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.
Step 3: The product of the first two steps was used as the input for the final step to calibrate the sample. The responding sample was calibrated to the contemporary Labour Force Survey (LFS)[footnote 9] with respect to (i) sex by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) region, (vi) employment status by age, (vii) the number of co-resident adults, and (viii) internet use by age[footnote 10].
The statistical efficiency of the individual-level weights was 69% (Winter 2023), 64% (Spring 2024), and 62% (Summer 2024)[footnote 11].
It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the weighting only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationship between the variables.
Finally, because the new methodology employs a random sampling technique, the weighting procedure is different from those used for the face-to-face surveys (up to wave 33) and online panel surveys (waves 33-37) in the original PAT series. However, the objective to eliminate sample bias was the same.
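Calibration of this kind is typically implemented with a raking (iterative proportional fitting) algorithm, as referenced above. A generic sketch of raking, together with the weighting-efficiency measure reported for each wave (an illustration, not Verian’s production weighting code):

```python
import numpy as np
import pandas as pd

def rake(df: pd.DataFrame, margins: dict[str, pd.Series],
         weight: str = "weight", tol: float = 1e-6,
         max_iter: int = 100) -> pd.DataFrame:
    """Adjust weights until the weighted sample margins of each variable
    in `margins` match the population totals (e.g. LFS benchmarks)."""
    df = df.copy()
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in margins.items():
            current = df.groupby(var)[weight].sum()
            factor = target / current            # per-category adjustment
            df[weight] *= df[var].map(factor)
            max_shift = max(max_shift, float(abs(factor - 1).max()))
        if max_shift < tol:                      # all margins matched
            break
    return df

def weighting_efficiency(w: np.ndarray) -> float:
    """Kish-style efficiency: effective sample size (sum w)^2 / sum(w^2),
    expressed as a share of the actual number of respondents."""
    return float(w.sum() ** 2 / (len(w) * (w ** 2).sum()))
```

In this framing, the product of the step 1 and step 2 weights would be supplied as the starting `weight` column, and `margins` would hold population totals for the eight dimensions listed in step 3.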
Household weight
The household weight is used for questions which are best interpreted at a household level, for example factual questions such as main method of heating the home, and whether the household has a smart meter.
The full list of household-weighted variables is:
- HEATMAIN
- SMARTMET
- BILLPAY
Note that household weights were not used in the Autumn 2022 and Spring 2023 surveys. The COOLMAIN variable was household-weighted in Winter 2021, but from Winter 2022 onwards it was person-weighted.
To analyse household-level survey data, it makes sense to convert the weighted sample of adults aged 16+ into a weighted sample of households.
This was achieved in two steps:
Step 1: The person-level weight of each respondent was divided by the reported number of adults aged 16+ in that respondent’s household (that is, the number of survey-eligible residents). This provisional weight was used as the input weight for step 2.
Step 2: A household-level calibration procedure was carried out using the contemporary LFS household-level dataset as the benchmark. Household totals were obtained for (i) housing tenure, (ii) region, (iii) the number of adults aged 16+ in the household, and (iv) the number of children aged under 16 in the household.
This approach to constructing the household-level weight has the advantage of making use of data from all respondents. The unweighted base is therefore the same for both person-level and household-level estimates. However, multiple respondents reporting about the same household are likely to provide very similar answers. The practical consequence is that the statistically effective sample size for household-level estimates will be smaller than for person-level estimates, even if the unweighted base is the same.
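As a minimal sketch, step 1 is a single division (illustrative column names); step 2 can then reuse a calibration routine such as the raking sketch above, with the LFS household-level totals as targets:

```python
import pandas as pd

def provisional_household_weight(df: pd.DataFrame) -> pd.Series:
    """Step 1: divide each respondent's person-level weight by the
    reported number of adults aged 16+ in their household, converting
    the weighted sample of adults into a weighted sample of households."""
    return df["person_weight"] / df["adults_16plus"]
```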
Reporting and data
Data delivery
Any respondent-level data is transmitted using Kiteworks software, which provides a highly secure and trackable means of transferring sensitive data. Kiteworks employs AES-256 encryption at rest and TLS 1.2 encryption when the data is in transit.
Reporting outputs
The following reporting outputs were published for the PAT waves from Winter 2023 to Summer 2024:
- Individual topic reports covering results grouped thematically for each wave, for example, ‘Net Zero and Climate Change’ or ‘Energy Infrastructure and Energy Sources’, are available in PDF format. Starting from Spring 2024, these reports are also available in HTML format, in line with DESNZ’s commitment to improving accessibility.
- A technical overview of the methodology for each wave is available in PDF format, with an HTML version introduced from Spring 2024 onwards.
- A single version of the questionnaire that outlines the content for both online and paper questionnaires for each wave (available in PDF format).
- Tabulations showing time series for questions asked triannually (formerly quarterly), biannually and annually, where these questions have been included more than once in the survey (available in XLSX format).
- Tabulations of key questions from each wave, cross-tabulated by gender, age, highest qualification, and region (available in XLSX format).
Anonymised datasets are in the process of being deposited to the Integrated Data Service[footnote 12].
Significance testing
Significance testing is a statistical process which shows whether differences between sub-samples, or over time, are likely to be real (as opposed to being an artefact of the sampling variability inherent in any sample survey).
Significance tests were applied throughout the reporting of the current PAT series and the commentary in the published reports focused on those differences which were statistically significant at the 95% confidence level.
The significance tests for any comparisons (for example, comparing response data for men with response data for women) were automatically conducted within the WinCross software package which Verian uses to produce data tabulations.
The software uses a column proportions test which looks at the rows of a table independently and compares pairs of columns, testing whether the proportion of respondents in one column is significantly different from the proportion in the other column.
The column proportions test is performed separately for each relevant pair of columns within each relevant row. The tests were conducted at the 95% confidence level and the weighted base size and unrounded percentages were used as the input values for within-wave significance testing.
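A column proportions test of this kind amounts to a two-sample z-test on the column percentages. The sketch below illustrates the generic pooled-variance form (not the WinCross implementation), taking weighted bases as inputs as described above:

```python
import math

def column_proportions_test(p1: float, n1: float, p2: float, n2: float,
                            z_crit: float = 1.96) -> bool:
    """Two-sided test at the 95% level of whether two column proportions
    differ, using a pooled variance estimate. n1 and n2 are the
    (weighted) base sizes of the two columns being compared."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > z_crit

# e.g. 52% of a weighted base of 1,600 men vs 47% of 1,650 women
print(column_proportions_test(0.52, 1600, 0.47, 1650))  # True: significant
```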
Appendix A – Definition of terms used in questionnaires
The table below sets out the key terms used within the questionnaires and gives a brief definition for each term.
Term | Definition |
---|---|
Carbon capture and storage (CCS) | Carbon capture and storage is a technology that stops greenhouse gases entering the atmosphere. It typically involves capturing carbon dioxide (CO2) emissions from power stations or industrial facilities where emissions are high. The CO2 is then piped to offshore underground storage sites, where it can be safely and permanently stored. |
Climate change / Global warming | Long-term shift in the planet’s weather patterns and rising average global temperatures. |
Energy infrastructure | A term used to capture a range of different energy sources that are covered by the survey and the interconnections between them. This includes a range of renewable sources (on-shore and off-shore wind, solar, wave and tidal, and biomass), nuclear, shale gas, and carbon capture and storage as well as the pipeline and other interconnectors between them. |
Energy Performance Certificate (EPC) | An Energy Performance Certificate (EPC) measures the energy efficiency of a property and is needed whenever a property is built, sold or rented. The certificate includes recommendations on ways to improve the home’s energy efficiency. |
Energy tariffs | The pricing plan for energy used (e.g., for electricity and gas). |
Energy security | Energy security relates to the uninterrupted availability of energy sources at an affordable price and the associated impacts of these factors on national security. |
Fusion Energy | Fusion energy is an experimental technology that works by fusing together atoms in order to release energy. The UK is exploring whether this technology could be used to generate zero carbon electricity. |
Greenhouse Gas Removal (GGR) | These are methods that remove greenhouse gases such as carbon dioxide from the atmosphere to help tackle climate change. The purpose of GGRs is to help achieve net zero in the UK by 2050, balancing out emissions from industries such as air travel and farming, where eliminating greenhouse gas emissions will be more challenging. GGRs can be based on natural approaches. However, they can also be based on engineered approaches. Engineered approaches use technology to remove greenhouse gases from the environment and store them permanently, for example offshore in underground storage sites. |
Hydrogen | Hydrogen is used as a fuel in some industrial processes. It is not naturally available, which means it needs to be produced from other sources, such as natural gas, nuclear power, or renewable power like solar and wind, to be used as a fuel. When produced in an environmentally friendly way, hydrogen can help reduce the carbon emissions in industries, power generation, heavy transport (such as buses, lorries, shipping and aircraft) and potentially home heating. |
Low carbon heating systems | Heating systems that use energy from low-carbon alternatives such as hydrogen, the sun, or heat pumps which draw heat from the ground, air or water to heat homes. |
Net Zero | Net Zero means that the UK’s total greenhouse gas (GHG) emissions would be equal to or less than the emissions the UK removed from the environment. This can be achieved by a combination of emission reduction and emission removal. A new Net Zero target was announced by the Government in June 2019, which requires the UK to bring all greenhouse gas emissions to Net Zero by 2050. |
Nuclear Energy | Nuclear power is the use of nuclear reactions to produce electricity. This source of energy can be produced in two ways: fission – when nuclei of atoms split into several parts; or fusion – when nuclei fuse together. Fission is the process which occurs in nuclear power stations across the UK. Fusion is an experimental technology which the UK is exploring as a possibility to produce zero carbon electricity. |
Renewable energy | Renewable energy technologies use natural energy resources that are constantly replaced and never run out to make electricity. Fuel sources include wind, wave, biomass and solar. |
Shale gas and fracking | Shale gas is natural gas found in shale, a non-porous rock which does not allow the gas to escape. Hydraulic fracturing or “fracking” is a process of pumping water at high pressure into shale to create narrow fractures which allow the gas to be released and captured. The gas can then be used for electricity and heating. |
Small Modular Reactors | These are a type of nuclear fission reactor, similar to existing nuclear power stations, but on a smaller scale. They can be used for electricity generation, to provide industry with heat and power, or to provide energy to UK communities not connected to the national gas grid. |
Smart appliances | Smart appliances are normal household appliances that have built in features enabling them to connect to the internet. This allows them to be controlled and monitored remotely using a smart phone or tablet. Smart appliances can be scheduled to come on at certain times. They can also be linked to smart meters to come on during periods of low electricity prices. This can help to lower customer bills and also manage demand on the electricity grid. Examples of smart appliances include: smart kitchen appliances, smart thermostats to control heating, and other appliances such as smart electric vehicle chargers. |
Smart meters | Smart meters are a type of gas and/or electricity meter which automatically send meter readings to your energy supplier and usually come with a monitor or screen (digital in-home display), that provides information about your energy usage. Smart meters also allow prepayment customers to top up their credit online and over the phone. |
Time of use tariffs | Time-of-use tariffs are energy pricing structures offered by some suppliers that charge lower “off-peak” rates during times of lower demand (typically at night or certain times of the day) and higher “peak” rates during periods of high demand. These tariffs can help consumers reduce their electricity bills if they are able to adjust their energy usage to align with the cheaper off-peak periods. |
Appendix B – Sub-group reporting variable specification
Top level grouping | Detailed grouping | Definition |
---|---|---|
Gender | Male | GENDER=1 |
Gender | Female | GENDER=2 |
Gender | Prefer to self-describe | GENDER=3 |
Age | 16 to 24 | AGE>=16 AND <=24 OR AGEBAND=1,2 |
Age | 25 to 34 | AGE>=25 AND <=34 OR AGEBAND=3 |
Age | 35 to 44 | AGE>=35 AND <=44 OR AGEBAND=4 |
Age | 45 to 54 | AGE>=45 AND <=54 OR AGEBAND=5 |
Age | 55 to 64 | AGE >=55 AND <=64 OR AGEBAND=6 |
Age | 65+ | AGE>=65 OR AGEBAND=7,8 |
Highest qualification | Degree level or above | HIGHQUAL=1 |
Highest qualification | Another kind of qualification | HIGHQUAL=2 |
Highest qualification | No qualifications | HIGHQUAL=3 |
Tenure | Owner | TENURE=1,2,3 |
Tenure | Renter | TENURE=4 |
Rental type | Social renter | LANDLORD=1,2 |
Rental type | Private renter | LANDLORD=7 |
Rental type | Other type of renter | LANDLORD=3,4,5,6 |
Property type | A house or bungalow | ACCOMTYPE_COMB = 1,2,3 |
Property type | Flat | ACCOMTYPE_COMB = 4,5,6,7 |
Property type | Other | ACCOMTYPE_COMB = 8,9 |
GOR | North East | GOR=1 |
GOR | North West | GOR=2 |
GOR | Yorkshire & Humber | GOR=3 |
GOR | East Midlands | GOR=4 |
GOR | West Midlands | GOR=5 |
GOR | East of England | GOR=6 |
GOR | London | GOR=7 |
GOR | South East | GOR=8 |
GOR | South West | GOR=9 |
GOR | Wales | GOR=10 |
GOR | Scotland | GOR=11 |
GOR | Northern Ireland | GOR=12 |
Number of adults in household | 1 | NUMADULTS=1 |
Number of adults in household | 2 | NUMADULTS=2 |
Number of adults in household | 3+ | NUMADULTS>=3 |
Number of children in household | None | CHILDHH=1 |
Number of children in household | 1 | CHILDHH=2 |
Number of children in household | 2+ | CHILDHH>=3 |
Household decision maker | Respondent | HHRESP=1 OR NUMADULTS=1 |
Household decision maker | Joint | HHRESP=3 |
Household decision maker | Someone else | HHRESP=2 |
Current working status | Working full time (30+ hours a week) | WORKSTAT=1 |
Current working status | Working part time (less than 30 hours a week) | WORKSTAT=2 |
Current working status | Unemployed and available for work | WORKSTAT=6 |
Current working status | Wholly retired from work | WORKSTAT=7 |
Current working status | Full-time education at school, college or university | WORKSTAT=8 |
Current working status | Looking after home or family | WORKSTAT=9 |
Current working status | Permanently sick or disabled | WORKSTAT=10 |
Current working status | Other | WORKSTAT=3,5,11 |
Ethnicity | White | ETHNIC=1,2,3,4 |
Ethnicity | Mixed or multiple ethnic groups | ETHNIC=5,6,7,8 |
Ethnicity | Asian or Asian British | ETHNIC=9,10,11,12,13 |
Ethnicity | Black or Black British | ETHNIC=14,15,16 |
Ethnicity | Other ethnic group | ETHNIC=17,18 |
NS-SEC* | Managerial, administrative, and professional occupations | (OCCUPATION=1,8 AND EMPSTATUS=1,2,3,4,5,6,7) OR (OCCUPATION=2 AND EMPSTATUS=1,4,5,6) OR (OCCUPATION=3,7 AND EMPSTATUS=1,4,5,6,7) OR (OCCUPATION=4 AND EMPSTATUS=1,4,5) OR (OCCUPATION=5,6 AND EMPSTATUS=1,4,5) |
NS-SEC* | Intermediate occupations | (OCCUPATION=2 AND EMPSTATUS=7) |
NS-SEC* | Small employers and own account workers | (OCCUPATION=2 AND EMPSTATUS=2,3) OR (OCCUPATION=3,7 AND EMPSTATUS=2,3) OR (OCCUPATION=4 AND EMPSTATUS=2,3) OR (OCCUPATION=5,6 AND EMPSTATUS=2,3) |
NS-SEC* | Lower supervisory and technical occupations | (OCCUPATION=4 AND EMPSTATUS=6,7) OR (OCCUPATION=5,6 AND EMPSTATUS=6) |
NS-SEC* | Semi-routine and routine occupations | (OCCUPATION=5,6 AND EMPSTATUS=7) |
NS-SEC* | Never worked | JOBEVER=2 |
Mode | CAWI | CAWI_PAPI = 1 |
Mode | PAPI | CAWI_PAPI = 2 |
Urban/Rural Classification | Urban | Urban_Rural_Class=1 |
Urban/Rural Classification | Rural | Urban_Rural_Class=2 |
Coastal/Inland Classification | Coastal | Coastal_Inland=1 |
Coastal/Inland Classification | Coastal Adjacent | Coastal_Inland=2 |
Coastal/Inland Classification | Inland | Coastal_Inland=3 |
IMD Decile | 1 | IMDDecile=1 |
IMD Decile | 2 | IMDDecile=2 |
IMD Decile | 3 | IMDDecile=3 |
IMD Decile | 4 | IMDDecile=4 |
IMD Decile | 5 | IMDDecile=5 |
IMD Decile | 6 | IMDDecile=6 |
IMD Decile | 7 | IMDDecile=7 |
IMD Decile | 8 | IMDDecile=8 |
IMD Decile | 9 | IMDDecile=9 |
IMD Decile | 10 | IMDDecile=10 |
Annual Personal Income | £0 - £14,999 | INCOMEBAND=1 |
Annual Personal Income | £15,000 - £29,999 | INCOMEBAND=2,3 |
Annual Personal Income | £30,000 - £49,999 | INCOMEBAND=4,5 |
Annual Personal Income | £50,000+ | INCOMEBAND=6,7,8,9,10 |
Financial Hardship | Finding it fine | FINHARD=1,2 |
Financial Hardship | Just about getting by | FINHARD=3 |
Financial Hardship | Finding it difficult | FINHARD=4,5 |
*Note: The NS-SEC variable was discontinued from Summer 2024 onwards as the employment-related questions were reduced in the survey to lessen the burden on respondents. These questions were replaced with more relevant questions on topics including financial hardship and income.
The IMD Decile, Annual Personal Income and Financial Hardship subgroups were added in Spring 2024 following the introduction of the INCOMEBAND and FINHARD demographic variables, and the Coastal/Inland classification was added in Summer 2024.
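The definitions above are specified as conditions on the survey variables. As a minimal sketch of how a few of them might be implemented, assuming a respondent-level pandas data frame whose columns carry the variable names used in the table (the data frame and the derived column names are illustrative, not part of the published data):

```python
import numpy as np
import pandas as pd

def derive_subgroups(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative derivation of a few Appendix B reporting sub-groups."""
    out = df.copy()

    # Age: a respondent qualifies if either the continuous AGE or the
    # banded AGEBAND variable falls in the group's range.
    age_groups = [
        (16, 24, [1, 2], "16 to 24"),
        (25, 34, [3], "25 to 34"),
        (35, 44, [4], "35 to 44"),
        (45, 54, [5], "45 to 54"),
        (55, 64, [6], "55 to 64"),
        (65, np.inf, [7, 8], "65+"),
    ]
    out["age_group"] = pd.NA
    for lo, hi, bands, label in age_groups:
        mask = out["AGE"].between(lo, hi) | out["AGEBAND"].isin(bands)
        out.loc[mask, "age_group"] = label

    # Tenure: codes 1 to 3 are owners, code 4 is renters.
    out["tenure_group"] = np.select(
        [out["TENURE"].isin([1, 2, 3]), out["TENURE"] == 4],
        ["Owner", "Renter"],
        default=None,
    )

    # Household decision maker: sole adults count as the respondent.
    out["decision_maker"] = np.select(
        [(out["HHRESP"] == 1) | (out["NUMADULTS"] == 1),
         out["HHRESP"] == 3,
         out["HHRESP"] == 2],
        ["Respondent", "Joint", "Someone else"],
        default=None,
    )
    return out
```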
Appendix C – Research and statistical term definitions
The table below sets out the key terms used within this report and gives a brief definition for each term.
Term | Definition |
---|---|
ABOS (Address Based Online Surveying) | A ‘push to web’ survey methodology where letters are sent to a sample of home addresses inviting household members to complete the survey online. However, householders are also given the option to complete a paper version of the questionnaire, which enables participation among the offline population. |
Base | The number of people answering a survey question. In the PAT, the base varies slightly between questions asked of equivalent subgroups. This is a consequence of the ABOS methodology, which mixes online and paper responses: on paper it is possible to leave a question blank or to give multiple responses to a single-coded question, and in these situations the answers are removed from the overall base (see the sketch after this table). |
CAWI | Computer-assisted web interviewing. |
Fieldwork | The period of time over which data are collected for a survey (whether by face-to-face interviews, online completions or paper-based questionnaire completions). |
NS-SEC* | National Statistics Socio-Economic Classification. The PAT survey uses the self-coded method of deriving NS-SEC, which classifies people into six categories: managerial, administrative and professional occupations; intermediate occupations; small employers and own account workers; lower supervisory and technical occupations; semi-routine and routine occupations; never worked. |
Omnibus survey | A method of quantitative survey research where data on a variety of subjects submitted by a range of funders is collected during the same interview. |
Privacy notices | Information provided by a service provider to inform users how they will use their personal information. |
Random location quota sampling | A hybrid form of sampling that combines elements of quota sampling and random probability sampling. The principal distinguishing characteristic of random location sampling is that interviewers are given very little choice in the selection of respondents. A random sample of geographical units is drawn (usually postcode sectors) and respondents in each interviewer assignment are then drawn from a small set of homogeneous streets within these. Quotas are set in terms of characteristics which are known to have a bearing on individuals’ probabilities of being at home and so available for interview. Rules are given which govern the distribution, spacing and timing of interviews. |
Representativeness | Similarity of the sample profile to benchmark population statistics, such as the Office for National Statistics mid-year population estimates. |
Sample size | The number of people included in the sample (a subset of the population). |
Statistical significance | The outcome of a statistical test used to determine whether relationships observed between two survey variables are likely to exist in the population from which the sample is drawn. We only report on findings that are statistically significant at the 95% level (see the sketch after this table). |
*Note: The NS-SEC variable was discontinued from Summer 2024 onwards as the employment-related questions were reduced in the survey to lessen the burden on respondents. These questions were replaced with more relevant questions on topics including financial hardship and income.
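To illustrate the ‘Base’ and ‘Statistical significance’ definitions above, a minimal sketch is given below. The column names, the multi-code editing flag and the unweighted z-test are assumptions made for the example: in practice the published estimates are weighted, and significance testing reflects the survey’s weighting and design.

```python
import math
import pandas as pd

def valid_base(df: pd.DataFrame, question: str) -> int:
    """Count respondents in the base for a single-coded question.

    Blanks (item non-response) and paper returns flagged as multi-coded
    are removed, mirroring the 'Base' definition above. The editing flag
    column name (question + '_multi') is hypothetical.
    """
    answered = df[question].notna()
    multi = df[f"{question}_multi"].fillna(False).astype(bool)
    return int((answered & ~multi).sum())

def significant_at_95(p1: float, n1: int, p2: float, n2: int) -> bool:
    """Textbook two-sample z-test for a difference in proportions.

    Returns True when |z| exceeds 1.96, i.e. significance at the 95% level.
    This simplified version ignores weighting and design effects.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > 1.96
```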
Contact information
Responsible statistician: Jack Gibson
Email: PAT@energysecurity.gov.uk
Media enquiries: 020 7215 1000; [email protected]
Public enquiries: 020 7215 5000
Footnotes

1. Following Kantar Public’s divestment from the Kantar Group and subsequent rebranding to Verian, all references to Kantar Public, including logos, were updated to Verian in Spring 2024. This change was reflected across the online questionnaire, paper questionnaire, survey website, and invitation and reminder letters. In this technical report, the agency is referred to as Verian. ↩

2. Fieldwork in March 2020 was conducted in two stages. The survey was initially run on the Kantar Public face-to-face Omnibus but stopped early due to the outbreak of COVID-19 and the start of the lockdown. The findings, based on a truncated face-to-face sample, were published in May 2020. ↩

3. https://www.gov.uk/government/statistics/beis-public-attitudes-tracker-wave-33. The remainder of Wave 33 was conducted on the Kantar Public online omnibus to trial the online omnibus approach and to compare the results with the face-to-face survey. ↩

4. For the list of measurement effects, please refer to the Autumn 2022 to Summer 2023 technical report. ↩

5. See the ‘Fieldwork numbers and response rates’ section. ↩

6. In addition, higher sampling fractions were applied to the three least populous ITL1 regions (North East England, Wales and Northern Ireland) so that the expected number of completed questionnaires was at least 220 in each one. ↩

7. https://analysisfunction.civilservice.gov.uk/government-statistical-service-and-statistician-group/gss-support/gss-harmonisation-support/harmonised-standards-and-guidance/ ↩

8. The respondent FAQs, provided with each letter, remained consistent across all three waves. The only updates were changes made from Spring 2024 onwards, following Kantar Public’s divestment from the Kantar Group and rebranding to Verian. Specifically, the name was updated from ‘Kantar Public’ to ‘Verian’ under the ‘Who is conducting the survey?’ section, and the helpline email address was changed from ‘[email protected]’ to ‘[email protected]’ under the ‘What do you need to do?’ section. To view the website version of the FAQs, visit https://www.patsurvey.co.uk/faq.html. ↩

9. October-December 2023 for the Winter 2023 survey; October-December 2023 for the Spring 2024 survey; and April-June 2024 for the Summer 2024 survey. ↩

10. Internet use by age was based on LFS data from January-March 2021, as this data is only collected in these months. This release has now been discontinued, so the 2021 data was retained for all three surveys. ↩

11. The statistical efficiency is the effective sample size as a proportion of the actual sample size, taking only weighting into account (i.e., ignoring the effects of sample stratification and clustering by household). A worked sketch follows these footnotes. ↩

12. The Integrated Data Service (IDS) provides access to data sets from a range of sources, across government departments and devolved administrations, presented alongside analytical and visualisation tools in a secure multi-cloud infrastructure. Link to the IDS: https://integrateddataservice.gov.uk/. ↩
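The statistical efficiency described in footnote 11 can be computed from the weights alone. The sketch below uses the standard Kish approximation to the effective sample size, which is assumed here as the computation method; the function name is illustrative.

```python
import numpy as np

def statistical_efficiency(weights) -> float:
    """Effective sample size as a proportion of the actual sample size.

    Kish approximation: n_eff = (sum of weights)^2 / (sum of squared weights),
    so efficiency = n_eff / n. Equal weights give 1.0; variable weights
    push the value below 1.0.
    """
    w = np.asarray(weights, dtype=float)
    n_eff = w.sum() ** 2 / (w ** 2).sum()
    return n_eff / w.size

# Example: four equal weights -> 1.0; unequal weights -> below 1.0.
print(statistical_efficiency([1, 1, 1, 1]))   # 1.0
print(statistical_efficiency([1, 1, 1, 3]))   # 0.75
```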