Part five: Five things you didn't know about...
Our final selection of posts from across the Analysis Function, each sharing five facts about a topic or project
As part of Analysis in Government month, this is the final edition in our series of ‘Five things you didn’t know about…’ blog posts. Analysis in government is being used more than ever in policy, decision making and the media. This series of blog posts shares, showcases and celebrates the vast variety of professions and work within the Analysis Function.
Jump to one of our posts:
- Five things you didn’t know about user engagement
- Five things you didn’t know about cognitive interviewing
- Five things you didn’t know about data science
- Five things you didn’t know about the work of the Economic and Social Research Council (ESRC)
- Five things you didn’t know about behavioural science
Five things you didn’t know about user engagement
By Tegwen Green and Nancy Singh, Office for National Statistics
For the last fifteen months Nancy and I have been working with users and producers of statistics to develop a user engagement strategy for statistics.
The strategy sets out a plan of action for building a more meaningful and sustained dialogue between producers, users and potential users of statistics. Its principles are widely relevant to anyone conducting analysis across different professions.
In the midst of a pandemic, we want to spread the word about the value of user engagement more than ever. We’ve highlighted five important things we want you to remember about user engagement. You might know some of them already – if so, then perhaps share them with your colleagues as well!
1. You’re doing a lot of user engagement already
We just want to remind you all that every time you ask for an opinion or have an informal chat about your work, you are engaging with a user or potential user of your analysis.
Let’s get better at recognising and acknowledging the good behaviours and actions that we already display. Let’s build on those to create even more opportunities to develop regular, ongoing, two-way dialogue with a wider range of people.
Engagement is rewarding and it can be anything from an email exchange to a full-blown consultation exercise. If you have an engagement success story to share email [email protected] so we can help showcase your experience and inspire others to follow your lead.
2. One size doesn’t fit all
This may seem obvious, but we really do need to tailor our engagement activities to suit the intended audience, much as we would do with any other sort of communication. We have such a huge range of users and potential users of our products and services, and we probably don’t even know who they all are.
This is where techniques such as audience segmentation and the use of user personas can help us categorise and gain insight into what users of our statistics want and need from us. Through this understanding we can tailor our communications and our statistics to ensure that as many users as possible are aware of our statistics and engage with them and us.
3. No one does it perfectly
User engagement tends to be most effective when it has multiple strands and when your engagement happens as part of your organisation’s wider engagement activities.
We don’t need a few people doing user engagement perfectly, we just need lots of people trying their best to do it well and working together to make it happen as part of business as usual.
Let’s learn from each other’s experiences and share all the ‘good stuff’. Why not take a look at these success stories and tell us about the successes you’ve had engaging with users, so we can showcase them to inspire and help others.
4. Help is at hand
You’re not alone – as ever, your colleagues across government are here to support you in your endeavours. For example, you can:
- link up with the cross-government user engagement champions – a vibrant and enthusiastic network of user engagement enthusiasts
- identify your engagement challenges for the new User Support and Engagement Resource (USER) hub to help address, or ask the Good Practice Team for practical advice
- post a query on the government-wide User Engagement Slack channel
- reach out to your organisation’s stakeholder engagement or communications teams to identify new contacts and explore new channels of engagement
5. We have a vision for the future of user engagement
Nancy and I launched the new four-year user engagement strategy for statistics on 22 February 2021. The strategy has:
- a radical vision – for user engagement to be second nature and built into our organisations’ wider activities
- three ambitious goals, the ‘3Cs’ – centred around facilitating collaboration, building capability and encouraging a culture change
Five things you didn’t know about cognitive interviewing
By Meg Pryor, Office for National Statistics
Cognitive interviewing is a method which is used to see how individuals process and respond to survey questions, allowing us to investigate what the respondent is thinking about when they’re answering our surveys. I have been doing cognitive interviewing for three years, so this blog is based around the things I have learned while doing so, which I didn’t know before! As a note, the cognitive interviewing I’m going to discuss here is not the same as cognitive interviewing that happens in police settings.
1. This is usual practice within question and questionnaire design
Before starting at the Office for National Statistics (ONS) I didn’t know that cognitive interviewing was a process within questionnaire and survey design – in fact, I had not heard of it at all during my time in academia! Since joining, I now know that it is an integral part of building and designing a survey, and that it allows us as researchers to see whether or not a question is clear to a respondent. It also allows us to identify risks to data quality that arise because respondents may interpret a question to mean something different from what the question designer intended. This could come in a variety of forms:
- Respondents may answer the question incorrectly because they do not understand a term.
- Respondents may be misremembering an event or the frequency of events.
- Respondents may be deciding to estimate rather than calculate answers.
- Respondents may say something they feel is untrue because it makes them ‘look better’, especially in interviewer-led modes.
- Respondents may get frustrated and stop completing the survey entirely (if it is voluntary).
Conducting cognitive interviews allows us to better understand the data we’re collecting and helps to identify these risks early. It therefore allows us to think about what we can do to mitigate these risks and make the answering process easier for respondents. This could be redesigning the question if possible or incorporating guidance.
2. It can be done remotely
We’re a year on from the start of the coronavirus (COVID-19) pandemic, and it has meant we have had to change the ways in which we work. This has included research. While previously I had conducted cognitive interviews face-to-face, this was no longer possible. But that does not mean that cognitive interviewing can’t go ahead, it just has to go ahead a little differently.
I recently co-authored a blog post on the GSS website about remote testing in practice and how we experienced it. For cognitive interviewing specifically, the key points are to keep sessions to no more than 60 minutes (as opposed to 60 to 90 minutes face-to-face), as it is harder to hold your participant’s concentration remotely. Also, think carefully about what software and technology both you and the participant have access to. Lastly, as with face-to-face research, think about the ethics! Use the Government Social Research (GSR) Professional Guidance to help with this.
All in all, don’t be put off cognitive interviewing because of the pandemic. Doing it remotely has many benefits! There is also guidance on the GSS website to help you get started.
3. It can be emotive
As I alluded to, ethics are very important to consider, especially when cognitive interviewing. I have had interviews where the person I’m speaking to has started to cry and become upset, so I stopped the session and provided aftercare. It’s important to remember that survey questions still have the potential to be emotive if they are sensitive, even if at first they don’t appear so.
Cognitive interviewing can reveal sensitivities in questions that we may not have been aware of. For example, if you’re asking a question about household spending, the respondent may have just lost their job, so always be mindful. With this in mind, the UK Statistics Authority has a great Ethics Self-Assessment Tool which you can use to identify ethical risks before the research sessions.
However, as the Government Digital Service’s principle puts it, ‘you are not your user’, so you might not realise the emotional impact a question might have until you’re in the research session. I therefore also recommend putting a plan in place for if a participant becomes distressed during the session.
4. Get the participant to ‘think aloud’
How can we expect to find out the cognitive processes a participant is going through without a machine hooked up to their brain? By getting them to think aloud.
This can be a strange concept for participants to grasp. We essentially want them to tell us what they’re thinking when they’re presented with the question, and the processes they go through to figure out their answer. That is easier said than done, and it is something that doesn’t always come easily to participants.
One way to help them understand ‘thinking aloud’ is to give them an example of what you mean. An example I often use is:
‘If you asked me how many windows were in the room I’m sitting in, I could say two and that would be my answer. But if I were to think aloud, I would say I have a window next to my TV, and then a bay window next to me. I don’t know if a bay window counts as multiple windows, because it has multiple sections, but to me it’s just one window, therefore I have two windows.’
After providing that example, I ask the participant to give it a go themselves, so they have a better idea of what to do, and I can explain further if they need it.
It’s often through this method that you get golden quotes that really show you what’s going through their minds when answering your questions.
5. Silence is your friend!
Now while we want the participants to be as vocal as possible, it is incredibly important for us as researchers to use silence – it is your best friend while cognitive interviewing. As the saying goes, less is more! Silence gives participants room to keep talking and to keep providing you with rich, quality data on why they have answered the question in that way.
However, something I have learnt, particularly through remote testing, is that while you want to give the participant room to talk, if you’re quiet for too long they may think your internet connection has gone! So, every now and then, make a ‘hmm’ noise so that they know the technology isn’t playing up.
In conclusion, I hope this has shown you some aspects of cognitive interviewing that you may not have known about previously. If you’re starting work on a survey, or want to design questions, I cannot recommend these sessions enough. You can find courses available on the GSS website.
Five things you didn’t know about data science
By Hillary Juma, Jonathon Mellor, Lewis Edwards, Ali Cass from Data Science Campus, Office for National Statistics and Emma Walker from Centre for Applied Data Ethics, UK Statistics Authority (UKSA)
Data science is informing policy, digital products, and operational decisions across the public sector. Data science is the practice of bringing together mathematical knowledge, domain knowledge (such as environmental policy) and computer science to provide analytical or operational insight. See the Data Science Venn Diagram.
In this blog we dispel myths around data science.
1. Demystifying data science
Data scientists produce algorithms: sets of rules for solving a problem in a finite number of steps. They work closely with data engineers, data architects and product managers to deliver business-relevant insights. An example data science project is the use of machine learning to predict energy efficiency from energy performance certificates data.
Data science has both similarities to and differences from artificial intelligence (AI). The similarities include the use of algorithms and machine learning; machine learning is the process of using algorithms that generate predictive or explanatory models based on patterns or structures in data. The difference is that AI is the theory and development of computer systems able to perform tasks normally requiring human intelligence – for example, Siri or Alexa performing a task in response to a vocal request.
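For readers who like to see the idea in code, here is a toy sketch of what ‘an algorithm that generates a predictive model from patterns in data’ can mean in practice. All numbers and variable names are made up for illustration – this is not the energy performance certificates model mentioned above, just the simplest possible example of learning parameters from data.

```python
# A toy illustration of machine learning: an algorithm that learns a
# predictive model (here, a straight line y = a*x + b) from patterns in data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, computed by hand."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": hypothetical floor areas (m^2) and energy use figures.
areas = [50, 70, 90, 110, 130]
energy = [105, 145, 185, 225, 265]  # deliberately exact: 2*area + 5

a, b = fit_line(areas, energy)
print(round(a, 2), round(b, 2))      # learned parameters: 2.0 5.0
print(round(a * 100 + b, 1))         # prediction for an unseen 100 m^2 home: 205.0
```

Real projects use richer models and messier data, but the shape is the same: an algorithm consumes examples, produces a model, and the model makes predictions about cases it has not seen.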
2. Learning Data Science
Thankfully, a degree in data science is not a requirement to be a data scientist, nor should it be! Thanks to the open-source community spirit and the training on offer from the Civil Service, there are many routes into the field and many ways to advance within it.
There is no one right way to learn data science: try out a new technique at work, a training course here, a personal project there, a blog read with a cup of tea or a podcast while you go for a walk. All are valuable to expand your knowledge of programming, statistics, and AI. You can find an abundance of open access books, tutorials, and welcoming communities online and in person (for example the Government Data Science Slack channel). Within government there are even more communities and meetups, along with training material available created for those working in government (such as the Analysis Function curriculum) and mentoring opportunities such as the Data Science Accelerator.
To misquote a famous phrase: “The journey of a thousand data science techniques begins with a single article.” Starting is the hardest part.
3. Data Ethics is more than just assessing for “Bias in, Bias Out”
When it comes to using data science in our work, the possibilities and potential applications can seem endless! However, it is vital that we carefully consider not only what we can do with these techniques, but also what we should do.
Data ethics is a growing field and there is an increasing amount of information and resources related to ethical considerations in machine learning and AI. These include transparency in techniques, approaches and datasets; considerations of consent and privacy in relation to the use of data; accountability and human oversight; and understanding the limitations of the methods used, including the potential for biases in datasets and approaches that may lead to groups being underrepresented or discriminated against in some way.
In sum, thinking about ethics in data science is crucial and this is reflected in the recent launch of the UK Statistics Authority’s Centre for Applied Data Ethics, which aims to further help researchers and statisticians address ethical considerations in their work.
4. It is more than code
- The art of data science is exploring the problem and then conveying insights
- Coding is part of the journey, but the destination is understanding
- Know your audience
Whilst the day-to-day work of a data scientist may seem like a scene from the Matrix with lines of code across multiple screens, this is only part of the journey.
The most valuable contributions that data scientists can make to a team, business or department are sharing the deep insights from behind the data’s silicon curtain. Developing this value will involve the application of various techniques and skills, such as the machine learning approaches mentioned already.
But the time to shine is when you present and talk through what you have found out about the data; whether confirming untested assumptions that have been held true about the data by regular users (a very common experience) or revealing unexpected characteristics about how certain variables relate to others.
Often summarised in the form of dazzling visualisations or interactive dashboards, the opportunity to present these findings to the rest of your community and explore the true value of a data source together is a rewarding experience. It is also a chance to inspire curiosity in the data among as-yet unconvinced colleagues (friends and family too, if you are willing to chance it) and to open up other unexplored options.
Whether it is a personal project or part of a large programme of work, the role of a data scientist lies in understanding the task at hand, digging into the data with your favourite data inspection tools and then setting out a string of achievable goals towards what might (with an allowance for pragmatism) become a fully-fledged analytical pipeline replete with cutting-edge techniques.
5. There are no unicorns
Since data science is a combination of several skillsets applied in a multitude of ways, the path to becoming a data scientist can be a very individual journey.
Unlike a more traditional career path where one might become an expert in their field by following a structured route, there are no unicorns in data science – no experts in all elements!
Instead, you will find accomplished data scientists with expertise skewing towards a range of areas in the Venn diagram above. This might be more on the software engineering side of things with responsibilities including building data science architectures and workflows, or the domain expertise side applying innovative technology to solve problems related to climate change and the environment.
If you were to ask a data science team how they each got to where they are you will hear all sorts of journeys. And that is great! It speaks to the diversity of the data science skillset and applications and having a team that capitalises on this is a great position to be in when tackling novel problems.
Conclusion
Over the years, there has been a growing understanding of the value of data and the ways in which that value can be realised. While the proportion of senior leaders who need encouragement to prioritise exploring their data in innovative ways has decreased, the appetite to make use of more data and more complex techniques has grown enormously.
If you would like to learn more about Data Science in the Public sector, feel free to check out the following resources:
Open to all:
- Data in Government Blog
- Data Science Community of Interest, Machine Learning Blog Post
- Data Science Campus, Learning and Development
- Data Science Campus mailing list
- Data science project: Estimating vehicle and pedestrian activity from town and city traffic cameras – Dr Li and Dr Ian, Senior Data Scientists, Data Science Campus, ONS; presented at the Institute for Government’s Data Bites
Open to UK Public Sector Employees only, access with work address:
- Government Data Science Slack – community forum
- Government Data Science Festival Knowledge Hub Net – community presentations library
- Government Data Science Partnership Mailing List
- Data Science Community, Service Manual page
- Data Science Seminar Series - hosted by the Data Science Campus, ONS
Five things you didn’t know about the work of the Economic and Social Research Council (ESRC)
By Alison Park, Interim Executive Chair of ESRC
In this blog post, Interim Executive Chair of the Economic and Social Research Council (ESRC) Alison Park describes five things you didn’t know about the work of ESRC.
ESRC was established over fifty-five years ago to help inform policy and industry. We are now one of the nine councils that make up UK Research and Innovation (UKRI) where we work to achieve UKRI’s mission to ‘connect discovery to prosperity and public good.’ Here are five things you might not know about ESRC.
1. The breadth of our research and data investments
Work we fund improves our understanding of how we think, feel and behave, of our mental health, education, work and family lives. Our researchers consider how organisations are managed, how states are governed, and how to achieve a fair and sustainable economy. This evidence informs decision making and efficient public service delivery.
Examples include:
- The Productivity Institute — a new investment which will provide a deep understanding of what individuals, firms, regions and national policy can do to improve productivity.
- ADR UK — a partnership of government and academic groups working with Whitehall departments and devolved administrations to create linked research datasets from administrative sources covering areas from education and health to crime and justice. Read more about ADR UK’s impact.
- An array of national studies including Understanding Society, the world’s largest longitudinal household panel study, which provides vital evidence about change and stability over time across nearly every element of people’s lives.
- Policy facing research on people’s behaviour and climate change through The Centre for Climate and Social Transformation (CAST) and Place-based Climate Action Network (PCAN).
2. The relevance of our work to government priorities
Many of you will be aware of the Areas of Research Interest (ARIs). Less well known is that nearly two-thirds of all ARIs can be addressed primarily with insights from the social and behavioural sciences, as the Government’s Chief Scientific Adviser Sir Patrick Vallance has pointed out.
More specifically, just as government priorities are now very focused on issues such as the pathway to net zero, levelling up and skills, so too are our plans.
As well as further research on climate change adaptation and mitigation, we are planning place-based research investments that will provide a better understanding of the different challenges facing different regions and cities, as well as scoping work to build a high-quality evidence base on skills to improve policy and practice across all economic sectors.
We are looking forward to continuing to engage with government in refining our thinking in these and other areas.
3. Our commitment to connecting research and policy
Our aim is to better connect research capability with policy challenges. We’ve recently been focusing on how we can catalyse deeper and enduring connectivity across the research-policy system. Our vision is to realise the potential of research to inform and shape public policy at all levels.
We are currently exploring a range of activities, including:
- Developing our Evidence Centre Network, including the Cabinet Office-led What Works Network and our Economics Observatory and International Public Policy Observatory investments
- Building on our PhD Review to upskill researchers to enable them to work more collaboratively with non-academic users
- Identifying policy-relevant data linkages and ensuring our data infrastructure aligns with policy needs
To flesh out these activities we will be talking to government stakeholders and departments to understand their interests.
4. Our new people exchange and fellowship framework
An immediate priority is to develop a people exchange and fellowship framework which creates opportunities for researchers to spend time in the heart of policy organisations and for those in government to gain experience in research organisations.
Over the next year we’ll test and develop this framework, as well as pilot data science fellowships with No.10, initially focusing on levelling up, net zero and coronavirus (COVID-19) recovery.
5. The scale of our COVID-19 research
ESRC has just under 200 grants in our COVID-19 portfolio, which are generating unparalleled insights into the impacts of the pandemic and will support the ongoing national response and recovery efforts. We engaged with Chief Scientific Advisers (CSAs) and the devolved administrations to match proposals with policy priorities and are now extending this engagement to other policy stakeholders.
Do keep an eye out for a series of ‘actionable insights seminars’ that we are developing in partnership with government analytical networks. These will focus on thematic areas of relevance to key government priorities.
I hope this blog has given you a brief impression of some of ESRC’s priorities and a sense of how we encourage close connections between the research we fund and policy priorities. To keep up to date with our work please follow us on Twitter and visit our website.
Five things you didn’t know about behavioural science in the Department for Work and Pensions
By Alexandra Urdea, Department for Work and Pensions (DWP)
I am a social researcher and member of DWP’s Behavioural Science team. I have a PhD in anthropology and currently work on a cross-departmental project to help more informal carers remain in work.
Like a number of other government departments, DWP has its own behavioural science function tailored specifically to the needs of the department in which it sits. As a team we work across a wide range of policy areas, from disability benefits to labour market interventions, alongside more internal-facing challenges like organisational transformation and people performance policies.
It would be presumptuous for us to speak for other teams in this emerging field, which continues to change as needs and approaches evolve. But for now, and for DWP, here are five things you probably didn’t know about behavioural science.
1. Our practices are distinct from Behavioural Insights
Behavioural science is often seen as synonymous with Behavioural Insights (BI). BI involves finding low cost ways of nudging people’s behaviours – usually through communications – with the effectiveness of nudges measured using randomised control trials (RCTs). But behavioural science is an umbrella term that covers a range of different approaches to solving problems involving human behaviour. DWP Behavioural Science was designed in 2015 to complement, rather than duplicate, an analytical function with extensive expertise in trialling interventions. Our team takes a more upstream focus, supporting colleagues to design user-centred, behaviourally informed policies and services from the outset.
2. We help colleagues to understand and diagnose problems
Many of the problems we work on in DWP – like tackling long-term unemployment and designing an effective benefits system – are highly complex. They involve a range of different ‘actors’, including policy colleagues, work coaches in Job Centres, GPs and employers as well as benefit claimants. Often the Department is asking these actors to perform a complicated series of behaviours in order to achieve a policy goal. We work with colleagues to translate their goals into concrete behaviours so that we can explore how realistic they are, and then systematically map the barriers currently preventing people from doing them (COM-B is one of our favourite tools for this). It’s only once we’ve done this that we start co-designing solutions to address those barriers.
3. We’re interested in systems and context
All human behaviour happens in the context of social structures and systems. These include more nebulous things like cultural norms – for example around who should care for elderly relatives in a family – as well as more tangible things like legal employment rights. These structures and systems enable certain actions and choices whilst constraining or preventing others. Understanding the context in which behaviours are taking place is vital if we want to understand why people do the things they do, and what might help them behave differently. It also helps us think through the risk that proposed interventions in one part of a system will have unintended consequences in another part.
4. We’re not all psychologists!
Psychologists, such as Daniel Kahneman and Amos Tversky, are the social scientists most often associated with the emerging field of behavioural science. Their ideas and findings confirmed the suspicions of economist Richard Thaler that people often don’t act in the way traditional economic theory would suggest. This is why, Thaler thinks, different social sciences that can help understand human behaviour should have a stronger voice in policymaking. We have very much taken this message to heart. In our team we have psychologists, but also anthropologists, sociologists, philosophers, operational researchers and policy professionals (to name but a few)! This trans-disciplinary approach means we can generate a far richer and multi-faceted understanding of human behaviour – and therefore more innovative and effective solutions – than any single discipline could achieve.
5. We are methodological magpies
Some behavioural science teams specialise in quantitative insights and impact evaluation. Others can draw on well-established literatures about the drivers of particular behaviours. Given the nature of what we do in DWP we find qualitative methods especially helpful for unpacking the context within which behaviour occurs, and for developing and testing behavioural hypotheses. We often draw on ethnographic, co-productive and other creative methods. We also use tools from the digital and user centred design (UCD) professions to help us think about user needs and solution design. Our toolkit is constantly growing and evolving to help us better tackle the problems we’re faced with and we don’t see this changing any time soon!