Universal Credit Full Service, beta to live review
Universal Credit Full Service combines six existing benefits into one. Universal Credit is designed to achieve several key outcomes: delivering a simplified benefit system, increasing claimant responsibility by giving claimants control of their own information, moving people into work, supporting people already in work on low incomes to move into more work, ensuring those who cannot work receive the right support and reducing fraud and error.
Government Digital Service Standard Beta to Live Review
Universal Credit Full Service
From: Central Digital and Data Office
Review date: 17 September 2017
Stage: Beta
Provider: Department for Work and Pensions
About the service
Basis
The service is currently in beta. This document reviews its current status and sets out options to support the service's transition to live running. It is based on a substantial review of documentation provided by the Universal Credit Full Service (UCFS) team, outlining the service's current approach to user research, design, team, technology and analytics. The review was conducted by a cross-government panel of subject matter experts, followed by a one-day in-person review with current service leads.
Description
Universal Credit Full Service combines six existing benefits into one. Universal Credit is designed to achieve several key outcomes.
These are:
- delivering a simplified benefit system
- increasing claimant responsibility by giving claimants control of their own information
- moving people into work
- supporting people already in work on low incomes to move into more work
- ensuring those who cannot work receive the right support
- reducing fraud and error
Service users
Primary users of Universal Credit Full Service are:
- claimants
- work coaches
- case managers
- decision makers
In addition to these users, there are other third parties who interact with parts of the system, such as appointees or landlords.
Detail
User needs
The service has clearly established user needs and developed them into a strategic design framework.
Because of UCFS's unique scale and scope, the service generates significant amounts of data gathered in actual use, and this quantitative data is now being used to prioritise iteration, for example through the use of 'error rates'. This marks a transition from the previous approach of conducting user research with a controlled group: observations collated from a large group of service users in live running at scale can now be tested against the initial controlled-group data to inform future iteration.
The team has invested significant research resource in understanding the impact of the service on DWP staff, in addition to research with claimants. The team carried out ethnographic research with agents working in service centres and job centres. This work highlighted particular problems for case managers, who are responsible for processing payments and answering calls from claimants, in understanding what needs doing on a case and which actions they are responsible for. In direct response, there is now a stream of work dedicated to how staff can progress cases, improving both the agent experience and service efficiency.
The service is working with third-party organisations to enable delivery. For example, the 'Landlord Portal' is a new service being developed to support the very different relationships social sector landlords have with their tenants and with UCFS, enabling UCFS to share relevant information with third parties in a way that meets user needs.
Recommendations toward live running:
- the service should continue to engage with social landlords throughout the process of managed migration, iterating the UCFS approach so that existing services meet user needs during transition, as appropriate, through the provision of enabling services such as the Landlord Portal
- the service should use emergent user needs and user understanding, building on this evidence to test real observations against user research and prioritise iteration based on actual behaviour
- the service should continue implementing a quantitative approach to understanding user behaviour in the live service, providing evidence with which to iterate and prioritise features
Team
The team is one of the largest service teams in government. It is mainly co-located at DWP's London hub and formed of small multidisciplinary units, which enable effective agile working with the additional benefit of co-design with operational management and multiple other internal stakeholders, including policy. It is acknowledged that the design approach is affected by the operational roll-out schedule.
There are strong links across DWP, demonstrated by remote teams and DWP Operations, Policy, Security and Legal having embedded members within the London hub to enable effective agile working.
The panel notes that the team has been well supported and empowered to prioritise and shape the service. Embedded policy colleagues provide understanding of the policy intent originally set out in the Welfare Reform Act 2012 and the Welfare Reform and Work Act 2016, and these embedded policy teams are informed by emerging data and user research gathered by UCFS.
The structure of an 'embedded' data team, with links to wider operational teams, provides sufficient cover for sprint, epic and operational/strategic analysis and reporting.
The service acknowledges that it depends on contingent labour in a number of Digital, Data and Technology (DDaT) profession roles, and that it is working to move away from contingent labour.
Recommendations toward live running:
- the service should continue delivery using its current agile approach and consider increasing engagement with third parties as migration scales up, ensuring a smooth transition during and after migration through ongoing collaboration with independent organisations so they can understand, contribute to and benefit from the changes UCFS delivers
- the service should consider its plans for knowledge transfer from contractors to the remaining permanent staff, and create measures and monitor implementation to continuously support the live service
Technology
The nature of the challenge means that the team has built and deployed a substantial amount of technology rapidly, and over a period of years, and the expectation is that this will continue for many years to come.
The service has recently been migrated to public cloud hosting. This is a hugely positive step, allowing the service to scale much more easily.
Release process: the service is composed of a microservice architecture which is updated once a week, with a short period of outage. Experience has shown that releasing a microservice architecture as a single unit often leads to a 'distributed monolith', which is harder and more expensive to develop, iterate and maintain.
Recommendations toward live running:
- the service should consider redefining the release and deploy process into smaller chunks to allow for faster iteration, and to reflect the desired microservice architecture
- the service should consider which features provided by the cloud provider, such as load balancing and queues, it could take advantage of to scale more efficiently, and how additional features provided by the cloud provider could help the team avoid 'undifferentiated heavy lifting'
- the service should evaluate whether there are any components which could easily be opened up, or any other technical details which could be published, such as the use of cloud and the general architecture, to build confidence and comfort in operating in an open manner
- the team should consider mitigating knowledge-transfer risk, specifically in the technology teams, given the long-term impact of losing interim staff, the complexity of the service, and the people and cultural challenges posed by moving to a team mix that in live running includes less senior and experienced staff
- the team should consider whether it is possible to bring forward the addition of in-house and more junior staff, despite the near-term impact on delivery this is likely to have
- continue to assess and evaluate the manual effort to recover in the event of a business continuity event, and consider how this could be minimised through functionality available in the hosting environment and service design
Design
The high-level service mapping represents best practice, and it is good that the service continues to iterate the swim lanes within it, building the map as the overall service develops.
The service has taken the user needs and developed them into a strategic design framework. Policy colleagues are embedded in the team, which is willing to have difficult conversations supported by user research, evidence and data.
The service team has actively identified and built awareness of pain points, including factors that delay a claim or payment, such as housing information, onboarding and identity verification, as well as issues around understanding the current 'To do' list and the addition of a 'Journal' interaction.
Recommendations toward live running:
- the service should continue, as it moves toward a live state, to use emergent user needs and user understanding, moving from qualitative evidence to a quantitative basis for prioritising iterations and features that enable users to succeed first time
- the team should continue its best-practice approach of learning from and reusing elements of existing services where available, carrying out appropriate testing and outcome measurement before full rollout, and demonstrating how this has been done
Analytics
UCFS has a very capable data team that is co-located with the build teams, with analysts ‘buddying’ feature teams. The data team demonstrated robust processes for prioritisation of work and socialisation of analysis and outputs through Kanban walls, appropriate agile ceremonies, dashboards and reports in Jira.
The data culture has evolved from ‘tell us what’s happening’ to a position where features have success measures, and data is used for prioritisation. The data team also has the opportunity to ‘do the data science’ and explore the data for further insights that can drive new features.
Through the 'embedded' data team and links to wider operational teams, there appears to be good cover for sprint, epic and operational/strategic analysis and reporting.
Recommendations toward live running:
- continue to analyse server logs and admin data, and maintain the resource for digital analysis of the use of the online interface, currently carried out by business analysts who feed this into service teams, and by a dedicated analyst
- explore opportunities to gain more from web analytics insight as the service moves toward live, provided the insights are reliable given the complexity of the journeys, linking them to data on service outcomes and demonstrating how this work is carried out collaboratively with user researchers
- the 'feature warranty' is an interesting innovation; the service should continue to review acceptance criteria and the obligations under the feature warranty process, to determine a reasonable measure which evidences that a feature is working operationally as expected, and be able to demonstrate this measure in live running
- the UC service collects a vast amount of data about claimant and agent interaction, and should continue the robust data analysis taking place, ensuring the analysis is not 'service-centric' by maintaining embedded operational input, and carrying out analysis not only to reduce agent cost but also to reduce pain points for users
- the service should continue to use data collected from the service, as well as qualitative data from ongoing user research, to understand the most pressing user needs; for example, as the service begins to scale significantly, the team should continue working to identify manual processes which take up excessive amounts of work coach and case manager time, to help make claim processing and payment more efficient
Identify performance indicators
- UC has developed robust performance indicators that meet business needs
Report performance data on the Performance Platform
- performance is not currently reported on the Performance Platform, but statistics are being published monthly, meeting all statutory reporting obligations
- UC should continue to engage with what is now called the 'Government Service Data' team, which is itself in beta with a revised product focused on the needs of senior staff who require an overview across government and departments, rather than on service teams, who should already have access to the data they need
Recommendations
To prepare for ‘Live’ it is recommended that:
- the service should engage quarterly with a small cross-government panel, working with UCFS subject area leads to review the full end-to-end experience of a specific claimant journey from its start page on GOV.UK, including all components, to collaboratively improve understanding and cooperation with other parties such as government departments and external organisations, with the expectation that members of that panel would be part of any future assessment
- the service should evaluate whether there are any components which could easily be opened up, or any other technical details which could be published, such as the use of cloud and the general architecture, to share learning and experience with other government organisations
- the service should continue to use measures that accurately reflect that it is operating effectively, accurately and cost-effectively
In preparation for ‘Live’ it is recommended the service team should also:
- continue to test the prototype with all user types and iterate based on the user feedback
- consider their plans for knowledge transfer from contractors to the remaining permanent staff and ensure that these are implemented to continuously support the live service
- break the release and deploy process into smaller chunks to allow for faster iteration, and to reflect the desired microservice architecture
- consider the use of additional features provided by the cloud provider to avoid the team doing ‘undifferentiated heavy lifting’
Updates to this page
Published 25 October 2019. Last updated 25 October 2019.
- Title change
- First published.