What does it take to move one of the largest and most complex IT estates in the country to the cloud?
DWP’s current IT estate encompasses over 3,500 physical servers hosting over 155 business-critical applications – some of which are part of the Critical National Infrastructure – running on everything from VME mainframes to AIX, HP-UX and x86.
As well as the physical estate, we handle a huge range of documentation – 500 million items are printed and dispatched annually – processed through legacy applications written in a variety of languages and connected in myriad ways.
Shifting all of that to the cloud, while maintaining effective and efficient delivery of services for DWP claimants and customers, is a significant task, requiring a clear vision and strategy (along with a little experimentation and iteration). There are four main elements to this work:
- building new, cloud-based digital services
- hosting and security
- creating one set of automated tools
- sourcing the right people, with the right skills, for the right jobs
Cloud is at the centre of our digital services
Cloud means a lot of things to different people and nothing at all to some. We are focused on what it means in practice for the average citizen using government services: access to government services from the comfort of a connected device at home or on the move. That goal is at the heart of our entire approach to building digital services.
These new services must be accessible, scalable and adaptive to user needs. And we want them to work whatever route a citizen is using to access them. The digital service a citizen uses online should be the same service a support worker uses to input a citizen’s data over the phone, or transcribe it from a paper form.
We think this is best achieved by building a single, cloud-based product, rather than by maintaining separate digital and non-digital services.
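As an illustration of the single-product idea, here’s a minimal sketch of one service handling the same transaction whichever route it arrives by. It is entirely hypothetical – the names and fields are invented for this post, not taken from any DWP codebase:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """The same data structure, whichever route the claim arrives by."""
    national_insurance_number: str
    claim_type: str
    channel: str  # "online", "telephone" or "paper"

def submit_claim(claim: Claim) -> str:
    """One validation and processing path for every access channel."""
    if not claim.national_insurance_number:
        raise ValueError("a National Insurance number is required")
    # The same business rules run whether the citizen typed the data
    # themselves or a support worker entered it on their behalf.
    return f"claim accepted via the {claim.channel} channel"

# A citizen online and a support worker on the phone reach the same code:
print(submit_claim(Claim("QQ123456C", "carers-allowance", "online")))
print(submit_claim(Claim("QQ123456C", "carers-allowance", "telephone")))
```

The channel becomes a property of the transaction rather than a separate system, so every route into the service exercises the same business rules.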
But to achieve that goal, we’re going to have to embrace a bit of renewal and creative reconstruction – by which I mean getting rid of some old and outdated services that cannot easily be ported to the cloud, to make way for new ones that better meet the needs of users.
To that end, we are conducting a thorough analysis of our ‘brownfield’ legacy IT estate, and over the next few months we will make an honest assessment of the services we need to retain, those we need to rebuild and those that we must replace.
We have to bear in mind that many of these legacy applications were written decades back, and were designed for internal use by civil servants. Most allow only a certain aspect of a service to be transacted digitally, rather than providing a full, end-to-end digital service. Few are public facing, and none were built in line with the current GDS design principles.
We can change all that by building new, ‘greenfield’ services - secure, cloud-based services that are designed to meet user needs and that can be improved over time. The Universal Credit and Carer’s Allowance Digital Services are our first two. Others are now in the discovery and alpha phases of development.
The goal is to do away with the radically different working processes for paper, telephone and online transactions, and with the data stored in a multitude of different formats. In their place will be a suite of joined up, scalable digital services, built and hosted in the cloud, that make efficient use of securely stored data, and that provide a single, positive user experience, regardless of the access route.
Hosting and security
As we deliver digital applications, our security boundaries change. Our security measures (tools and processes) are evolving to deal with these changes and with the ever-changing dynamics of cyber security.
The big question for many when it comes to cloud-based services is security - “How secure is the cloud?”
This question becomes especially pressing when it comes to sensitive government services being hosted in the cloud, and generally spawns further questions: “Which company is hosting my data, how did you choose them, and what security checks have you carried out on them?”
The answers are always a little more complicated than many hope. There is not one single cloud solution that fits all of our applications, so there isn’t a single company, a single choice, a single set of security checks.
The varied nature of the data that our services need to access means that our hosting providers range from commodity public cloud suppliers hosting publicly available data, to high security private clouds.
We’re making use of existing government hosting frameworks such as Crown Hosting. This is a joint venture company owned by the Cabinet Office and Ark Data Centres Ltd. After a robust financial and security assessment process, it was awarded a framework to deliver secure data centre services across the entire public sector.
The Crown Hosting framework gives departments the option of short-, medium- or longer-term contracts, with total commercial flexibility to vary the size of their engagement without penalty. Crown Hosting are obliged to keep their data centre technology infrastructure up to date to meet stringent performance levels, energy efficiency targets and more.
By opting for a cloud-first approach to our hosting strategy, we can take advantage of utility-based cost models like this, where we pay only for what we use. This is far more cost-efficient than paying for our own physical data centres, which require significant upfront and ongoing expenditure to meet the ever-changing and difficult-to-forecast demands of enterprise IT.
For services making use of publicly available data, we use the current G-Cloud framework to buy additional hosting services as needed, with decisions taken based on utility and suitability for the job. Our suppliers here include both SMEs and larger companies.
Creating one set of automated tools
Our use of cloud technology extends beyond hosting and services, to re-imagining the complete process of building and deploying applications and infrastructure components within DWP.
Our ambition is to combine a DevOps approach to software development with the cloud-based automation of testing and deployment. To aid this we use a common set of collaboration, tracking and version control tools across all of our businesses, enabling greater reuse and knowledge sharing across our IT estate.
These are industry-standard tools, both open source and commercial: Chef, Puppet and Ansible for configuration management, Jenkins for deployment automation, Splunk and Logstash for log management… the list goes on.
These tools are helping us to adopt a standard approach to building applications, which allows for greater consistency in the build environment and better services in the long run.
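To make the deployment-automation idea concrete, here’s a minimal sketch of the fail-fast, staged pattern that tools like Jenkins implement for us. The repository URL, commands and stage names below are invented for illustration; they are not our actual pipeline:

```python
#!/usr/bin/env python3
"""A minimal, hypothetical build-test-package pipeline sketch."""
import subprocess
import sys

# Each stage is a name plus a command; a stage must succeed before
# the next one runs, mirroring a Jenkins-style fail-fast pipeline.
STAGES = [
    ("checkout", ["git", "clone", "https://example.com/service.git", "build"]),
    ("test", ["python", "-m", "pytest", "build/tests"]),
    ("package", ["tar", "-czf", "service.tar.gz", "build"]),
]

for name, cmd in STAGES:
    print(f"--- stage: {name} ---")
    if subprocess.run(cmd).returncode != 0:
        # Fail fast: a broken stage stops the release.
        sys.exit(f"stage '{name}' failed")

print("all stages passed; artifact ready to deploy")
```

In practice Jenkins sequences stages like these for us, with the configuration-management tools above building the environments each stage runs in.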
Finding the right people with the right skills
The move to cloud is no longer just an aspiration for us. We are already putting cloud technology to good use in building the Universal Credit and Carer’s Allowance Digital Services, and are now expanding this into 24 further services.
We have chosen to make the move ourselves rather than rely on an outsourced provider to deliver an all-in-one solution – a decision that brings with it additional recruitment and training challenges.
We are upskilling our staff to take on roles in the WebOps team, which plays a key role in enabling greater automation and agile code releases. We are recruiting a number of apprentices later this summer, who will cut their teeth as Junior Software Developers working in Extreme Programming pairs with more senior colleagues.
And we are recruiting experienced developers, technical architects, QA testers, data scientists, security experts and more.
We have a huge range of compelling roles on offer to high-calibre candidates who want to help improve the lives of the 22 million claimants and customers who use DWP services.
This is an exciting time to be part of what is effectively an enterprise-scale tech company in the heart of government. Visit our jobs site to see our current vacancies.
Sign up for email alerts to follow the Delivering DWP Technology story.
5 comments
Comment by A Tupman
Very interesting article. Can I ask a question relating to this section:
"Our use of cloud technology extends beyond hosting and services, to re-imagining the complete process of building and deploying applications and infrastructure components within DWP.
Our ambition is to combine a DevOps approach to software development with the cloud-based automation of testing and deployment. To aid this we use a common set of collaboration, tracking and version control tools across all of our businesses, enabling greater reuse and knowledge sharing across our IT estate"
What is the ambition once the service goes live? For example, will cloud suppliers be expected to manage the service following best practice such as ITIL v3? Will they be required to maintain a CMDB and perform asset and configuration management to a standard such as ITIL v3, to support control, reuse and knowledge sharing for deployed applications and infrastructure?
Comment by Ben Farber
Hi,
I passed your comment on to the Enterprise Infrastructure team, who have provided the following response:
"All of these functions will be within our scope of own DevOps team and tooling. Cloud suppliers simply provide the servers, storage and networking. Our relationship with these companies will align with commodity computing allowing us to change suppliers quickly for price, quality of service, or other reasons. All of the knowledge will be in our own domain."
I hope that is of use.
Comment by A Tupman
Thanks for the reply Ben. I've had a good chat with Duncan Taylor this morning and better understand the approach. Duncan has suggested we engage with the Team shaping the Common Environment delivery platform.
I've also added a link to an article I found on ITIL and cloud computing that's an interesting read.
http://www.itilnews.com/ITIL_and_Cloud_Computing_by_Sumit_Kumar_Jha.html
Thanks Again
Angela
Comment by Tye
I am a little too late to this discussion, but having some exposure to CHS, my question is:
Within all of this DevOps framework – enabling agile team efforts for a rapid transition of current/legacy systems, and rapid code deployment onto the cloud – I assume you will still be using tools for infrastructure monitoring?
From an availability, performance, transparency and risk-mitigation perspective, I would deem this to be crucial.
For effective Root Cause Analysis and investigations, this would be fundamental.
Comment by Juan Villamil
Yes, we are using a comprehensive set of tools that make up our Operations Support Systems. These will be monitored on a 24x7 basis from our primary command centre, which will be located in Manchester.