To complement our blog series on the Junior WebOps training programme, I wanted to share what those of us with a little more experience in the WebOps team get up to day-to-day.
First, a little bit of background
What does "WebOps" mean? Mark Kirkham gave a good answer in his recent blog, but to expand a bit, it’s a term used to describe the various technical job roles employed to support digital projects, and to build and maintain digital services using agile methodology.
Traditional roles that have been incorporated into our WebOps team include System Administrators, Environments Managers, Firewall and Security Managers, Network Configuration Managers and Incident and Problem Managers.
The WebOps team functions as an internal supplier of all those traditional services to the whole of DWP Technology. But the important difference is that we deliver them as a package to each service or project that needs them.
Someone like me will join a digital service and work as part of the team, providing WebOps support from the discovery phase right through the build and into live (and ultimately retirement, although we’ve yet to reach that phase with any of our new digital services).
As part of the team, we’ll do a range of things at different points in the development process:
- Stand up development, testing and production machines
- Run test services using automation tools
- Use Saltstack to support configuration management
- Take on live production tasks such as fortnightly code releases, monitoring, report generation and system backups
- Maintain system security through patching and firewall configuration
- Investigate problems and defects
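To give a flavour of the configuration management step, here's a minimal sketch of a Saltstack state. The package and service names are illustrative rather than taken from any DWP service, but the shape is typical: declare the package you want installed and the service you want running, and Salt makes it so on every targeted machine.

```yaml
# nginx/init.sls - illustrative Salt state (names are examples only):
# install the NGINX package and keep the service enabled and running
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```

Applying a state like this across dozens of machines with a single command is what makes standing up consistent development, test and production environments quick and repeatable.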
WebOps and the Carer's Allowance Digital Service
So what does that look like in practice? Personally, I’ve been working on the Carer's Allowance Digital team since June 2014.
My work is extremely varied - some days I’ll be working with developers to fix specific problems and maintain service integrity, other times I’ll be mentoring our WebOps trainees and apprentices. I take part in all the agile ceremonies like the rest of the team, and I'm part of the discussion when we're prioritising which user stories to tackle during each two-week sprint.
I always have an eye on reusability - looking at ways to redeploy code or processes we’ve developed on the Carer's service across other services and projects, and I try to share that knowledge with WebOps colleagues on different digital services.
All of the work I’m doing ultimately has an impact on the lives of citizens who use DWP services – that’s a definite upside to the job.
Our all-important tooling
We've had a few comments on previous WebOps blogs about the tooling we use, so I wanted to give a bit more detail there. Different services are using different set-ups, but at the moment, Saltstack is my configuration management tool of choice. Service performance is monitored using OpenNMS, Pingdom and Performance Platform. We use Postgres databases, RabbitMQ message queues, NGINX web servers… the list goes on and on.
Most of the tooling we use is cloud-based, so we also use the vCloud Director interfaces from our cloud provider.
Pretty much all of it is open source software, used extensively in the private sector, which helps when we bring new people into the team from outside - the set-up is similar to what you’d expect across the tech industry so most new joiners can hit the ground running.
Everyone on the team has strong Linux skills, and then we have a spread of development skills, different package experiences, networking expertise and so on. It all adds up to a well-rounded team that can deal with just about anything that gets thrown at it!
Building services in-house
The best thing about the way we’re now working at DWP is the sense of self-sufficiency and control we gain by developing services in-house.
The WebOps team is integral to DWP being able to build and deploy new services quickly and cost-effectively. Without the varied skills on the team, and the automation of processes that WebOps is responsible for, two-week releases of new functionality would be out the window, along with agile sprint cycles. We’re in the room with the developers, the user researchers, the business analysts, the subject matter experts, the testers, and so on - it enables good things to just happen, naturally.
A good example of the sort of thing that wouldn’t have been thought about, let alone done, under the old system of outsourced contracts is a recent change I made to the cloud-based test environment the Carer’s Allowance Digital Service uses.
I realised that our cloud test environment was technically ‘on’ all the time, but we were not making use of it 24/7. While machines in the cloud cost a lot less than a bunch of servers in a DWP building, I still thought we could save more money by scheduling our use of the test environment more precisely.
So I spent half a day creating some ‘fog’ scripts that ensure the environment is only switched on when staff on the project create a test event in a shared calendar, meaning we’re only charged for the exact time we’re using.
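The actual scripts use the Ruby ‘fog’ cloud library to talk to vCloud Director, but the core idea - only power the environment on during a scheduled test window - is simple scheduling logic. Here’s an illustrative Python sketch of just that decision step; the function name and the calendar data are hypothetical, and in reality the events would be pulled from the shared calendar rather than hard-coded.

```python
from datetime import datetime

def environment_should_be_on(now, test_events):
    """Return True if 'now' falls inside any scheduled test event.

    'test_events' is a list of (start, end) datetime pairs taken from
    the shared calendar. The environment is only powered on for those
    windows, so we're only billed for the time we actually use.
    """
    return any(start <= now < end for start, end in test_events)

# Illustrative calendar entries (hypothetical data)
events = [
    (datetime(2015, 6, 1, 9, 0), datetime(2015, 6, 1, 12, 0)),
    (datetime(2015, 6, 2, 14, 0), datetime(2015, 6, 2, 17, 0)),
]

print(environment_should_be_on(datetime(2015, 6, 1, 10, 30), events))  # True
print(environment_should_be_on(datetime(2015, 6, 1, 13, 0), events))   # False
```

A scheduled job runs a check like this and issues the power-on or power-off call to the cloud provider accordingly - no test event in the calendar, no running (and billed) machines.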
That small change has saved the Carer's Allowance Digital Service over £3,000 a month for a couple of hours’ effort! In the old world of government IT, access to the test environment would’ve been a line item in a much bigger, more expensive contract with an outsourced supplier, and we’d never have been able to make that sort of change to the arrangement.
Now, not only do we have the freedom to make these sorts of improvements, but we can share that learning across the department to ensure everyone using similar testing set-ups can benefit from it. It’s a much better way to work, and I’m pleased to have done my bit.
You can follow DWP Technology on Twitter, sign up for email alerts, or subscribe to the feed.
Comment by Donnie Mathers posted on
Your comment regarding "switching on" cloud-based test environments only when required was interesting. Particularly your view that this would not have been possible in the "old system of outsourced contracts". I think this is fundamentally wrong. If you can't define what, where and when you need, then you will end up with a service you don't require. Basic and simple.
Comment by Ben Farber posted on
I've had a word with Ian who came back with the following response: "What we haven't tried to do on Carer's is to define in absolute detail the requirements for everything before we've started doing anything (that approach would have been traditional waterfall). The "bare minimum to run the whole service" was put in place, then it has been improved as time has gone on, and understanding of the service has increased, in an Agile fashion."
I don't think Ian was intending to suggest that it's impossible to procure a flexible testing arrangement in the first place. Indeed, Enterprise Infrastructure director Juan Villamil discusses how the department is moving toward such an approach in his blog on DWP cloud services.
But given the requirements of a service can change over time, these sorts of flexible arrangements with individual suppliers come into their own when we are developing services ourselves because we have the ability to change the way we're using those external services when we spot an opportunity for improvement.
Where a service is completely outsourced for development and requirements are defined up front, this is not always the case.