
https://mojdigital.blog.gov.uk/2023/06/22/experiments-in-service-assessments/

Experiments in Service Assessments

Categories: Service Assessments

If you haven’t heard of the Government Service Standard, well you have now. Those of us in the business of building digital products and services in the public sector use this framework to make sure we create the right thing, in the right way. And we have periodic ‘service assessments’ where advice is offered to help teams build better, and to provide assurance to everyone that all is indeed progressing to a sufficient standard. 

An assessment features a panel of our peers and comprises three parts: learning about the thing being assessed; asking questions of the delivery team; and providing recommendations. The usual format of assessments at the Ministry of Justice (MoJ) has converged on: 

  1. a fairly limited amount of information provided to the panel up front
  2. pre-calls for each discipline before the assessment day
  3. a four-hour agenda on the day, comprising a presentation by the team and Q&A
  4. a wash-up for the panel afterwards
  5. sharing the outcome and advice with the team

“That’s how we’ve always done it!” is due a challenge so, over the last couple of months, we’ve been experimenting with the formula for three different assessments. The MoJ’s strategic theme ‘We must be led by users’ dovetails nicely with the first point of the service standard, ‘Understand users and their needs’. It would be a sorry - and somewhat ironic - day if we in the Standards team were not talking to our users. We’ve spoken to multiple service owners, delivery teams, and assessors alike and have arrived at experiments that test how nudges and tweaks to the assessment format can help teams build better. 

More advance information 

In the existing format, assessors are faced with a deluge of information at the beginning of a four-hour agenda and are expected to devise incisive questions on the hoof. We thought that if assessors had more time to cogitate and understand the service then they would have better conversations with teams about their work. This proved to be the case.

Assessors were given advance access to demos, wires, test environments, previous assessment reports or the latest show and tell so they could learn about the service before the assessment. When information did come late in the day, this was keenly felt by assessors. 

Limited pre-calls

Calls before the assessment day between a team member and assessor from the same discipline have become the norm at the MoJ. They have morphed into mini, siloed assessments that are in danger of eclipsing the main event. While assessors must comprehend a service if they are to ask insightful questions, we should be cognizant of when enough is ‘good-enough’. 

The first experiment had no pre-calls at all; however, we learned that one is warranted for tech, as the rest of the audience doesn’t have much to add to conversations about Kubernetes.

Once, we had a pre-call solely to introduce the panel to the team. It achieved its aim of reducing the fear factor but it was later mooted that a similar effect could be achieved more conveniently on Slack.

Breakout huddles

The biggest innovation we trialled was introducing huddles to the agenda, where an assessor and team member from a similar discipline break out for a deep dive into their area of expertise. The informality of the huddles was welcome, and assessors felt they were able to get into the weeds more. However, it turns out a single huddle covering both design and user research is unwieldy and rations air time for each, which led to more follow-up questions than usual. And, while the huddles were intended as a free-flowing conversation, many would have preferred some structure, for example discussions directed to particular points of the standard.

When huddles were a relatively short 45 mins, there was a perception that a decent discussion was compromised. But at a longer 75 mins, assessors felt they were restricted to advising on their own disciplines and developed a little FOMO.

There are still legs in huddles but we need to nudge and tweak a little more.

Panel-only huddle

A huddle of only assessors was introduced after the team’s presentation and before the Q&A as, in the current format, assessors had to come up with questions on the spot and in isolation. A more holistic vantage point was missing. This panel huddle worked really well as assessors could fill in gaps in each other’s memories or interpretations and confer on where to focus their questions. At 30 to 40 mins, this did not feel rushed and certainly gave more structure to the ensuing Q&A. At the same time, it afforded the team a welcome breather after their presentation.

Shorter agenda on the day

A typical assessment at the MoJ is four hours. It’s a herculean task to find a spare half-day in diaries, plus it makes for a tiring afternoon on Zoom. One experiment sliced the time down to 2.5 hours but, after an hour’s team presentation, assessors weren’t able to have a good enough conversation in the remaining time. The result was more follow-up questions than usual, which had to be juggled with day jobs amid context-switching.

Slack and team charter

Sometimes tech drives change rather than enables it. So it is with Slack. We made use of Slack in ways that proved fruitful, and have ideas for its continued use:

  • Straw polls in the wash-up on met/not met for each point of the standard
  • Introducing assessors with photos on Slack 
  • Pinning relevant documents to the channel

Though Slack is available 24/7, assessors are not. Assessing is a voluntary role and, however personally rewarding they find assessments, assessors also have day jobs and cannot devote themselves entirely to the task at hand. It struck me that something akin to a social charter co-created by teams and panels might be a worthy pursuit.

What’s next?

Time is not always on our side with assessments. We don’t want to devalue the discursive aspects of assessments but are there ways we can make them more nimble, and reduce the load on all concerned? 

One way is to shift the balance from push to pull styles of communication. If we provide information in a flexible way, people will be able to consume it at a time that is convenient. Within that context, at our next assessment the team is recording their presentation and demo in advance so panel members can watch and cogitate at their leisure. The day before the assessment, the panel will huddle to agree areas for exploration. We may even share these thoughts with the team so they can start considering them. 

There are additional benefits: fewer nerves on the day, as the team’s presentation is done and dusted; no possibility of overruns; and it’s easier to find two or three hours in 15 people’s diaries than four.

And beyond

There remain numerous ways for assessors to learn about a service, ask questions about it and provide recommendations to a team. The insights we’ve gleaned from these experiments have driven changes in assessments for the better.

We haven’t yet touched on continuous assessments, self-assessments, lighter weight discoveries, reconsidering met or not, ties to spend controls, etc. As long as we focus on what matters (offering advice that helps teams build better; and providing assurance that things are progressing well), the way we achieve that can evolve. An ongoing, multidisciplinary dialogue with teams, assessors and stakeholders is critical in building an assessment service that meets the needs of our users.


2 comments

  1. Comment by Tom Stewart

    There is a great deal here to love - great job. I particularly welcome the discussion on providing more advance information - something that would have assisted my team and me previously.

    Limited pre-calls is also a good shout, though I agree tech should continue to have its own.

    I think there is something to be said for a form of continuous assessment, and I'd love to be part of that conversation.

    A suggestion for consideration if I may? What about pre-programming a follow-up call, perhaps two weeks after the main event, allowing the Service Delivery Team that time to address any concerns the panel may have had, before they are presented with a 'Not met'? It won't be relevant to every assessment, but I've been through assessments that only just fell short of the standard and where the service only needed tweaking - we would have 'Met' if we had been able to come back a week later.

    Very happy to share my experience if I can assist. Otherwise, I'm delighted to read about this progress. Great stuff, ty.

  2. Comment by Tim Kleinschmidt

    Hi Susan, it sounds like you are really starting to break down hurdles in the assessment process for both assessors and service owners/delivery teams. I particularly like how you are enabling assessors to efficiently ask constructive questions and provide quality advice while also reducing the burden and anxiety of the teams going through the assessment.

    For delivery teams, I think part of the challenge of going through an assessment is the variation in the way assessments are done. Often, members of delivery teams are using past experience, whether it be from previous assessments within the department, experience in other departments, and/or GDS assessments to prepare for the assessment. It can be very frustrating when aspects of the presentation, demo, or service get criticised or questioned in the assessment, but similar aspects were not challenged in previous projects. It's that feeling of 'if it worked last time, shouldn't it also work this time?' Obviously, assessors will never catch everything in a 4-hour assessment, so there will inevitably be opportunities for improvements with any team, but I think setting expectations correctly is very important, especially when the process is changing.

    How are you handling the change management across the organisation and what is being done to ensure delivery teams are preparing correctly through these experiments?

