Here at the Ministry of Justice (MoJ), a group of User Centred Design professionals and I have been exploring how to make ethics a practical part of the design process. As designers, our job is to make sure our services meet user needs, solve a whole problem, and can be used by everyone. Now that technology is changing the way we live more than ever, it is important that we develop ways of assessing whether we are designing the right technologies in the right ways.
The MoJ includes a range of bodies and agencies such as HM Prisons & Probation, HM Courts & Tribunals, and Justice Services, which covers the Criminal Injuries Compensation Authority, Legal Aid Agency, and Office of the Public Guardian, alongside various arm's-length bodies and support organisations.
These services often support people in vulnerable and challenging circumstances. That is why it’s vital for digital teams to consider not just user needs, but also the physical, emotional, and social contexts in which people interact with our services.
Searching for an Ethics Tool
We explored several existing tools, including Spotify's Ethics Assessment, which prompts teams to consider how their product or service might cause harm. After comparing it with other resources, we adapted it for digital teams at MoJ and across government, in line with our standards and principles of re-use. Point 13 of the Service Standard, 'Use and contribute to open standards, common components and patterns', encourages us to look beyond government for things that are tried and tested that we can re-use, and therefore not duplicate work.
We tailored the tool to better reflect our context, refining the risk categories and adjusting the scoring system to avoid neutral results. This helps teams focus on what matters most and prevents arbitrary decisions.
The tool is similar to consequence scanning but more structured. Instead of open-ended questions, it starts with predefined risks and asks teams to explore how their service might lead to them. It’s designed to surface potential harms and guide mitigation, not replace consequence scanning. Ideally, teams would use both methods at different stages.
While consequence scanning fits into agile workflows, the ethics assessment can be run once per phase, such as at the start or end of Alpha, depending on when it offers the most value.
How the Tool Works
The assessment is made up of three sections: one each for physical, emotional, and societal consequences.

Each section is split into six columns:
The first column is a list of consequences that we want to avoid.
The second column invites you to consider how your service could lead to that consequence.
The next three columns are for scoring (a) the chance of that consequence happening, (b) the level of concern about that consequence happening, and (c) an overall ‘risk score’.
The final column is for writing down actions. This could be a solution that you come up with on the spot, but more realistically it will be to create a task or investigate the issue further. If it's something with a high risk score, you might even consider focusing a design sprint on the issue.
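To make the scoring columns concrete, here is a small sketch of how a team might record and triage consequences. The post does not specify the scoring scale or how the overall risk score is calculated, so the 1-to-4 scale (an even number of points, so there is no neutral middle option) and the chance-times-concern formula below are illustrative assumptions, not the tool's actual method.

```python
from dataclasses import dataclass

@dataclass
class Consequence:
    description: str  # the consequence to avoid (first column)
    how: str          # how the service could lead to it (second column)
    chance: int       # assumed scale: 1 (unlikely) to 4 (very likely)
    concern: int      # assumed scale: 1 (low concern) to 4 (high concern)
    action: str = ""  # mitigation or follow-up task (final column)

    @property
    def risk_score(self) -> int:
        # Assumption: overall risk score is chance multiplied by concern,
        # so higher scores surface the most urgent issues.
        return self.chance * self.concern

# Hypothetical example entries, triaged by risk score (highest first)
consequences = [
    Consequence("Users feel distressed by the process",
                "Forms ask for traumatic details repeatedly", chance=3, concern=4),
    Consequence("Digitally excluded users cannot access the service",
                "Online-only channel with no assisted route", chance=2, concern=3),
]

for c in sorted(consequences, key=lambda c: c.risk_score, reverse=True):
    print(f"risk {c.risk_score:>2}: {c.description}")
```

A spreadsheet works just as well in practice; the point is simply that combining the two scores gives a ranking the team can act on, starting with the highest-risk items.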
The tool can be used in remote or in-person sessions, and can be adapted to meet your team's needs. The outputs you get are likely to vary significantly depending on what your service does, who uses it, which phase of delivery you are at, and how information is stored and used. You might end up grouping issues, coming up with ideas for how to mitigate them, or focusing on one area to improve.
Try It Yourself
Here is what our MoJ Ethics Assessment tool looks like, to give you an idea of how to run an assessment. We have trialled it across several MoJ teams and found it valuable for surfacing risks and guiding design decisions.
If you'd like to run an ethics assessment in your team, get in touch with me at Daniel.Guy@justice.gov.uk.

Let us know if you are doing something similar; we would love to hear from you.