I was excited (and slightly daunted) to be asked to speak in the ‘Walk in our Users’ Shoes’ series as part of the recent 'Transforming with Digital' campaign. As a user researcher, the opportunity to walk in users’ shoes is one of my favourite parts of my role, so I was thankful for the chance to share this enthusiasm. Despite a potential audience of 200 and a bit of stage fright, I was keen to share my belief that understanding the users of our services, and cultivating empathy with them and their experiences, can truly help us design better services.
One of the unexpected benefits of participating in this session was the opportunity to zoom out a bit and look at what we had learned about our users (legal aid providers, support workers, charities and applicants) over the last year. It was a useful exercise to take the time to carry out a more in-depth synthesis of what we had learned across different rounds of research and see the patterns and themes that emerged. Trust, vulnerability, accuracy, time, varying needs, and complexity surfaced as the main themes for our users across these past rounds of user research. It was powerful to look at this work as a whole and see what surfaced - something I need to make time to do more often.
Something I love about working at the Ministry of Justice is the positive impact our work can have for people in their time of most need. It is sometimes easy to forget who we are building for and the challenges they face when we get our heads down and find ourselves caught up in problem-solving as we develop our services. It was a wonderful experience to be able to share insights into some of what our users experience with a new audience of different professions and roles.
As a member of the MoJ ‘Future of User Research’ working group, I was invited to the workshop ‘Public services in the age of AI: An interdisciplinary approach to ethical artificial intelligence’ led by Oxford Insights. It was great to meet with colleagues from across the MoJ and discuss developments in AI and how they might impact our work. As well as sharing some real-life cautionary tales about how AI has been used in the public sector, they shared data ethics frameworks that might support us in our work as we decide whether AI is the right tool for a service. Much has been written about the biases in artificial intelligence training data sets and how human intervention and collaboration is needed. We were given great examples of AI working well, including using satellite images to estimate populations and transect tracking, and alleviating urban congestion. They also shared examples of it not working so well, including an example of reinforcement of racial bias and a childcare benefit scandal with wide-reaching consequences.
The design of this workshop worked well - the first half was learning-based, and in the second half of the session we could put into practice what we had learned by working in groups, looking at real scenarios where AI was implemented. The scenarios prompted discussion about the security, ethical and practical considerations. Real food for thought! My favourite quote of the day was from a fellow participant: ‘AI feels like a solution looking for a problem.’ I think it can be easy to get carried away with the enthusiasm of the possible and with concern about being left behind, but it feels important to remember that AI is a tool, that we need to consider carefully where it is appropriate to use it - and to ensure it is being used to meet real user needs.