On 22 March 2023, the Future of Life Institute published a letter calling on AI labs to immediately pause training AI systems that were more powerful than GPT-4. To date, there have been 33,709 signatories. Why? Because of the widespread fear of the multiple harms our technical creations could pose to humanity and environmental ecosystems. While the letter didn’t have its primary desired impact – a pause in training AI systems – it did strap a rocketship to conversations about ethical and responsible technology and aim it towards the sky.
The growing number of publicly available AI tools, which can have remarkable impacts depending on whose hands they fall into, has once again pushed tech ethics to the top of the agenda. However, the conversation has moved on from when ethics was discussed in more theoretical terms – the kind that resulted in many organisations publishing principles and guidelines. There is now a want and a need for ethics to become more practical and more tangible. People want to know how to put principles into action, especially those relating to privacy, fairness, inclusion, environmental sustainability and the explainability of systems. We’re seeing this need being met through the growing number of tech ethics and AI ethics toolkits being published. Ethics is also becoming more action-orientated through the diversity and number of voices calling for regulation and legislation, where conversations about “should we or shouldn’t we” are now debated in terms of “cannot” and “must not”.
The move toward practical ethics must be applauded and built upon. However, we must not forget to update our principles. Many will say that principles are nothing but theory built on idealism. Yes, there can be challenges with implementing some ethical principles, and yes, some principles create friction with one another. However, starting from a place of idealism and the bluest of blue skies allows us to constantly stretch towards excellence. Ambitious principles implore us to reflect and continuously improve. Principles also matter because they create a moral boundary that helps determine what good looks like. It’s the reason the Service Standard, with its principles relating to users, technology and data, and teams, has been so pivotal in setting the high quality standards of public service design in the UK. When we’re asked to “Design for users and their needs”, it means we prioritise activities that allow us to collect valuable insights from the people who use our services. I could say the same for the remaining 13 points of the Service Standard. For example, the call to use and contribute to open standards, common components and patterns has led service teams across government to champion re-use and to build and maintain common public digital goods. Toolkits are brilliant at putting theory into practice. Principles are remarkable at creating social norms that sustain desired behaviours over time.
This is why the Service Standard needs a refresh. In light of the enormous piles of evidence showing how technology continues to be woven into the microfibres of our lives and planet, we need a principle that asks service teams to consider the ethical impacts of their work. Right now, there are teams at the Ministry of Justice, Home Office, NHS, Ministry of Defence and others who are thinking about ethics and finding ways to apply it to their work. This has been great for starting conversations and bringing people together to share emerging best practice. But many aren’t part of this small sea change simply because it’s not on their agenda and hasn’t been flagged as a priority.
Elsewhere across the public sector, change is taking root. The NHS Design Principles and the Digital Service Standards for Wales have both been updated to clearly state that teams need to think about the impacts of their work. Principle two of the NHS Design Principles says that teams should “Design for Outcome” and reflect on what good will look like. Informed by the Well-being of Future Generations Act, standard one of the Digital Service Standards for Wales calls on teams to “Focus on the current and future wellbeing of people in Wales”. There are also the Environmental Policy Principles Duties, which have come into force and mean that every policy professional must consider environmental effects when developing policy. These duties will have a knock-on effect on the design of services and on the impact our technologies have on the environment, and as a result, on humanity. Alongside this, a team at the Department for Environment, Food and Rural Affairs has drafted 9 core principles and 24 guidelines for designing planet-centric public services.
As always, there are some fantastic people leading the way in the tech ethics space, but more can and must be done to make ethics a priority in public service design. There is no better place to start than updating the common standard we all know, respect, champion and use to hold ourselves and others in our community to account. The impact would be remarkable.
1 comment
Comment by Jack Rigby posted on
So happy to see the work we're doing in Wales recognised and the principles behind what we're trying to achieve understood. We're lucky to be supported by the Well-being of Future Generations Act, but it's clear that we're part of a wider landscape here. I'm the owner of our standards as part of the Centre for Digital Public Services and would be happy to speak to anybody who's interested in our priorities, or to share notes with anybody who's iterating their standards to be ready for the future.
The Digital Service Standard for Wales: https://digitalpublicservices.gov.wales/guidance-and-standards/digital-service-standards-wales