Algorithms Unchecked: Bias and Mistrust
Increasingly, work within governments is done by algorithms. They can analyze huge amounts of data, identify patterns, and make split-second decisions based on those patterns. In theory, this should increase efficiency, avoid human error and bias, and make processes more streamlined.
But, without access to those algorithms and without advanced technical knowledge, the work done by them is effectively hidden from the communities they are intended to serve. For the average person, it can be hard to understand how algorithms impact them. This means that algorithms practically evade democratic oversight, ultimately feeding into a vicious cycle that makes it difficult to (re)establish trust.
Because computer-administered algorithms are generally thought of as objective, rational, and not ideologically motivated, it can be easy to forget that they can be just as biased as the people who created them, or the data they rely on. Without a sustained effort to look out for potential biases, relying on algorithmic decision-making can perpetuate societal prejudices. In New Zealand, this particularly affects the indigenous Māori population, who struggle to be recognized by algorithms. This has led to cases of false identification when facial recognition algorithms used by police were not developed on Māori faces. There are also a number of specific cultural and spiritual concerns about the unsupervised capture, dissemination, and retention of facial mapping data. Without including Māori people in the development and use of these technologies, there are no regulations to protect Māori cultural beliefs. Algorithms running unchecked can have very serious consequences for the people left out of the development and deployment processes. This bias is by no means limited to facial recognition algorithms, but it is a useful illustrative example.
Taking Action to Guide Algorithm Use
New Zealand, as well as several other OGP members, is now using its action plan to advance the transparency, participation, and accountability of government algorithms. New Zealand's action plan contains a few different elements that, taken together, should make algorithms more accessible and easier for the public to understand.
In July 2020, the government of New Zealand collaborated with a number of civil society stakeholders to implement the Algorithm Charter for Aotearoa New Zealand, which regulates algorithms that directly impact groups or individuals. The Charter has a number of requirements: algorithms must be explained in plain English, relevant communities must be engaged, and the use and handling of data must be disclosed. Additionally, human oversight and accountability must be maintained so that the data's limitations and biases are understood and managed, and privacy, ethics, and human rights are safeguarded. Good guidance, oversight, and transparency make it easier for New Zealanders to trust that, when the government uses algorithms, it has clear intentions that are met, with minimal unintended consequences.
With transparency increased, tackling algorithmic bias is the next key issue. To do so, the government works in close partnership with the Data Iwi Leaders Group, which is made up of indigenous leaders in the data sector. Acknowledging that algorithms may perpetuate bias, the two cooperate to find ways to counteract it. One key measure is data ethics training for civil servants, which helps manage risks and define responsibilities.
Algorithm Charter Remains Voluntary for Now
New Zealand’s approach to algorithms is based on a public commitment from government agencies. As with any voluntary approach, one of the challenges is that not all agencies have fully committed to implementing the charter. There are also challenges around training officials in the use of algorithms, helping them update and refine them, and in building community awareness and engagement in their use. The fact that many of these technological changes are happening before the respective legal frameworks are fully established makes the challenge more complex – and also more urgent.
While New Zealand's charter for algorithms is a step forward in the fight for data transparency, uptake has been slow. Because participation in the charter is voluntary, many government agencies have felt little pressure or urgency to implement it. Future steps to incentivize participation could, for example, include a mandate that all algorithms used by the government be made publicly accessible. This could accelerate the project of making the use of algorithms more open and widely understood, and give the people of New Zealand the tools to decide how algorithms affect them.