Artificial Intelligence and Racism – Commentary by Peter Gietz


In our digitised world, we encounter Artificial Intelligence (AI) all the time and in almost all aspects of our daily lives. You can simply ask the voice-activated assistant on your smartphone for directions, or take funny photos with a filter that automatically recognises the human face. But Artificial Intelligence also impacts our lives in much more subtle yet crucial ways. If you apply for a loan, for instance, the application is processed with the help of automated software. The crucial criteria are determined and weighed against each other by the software using AI, or, more precisely, machine learning, of which deep learning (DL) is currently the most prominent form. These kinds of automated processes are not always perfect, as various examples show: Google's photo software once labelled Black men as gorillas, and a Twitter chatbot turned into a raging racist overnight, to name just two incidents of AI drawing the wrong conclusions.

When it comes to the technological possibilities of AI, we are still at the very beginning. This is precisely why it is so important to take a decided stance against racism and to raise awareness of the related issues with AI and the resulting microaggressions. Even though most companies today use at least some kind of AI, only a few are aware of the mechanisms operating in the background. In this context, the corporate giants developing AI software are often criticised for their lack of diversity, and rightfully so. As a consequence, the people programming and training the AI more often than not lack first-hand experience of systemic racism and are thus unaware of how deeply rooted racism is in public administration. This insufficient diversity in AI also leads to incidents that may seem merely curious, like the two already mentioned. Another example was a soap dispenser that failed to recognise black skin because it had never been trained to do so. However innocent the consequences of such mishaps may seem, the same principle applies to other applications, where it has the potential to cause actual, serious harm.

Like any other software, AI initially depends on the humans who develop it. Every person carries their own biases, some conscious, others not, and these biases are then passed on to the learning algorithm. AI and DL are enabled by neural networks, which recognise patterns and relationships in the data they are fed and apply these findings to new data. This is also how a streaming service is able to identify personal preferences and then suggest new films based on previously watched ones. It is precisely this point which reveals a systemic problem with AI: explicitly discriminatory examples from the past, e.g. from past judicial decisions, train the AI to consider such cases the norm and to make future decisions based on discriminatory input data. One example is the programme COMPAS, which rated Black inmates as more dangerous than white inmates who had committed the same crimes. This injustice was only uncovered through investigative journalism and subsequently published in a report on the website ProPublica.org. AI can only act as inclusively as we train it to act. It is therefore important to always be conscious of the context in which a data set was compiled. If everyone, as developers and as customers alike, were more conscious of these problems caused by the dearth of diversity in AI, we would already be much closer to a solution.
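To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn). All data is synthetic and invented for this example; it is not the COMPAS data set or any real system. It shows how a classifier trained on historically biased labels learns to score two otherwise identical cases differently, simply because the biased labels rewarded that pattern:

```python
# A minimal, hypothetical sketch of how historical bias in training data
# propagates into a model's decisions. Entirely synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic features: one genuinely relevant (e.g. prior offences)
# and a sensitive group attribute that should be irrelevant.
prior_offences = rng.poisson(1.5, n)
group = rng.integers(0, 2, n)  # group membership: 0 or 1

# Simulated historical labels encode human bias: group 1 was flagged
# "high risk" more often than group 0 for identical behaviour.
logits = 0.9 * prior_offences + 0.8 * group - 2.0
labels = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([prior_offences, group])
model = LogisticRegression().fit(X, labels)

# The model reproduces the biased pattern: identical record,
# different group, different risk score.
same_case = np.array([[2, 0], [2, 1]])
print(model.predict_proba(same_case)[:, 1])
# e.g. roughly [0.45, 0.64]: the group-1 person is scored as higher
# risk despite an identical record.
```

Nothing in the algorithm itself is malicious; it faithfully learns whatever regularities the labels contain, which is exactly why the provenance of the training data matters so much.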

At this point in time, AI is still a relatively new innovation. Yet there is already a great deal of mistrust towards it, especially among people of colour, since its use more often than not works to the disadvantage of already marginalised groups. For this reason there is not only an excess of latently (or even overtly) racist data, but also an acute shortage of authentic data. To remedy this, it is essential to collect speech samples from different ethnic groups so that AI can also be properly trained on more diverse culture-specific speech patterns and dialects (e.g. African American English, AAE).
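One practical consequence of this point, sketched below in Python with purely synthetic placeholder data (the group names and error rates are assumptions for illustration, not measurements from any real system): disaggregating a speech model's error rate by dialect group can reveal disparities that a single aggregate accuracy figure hides.

```python
# A hedged sketch of one countermeasure: evaluating a model's error rate
# per dialect group instead of only in aggregate. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical speech-recognition results: which dialect group each
# utterance came from, and whether the transcription was correct.
group = rng.choice(["dialect_a", "dialect_b"], size=n, p=[0.85, 0.15])
# Simulate a model that was mostly trained on dialect_a:
error_rate = np.where(group == "dialect_a", 0.08, 0.25)
correct = rng.random(n) > error_rate

print(f"overall error rate: {1 - correct.mean():.1%}")
for g in ("dialect_a", "dialect_b"):
    mask = group == g
    print(f"{g}: {1 - correct[mask].mean():.1%} (n={mask.sum()})")
# The aggregate figure (around 10%) hides that dialect_b speakers see
# roughly three times as many errors; exactly the gap that more diverse
# training data is meant to close.
```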

Nonetheless, society still benefits from the use of AI. It can help in our everyday lives in many different ways: from handy smart assistants in our fridges that reorder the food we have just run out of, to suggestions on media platforms that help us find a new favourite artist, to DL-supported diagnostics that detect cancer as early as possible. Its positive effects can, however, only be fully realised if we all make a conscious choice to eliminate flawed approaches that could lead to further discrimination against minorities. Hence, it is essential that we all, whether personally affected or not, speak up against racism in technology and reflect this stance in our actions.

 
