
Unintended Politics of AI: A New Theory

As a huge believer in artificial intelligence (AI) and machine learning, I try to keep up with new research and articles on the topic. While the majority of these tend to be positive, a few concerning issues have come up that made me look at AI from a different angle.

As part of an HCI theory class, I developed a hybrid theory, based on two existing theories about technology and society, of unintentionally embedded undesirable values in artificial intelligence. Both of these existing theories put a strong emphasis on the idea that technologies have embedded values. They both discuss deliberate design and its positive and negative outcomes, but they only briefly touch on its unintended consequences.

One of these theories approaches the issue from the standpoint of inherently political technologies with the attributes of authority and liberty (as in the examples of nuclear and solar power), which the designer can’t really “intend” or deliberately design. The other remarks on emerging technologies, where the possibilities for deliberate design can be limited, so there can be unintended (and often undesired) consequences of value embodiment. But neither theory develops this conversation about the unintended politics of technology design any further, nor offers suggestions on how to prevent (if that is even possible) or repair negative outcomes.

For example, knowing that some AI technologies source their “knowledge” from human data, we could design better filtering mechanisms that keep abnormalities out, so that the data AI originally learns from is already “purified”. This could likely be done more easily with verbal or numerical data than with visual data, as bad words and numbers are presumably more objective and easier to identify than, say, standards of beauty. In the latter case, there are many more variables and contextual/cultural differences to consider.
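To make this concrete, here is a minimal sketch (in Python) of the kind of pre-training filter described above. The blocklist, helper names, and sample records are hypothetical placeholders; a real filtering mechanism would need context-aware, culturally informed rules rather than a simple word list.

```python
# A minimal sketch of a pre-training text filter. The blocklist and the
# training records are hypothetical placeholders; a production filter
# would need far more nuanced, context-aware rules.

BLOCKLIST = {"slur_a", "slur_b", "insult_c"}  # placeholder terms

def is_clean(text: str) -> bool:
    """Return True if the text contains no blocklisted terms."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return BLOCKLIST.isdisjoint(tokens)

def purify(records: list[str]) -> list[str]:
    """Keep only the records that pass the blocklist check."""
    return [record for record in records if is_clean(record)]

if __name__ == "__main__":
    raw_data = [
        "What a lovely day for a walk.",
        "You are such an insult_c.",   # would be filtered out
        "The meeting starts at noon.",
    ]
    print(purify(raw_data))
```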

Thus, my hybrid theory suggests that when we embody values or certain politics in the technologies we design, our intended plan is not enough to account for all possible outcomes. Modern AI technology differs from other technologies in that it takes its shape not from the way we design it, but from the way it is shaped by outside input, such as human data. This means that while we don’t have direct control over the technology itself, we can have some control over the data it uses. But with that comes another ethical implication, known as “cherry-picking” data, which can itself skew the data and introduce other biases.
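The cherry-picking problem can be illustrated with a small, made-up example: filtering out “negative” records can also change which sources dominate the remaining data. The records and source names below are invented purely for illustration.

```python
# A small sketch, with made-up records, of how aggressive filtering
# ("cherry-picking") can shift the distribution of the surviving data.
from collections import Counter

# Hypothetical records tagged with the community they came from.
records = [
    {"text": "great product", "source": "forum_a"},
    {"text": "works as expected", "source": "forum_a"},
    {"text": "great product", "source": "forum_b"},
    {"text": "awful, just awful", "source": "forum_b"},
    {"text": "rude comment here", "source": "forum_b"},
]

def source_share(data):
    """Fraction of the records contributed by each source."""
    counts = Counter(r["source"] for r in data)
    total = sum(counts.values())
    return {src: round(n / total, 2) for src, n in counts.items()}

print("before filtering:", source_share(records))

# Removing anything "negative" also removes more of forum_b's records,
# so the surviving data over-represents forum_a.
filtered = [r for r in records
            if "awful" not in r["text"] and "rude" not in r["text"]]
print("after filtering: ", source_share(filtered))
```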

The discussion of the two cases and the application of the hybrid theory to some problematic spaces of modern-day AI technology unveil new possibilities for further research in the field. As AI enters the HR and criminal justice arenas, right now is the time to pay critical attention to the unintended consequences of AI design and the social, racial, and political biases it carries. Without uniform data-filtering standards or smart neural-engineering initiatives, the side effects of AI can severely damage our socio-political atmosphere. Fortunately for us, but unfortunately for AI, we live in a time of acute socio-political sensitivity and activism, which ultimately raises the standards for technology researchers and developers.

For example, further research may tackle the questions of how much data is enough for one AI project, where it should be sourced from, and how it should be categorized and analyzed, considering the ethical implications. If a public source of data is used, there is a dilemma to consider: how are we going to get results that satisfy everybody if we take into account all variables of public data, both positive and negative? How will it bias our machines if we remove the negative connotations to let them learn only what the majority wants to hear or see? And how do we decide what is good or bad, and what do we do with implied (not explicit) meanings, as in sarcasm? We have yet to discuss and test these issues in AI practice.


AI is based on machine learning: computers learning the patterns of human activity and transforming them into actions that mimic human cognition, often producing very human-like behaviors such as reading, writing, drawing, and playing. The core idea is that the system learns by analyzing a large set of data and finding patterns it can reuse. This works well with non-verbal and non-visual data, where there is less chance of a failure that undermines social or political norms among groups of people. Thus, AI is used successfully in areas like robot control and transportation, where there is less room for a “grey area” or misinterpretation. But the examples I discuss in this paper illustrate failures of AI to conform to human socio-political norms, crossing ethical lines by behaving in racist and rude ways.
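As a rough illustration of the pattern-learning idea (and not the code of any real system), the toy sketch below simply counts which words co-occur with which labels in its training examples and reuses those counts to label new text. Its behavior is entirely a function of the examples it is given, which is exactly why biased examples produce biased associations.

```python
# A toy sketch of pattern learning: count which words co-occur with
# which labels, then reuse those counts on new text. The examples are
# invented; no real system is this simple, but the dependence on
# training data is the same.
from collections import defaultdict

def train(examples):
    """Count how often each word appears under each label."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Pick the label whose training words best match the input text."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for label, n in counts[word].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "unknown"

# Whatever associations exist in the data -- good or bad -- are what
# the model learns; it has no notion of which ones are undesirable.
training_data = [
    ("you are wonderful", "friendly"),
    ("have a great day", "friendly"),
    ("you are awful", "hostile"),
    ("go away you pest", "hostile"),
]
model = train(training_data)
print(predict(model, "you are a pest"))   # -> "hostile"
```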

These unfortunate cases demonstrate that AI technologies based on neural networks (like chatbots and image/language recognition technologies) learn from human biases and social behaviors, sometimes negative ones, which engineers cannot always fully control. Thus, AI is not protected from ethical implications, even though it is designed with good morals, or politics, in mind (friendship, communication, entertainment, etc.). The proposed hybrid theory of the unintended politics of technology could therefore serve as a new lens through which to examine these cases and potentially prevent future breaches.