Yikes. The majority might have this one wrong. But we’ll get into why in a moment.
The survey
Researchers interviewed 2,769 Europeans across varying demographics. Questions ranged from whether they’d prefer to vote via smartphone all the way to whether they’d replace existing politicians with algorithms, given the chance. On the surface, the results make perfect sense – younger people are more likely to embrace a new technology, no matter how radical. But it gets even more interesting when you drill down a bit. According to a report from CNBC, 69% of people surveyed in the UK were against the idea, along with 56% in the Netherlands and 54% in Germany. Outside Europe, some 75% of respondents in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it. It’s difficult to draw insight from these numbers without resorting to speculation, but when you consider the political divide in the UK and the US, for example, it’s interesting to note that people in both nations still seem to prefer the status quo over an AI system.
Here’s the problem
All those people in favor of an AI parliament are wrong. The idea here, according to the CNBC report, is that this survey captures the “general zeitgeist” when it comes to public perception of current human representatives. In other words, the survey tells us more about how people feel about their politicians than about how they feel about AI. But we really need to consider what an AI parliamentarian would actually mean before we start throwing our support behind the idea. Governments may not operate the same way in every country, but if enough people support an idea – no matter how bad it is – there’s always a chance they’ll get what they want.
Why an AI parliamentarian is a terrible idea
Here’s the conclusion right up front: an AI parliamentarian would not only come with baked-in bias, it would be trained on the biases of the government implementing it. Furthermore, any applicable AI technology in this domain would be considered “black box” AI, making it even worse at explaining its decisions than contemporary human politicians. And finally, if we hand over our constituent data to a centralized government system with parliamentary rights, we’d essentially be allowing our respective governments to use digital gerrymandering to conduct mass-scale social engineering.
Here’s how
When people imagine a robot politician, they often conceptualize a being that cannot be corrupted. Robots don’t lie, they don’t have agendas, they’re not xenophobic or bigoted, and they can’t be bought off. Right? Wrong. AI is inherently biased. Any system designed to surface insights from data about people will have bias built into its very core.

The short version of why this is true goes like this: think about the 2,769-person survey mentioned above. How many of those people are Black? How many are queer? How many are Jewish? How many are conservative? Are 2,769 people really enough to represent the entirety of Europe? Probably not – it’s an approximation. When researchers conduct these surveys, they’re trying to get a general idea of how people feel; it’s a snapshot, not a census. We simply have no way of compelling every single person on the continent to answer these questions.

AI works the same way. When we train an AI to do work – for example, to take data on voter sentiment and determine whether to vote yea or nay on a particular motion – we train it on data that was generated, curated, interpreted, transcribed, and implemented by humans. At every step of the training process, every bias that’s crept in becomes exacerbated. If you train an AI on data with disproportionate representation between groups, the AI will develop and amplify bias against the groups with less representation. That’s how algorithms work inside a black box.

And therein lies our second problem: the black box. If a politician makes a decision that results in a negative consequence, we can ask that politician to explain the motive behind it. As a hypothetical example, if a politician successfully lobbied to abolish all traffic lights in their district and that action resulted in an increase in accidents, we could find out why they voted that way and demand they never do it again.
You can’t do that with most AI systems. Simple automation systems can be examined in reverse when something goes wrong, but AI paradigms that involve deep learning and surfacing insights – the very kind you’d need in order to replace members of parliament with AI-powered representation – cannot generally be understood in reverse. AI developers essentially dial in a system’s output like they’re tuning in a radio signal from static. They keep playing with the parameters until the AI starts making decisions they like. This process cannot be repeated in reverse: you can’t turn the dial backwards until the signal is noisy again to see how it became clear.
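The tuning problem above can be sketched in a few lines. This is a toy illustration with invented numbers, not a real training pipeline: a “model” votes yea or nay based on weighted signals, and the developers simply search for weights until the outputs look right. Many entirely different weight settings produce identical decisions, which is why you can’t work backwards from the decisions to a unique explanation.

```python
import random

# Toy "policy model": votes yea when the weighted sum of three
# (hypothetical) constituent signals crosses zero.
def decide(weights, signals):
    score = sum(w * s for w, s in zip(weights, signals))
    return "yea" if score > 0 else "nay"

# Invented training cases: signal vectors and the votes the
# developers want the model to produce.
cases = [
    ([1.0, -0.5, 0.2], "yea"),
    ([-0.8, 0.1, 0.4], "nay"),
    ([0.3, 0.9, -0.6], "yea"),
]

def fits(weights):
    return all(decide(weights, s) == v for s, v in cases)

# "Dial in" the model: keep trying random parameters until the
# outputs look right -- no record is kept of *why* these work.
random.seed(0)
solutions = []
while len(solutions) < 3:
    w = [random.uniform(-1, 1) for _ in range(3)]
    if fits(w):
        solutions.append(w)

# Three very different parameter settings, identical behavior:
for w in solutions:
    print([round(x, 2) for x in w], [decide(w, s) for s, _ in cases])
```

All three printed weight vectors reproduce the desired votes, yet they disagree with each other on every parameter – the final behavior underdetermines how it was reached.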
Here’s the scary part
AI systems are goal-based. When we imagine the worst things that could possibly go wrong with artificial intelligence, we might picture killer robots, but experts tend to think misaligned objectives are the more likely evil. Basically, think of AI developers as Mickey Mouse in Disney’s “The Sorcerer’s Apprentice.” If big government tells Silicon Valley to create an AI parliamentarian, Silicon Valley will come up with the best leader it can possibly build. Unfortunately, the goal of government isn’t to produce or collect the best leaders. It’s to serve society. Those are two entirely different goals.

The bottom line is that AI developers and politicians can train an AI system to surface any results they want. Imagine gerrymandering as it happens in the US, but applied to which “constituent data” gets weighted more heavily in a machine’s parameters, and you can see how politicians could use AI systems to automate partisanship. The last thing we need to do, as a global community, is use AI to supercharge the worst parts of our respective political systems.
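This weighting trick can be made concrete with a minimal sketch (plain Python, all numbers invented): the underlying constituent data never changes – only the weight each group’s data carries in the aggregation – and that alone flips the “vote.”

```python
# Hypothetical constituent sentiment on a motion: support rate and
# population share for three invented groups.
groups = {
    "urban":    {"support": 0.70, "share": 0.50},
    "suburban": {"support": 0.45, "share": 0.30},
    "rural":    {"support": 0.30, "share": 0.20},
}

def vote(weights):
    """Aggregate weighted support across groups and cast a vote."""
    total = sum(weights[g] * d["share"] for g, d in groups.items())
    support = sum(
        weights[g] * d["share"] * d["support"] for g, d in groups.items()
    ) / total
    return "yea" if support > 0.5 else "nay"

# Honest weighting: every constituent counts equally.
fair = {g: 1.0 for g in groups}

# "Digitally gerrymandered" weighting: identical data, but the
# parameters quietly discount the groups that disagree.
skewed = {"urban": 0.2, "suburban": 1.0, "rural": 3.0}

print(vote(fair))    # the weighted majority supports the motion
print(vote(skewed))  # same data, opposite vote
```

With equal weights, 54.5% weighted support yields a yea; discounting the majority group to 0.2 and tripling the smallest drops it to 38.5% and flips the vote – partisanship hidden inside a parameter file.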