A couple of weeks ago Google released its own AI principles, describing how it wants to conduct the development of AI going forward.

I want to show here how broad these principles are and why they are fundamentally flawed because of that breadth.

Be socially beneficial

(…) we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.  

The social and economic factors mean little because they are a subjective standard. Google will simply do what it wants, with no objective standard the rest of us could check and observe. There is no defined standard behind any of the social factors Google could apply AI to, and it is foreseeable that the economic factors in question are not those of users or countries, but those of Google's own company.

As for the risks and downsides – I understand what they want to say here.

It makes no sense not to invent the self-driving car just because the AI could potentially harm or kill people in an accident (as has already happened).

I do think, however, that they could lay a more solid foundation for their risk/reward calculation.

For example: self-driving cars will still save more lives in the long run than they will harm, and the harm will be very limited compared to the risk posed by human drivers, simply because the streets would become safer in general.

That is why the self-driving car sector is so big and profitable – it has huge monetary & lifesaving upsides.

But what about news aggregation via AI? The upside here is a better flow of information and good articles; the downside is that we sabotage the free press, a tool that is essential for democracy.

So where is the risk analysis there? Does it matter that we need a free press that can publish anything? Or is that risk "balanced" by Google's monetary interests?


Avoid creating or reinforcing unfair bias

Oh yeah, that is a really fun one – so let's dive into this.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple and differs across cultures and societies.

So what Google generates as data-driven results with AI will be relative to the culture and society in which it is created! I'm sorry, but I don't think it is ethical in any way to suggest that the same data could produce different results depending on whether it is used in the USA or in the Middle East. I can think of a bunch of examples and problems with that "principle" alone.

Of course there are cultural factors that influence data points. For instance, if you build a model of the typical consumer, you will have a bias in it because you will include demographics like gender or race. Depending on the product, this is intended and not even bad, because those demographics do influence the consumer's decision.
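The point about demographic features can be sketched with a toy example (all data and names here are hypothetical, purely for illustration): once a demographic column is part of the training data, even the simplest data-driven model will produce different predictions per group.

```python
# Minimal sketch: a per-group purchase-rate "model" over hypothetical
# consumer records. The demographic column flows straight into the
# predictions - the kind of bias that may be intended (product
# targeting) or unwanted, depending on context.
from collections import defaultdict

# Hypothetical records: (demographic_group, bought_product)
records = [
    ("f", True), ("f", True), ("f", False),
    ("m", False), ("m", False), ("m", True), ("m", False),
]

def purchase_rate_by_group(rows):
    counts = defaultdict(lambda: [0, 0])  # group -> [purchases, total]
    for group, bought in rows:
        counts[group][0] += int(bought)
        counts[group][1] += 1
    return {g: buys / total for g, (buys, total) in counts.items()}

rates = purchase_rate_by_group(records)
# The model now assigns different purchase probabilities per group,
# purely because the demographic column was included in the data.
print(rates)
```

Whether that difference counts as an "unfair bias" is exactly the judgment call the principle leaves undefined.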

Moreover, who decides what an "unfair bias" is? And how will Google correct for it?
Historically, Google has tried to disadvantage/discriminate against certain groups of people without looking at the individual, so I ask myself: what will Google do to reach its goal this time?

We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

The big question, however, is: how can we do that transparently while also not impacting or hurting others?

I'm sorry, but Google in particular has not had the best track record on diversity of political and/or religious beliefs over the last decade, and I doubt the people working there even have a clue about a standard that could be applied to something like race, gender, or sexual orientation. We do have a standard in society that you should not be judged as a group but always as an individual, yet people on both the right and the left will try to imply that your group always has some influence and therefore some relevance – even when we talk about something that does not need these demographics at all.
