In this week’s article, we ponder The Future of Morality in the artificial intelligence age.
Recently, we heard Lawrence Krauss, the theoretical physicist, describe how leading scientists told him the biggest challenge they are facing with A.I. is whether to program A.I.s with ‘human values’.
To which Krauss replied, ‘What exactly are human values?’, with particular reference to the age of Trump.
He went on to compare future A.I.s or robots (those exhibiting something closer to what we’d call general A.I., or AGI) to the next generation of our children.
If what we want for our kids is to be better than us, to have access to better information, to be able to do things we can’t, then shouldn’t we also wish this for our robot creations?
It’s a fascinating thought.
A.I.s might even look like us if that’s how we make them. Or eventually if that’s what they want to look like.
Let’s take a deeper look at some of the latest trends and issues…
Is There Such A Thing As A Prejudiced AI Algorithm?
In this Forbes article, the author explains how artificial intelligence models are made by feeding them data.
From that data, the A.I. makes predictions that humans can use to make decisions.
But what this data is, and where its boundaries begin and end, is determined by humans: the data or A.I. scientists.
This data can be prejudiced, or can unintentionally lead to prejudiced outcomes. That was the case when STEM career ads were shown on Facebook and its algorithm was left to maximize ROI in terms of applications: 20% more men than women ended up seeing the ads, because showing the ads to women cost comparatively more.
Two other examples are cited in the article:
- Google Photos being unable to properly recognize black people due to a lack of diversity in training data
- Tay, Microsoft’s A.I. Twitterbot, becoming overtly racist when trained by the Twitter community
This type of racism isn’t being deliberately built into algorithms; it’s that machines learn everything from us.
So, what’s the answer?
To embrace diversity inside and outside the organization, to teach company stakeholders why diversity matters, and to test for bias.
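What “test for bias” can mean in practice is worth a quick illustration. One common, simple audit is a demographic-parity check: compare the rate at which a model’s positive decisions (say, “show this person the STEM ad”) land on each group. The sketch below is purely illustrative, with hypothetical data and hypothetical group labels; it is one basic fairness metric among many, not a complete audit.

```python
# Minimal demographic-parity sketch: compare positive-decision rates per group.
# All data and labels here are hypothetical, for illustration only.

def selection_rates(groups, predictions):
    """Return the positive-prediction rate for each group."""
    totals, positives = {}, {}
    for group, pred in zip(groups, predictions):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rate.
    A common rule of thumb flags ratios below 0.8 (the 'four-fifths rule')."""
    return rates[unprivileged] / rates[privileged]

# Hypothetical audit log: who was shown the ad, by group
groups      = ["m", "m", "m", "m", "f", "f", "f", "f"]
predictions = [1,   1,   1,   0,   1,   0,   0,   0]

rates = selection_rates(groups, predictions)
print(rates)                              # {'m': 0.75, 'f': 0.25}
print(disparate_impact(rates, "m", "f"))  # ~0.33, well below the 0.8 threshold
```

A check like this won’t tell you *why* the disparity exists (in the Facebook case, it was ad pricing), but it makes the outcome visible so humans can intervene.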
Read more at Forbes
How Do We Align Artificial Intelligence with Human Values?
This article raises some interesting points:
At the 2017 Asilomar Conference on Beneficial AI, 100+ thought leaders from various sciences and industries created a working document setting out the rules that should govern A.I. development: the 23 Asilomar AI Principles.
One of the 23 rules is specified in the quote below:
Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. – The Value Alignment Principle
It’s obviously a good thing for researchers and regulators to get together and try to think about the boundaries of A.I. and what should govern its development.
A.I. is both exciting and threatening in equal measure. Planning ahead now is the only proper way to deal with this technological revolution.
“The issue, of course, is to define what exactly these values are, because people might have different cultures, [come from] different parts of the world, [have] different socioeconomic backgrounds — I think people will have very different opinions on what those values are. And so that’s really the challenge.” – Stefano Ermon, Assistant Professor at Stanford
For current versions of A.I. that are more rudimentary and play the role of data crunchers rather than robot overlords, the value alignment principle may be something that applies further down the road.
However, with A.I. and humans seemingly set on a deeply integrated path into the future, we need to think about this from the very start.
Find out more at Future of Life
Do Robots Deserve Rights? What if Machines Become Conscious?
What shall we do once machines become conscious? Do we need to grant them rights?
Watch this video… and be intrigued.
Ex Machina’s Ava (2015)
So, if we look at all this and then think about what a sentient A.I. would be like, are there any useful reference points from art or science fiction?
Enter… Ex Machina’s Ava.
Spoiler alert! If you haven’t seen it, don’t read or watch on; go watch the movie first.
It’s a terrific modern examination of some of the moral implications of A.I.
The film and story were created by writer Alex Garland, famous for The Beach and 28 Days Later.
“I think there’s a growing sense of fear about artificial intelligence that you see manifested a lot at the moment. There’s tons of films about A.I. which take a sideways or fearful look at it. Part of my starting point in this was that, on an instinctive level, I don’t feel affiliated with that sense of concern. My instinctive position is that I actually want it” – Alex Garland
The above quote is reflected in the film. Although what happens is deeply troubling, ultimately the human gets his just deserts, and the sentient robots are viewed with compassion and, ironically, more humanity.
What do you think? Are there other movies or TV shows that drive home the morality of an artificial intelligence future? How about Westworld, Gattaca or Black Mirror? What are your favourites?
Related Articles from Nikolas Badminton
Nikolas Badminton is a Futurist Speaker who drives world leaders to take action in creating a better world for humanity. He promotes exponential thinking along with a critical, honest, and optimistic view that empowers you with knowledge to plan for today, tomorrow, and for the future.