In EXPONENTIAL MINDS’ Artificial Intelligence Bulletin – Street Wars, we look at cyclists versus driverless cars, a grocery store of the future, a discussion of the limits of artificial intelligence, and a conversation about ethics with an expert.
Street wars 2035: can cyclists and driverless cars ever co-exist?
Picture yourself cycling down a city street in the year 2035. You’re late for a meeting, but the road ahead has recently been designated an “Autonomous Vehicle-only” route, where platoons of driverless cars whizz past, mere centimetres apart. You can’t ride across it: cyclists and pedestrians have been banned for fear they would slow the driverless traffic. You must find a way around.
The clock is ticking. Do you attempt to climb the barrier and make a dash through the traffic? As you wait, you see a group of kids on a side street which is open to all vehicles. They are darting between driverless pods and forcing them to a stop. It’s a popular game.
Rewind to today. A report last month estimated that by 2035 up to 25% of new vehicles sold could be fully autonomous. Humans can be terrible drivers, and many proponents believe AVs could reduce the 1.34 million annual global road death toll.
But cities have some urgent questions to answer, and failure to address them could see us sleepwalking back into the problems of the 1960s and 70s, when cities became thoroughfares for traffic first … and places for people second.
Read more at The Guardian
The Grocery Store Of The Future Is Mobile, Self-Driving, And Run By AI
In Shanghai, a prototype of a new 24-hour convenience store has no staff, no registers, and the whole thing is on wheels, designed to eventually drive itself to a warehouse to restock, or to a customer to make a delivery.
The startup behind it believes that it’s the model for the grocery store of the future, and because it’s both mobile and far cheaper to build and operate than a typical store, it could also help bring better grocery access to food deserts and rural areas.
Read more at Fast Company
The Limits of Artificial Intelligence
There’s remarkably little talk of the limits of automation. What is the acceptable failure rate of these projects? Outside of games like Go or poker, just how suited are machines to the corporate world? Are some algorithms too expensive, as Netflix once found out? There’s a risk that disappointing results lead to an exaggerated corporate pullback, as the Harvard Business Review warned in April.
Machines can fail. Chatbots do so very publicly: Microsoft shut down a bot called Tay after pranksters pushed it to make racist, sexist and pornographic remarks. Earlier this year, Facebook went back to the drawing board after its bots hit a failure rate of 70 percent, according to The Information.
Failure is fine, but the acceptable failure rate of an intelligent vehicle or a computer-controlled turbine is probably different from that of a bum steer on an electricity bill. That can be the difference between an easy path to cost savings and a complex, long-term investment that doesn’t work as intended.
Read more at Bloomberg
Ethics And Artificial Intelligence With IBM Watson’s Rob High
Listen to The Modern Customer Podcast with Rob High here.
Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.
One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.
Read more at Forbes