The AI killswitch, or: what have we learned?
In 2016, Microsoft's Tay chatbot turned racist on Twitter in less than 24 hours. In 2010, algorithmic trading triggered a flash crash that briefly wiped roughly a trillion dollars off the stock market. Facial recognition systems show racial bias, and Google fired its AI ethics lead, Dr Timnit Gebru.
On a smaller scale, when I was on a project with P&G, it took procurement a month to give me three suppliers matching my requirements, and even those were laughably inadequate. Organisations are complex and historically flawed. If they get wrong even the simple things they build systems for, how can we beat Ashby's Law and evolve to tackle the road ahead?
Systems are often garbage-in, garbage-out. People believe they have great systems, but those systems produce garbage because they are fed bad information. Organisations are people systems at exponential scale. What do AI and other tools hold for them?
In episode #50 of The Wicked Podcast, our third special, we take the 50 books we have read and ask what they have to say about the promises AI and other organisational trends are making.
For those of you who believe in AI, or are simply interested, here are some questions that may contribute to that conversation.
Do AIs need a killswitch? Can people design an AI that understands its own risk and impact? Can they even predict whether the effect will be positive or negative? The complex system that is Facebook is undermining not only democracies but whole populations' belief in truth. It has played with our emotions, and it may well have the potential to cause depression in its users.
We will use it because it exists, just as the US used the atom bomb. Twice!
What should we have in place that has a chance to guide an exponential system?
What are your questions and experiences? What would you like us to talk about?
Let us know in the comments, and we hope to see you on the podcast.