AI and humanity

Everyone has their favorite podcasts, and one of mine is Krista Tippett’s On Being Project. Krista hosts some of the wisest, deepest conversations with neuroscientists, poets, priests, and behavioral economists, exploring how these wildly different fields shed light on what it means to be human.

In a time when knowledge is abundant but wisdom seems hard to come by, her podcasts always give a new perspective to topics that have been talked to death. In Silicon Valley, AI is one of those topics.

So when I heard that Krista Tippett was hosting a conversation on AI and humanity at Stanford with AI experts Jerry Kaplan and Mehran Sahami, I immediately signed up.

Here are some of my takeaways:

In short, where AI will take us is not a matter of technology, but of our values and our vision for humanity.

AI amplifies the current state of humanity. AI works by identifying patterns in data, many of which are invisible to the human eye. So when we look at what AI predicts based on these historical patterns, we see a starker, clearer reflection of our historical blemishes. How we have discriminated against certain races, genders, and neighborhoods is on full display.

Hence, we shouldn’t blindly automate what we have done for the past 100 years. We need to think about where we want to go instead (a question of values) and design AI accordingly.
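To make the point concrete, here is a minimal toy sketch (my own invented example, not from the talk): a classifier trained on biased historical hiring decisions learns to reproduce that bias for equally skilled candidates. The data, the skill/group features, and the hiring rule are all made up for illustration.

# Toy illustration: a model trained on biased historical hiring
# decisions faithfully reproduces that bias. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)          # what we would like to hire on
group = rng.integers(0, 2, size=n)  # a protected attribute (0 or 1)

# Historical labels: equally skilled candidates from group 1 were hired
# less often -- the human bias is baked into the "ground truth".
hired = skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership:
probe = [[0.0, 0], [0.0, 1]]
print(model.predict_proba(probe)[:, 1])  # group 1 gets a much lower hiring probability

Nothing in the model is malicious; it has simply learned the pattern we handed it. Automating the decision makes the old bias faster and more scalable.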

But who are the arbiters of values today? It is private companies who design algorithms that build in value judgments on fairness, privacy, and a host of other civil rights. But are private companies designed for such civilizational impact? They aren’t accountable to the people they affect in the same way that democratic Governments are. Are regulations the only way that a Government (as a proxy for the people) can exert influence over the values behind AI? Regulation seems too blunt an instrument, but do we have realistic alternatives?

We don’t need to worry about the “General AI” of sci-fi. AI and humans have very different comparative advantages. The goal of AI development is not to replicate human beings but to use machines for what they are particularly good at (in many areas, humans will never match them) in a way that serves humanity.

On that note, remember that when it comes to decisions about using AI, “efficiency is always a second-order principle”: it has value only in relation to some other value.

Humans need to make value judgments on what should be made more efficient through AI, and what shouldn’t or needn’t be. (We could automatically fine you every time you exceed the speed limit, but should we?)

Needed: AI practitioners in Governments, shaping how society, values, and AI interact for the public good.

I’m currently working on bringing tech talent into the Government to do precisely this, and would love to talk to you if you’re a technologist thinking about how to maximize public good in your career.